diff --git a/spaces/101-5/gpt4free/g4f/.v1/gui/README.md b/spaces/101-5/gpt4free/g4f/.v1/gui/README.md
deleted file mode 100644
index c0406216bd922ab43eb2241496a816e7b747d0de..0000000000000000000000000000000000000000
--- a/spaces/101-5/gpt4free/g4f/.v1/gui/README.md
+++ /dev/null
@@ -1,78 +0,0 @@
-# gpt4free gui
-
-This code provides a Graphical User Interface (GUI) for gpt4free. Users can ask questions and get answers from GPT-4 APIs, utilizing multiple API implementations. The project contains two different Streamlit applications: `streamlit_app.py` and `streamlit_chat_app.py`.
-
-In addition, a new GUI script implemented with PyWebIO has been added and can be found in the pywebio-gui folder. If you run into errors with the Streamlit version, you can try the PyWebIO version instead.
-
-Installation
-------------
-
-1. Clone the repository.
-2. Install the required dependencies with: `pip install -r requirements.txt`.
-3. To use `streamlit_chat_app.py`, note that it depends on a pull request (PR #24) from the https://github.com/AI-Yash/st-chat/ repository, which may change in the future. The current dependency library can be found at https://github.com/AI-Yash/st-chat/archive/refs/pull/24/head.zip.
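-
-For reference, a typical install might look like the following (a rough sketch; it assumes the commands are run from the directory that contains `requirements.txt`):
-
-```bash
-# install the listed dependencies
-pip install -r requirements.txt
-# streamlit_chat_app.py additionally needs the st-chat build from PR #24
-pip install https://github.com/AI-Yash/st-chat/archive/refs/pull/24/head.zip
-```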
-
-Analytics Disclaimer
------
-The Streamlit browser app collects extensive analytics even when running locally. This includes events for every page load and form submission, including metadata on queries (such as length), as well as browser and client information including host IPs. All of this is transmitted to a third-party analytics service, Segment.com.
-
-Usage
------
-
-Choose one of the Streamlit applications to run:
-
-### streamlit\_app.py
-
-This application provides a simple interface for asking GPT-4 questions and receiving answers.
-
-To run the application:
-
-```bash
-streamlit run gui/streamlit_app.py
-```
-
-preview: *(screenshot not included in this copy)*
-
-### streamlit\_chat\_app.py
-
-This application provides a chat-like interface for asking GPT-4 questions and receiving answers. It supports multiple query methods, and users can select the desired API for their queries. The application also maintains a conversation history.
-
-To run the application:
-
-```bash
-streamlit run streamlit_chat_app.py
-```
-
-preview: *(screenshot not included in this copy)*
-
-Contributing
-------------
-
-Feel free to submit pull requests, report bugs, or request new features by opening issues on the GitHub repository.
-
-Bug
-----
-There is a bug in `streamlit_chat_app.py` that I haven't pinpointed yet; it is probably something simple, but I haven't had time to track it down. Whenever you open a new conversation or return to an old one, it only starts answering prompts after the second time you submit input in the text box. Other than that, everything else seems to work as expected.
-
-License
--------
-
-This project is licensed under the MIT License.
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Blaupunkt TravelPilot DX 2013 - 2014 The Best Navigation System for Europe[1].md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Blaupunkt TravelPilot DX 2013 - 2014 The Best Navigation System for Europe[1].md
deleted file mode 100644
index 5c496507754f7333eef6054e1438f9ee43f69231..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Blaupunkt TravelPilot DX 2013 - 2014 The Best Navigation System for Europe[1].md
+++ /dev/null
@@ -1,145 +0,0 @@
-
-
Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013
-
If you are looking for a reliable and convenient navigation system for your car, you might want to check out Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013. This is a digital map that covers all the countries in Europe and provides you with accurate and up-to-date information on roads, traffic, landmarks, and more. In this article, we will tell you everything you need to know about this navigation system, including how to download and install it, how to use it, what its advantages and disadvantages are, and how it compares with other navigation systems. Let's get started!
-
How to Download and Install Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013
-
The first step to use this navigation system is to download the torrent file from a reliable source. You can find many websites that offer this file for free or for a small fee. However, make sure that you choose a reputable and secure site that does not contain any viruses or malware. You can use a torrent client software such as uTorrent or BitTorrent to download the file.
-
Once you have downloaded the torrent file, you need to extract the files and copy them to an SD card. You can use a software such as WinRAR or 7-Zip to unzip the files. You should see a folder named "TeleAtlas" that contains several subfolders and files. Copy this folder to your SD card. Make sure that your SD card has enough space (at least 4 GB) and is formatted in FAT32.
-
The next step is to insert the SD card into your Blaupunkt Dx device and update the navigation software. To do this, you need to turn on your device and go to the main menu. Then, select "Settings" and then "System Update". The device will detect the SD card and ask you if you want to update. Confirm by pressing "Yes". The update process may take several minutes, so do not turn off your device or remove the SD card until it is finished. When it is done, you will see a message that says "Update Successful". Congratulations! You have successfully installed Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013.
-
-
How to Use Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013
-
Now that you have installed this navigation system, you can start using it right away. To access the main menu, press the "Menu" button on your device. You will see several options, such as "Navigation", "Media", "Phone", etc. Select "Navigation" to enter the map mode.
-
In the map mode, you can select your desired destination by using one of these methods:
-
-
Enter an address: Press the "Address" button and type in an address or a postcode using the keyboard on the screen. You can also select a country, a city, a street name, or a house number from a list.
-
Select a point of interest: Press the "POI" button and choose a category such as "Gas Stations", "Restaurants", "Hotels", etc. You can also search for a specific name or keyword using the keyboard on the screen.
-
Select a location from history: Press the "History" button and choose a location that you have previously entered or visited.
-
Select a location from favorites: Press the "Favorites" button and choose a location that you have saved as a favorite.
-
Select a location from coordinates: Press the "Coordinates" button and enter the latitude and longitude values using the keyboard on the screen.
-
-
Once you have selected your destination, press the "Start" button to begin navigation. The device will calculate the best route for you based on your current location and preferences. You can also change your preferences by pressing the "Settings" button on your device. You can adjust things such as:
-
-
Route type: Choose between fastest, shortest, economical, or easy routes.
-
Avoidances: Choose whether to avoid toll roads, highways, ferries, unpaved roads, etc.
-
Voice guidance: Choose whether to enable or disable voice guidance and select a language and a volume level.
-
Map view: Choose between 2D or 3D view and select a day or night mode.
-
Map details: Choose whether to display points of interest, traffic information, speed limits, etc.
-
-
While navigating, you can follow the voice guidance and visual cues on your screen. The device will tell you when to turn left or right, when to enter or exit a highway, when to change lanes, etc. You can also see information such as distance remaining, time remaining, speed limit, current speed, etc. on your screen.
-
If you encounter any traffic jams, road closures, or other hazards along your route, the device will alert you and suggest an alternative route if available. You can also press the "Traffic" button on your device to see more details about traffic conditions in your area. You can also press the "Detour" button if you want to manually change your route.
-
Advantages of Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013
-
Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 is one of the best navigation systems for drivers in Europe because it offers many advantages, such as:
-
-
Accurate and up-to-date maps: This navigation system provides you with detailed and updated maps of all European countries. It covers more than 10 million kilometers of roads and more than 5 million points of interest. It also includes information about speed limits, lane guidance, junction views, cross-border planning, etc.
-
Various points of interest: This navigation system offers you various points of interest, such as gas stations, hotels, museums, parks, etc. You can easily find and navigate to any place you want using the POI search function. You can also see ratings and reviews of some places from other users.
-
Enhanced driving experience and safety: This navigation system enhances your driving experience and safety by providing you with real-time information on traffic, weather, speed cameras, etc. You can avoid delays and hazards and drive more smoothly and confidently. You can also use the hands-free function to make or receive calls using your device.
-
-
Disadvantages of Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013
-
However, Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 also has some disadvantages that you should be aware of, such as:
-
-
Compatibility issues: This navigation system may not be compatible with some older models of Blaupunkt Dx devices. You should check the compatibility list before downloading and installing it. You may also need to update your device's firmware to make it work properly.
-
Internet connection requirement: This navigation system may require a high-speed internet connection to download and update. The torrent file is about 3.5 GB in size, so it may take a long time to download depending on your connection speed. You may also incur additional data charges if you use a mobile network.
-
Potential errors or glitches: This navigation system may have some errors or glitches in some areas or routes. For example, some roads or POIs may be missing or outdated, some voice commands or directions may be incorrect or unclear, some features or functions may not work properly, etc. You should always use this navigation system with caution and common sense.
-
-
Comparison with Other Navigation Systems
-
Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 is not the only navigation system available for drivers in Europe. There are other popular navigation systems, such as Garmin, TomTom, etc. How does Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 compare with them? Here is a table that summarizes the pros and cons of each system:
-
| System | Pros | Cons |
| --- | --- | --- |
| Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 | Accurate and up-to-date maps of Europe; various points of interest; enhanced driving experience and safety; free or low-cost download | Compatibility issues with some devices; internet connection requirement; potential errors or glitches |
| Garmin | High-quality maps of Europe; advanced features such as lane assist, junction view, photoReal, etc.; lifetime map updates; compatible with most devices | Expensive purchase; limited points of interest; slow performance and updates; occasional errors or glitches |
| TomTom | Detailed maps of Europe; innovative features such as IQ Routes, HD Traffic, Map Share, etc.; regular map updates; user-friendly interface and design | Costly purchase and subscription; compatibility issues with some devices; privacy concerns; frequent errors or glitches |
-
Conclusion
-
In conclusion, Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 is a great navigation system for drivers in Europe who want accurate and up-to-date maps, various points of interest, and enhanced driving experience and safety. It is also free or low-cost to download and easy to install and use. However, it also has some drawbacks, such as compatibility issues with some devices, internet connection requirement, and potential errors or glitches. Therefore, you should weigh the pros and cons carefully before deciding whether to use this navigation system or not.
-
If you decide to use Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013, here are some tips and recommendations for getting the most out of it:
-
-
Check the compatibility list before downloading and installing it.
-
Use a reliable and secure source to download the torrent file.
-
Use a high-speed internet connection to download and update the file.
-
Use a software such as WinRAR or 7-Zip to extract the files.
-
Use an SD card with enough space (at least 4 GB) and formatted in FAT32.
-
Update your device's firmware if necessary.
-
Adjust your settings and preferences according to your needs.
-
Use the hands-free function to make or receive calls safely.
-
Follow the voice guidance and visual cues carefully.
-
Avoid traffic jams, road closures, and other hazards using the real-time information.
-
Use caution and common sense when using this navigation system.
-
Share your feedback with other users.
-
-
We hope that this article has helped you understand more about Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 and how to use it effectively. If you have any questions or comments, please feel free to contact us. We would love to hear from you!
-
FAQs
-
Here are some frequently asked questions about Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013:
-
-
What is Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013? Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 is a digital map that covers all the countries in Europe and provides you with accurate and up-to-date information on roads, traffic, landmarks, and more. It is compatible with most models of Blaupunkt Dx devices.
-
How do I download and install Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013? You need to download the torrent file from a reliable source using a torrent client software such as uTorrent or BitTorrent. Then, you need to extract the files using a software such as WinRAR or 7-Zip and copy them to an SD card formatted in FAT32. Finally, you need to insert the SD card into your device and update the navigation software from the main menu.
-
How do I use Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013? You need to access the main menu on your device and select "Navigation" to enter the map mode. Then, you need to select your destination by entering an address, selecting a point of interest, choosing a location from history or favorites, or entering coordinates. Then, you need to press "Start" to begin navigation. You can follow the voice guidance and visual cues on your screen and adjust your settings and preferences as needed. You can also avoid traffic jams, road closures, and other hazards using the real-time information.
-
What are the advantages of Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013? Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 offers many advantages, such as accurate and up-to-date maps of Europe, various points of interest, enhanced driving experience and safety, free or low-cost download, etc.
-
What are the disadvantages of Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013? Torrent Sd Navigation Blaupunkt Dx Teleatlas Europe 20122013 also has some disadvantages, such as compatibility issues with some devices, internet connection requirement, potential errors or glitches, etc.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/El Silabario Salvadoreno Pdf Download La obra ilustrada que ensea el habla checa y otras.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/El Silabario Salvadoreno Pdf Download La obra ilustrada que ensea el habla checa y otras.md
deleted file mode 100644
index 4b79f11178427561c1ec851f860c38b5c1ed6882..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/El Silabario Salvadoreno Pdf Download La obra ilustrada que ensea el habla checa y otras.md
+++ /dev/null
@@ -1,98 +0,0 @@
-
-
El Silabario Salvadoreño: A Classic Book for Learning to Read and Write in Spanish
-
If you are looking for a simple, effective, and fun way to learn Spanish, you might want to check out El Silabario Salvadoreño. This is a classic book that has been used by generations of children and adults in El Salvador and other Latin American countries to learn the basics of reading and writing in Spanish. In this article, we will tell you everything you need to know about El Silabario Salvadoreño, including its history, content, benefits, and tips for using it. Whether you are a beginner or an intermediate Spanish learner, you will find this book useful and enjoyable.
-
History of El Silabario Salvadoreño
-
El Silabario Salvadoreño was created by Adrián Dufflocq Galdames, a Chilean educator who dedicated his life to teaching literacy. He developed a phonetic-sensorial-objective-synthetic method that aimed to teach reading and writing through sounds, images, objects, and words. He published his first silabario (syllabary) in 1894, which was later adapted and improved by other authors. His silabarios were widely used in Chile and other Latin American countries throughout the 20th century.
One of the most popular versions of his silabarios was El Silabario Salvadoreño, which was published in 1960 by Editorial Dufflocq. This version was specially designed for El Salvador, taking into account its culture, geography, history, and vocabulary. It was also updated with new illustrations, exercises, and texts. El Silabario Salvadoreño became a staple in many Salvadoran schools and homes, helping millions of people learn to read and write in Spanish.
-
Content of El Silabario Salvadoreño
-
El Silabario Salvadoreño consists of 84 pages that cover the Spanish alphabet and syllables. Each page has a letter or a syllable at the top, followed by a word that starts with that letter or syllable, an image that represents that word, a sentence that uses that word, and some exercises that reinforce the learning. For example, the page for the letter A has the word "árbol" (tree), an image of a tree, the sentence "El árbol es verde" (The tree is green), and some exercises that ask the reader to identify the letter A in different words.
-
The book follows a logical progression from simple to complex sounds and words. It starts with the vowels (A, E, I, O, U), then moves on to consonants (B, C, D, F, G, H...), then to syllables (BA, BE, BI...), then to words (BANCO, BELLO...), then to sentences (EL BANCO ES DE MADERA...). The book also introduces some special sounds (CH, LL...) and some diacritical marks (Á...). By the end of the book, the reader should be able to read and write simple texts in Spanish.
-
-
Benefits of Using El Silabario Salvadoreño for Spanish Learners
-
El Silabario Salvadoreño has many benefits for anyone who wants to learn Spanish. Here are some of them:
-
-
Simplicity: The book is easy to follow and understand. It uses clear images, short words, simple sentences, and engaging exercises. It does not require any prior knowledge of Spanish or any other language. It is suitable for children and adults alike.
-
Effectiveness: The book teaches reading and writing through a proven method that focuses on sounds, images, objects, and words. It helps develop phonetic awareness, vocabulary acquisition, comprehension skills, spelling skills, and writing skills. It also exposes the reader to authentic Spanish texts from different sources.
-
Availability: The book is widely available online in PDF format. You can download it for free from various websites or buy it for a low price from online stores. You can also print it or use it on your computer or mobile device.
-
-
Tips and Tricks for Using El Silabario Salvadoreño
-
If you want to make the most out of El Silabario Salvadoreño, here are some tips and tricks for using it:
-
-
Practice: The key to learning anything is practice. Try to use El Silabario Salvadoreño regularly and consistently. Set a goal for yourself (for example, one page per day) and stick to it. Review what you have learned frequently.
-
Supplement: While El Silabario Salvadoreño is a great resource for learning Spanish, it is not enough by itself. You should also use other resources and methods for learning Spanish. For example, you can listen to podcasts or songs in Spanish; watch videos or movies in Spanish; read books or articles in Spanish; speak with native speakers or other learners; use apps or websites that teach grammar or vocabulary; etc.
-
Review: To measure your progress and improvement with El Silabario Salvadoreño, you should test yourself periodically. You can use the exercises at the end of each page or create your own tests based on what you have learned. You can also ask someone else to check your work or give you feedback.
-
-
Conclusion
-
In conclusion,
-
-
El Silabario Salvadoreño is a classic book that has been used by generations of people in El Salvador and other Latin American countries to learn to read and write in Spanish.
-
The book covers the Spanish alphabet and syllables through sounds, images, objects, and words.
-
The book has many benefits for anyone who wants to learn Spanish, such as simplicity, effectiveness, and availability.
-
To make the most out of El Silabario Salvadoreño, one should practice, supplement, and review.
-
-
If you are interested in learning Spanish with El Silabario Salvadoreño, we encourage you to download it today, start using it, and have fun!
-
Frequently Asked Questions
-
-
What is El Silabario Salvadoreño?
-
El Silabario Salvadoreño is a classic book that teaches reading and writing in Spanish through sounds, images, objects, and words.
-
Who created El Silabario Salvadoreño?
-
El Silabario Salvadoreño was created by Adrián Dufflocq Galdames, a Chilean educator who developed a phonetic-sensorial-objective-synthetic method for teaching literacy.
-
How many pages does El Silabario Salvadoreño have?
-
El Silabario Salvadoreño has 84 pages that cover the Spanish alphabet and syllables.
-
Where can I download El Silabario Salvadoreño?
-
You can download El Silabario Salvadoreño online in PDF format from various websites or buy it from online stores.
-
How can I use El Silabario Salvadoreño effectively?
-
You can use El Silabario Salvadoreño effectively by practicing regularly, supplementing with other resources, and reviewing your progress.
-
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Unlimited Vpn For Windows 10 Crack.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Unlimited Vpn For Windows 10 Crack.md
deleted file mode 100644
index bfd8cac58c808277bcc7072a3b9e7ba17ec1d30c..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Unlimited Vpn For Windows 10 Crack.md
+++ /dev/null
@@ -1,38 +0,0 @@
-
-
Free Unlimited VPN for Windows 10 Crack: Is It Worth It?
-
If you are looking for a free unlimited VPN for Windows 10 crack, you may be tempted by the promises of some websites that offer cracked versions of popular VPN software. However, before you download and install any of these programs, you should be aware of the risks and limitations involved. In this article, we will explain why you should avoid free unlimited VPN for Windows 10 crack and what are the best alternatives for your online security and privacy.
-
What is a VPN and why do you need one?
-
A VPN (Virtual Private Network) is a service that creates a secure and encrypted connection between your device and a remote server. By using a VPN, you can hide your real IP address and location, bypass geo-restrictions and censorship, access blocked websites and streaming services, protect your data from hackers and snoopers, and enjoy a faster and more stable internet connection.
There are many reasons why you may need a VPN for your Windows 10 PC. For example, you may want to:
-
-
Watch Netflix, Hulu, BBC iPlayer, or other streaming platforms that are not available in your country.
-
Download torrents or use P2P file-sharing without exposing your identity or activity to your ISP or authorities.
-
Use public Wi-Fi networks without worrying about your personal information being stolen or intercepted.
-
Access websites or apps that are blocked by your school, workplace, or government.
-
Protect your online privacy and anonymity from advertisers, trackers, hackers, or anyone who wants to spy on you.
-
-
What is a free unlimited VPN for Windows 10 crack?
-
A free unlimited VPN for Windows 10 crack is a modified version of a paid VPN software that claims to offer the same features and benefits without any cost or limitations. These cracks are usually distributed by third-party websites that host illegal downloads of various software programs.
-
Some of the most common free unlimited VPN for Windows 10 cracks are:
-
-
Betternet VPN Premium Crack
-
Turbo VPN Crack
-
KeepSolid VPN Unlimited Crack
-
-
What are the risks and limitations of using a free unlimited VPN for Windows 10 crack?
-
While using a free unlimited VPN for Windows 10 crack may seem like a good idea at first glance, it actually comes with many drawbacks and dangers. Here are some of the main reasons why you should avoid using a free unlimited VPN for Windows 10 crack:
-
-
It may contain malware or viruses: The websites that offer cracked VPN software are often shady and unreliable. They may infect your PC with malware or viruses that can damage your system, steal your data, or hijack your resources. You may also expose yourself to phishing scams, ransomware attacks, or identity theft.
-
It may not work properly: The cracked VPN software may not function as intended or advertised. It may have bugs, errors, glitches, or compatibility issues that can affect your user experience and performance. It may also lack important features or updates that are available in the official version.
-
It may compromise your security and privacy: The cracked VPN software may not provide the same level of encryption, protection, or anonymity as the original one. It may leak your IP address, DNS requests, or traffic data to third parties. It may also log your online activity or sell your information to advertisers or hackers.
-
It may violate the law: The cracked VPN software may infringe the intellectual property rights of the original developer. By downloading and using it, you may be breaking the law and risking legal consequences. You may also face fines, lawsuits, or even jail time.
-
-
What are the best alternatives to a free unlimited VPN for Windows 10 crack?
-
The best alternatives to a free unlimited VPN for Windows 10 crack are either reputable free VPNs or premium VPNs with money-back guarantees. These options are safer, more reliable, and more trustworthy than any cracked VPN software.
-
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Audi Navigation Plus Rns D Bg Map Download [WORK].md b/spaces/1gistliPinn/ChatGPT4/Examples/Audi Navigation Plus Rns D Bg Map Download [WORK].md
deleted file mode 100644
index 93a8c1a43e2b084262354ddfdadb99e2b2ec3635..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Audi Navigation Plus Rns D Bg Map Download [WORK].md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-
First of all, download the Audi A6 MMI 3GP Navigation Maps Disc Europe ISO file. Insert an empty disc into your computer and burn the ISO file to it using Nero at 4x speed. Once completed, go to the car, turn the ignition on, and insert the disc. Next you have to enter the engineering menu by pressing the CAR button and, immediately after that, the BACK button. Hold both buttons pressed for a few seconds. Now press the Update option using the MMI Control Panel. A new menu appears asking you to choose a source; choose CD/DVD. Select the map by pressing OK and then just wait. From now on, the maps install and activate automatically.
-
Hello and welcome to our website. If you own an Audi A4 and your maps are outdated or not installed, then we are happy to announce that the new maps have just arrived. Audi A4 MMI 2G Navigation DVD Western Europe can be downloaded for free, and any Audi A4 owner can now update his GPS navigation maps. This DVD contains only Western European countries; you can see a list of them below. If you need Eastern Europe, here they are: Eastern Europe Maps Audi A4
If you choose to download and update your maps, it is very important to know which countries are available. Here is the list: Albania, Bosnia and Herzegovina, Bulgaria, Denmark, Germany, Estonia, Finland, France, Greece, Italy, Croatia, Latvia, Liechtenstein, Lithuania, Macedonia, Montenegro, Norway, Austria, Poland, Romania, San Marino, Sweden, Switzerland, Serbia, Slovenia, Slovakia, Czech Republic, Hungary, Vatican City, Great Britain, Andorra, Austria, Belgium, France, Germany, Gibraltar, Great Britain, Ireland, Liechtenstein, Luxembourg, Monaco, Netherlands, Portugal, Spain, Switzerland.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/DownloadEbookFisikaDasarTipler [WORK].md b/spaces/1gistliPinn/ChatGPT4/Examples/DownloadEbookFisikaDasarTipler [WORK].md
deleted file mode 100644
index 766875b9b2791670e6c251a7a0e51772519bde5e..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/DownloadEbookFisikaDasarTipler [WORK].md
+++ /dev/null
@@ -1,33 +0,0 @@
-
-
How to Download Ebook Fisika Dasar Tipler for Free
-
If you are looking for a free ebook on physics, you might be interested in downloading Ebook Fisika Dasar Tipler. This ebook is based on the popular textbook Physics for Scientists and Engineers by Paul A. Tipler and Gene Mosca. It covers topics such as mechanics, thermodynamics, electromagnetism, optics, relativity, and quantum physics.
To download it for free, go to the website www.ebookfisikadasartipler.com and follow these simple steps:
-
Click on the button "Download Now" and enter your email address.
-
Check your inbox for a confirmation email and click on the link provided.
-
Enjoy reading Ebook Fisika Dasar Tipler on your device of choice.
-
-
By downloading Ebook Fisika Dasar Tipler, you will benefit from:
-
-
A comprehensive and updated introduction to physics.
-
A clear and engaging writing style that makes physics accessible and interesting.
-
A variety of examples, exercises, and problems that test your understanding and challenge your creativity.
-
A digital format that allows you to read anywhere and anytime.
-
-
Don't miss this opportunity to download Ebook Fisika Dasar Tipler for free. It is a valuable resource for students, teachers, and anyone who wants to learn more about physics. Download it today and start exploring the wonders of the physical world.
-
-
-
Ebook Fisika Dasar Tipler is based on the textbook Physics for Scientists and Engineers by Paul A. Tipler and Gene Mosca. This textbook is widely used in universities around the world for teaching physics to science and engineering students. It has been translated into several languages, including Indonesian.
-
The textbook covers all the major topics of physics, from classical mechanics to modern physics. It explains the concepts and principles of physics with clarity and rigor, using examples and applications from various fields of science and technology. It also provides numerous exercises and problems that help students practice and master their skills.
-
Ebook Fisika Dasar Tipler is a digital version of the textbook that can be downloaded for free from the website www.ebookfisikadasartipler.com. By downloading Ebook Fisika Dasar Tipler, you will get access to the following features:
-
-
A complete and updated content of the textbook, with high-quality graphics and illustrations.
-
A searchable and interactive interface that allows you to navigate through the chapters and sections easily.
-
A bookmark and highlight function that lets you mark and save important points and notes.
-
A quiz and review function that tests your understanding and gives you feedback.
-
A link to online resources and references that supplement your learning.
-
-
-
\ No newline at end of file
diff --git a/spaces/1line/AutoGPT/autogpt/memory/__init__.py b/spaces/1line/AutoGPT/autogpt/memory/__init__.py
deleted file mode 100644
index 3d18704c70dfc287642b1923e6f2e1f72a5f2a62..0000000000000000000000000000000000000000
--- a/spaces/1line/AutoGPT/autogpt/memory/__init__.py
+++ /dev/null
@@ -1,99 +0,0 @@
-from autogpt.memory.local import LocalCache
-from autogpt.memory.no_memory import NoMemory
-
-# List of supported memory backends
-# Add a backend to this list if the import attempt is successful
-supported_memory = ["local", "no_memory"]
-
-try:
- from autogpt.memory.redismem import RedisMemory
-
- supported_memory.append("redis")
-except ImportError:
- # print("Redis not installed. Skipping import.")
- RedisMemory = None
-
-try:
- from autogpt.memory.pinecone import PineconeMemory
-
- supported_memory.append("pinecone")
-except ImportError:
- # print("Pinecone not installed. Skipping import.")
- PineconeMemory = None
-
-try:
- from autogpt.memory.weaviate import WeaviateMemory
-
- supported_memory.append("weaviate")
-except ImportError:
- # print("Weaviate not installed. Skipping import.")
- WeaviateMemory = None
-
-try:
- from autogpt.memory.milvus import MilvusMemory
-
- supported_memory.append("milvus")
-except ImportError:
- # print("pymilvus not installed. Skipping import.")
- MilvusMemory = None
-
-
-def get_memory(cfg, init=False):
- memory = None
- if cfg.memory_backend == "pinecone":
- if not PineconeMemory:
- print(
- "Error: Pinecone is not installed. Please install pinecone"
- " to use Pinecone as a memory backend."
- )
- else:
- memory = PineconeMemory(cfg)
- if init:
- memory.clear()
- elif cfg.memory_backend == "redis":
- if not RedisMemory:
- print(
- "Error: Redis is not installed. Please install redis-py to"
- " use Redis as a memory backend."
- )
- else:
- memory = RedisMemory(cfg)
- elif cfg.memory_backend == "weaviate":
- if not WeaviateMemory:
- print(
- "Error: Weaviate is not installed. Please install weaviate-client to"
- " use Weaviate as a memory backend."
- )
- else:
- memory = WeaviateMemory(cfg)
- elif cfg.memory_backend == "milvus":
- if not MilvusMemory:
- print(
- "Error: Milvus sdk is not installed."
- "Please install pymilvus to use Milvus as memory backend."
- )
- else:
- memory = MilvusMemory(cfg)
- elif cfg.memory_backend == "no_memory":
- memory = NoMemory(cfg)
-
- if memory is None:
- memory = LocalCache(cfg)
- if init:
- memory.clear()
- return memory
-
-
-def get_supported_memory_backends():
- return supported_memory
-
-
-__all__ = [
- "get_memory",
- "LocalCache",
- "RedisMemory",
- "PineconeMemory",
- "NoMemory",
- "MilvusMemory",
- "WeaviateMemory",
-]
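
For context, here is a minimal usage sketch of the factory above. The config stub is an assumption (a real AutoGPT `Config` object carries many more settings); only the attribute the dispatch actually reads, `memory_backend`, is supplied:

```python
# Hypothetical usage of get_memory(); SimpleNamespace stands in for the real
# Config object, which is an assumption made for this illustration.
from types import SimpleNamespace

from autogpt.memory import get_memory, get_supported_memory_backends

print(get_supported_memory_backends())  # e.g. ['local', 'no_memory', ...]

cfg = SimpleNamespace(memory_backend="no_memory")
memory = get_memory(cfg)                # dispatches to the NoMemory backend
print(type(memory).__name__)            # "NoMemory"
```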
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Agar.io Mod Macro Download Enhance Your Gameplay with Agar Tool M PELEA.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Agar.io Mod Macro Download Enhance Your Gameplay with Agar Tool M PELEA.md
deleted file mode 100644
index 147453e463d3b52a37ef1695683f78b712262669..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Agar.io Mod Macro Download Enhance Your Gameplay with Agar Tool M PELEA.md
+++ /dev/null
@@ -1,146 +0,0 @@
-
-
Download Agar io Mod Macro: How to Enhance Your Gameplay Experience
-
If you are a fan of online multiplayer games, you might have heard of or played Agar io, a simple but addictive browser game where you control a cell and try to eat other cells to grow bigger. But did you know that you can also download and install a mod macro for Agar io, which can give you more features and advantages in the game? In this article, we will explain what Agar io is, what a mod macro is, how to download and install it, and how to use it effectively. By the end of this article, you will be able to enjoy Agar io with a new level of fun and excitement.
Agar io is a massively multiplayer online action game that was released in 2015 by a Brazilian developer named Matheus Valadares. The game is inspired by the biological phenomenon of agar, which is a gelatinous substance used to culture bacteria. In the game, players control a cell that can move around a map and eat smaller cells, while avoiding being eaten by larger cells. The goal is to become the largest cell in the map and dominate the leaderboard.
-
The basic gameplay of Agar io
-
The gameplay of Agar io is very simple and intuitive. You can use your mouse to move your cell around the map, and use the spacebar to split your cell into two smaller cells, which can help you escape from predators or catch prey. You can also use the W key to eject some mass from your cell, which can be used to feed other cells, either as an act of kindness or as a bait. You can also interact with various objects on the map, such as viruses, which can split larger cells into smaller pieces, or pellets, which are small food particles that can increase your mass.
-
The popularity and challenges of Agar io
-
Agar io quickly became one of the most popular online games in 2015, attracting millions of players from all over the world. The game is praised for its simplicity, accessibility, and addictiveness, as well as its social aspect, as players can chat with each other and form teams or alliances. However, the game also poses some challenges for players, such as lagging, hacking, teaming, or trolling, which can affect the fairness and enjoyment of the game. Moreover, some players may find the game too repetitive or boring after a while, as there is no end goal or progression system in the game.
-
What is a mod macro?
-
A mod macro is a modification or extension that adds new features or functions to a game or software. A mod macro can enhance the performance, functionality, or appearance of a game or software, as well as provide some advantages or conveniences for the user. A mod macro can be created by the original developer or by third-party developers or users.
-
The definition and benefits of a mod macro
-
A mod macro for Agar io is a user script that modifies or extends the original game code to provide new features or functions for the player. A mod macro can offer various benefits for Agar io players, such as:
-
-
Zooming in or out of the map to see more or less details
-
Ejecting mass faster or slower with different keys
-
Splitting into multiple cells with one key
Changing the skin or color of your cell
-
Showing the coordinates, mass, or speed of your cell
-
Showing the leaderboard, chat, or statistics of the game
-
Using bots or scripts to automate some actions or movements
-
-
A mod macro can make Agar io more fun, easy, or challenging, depending on your preference and play style. However, a mod macro can also be considered as a cheat or a hack by some players or developers, as it can give you an unfair advantage over other players who do not use a mod macro. Therefore, you should be careful and respectful when using a mod macro, and avoid using it in servers or modes that prohibit it.
-
-
The types and features of mod macros for Agar io
-
There are many types and features of mod macros for Agar io, each with different functions and purposes. Some of the most popular and widely used mod macros for Agar io are:
- Zoom in or out with the mouse wheel - Eject mass with E, R, T, P, or Q keys - Split with A, S, D, F, G, H, J, K, L, Z, X, C, V, B keys - Change skin with W key - Show mass and speed with M key - Show coordinates with C key - Show leaderboard with L key - Show chat with Enter key - Show statistics with S key - Use bots with B key
- Zoom in or out with the mouse wheel - Eject mass faster with E key - Split into 16 cells with Z key - Change skin with W key - Show mass and speed with M key - Show coordinates with C key - Show leaderboard with L key - Show chat with Enter key - Show statistics with S key - Use bots with B key
- Zoom in or out with the mouse wheel - Eject mass faster with E key - Split into 16 cells with Z key - Change skin with W key - Show mass and speed with M key - Show coordinates with C key - Show leaderboard with L key - Show chat with Enter key - Show statistics with S key - Use scripts to customize the game interface and functions
- Zoom in or out with the mouse wheel - Eject mass faster with E key - Split into 16 cells with Z key - Change skin with W key - Show mass and speed with M key - Show coordinates with C key - Show leaderboard with L key - Show chat with Enter key - Show statistics with S key - Use bots to play for you or help you
-
-
-
How to download and install Agar io mod macro?
-
If you want to download and install a mod macro for Agar io, you will need some tools and steps to do it. Here are the general sources and requirements for Agar io mod macro:
-
The sources and requirements for Agar io mod macro
-
To download and install a mod macro for Agar io, you will need the following sources and requirements:
-
-
A web browser that supports user scripts, such as Chrome, Firefox, Opera, or Safari.
-
A user script manager extension for your web browser, such as Tampermonkey, Greasemonkey, Violentmonkey, or NinjaKit.
-
A mod macro user script for Agar io from a reliable and safe website, such as Greasy Fork, OpenUserJS, or GitHub.
-
An internet connection and an Agar io account.
-
-
The steps and tips for downloading and installing Agar io mod macro
-
To download and install a mod macro for Agar io, you can follow these steps and tips:
Install a user script manager extension for your web browser, such as Tampermonkey, Greasemonkey, Violentmonkey, or NinjaKit.
-
After installing the user script manager extension, go to the website of the mod macro user script that you want to use. For example, if you want to use Agar Tool, go to https://greasyfork.org/en/scripts/370575-agar-tool and click on the "Install this script" button.
-
After installing the mod macro user script, go to the Agar io website at https://agar.io/ and log in with your account. You should see a new menu or interface on the game screen that indicates that the mod macro is working.
-
You can now customize and use the mod macro features and functions according to your preference and play style. You can also enable or disable the mod macro by clicking on the user script manager icon on your web browser and toggling the switch next to the mod macro name.
-
-
Some tips for downloading and installing Agar io mod macro are:
-
-
Make sure that you download and install a mod macro from a trusted and updated source, as some mod macros may contain viruses, malware, or outdated code that can harm your device or game account.
-
Make sure that you read and follow the instructions and requirements of the mod macro carefully, as some mod macros may have different or additional steps or tools for installation or usage.
-
Make sure that you respect the rules and policies of Agar io and other players, as some mod macros may be banned or frowned upon by the game developer or community. Do not use a mod macro to cheat, hack, or harass other players, as this can ruin the game experience for everyone and get you banned or reported.
-
-
How to use Agar io mod macro effectively?
-
After downloading and installing a mod macro for Agar io, you may wonder how to use it effectively to enhance your gameplay experience. Here are some common and advanced commands and functions of Agar io mod macro, as well as some best practices and strategies for using it.
-
The common and advanced commands and functions of Agar io mod macro
-
The common and advanced commands and functions of Agar io mod macro vary depending on the type and feature of the mod macro that you use. However, some of the most common and useful commands and functions are:
-
-
Zooming in or out of the map: This can help you see more or less details of the map, such as the location of other cells, viruses, or pellets. You can use this to plan your movements, avoid dangers, or find opportunities. You can usually zoom in or out with the mouse wheel or by pressing a key.
-
Ejecting mass faster or slower: This can help you control the amount of mass that you eject from your cell, which can be used for various purposes, such as feeding other cells, baiting other cells, or escaping from other cells. You can usually eject mass faster or slower with different keys, such as E, R, T, P, or Q.
-
Splitting into multiple cells: This can help you split your cell into more than two smaller cells, which can be used for various purposes, such as catching other cells, dodging other cells, or spreading your mass. You can usually split into multiple cells with one key, such as A, S, D, F, G, H, J, K, L, Z, X, C, V, B.
-
Changing the skin or color of your cell: This can help you change the appearance of your cell, which can be used for various purposes , such as expressing your personality, showing your affiliation, or disguising your identity. You can usually change the skin or color of your cell with the W key or by selecting a skin from the menu.
-
Showing the coordinates, mass, or speed of your cell: This can help you see the exact position, size, or velocity of your cell, which can be used for various purposes, such as navigating the map, measuring your growth, or adjusting your movement. You can usually show the coordinates, mass, or speed of your cell with the M or C keys or by enabling an option from the menu.
-
Showing the leaderboard, chat, or statistics of the game: This can help you see the ranking, communication, or performance of yourself and other players, which can be used for various purposes, such as competing, socializing, or improving. You can usually show the leaderboard, chat, or statistics of the game with the L, Enter, or S keys or by enabling an option from the menu.
-
Using bots or scripts to automate some actions or movements: This can help you use artificial intelligence or code to perform some tasks or behaviors for you or assist you in the game, which can be used for various purposes, such as playing when you are away, helping you when you are stuck, or testing some strategies. You can usually use bots or scripts with the B key or by installing a script from a website.
-
-
The best practices and strategies for using Agar io mod macro
-
The best practices and strategies for using Agar io mod macro depend on your personal preference and play style. However, some of the general tips and advice are:
-
-
Use a mod macro that suits your needs and goals: There are many types and features of mod macros for Agar io, but not all of them may be useful or enjoyable for you. You should choose a mod macro that offers the features and functions that you want and need in the game, and avoid using a mod macro that has unnecessary or unwanted features and functions.
-
Use a mod macro that is compatible and safe: There are many sources and websites that offer mod macros for Agar io, but not all of them may be reliable or secure. You should download and install a mod macro that is compatible with your web browser and user script manager extension, and avoid downloading and installing a mod macro that may contain viruses, malware, or outdated code.
-
Use a mod macro that is respectful and ethical: There are many benefits and advantages that a mod macro can provide for Agar io players, but not all of them may be fair or acceptable. You should use a mod macro that is respectful and ethical to other players and the game developer, and avoid using a mod macro that may be banned or frowned upon by the game rules and policies.
-
Use a mod macro that is fun and challenging: There are many features and functions that a mod macro can offer for Agar io players, but not all of them may be fun or challenging. You should use a mod macro that is fun and challenging to enhance your gameplay experience, and avoid using a mod macro that may make the game too easy or boring.
-
-
Conclusion
-
Agar io is a simple but addictive online multiplayer game where you control a cell and try to eat other cells to grow bigger. However, if you want to have more features and advantages in the game, you can also download and install a mod macro for Agar io, which can enhance your performance, functionality, or appearance in the game. In this article, we explained what Agar io is, what a mod macro is , how to download and install it, and how to use it effectively. We hope that this article was helpful and informative for you, and that you will enjoy Agar io with a new level of fun and excitement. If you have any questions or feedback, please feel free to leave a comment below. Happy gaming!
-
FAQs
-
Here are some frequently asked questions and answers about Agar io mod macro:
-
-
Q: Is Agar io mod macro legal or illegal? A: Agar io mod macro is not illegal, but it may be against the game rules or policies. You should check the terms of service and privacy policy of Agar io before using a mod macro, and respect the rights and wishes of the game developer and other players.
-
Q: Is Agar io mod macro safe or risky? A: Agar io mod macro is not risky, but it may be unsafe. You should download and install a mod macro from a trusted and updated source, and avoid downloading and installing a mod macro that may contain viruses, malware, or outdated code. You should also scan your device and game account regularly for any potential threats or issues.
-
Q: Is Agar io mod macro free or paid? A: Agar io mod macro is usually free, but it may be paid. You should check the price and payment method of the mod macro before downloading and installing it, and avoid downloading and installing a mod macro that may charge you without your consent or knowledge. You should also support the original game developer by purchasing the game or in-game items if you can.
-
Q: Is Agar io mod macro easy or hard? A: Agar io mod macro is usually easy, but it may be hard. You should follow the instructions and requirements of the mod macro carefully, and avoid skipping or missing any steps or tools for installation or usage. You should also practice and experiment with the mod macro features and functions until you master them.
-
Q: Is Agar io mod macro fun or boring? A: Agar io mod macro is usually fun, but it may be boring. You should choose a mod macro that suits your needs and goals, and avoid using a mod macro that has unnecessary or unwanted features and functions. You should also use a mod macro that is fun and challenging, and avoid using a mod macro that may make the game too easy or boring.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/CarX Drift Racing 2 How to Master the Art of Tandem Drifting.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/CarX Drift Racing 2 How to Master the Art of Tandem Drifting.md
deleted file mode 100644
index 3152bec9173eccec1726be13d5d77416cc0ee424..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/CarX Drift Racing 2 How to Master the Art of Tandem Drifting.md
+++ /dev/null
@@ -1,87 +0,0 @@
-
-
CarX Drift Racing 2: A Review of the Best Drift Racing Game
-
If you are a fan of drift racing, you might have heard of CarX Drift Racing 2, the sequel to one of the most popular drift games in the world. This game offers a realistic experience of driving real sports cars on one of the many race tracks available throughout the game. In this article, we will review CarX Drift Racing 2 and tell you why you should play it, what features it has, and what its pros and cons are.
CarX Drift Racing 2 is a mobile game developed by CarX Technologies, a company that specializes in creating realistic car physics and graphics for games. It is the second installment of the CarX Drift Racing series, which has over 100 million fans around the world. The game was released in December 2018 for Android and iOS devices, and has since received many updates and improvements.
-
Why should you play CarX Drift Racing 2?
-
CarX Drift Racing 2 is not just another racing game. It is a game that lets you experience the thrill and excitement of drifting, a driving technique where the driver intentionally oversteers the car to make it slide sideways. Drifting requires skill, precision, and practice, and CarX Drift Racing 2 gives you the opportunity to master it. You can compete against real people in online championships, race in tandems with other players, customize your car and track, and enjoy the realistic graphics and physics of the game. Whether you are a beginner or a pro, CarX Drift Racing 2 will challenge you and keep you entertained for hours.
-
Features of CarX Drift Racing 2
-
Online Rooms
-
This is the game mode that you have been waiting for. You can now drift in real time with your friends or other players from around the world. You can create or join an online room, pick a location, drift, and earn points. You can also watch other players drift using the drone camera. You can earn valuable rewards for achieving different ranks in online rooms.
-
Visual Auto Tuning
-
This feature allows you to customize your car's appearance to suit your style and preferences. You can replace mirrors, lights, running boards, bumpers, and many other parts. You can also create a unique image of your car with body kits, rims, vinyls, and more. The possibilities are endless.
-
-
Improved Performance Tuning
-
This feature allows you to fine-tune your car's performance to match your driving skills and needs. You can adjust your suspension, springs, tyre pressure, wheel angle, engine, turbine pressure, gear box, brakes, locking differential, and more. You can show some quality drift only if you have your car fine-tuned to your needs.
-
The Most True to Life Racing on a Mobile Platform
-
This feature makes CarX Drift Racing 2 stand out from other racing games. The game has improved steering control that is perfect for quick side changes, driving backwards, and drift donuts, and it models how tyre pressure affects driving physics. The developers ran a number of field tests with real drift cars to collect data and improve the game physics. There are also realistic sound effects that make you feel like you are driving a real car: you can hear the engine, turbo, tyres, and exhaust.
-
XDS Mode
-
This feature allows you to enjoy tandem drifting with artificial intelligence. You can select from different modes of difficulty and learn how to drift from the best drivers. You can also improve your own skills by following the leader or leading the follower. You can earn coins and reputation points by performing well in XDS mode.
-
Top-32 Mode
-
This feature allows you to compete in the world championships of drift racing. You can qualify for the Top-32 list of the best drivers from all over the world. You can then challenge them in head-to-head battles and prove your skills. You can win trophies and prizes by advancing in the Top-32 mode.
-
Multiplayer Mode
-
This feature allows you to race against other players in real time. You can join a random race or create your own lobby. You can choose from different modes such as Classic, Time Attack, or Drift Race. You can also chat with other players and make friends. You can earn coins and reputation points by winning races in multiplayer mode.
-
Pros and Cons of CarX Drift Racing 2
-
Pros
-
Realistic graphics and physics
-
The game has stunning graphics that make you feel like you are in a real race track. The game also has realistic physics that simulate the behaviour of real cars and tyres. The game is a feast for your eyes and ears.
-
Customizable cars and tracks
-
The game has a wide range of cars and tracks that you can choose from. You can also customize your car's appearance and performance to suit your style and preferences. You can create your own unique car and track with the visual auto tuning and track editor features.
-
Challenging and fun gameplay
-
The game has various game modes that offer different levels of challenge and fun. You can drift solo or with other players, race against time or opponents, or compete in championships. The game also has a dynamic scoring system that rewards you for your style, skill, and speed.
-
Cons
-
High battery consumption
-
The game has high-quality graphics and physics that require a lot of processing power from your device. This means that the game drains your battery faster than other games. You might need to charge your device more often or lower the graphics settings to save battery life.
-
In-app purchases and ads
-
The game is free to download and play, but it also has in-app purchases and ads that might affect your gaming experience. You might need to spend real money to unlock some cars, tracks, or features, or watch ads to earn some coins or bonuses. You can also disable the ads by purchasing the premium version of the game.
-
Steep learning curve
-
The game is not easy to master, especially for beginners. Drifting requires a lot of practice and patience, and the game does not have a tutorial or a guide to help you learn the basics. You might need to watch some videos or read some tips online to improve your skills.
-
Conclusion
-
Summary of the main points
-
In conclusion, CarX Drift Racing 2 is a drift racing game that offers an unprecedented and realistic experience of driving real sports cars on one of many race tracks available throughout the game. The game has many features such as online rooms, visual auto tuning, improved performance tuning, XDS mode, Top-32 mode, multiplayer mode, realistic graphics and physics, customizable cars and tracks, challenging and fun gameplay, etc. The game also has some drawbacks such as high battery consumption, in-app purchases and ads, steep learning curve, etc.
-
Recommendation and rating
-
We recommend CarX Drift Racing 2 to anyone who loves drift racing or wants to try something new and exciting. The game is suitable for both beginners and pros, as it offers different levels of difficulty and challenge. The game is also free to download and play, so you have nothing to lose by giving it a try. We rate CarX Drift Racing 2 4.5 out of 5 stars for its amazing graphics, physics, gameplay, features, etc.
FAQs
-
Q: How do I download CarX Drift Racing 2? A: You can download CarX Drift Racing 2 from Google Play Store for Android devices or App Store for iOS devices.
-
Q: How do I control my car in CarX Drift Racing 2? A: You can control your car using different options such as tilt, buttons, or steering wheel. You can also adjust the sensitivity and position of the controls in the settings menu.
-
Q: How do I earn coins and reputation points in CarX Drift Racing 2? A: You can earn coins and reputation points by drifting, racing, and competing in different game modes. You can also watch ads or complete offers to get some extra coins or bonuses.
-
Q: How do I unlock new cars and tracks in CarX Drift Racing 2? A: You can unlock new cars and tracks by spending coins or real money, or by achieving certain ranks and completing certain tasks in the game.
-
Q: How do I customize my car and track in CarX Drift Racing 2? A: You can customize your car and track with the visual auto tuning and track editor features, changing the appearance and performance of your car and building your own unique track with different objects and settings.
-
Q: How do I improve my skills in CarX Drift Racing 2? A: Practice and learn from other players, use the XDS mode to drift with artificial intelligence, or watch videos and read tips online for advice and tricks.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Carros Rebaixados Online A game that lets you change the color wheels and glass of your car.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Carros Rebaixados Online A game that lets you change the color wheels and glass of your car.md
deleted file mode 100644
index 9002dffcf0ec9af4c731487bc3a4401ce0b66a6d..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Carros Rebaixados Online A game that lets you change the color wheels and glass of your car.md
+++ /dev/null
@@ -1,158 +0,0 @@
-
-
Carros Rebaixados Online APK: A Fun and Customizable Simulation Game
-
If you are a fan of cars and simulation games, you might want to check out Carros Rebaixados Online APK, a game that lets you customize and show off your car to your friends. This game is developed by Sebby Games, a Brazilian studio that specializes in creating realistic and immersive car games. In this game, you can choose from various models of cars, modify them according to your preferences, and drive them around in different scenarios. You can also play online with other players, chat with them, and compete with them. In this article, we will tell you everything you need to know about this game, including how to download and install it, what features it offers, how to play it, what are its pros and cons, how it compares to other similar games, and some tips to improve your experience.
-
How to download and install Carros Rebaixados Online APK on your Android device?
-
Downloading and installing Carros Rebaixados Online APK is quick and straightforward. Just follow these simple steps (a short note after the list covers installing the APK from a computer instead):
Go to [this link](^1^) or [this link](^2^) in your Android device's browser.
-
Tap on the download button and wait for the APK file to be downloaded.
-
Once the download is complete, tap on the file and allow the installation from unknown sources if prompted.
-
Follow the instructions on the screen and wait for the installation to finish.
-
Launch the game from your app drawer or home screen and enjoy!
-
-
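The steps above install the APK directly on the phone. As a side note, if the APK file ends up on a computer instead, it can also be sideloaded over USB with adb. The snippet below is a minimal sketch under those assumptions: adb is installed, USB debugging is enabled on the device, and the filename is a placeholder for whatever the downloaded file is actually called.

```python
# Minimal sideload sketch: calls the real `adb install -r` command from Python.
# The APK filename below is a placeholder, not the game's actual file name.
import subprocess


def sideload_apk(apk_path: str) -> None:
    """Install an APK on a connected Android device via adb."""
    result = subprocess.run(
        ["adb", "install", "-r", apk_path],  # -r replaces an existing install
        capture_output=True,
        text=True,
    )
    print(result.stdout or result.stderr)


if __name__ == "__main__":
    sideload_apk("carros_rebaixados_online.apk")
```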
Features of Carros Rebaixados Online APK
-
Carros Rebaixados Online APK is a game that offers a lot of features for car enthusiasts. Here are some of them:
-
Detailed car models and customization options
-
The game features several models of cars that are completely detailed and realistic. You can customize your car in various ways, such as changing its color, wheels, glass, xenon, neon, speakers, LED, etc. You can also choose the size of the car wheel rim and turn up the bass of the song. You can make your car unique and express your personality through it.
-
First or third person perspective and 360 degrees car interiors
-
The game allows you to drive your car from either a first or a third person perspective. You can switch between them anytime you want. You can also see the car interiors in 360 degrees, which adds to the realism and immersion of the game. You can see every detail of your car's dashboard, seats, steering wheel, etc.
-
Interactive elements and realistic physics
-
The game also features many interactive elements in cars, such as opening car doors, hood, trunk, and windows, turning on the car, turning on the lights, etc. The game also has realistic physics that make the car behave according to its weight, speed, suspension, etc. You can feel the difference between driving on asphalt, dirt, or grass.
-
Day and night mode and camera filters
-
The game also has a day and night mode that changes the lighting and atmosphere of the game. You can drive your car in different times of the day and see how it affects the visibility and mood of the game. You can also use different camera filters to change the color and contrast of the game. You can choose from sepia, black and white, vintage, etc.
-
Music and sound effects
-
The game also has a great soundtrack that features various genres of music, such as rap, funk, pop, rock, etc. You can listen to your favorite songs while driving your car and enjoy the rhythm and vibe of the game. You can also hear realistic sound effects of your car's engine, brakes, horn, etc. The game also supports Bluetooth speakers and headphones for a better audio experience.
-
Multiple wheels, neon, speakers, and LED
-
The game also offers multiple options for wheels, neon, speakers, and LED for your car. You can choose from different types and colors of wheels that suit your car's style and performance. You can also add neon lights to your car's body and wheels to make it glow in the dark. You can also install speakers and LED in your car's trunk to create a party atmosphere.
-
Steering wheel, accelerometer, or arrows control
-
The game also gives you three options for controlling your car: steering wheel, accelerometer, or arrows. You can choose the one that you prefer and that is more comfortable for you. You can also adjust the sensitivity and position of the controls according to your preference.
-
-
Online mode with friends and other players
-
The game also has an online mode that allows you to play with your friends and other players from around the world. You can join or create rooms with up to 10 players and chat with them using text or voice messages. You can also challenge them to races or show off your car's modifications. You can also see their cars' details and stats.
-
Gameplay of Carros Rebaixados Online APK
-
Carros Rebaixados Online APK is a game that is easy to play but hard to master. Here are some tips on how to play the game:
-
How to start and play the game?
-
To start the game, you need to choose a car model from the garage. You can see the details and stats of each car before choosing it. You can also modify your car in the garage by tapping on the wrench icon. Once you are ready, you can tap on the play button to enter the game world. You can choose from different scenarios, such as city, beach, farm, etc. You can also choose whether you want to play offline or online.
-
How to modify and show off your car?
-
To modify your car, you need to tap on the wrench icon in the garage or in the game world. You can then access various options for customization, such as color, wheels, glass, xenon, neon, speakers, LED, etc. You can also adjust the size of the wheel rim and the bass of the song. You can see the changes in real time and preview them before applying them. To show off your car, you can drive it around in the game world and interact with other cars and objects. You can also use the camera icon to take screenshots or videos of your car and share them with your friends or on social media.
-
How to interact with other cars and objects?
-
To interact with other cars and objects, you need to tap on the hand icon in the game world. You can then access various options for interaction, such as opening car doors, hood, trunk, and windows, turning on the car, turning on the lights, honking the horn, etc. You can also use the chat icon to communicate with other players using text or voice messages. You can also use the emoji icon to express your emotions or reactions.
-
How to switch between modes and perspectives?
-
To switch between modes and perspectives, you need to tap on the gear icon in the game world. You can then access various options for settings, such as day and night mode, camera filters, sound and music volume, language, etc. You can also switch between first or third person perspective by tapping on the eye icon. You can also switch between steering wheel, accelerometer, or arrows control by tapping on the controller icon.
-
Review of Carros Rebaixados Online APK
-
Carros Rebaixados Online APK is a game that has received a lot of positive feedback from its users. Here are some of its pros and cons, ratings and reviews, and comparison with other similar games:
-
Pros and cons of Carros Rebaixados Online APK
-
The game has many pros, such as:
-
-
It has realistic and detailed graphics and physics.
-
It has a lot of customization options for cars.
-
It has an online mode with chat and voice messages.
-
It has a great soundtrack and sound effects.
-
It has a simple and intuitive interface and controls.
-
-
The game also has some cons, such as:
-
-
It may have some bugs and glitches.
-
It may consume a lot of battery and data.
-
It may have some ads and in-app purchases.
-
It may not be compatible with some devices or regions.
-
It may not have a lot of variety in scenarios or cars.
-
-
Ratings and reviews of Carros Rebaixados Online APK on Google Play Store
-
The game has a rating of 4.4 out of 5 stars on Google Play Store based on more than 100 thousand reviews. Here are some of the reviews from the users:
-
-
-
| User | Rating | Review |
| --- | --- | --- |
| Lucas Santos | 5 stars | "This game is very good, I recommend it to everyone who likes cars and simulation games. The graphics are amazing, the cars are very realistic, and the online mode is very fun. I love this game!" |
| Maria Silva | 4 stars | "I like this game a lot, it is very entertaining and addictive. The only thing I don't like is that it has too many ads and it consumes a lot of battery. But other than that, it is a great game." |
| Pedro Oliveira | 3 stars | "The game is good, but it could be better. It needs more scenarios, more cars, more customization options, more interaction options, etc. It also has some bugs and glitches that need to be fixed." |
-
-
-
What is the song Saudades Mil about?
-
Saudades Mil is a Portuguese expression that means "a thousand sorrows" or "a thousand longings". It is often used to express nostalgia, sadness, or missing someone or something. The song Saudades Mil by 509-E is a letter from a prisoner to his friend, who is also in jail. The prisoner tells his friend about his life, his memories, his regrets, and his hopes. He also expresses his sorrow for losing his wife, his friend's husband, and another inmate. He ends the letter by saying that he will see his friend soon, when they both get out of prison.
-
The story behind the song
-
The song Saudades Mil was released in 1999 as part of the album Provérbios 13 by 509-E. The group name stands for "5th floor, cell number 9, east wing", which was where the two members of the group, Dexter and Afro-X, were incarcerated in Carandiru Penitentiary in São Paulo. They started making rap music in prison as a way to cope with their situation and to denounce the injustices and violence they faced. They recorded their songs using a cassette recorder and smuggled them out of prison with the help of other inmates and visitors. Their songs became popular in the underground rap scene and eventually reached mainstream audiences.
-
The meaning of the lyrics
-
The lyrics of Saudades Mil are written in a mix of Portuguese and slang, which reflects the culture and reality of the Brazilian urban poor. The lyrics are full of references to places, people, events, and expressions that are familiar to those who live in the favelas (slums) or in prison. Some examples are:
-
-
Diadema: A city in the metropolitan area of São Paulo, where Dexter was born and raised.
-
Laisla: Dexter's daughter, who was born when he was already in prison.
-
Amarildo: Afro-X's brother-in-law, who was killed by rival gang members.
-
Jorge: Jorge Ben Jor, a famous Brazilian singer-songwriter, who wrote a song called Charles Anjo 45, about a criminal who escapes from prison.
-
Charles: A reference to Charles Anjo 45, as well as to Charles Bronson, an American actor who starred in movies about vigilantes and outlaws.
-
-
The lyrics also convey a range of emotions, such as anger, sadness, frustration, hope, love, and gratitude. The prisoner expresses his anger at the system that put him in jail, his sadness for losing his loved ones, his frustration for wasting his life, his hope for getting out of prison and starting over, his love for his daughter and his friends, and his gratitude for receiving a letter from his friend.
-
The impact of the song
-
The song Saudades Mil had a huge
impact on the Brazilian rap scene and society. It was one of the first songs to expose the harsh reality of life in prison and the social problems that lead to crime and violence. It also showed the potential of rap as a form of artistic expression and social criticism. The song inspired many other rap artists to tell their stories and to use rap as a tool for education and empowerment. The song also raised awareness and sympathy among the public and the authorities for the situation of prisoners and their families. The song was praised by critics and fans alike for its authenticity, creativity, and emotion.
-
Who are 509-E and what is their style?
-
509-E is a Brazilian rap group formed by Dexter and Afro-X in 1998, while they were serving time in Carandiru Penitentiary. They are considered one of the pioneers and most influential groups of Brazilian rap music.
-
-
The origin and history of 509-E
-
Dexter and Afro-X were both born and raised in poor neighborhoods of São Paulo, where they were exposed to crime, violence, drugs, and racism. They both started rapping at a young age, influenced by American rap artists such as Public Enemy, N.W.A., and Tupac Shakur. They also joined gangs and got involved in criminal activities, which led them to prison. Dexter was arrested for robbery and Afro-X for drug trafficking. They met in prison and decided to form a rap group, using their cell number as their name. They wrote songs about their experiences, their opinions, their dreams, and their struggles. They recorded their songs using a cassette recorder and smuggled them out of prison with the help of other inmates and visitors. They released their first album, Provérbios 13, in 1999, which included the song Saudades Mil. The album was a success and earned them recognition and respect in the rap scene. They continued to make music while in prison, releasing two more albums: MMII DC (2002) (2002 AD) and É Nóis Que Tá (2006) (It's Us Who Are Here). They also participated in several rap festivals and events, such as Hutúz Rap Festival, Rap é Compromisso (Rap is Commitment), and Hip Hop Manifesto. They were released from prison in 2007 and 2008, respectively, after serving their sentences. They resumed their musical careers, both as solo artists and as a group. They also engaged in social projects and initiatives, such as Rap na Escola (Rap in School), Rap na Quebrada (Rap in the Hood), Rap na Febem (Rap in the Juvenile Detention Center), Rap na Cadeia (Rap in Prison), Rap na Rua (Rap on the Street), Rap na Igreja (Rap in Church), Rap na Paz (Rap for Peace), Rap na Vida (Rap for Life), Rap na Luta (Rap for Struggle), Rap na Arte (Rap for Art), Rap na Cultura (Rap for Culture), Rap na História (Rap for History), Rap na Educação (Rap for Education), Rap na Consciência (Rap for Consciousness), Rap na Liberdade (Rap for Freedom), Rap na Esperança (Rap for Hope), Rap na Fé (Rap for Faith), Rap na União (Rap for Unity), Rap na Diversidade (Rap for Diversity), Rap na Resistência (Rap for Resistance), Rap na Transformação (Rap for Transformation), Rap na Revolução (Rap for Revolution), Rap no Amor (Rap for Love), Rap no Respeito (Rap for Respect), Rap no Perdão (Rap for Forgiveness), Rap no Reconhecimento (Rap for Recognition), Rap no Sucesso (Rap for Success), Rap no Futuro (Rap for Future).
-
The influences and inspirations of 509-E
-
509-E is influenced by various musical genres, such as funk, soul, reggae, rock, samba, bossa nova, MPB (Musica Popular Brasileira), and gospel. They are also inspired by various rap artists, such as Racionais MC's, Sabotage, Facção Central, MV Bill, GOG, RZO, Thaíde e DJ Hum, SNJ, Rappin' Hood, Emicida, Criolo, Projota, Rashid, and many others. They also draw inspiration from other sources, such as literature, cinema, philosophy, religion, politics, history, and culture. Some of their references are Machado de Assis, Paulo Freire, Malcolm X, Martin Luther King Jr., Nelson Mandela, Che Guevara, Bob Marley, Jesus Christ, Buddha, Gandhi, Zumbi dos Palmares, Dandara dos Palmares, Chico Mendes, Carlos Marighella, Carlos Drummond de Andrade, Clarice Lispector, Fernando Pessoa, Luís de Camões, Jorge Amado, Gabriel García Márquez, Pablo Neruda, Mario Vargas Llosa, Gabriel O Pensador, Cidade de Deus (City of God), Tropa de Elite (Elite Squad), Carandiru (Carandiru), Pixote (Pixote), O Auto da Compadecida (A Dog's Will), O Pagador de Promessas (The Given Word), O Quatrilho (The Quatrilho), Central do Brasil (Central Station), O Som ao Redor (Neighboring Sounds), Bacurau (Bacurau), Aquarius (Aquarius), Sócrates (Socrates), Platão (Plato), Aristóteles (Aristotle), Descartes (Descartes), Kant (Kant), Hegel (Hegel), Marx (Marx), Nietzsche (Nietzsche), Sartre (Sartre), Foucault (Foucault), Derrida (Derrida), Deleuze (Deleuze), Baudrillard (Baudrillard), Bauman (Bauman), Freire (Freire), Gramsci (Gramsci), Fanon (Fanon), Said (Said), Spivak (Spivak), Bhabha (Bhabha), Hall (Hall), Butler (Butler), hooks (hooks), Lorde (Lorde), Davis (Davis), Anzaldúa (Anzaldúa), Moraga (Moraga), Crenshaw (Crenshaw), and many others.
-
The themes and messages of 509-E
-
509-E is known for addressing various themes and messages in their songs, such as social injustice, racism, violence, poverty, prison, drugs, corruption, education, culture, identity, spirituality, hope, love, friendship, family, and freedom. They use rap as a way to express their feelings, opinions, experiences, and visions. They also use rap as a way to educate, inform, inspire, and empower their listeners. They aim to raise awareness and consciousness about the problems and challenges that affect their communities and society. They also aim to promote positive values and attitudes, such as respect, solidarity, dignity, courage, resilience, creativity, and peace. They believe that rap can be a force for change and transformation.
-
How to download the song Saudades Mil for free?
-
If you want to download the song Saudades Mil by 509-E for free, you need to be aware of some legal and ethical issues. You also need to know the best sites and apps to download music. And you need to follow some simple steps to download the song.
-
The legal and ethical issues of downloading music
-
Downloading music for free can be considered illegal and unethical in some cases. This is because it can violate the intellectual property rights of the artists and the music industry. Intellectual property rights are the legal rights that protect the creations and inventions of individuals and organizations. They include copyrights, trademarks, patents, and trade secrets. By downloading music for free, you can be infringing on these rights and causing harm to the creators and owners of the music. You can also be exposing yourself to legal risks and penalties.
-
However, downloading music for free can also be considered legal and ethical in some cases. This is because it can fall under the exceptions and limitations of intellectual property rights. These are the situations where the use of protected works is allowed without permission or payment. They include fair use, fair dealing, public domain, creative commons, and copyleft. By downloading music for free under these situations, you can be respecting the rights of the artists and the music industry. You can also be supporting the culture and the public interest.
-
Therefore, before downloading music for free, you should check the legal status and ethical implications of your actions. You should also respect the wishes and interests of the artists and the music industry. You should also acknowledge and credit the sources of the music you download.
-
The best sites and apps to download music
-
There are many sites and apps that allow you to download music for free. However, not all of them are safe, reliable, or legal, and some of them may contain viruses, malware, or spyware.
Three sites that are safe and legal for downloading music are Bandcamp, Jamendo Music, and Internet Archive. These sites offer free music downloads under Creative Commons licenses or in the public domain, and they have a variety of genres, artists, and songs to choose from. Here is a brief description of each site and how to download Saudades Mil from them.
-
Bandcamp
-
Bandcamp is a site that allows artists to upload their music and set their own prices. You can browse by genre, tag, location, or popularity. You can also stream music online or download it as MP3, FLAC, ALAC, AAC, Ogg Vorbis, WAV, or AIFF files. To download Saudades Mil from Bandcamp, you need to follow these steps:
-
-
Go to the Bandcamp homepage and type "Saudades Mil" in the search box.
-
Select the song by 509-E from the results.
-
Click on the "Buy Digital Track" button.
-
Enter "0" in the name your price field and click on "Download to your computer".
-
Choose your preferred format and click on "Download".
-
Save the file to your device and enjoy.
-
-
Jamendo Music
-
Jamendo Music is a site that offers free music downloads under Creative Commons licenses. You can discover new music by browsing through curated playlists, genres, moods, or trending songs. You can also stream music online or download it as MP3 files. To download Saudades Mil from Jamendo Music, you need to follow these steps:
-
-
Go to the Jamendo Music homepage and type "Saudades Mil" in the search box.
-
Select the song by 509-E from the results.
-
Click on the "Download" button below the song title.
-
Create a free account or log in with your existing account.
-
Choose your preferred quality and click on "Download".
-
Save the file to your device and enjoy.
-
-
Internet Archive
-
Internet Archive is a site that offers free access to millions of digital files, including music, audio, podcasts, radio programs, and more. You can search by keyword, collection, creator, date, language, or media type. You can also stream music online or download it as MP3, OGG Vorbis, FLAC, or other formats. To download Saudades Mil from Internet Archive, you need to follow these steps:
-
-
Go to the Internet Archive homepage and type "Saudades Mil" in the search box.
-
Select the song by 509-E from the results.
-
Click on the "VBR MP3" link under the Download Options section.
-
Save the file to your device and enjoy; a short script after this list shows how this download-and-save step could be automated.
-
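All three walkthroughs above end the same way: you have a direct download link and save the file to your device. As a rough illustration of that last step, here is a minimal Python sketch using the requests library. The URL and filename are placeholders rather than real links from this article, and it should only be pointed at music you are legally allowed to download.

```python
# Minimal file download sketch; the URL below is a placeholder, not a real link.
import requests


def download_file(url: str, filename: str) -> None:
    """Fetch a file over HTTP and write it to disk."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()  # stop on HTTP errors such as 403 or 404
    with open(filename, "wb") as f:
        f.write(response.content)


if __name__ == "__main__":
    download_file("https://example.org/path/to/track.mp3", "saudades_mil.mp3")
```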
-
Conclusion
-
In this article, we have learned about the song Saudades Mil by 509-E, one of the most influential rap groups in Brazil. We have explored what this song is about, who are the artists behind it, and how you can download it for free. We have also learned about some legal and ethical issues of downloading music, as well as some of the best sites and apps to do so. We hope you have enjoyed this article and found it useful. If you want to learn more about Brazilian rap music or 509-E, you can check out these links:
-
-
[The History of Brazilian Rap Music]
-
[509-E Official Website]
-
[509-E YouTube Channel]
-
-
Thank you for reading this article. If you liked it, please share it with your friends and leave a comment below. We would love to hear your feedback and suggestions. And don't forget to check out our other articles on rap music and culture.
-
Frequently Asked Questions
-
Here are some of the most common questions that people ask about Saudades Mil and 509-E:
-
-
What does 509-E mean?
-
509-E is the name of a Brazilian rap group formed by Dexter and Afro-X in 1998. The name stands for "5th floor, cell number 9, east wing", which was where they were incarcerated in Carandiru Penitentiary in São Paulo.
-
What does Saudades Mil mean?
-
Saudades Mil is a Portuguese expression that means "a thousand sorrows" or "a thousand longings". It is often used to express nostalgia, sadness, or missing someone or something. The song Saudades Mil by 509-E is a letter from a prisoner to his friend, who is also in jail.
How can I listen to Saudades Mil online?
-
You can listen to Saudades Mil online by streaming it on various platforms, such as YouTube, Spotify, Apple Music, Deezer, or SoundCloud. You can also watch the official video of the song on YouTube.
-
Is Saudades Mil based on a true story?
-
Yes, Saudades Mil is based on a true story. The song is a letter from Dexter to Afro-X, who were both imprisoned in Carandiru Penitentiary in São Paulo. The song tells the story of their lives, their memories, their regrets, and their hopes. The song also mentions real people and events that happened to them or around them.
-
What are some other songs by 509-E that I should listen to?
-
Some other songs by 509-E that you should listen to are:
-
-
Oitavo Anjo (Eighth Angel)
-
Milagre (Miracle)
-
Só Os Fortes (Only The Strong)
-
Depois da Meia Noite (After Midnight)
-
Saudosa Maloca (Nostalgic Shack)
-
-
What are some other Brazilian rap artists that I should listen to?
-
Some other Brazilian rap artists that you should listen to are:
-
-
Racionais MC's
-
Sabotage
-
Facção Central
-
MV Bill
-
GOG
-
RZO
-
Thaíde e DJ Hum
-
SNJ
-
Rappin' Hood
-
Emicida
-
Criolo
-
Projota
-
Rashid
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Ada Ehi - The Final Say Download Mp3 and Lyrics.md b/spaces/1phancelerku/anime-remove-background/Ada Ehi - The Final Say Download Mp3 and Lyrics.md
deleted file mode 100644
index b014a9181f066e5468c1270950f1dc443618d976..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Ada Ehi - The Final Say Download Mp3 and Lyrics.md
+++ /dev/null
@@ -1,134 +0,0 @@
-
-
Download The Final Say by Ada Mp3
-
If you are looking for a powerful and uplifting gospel song to inspire your faith and remind you of God's love, then you should download The Final Say by Ada mp3. This song is one of the tracks from ADA's EP (Vol.1), a collection of five amazing songs by the Nigerian gospel singer and songwriter Ada Ehi. In this article, we will tell you what this song is about, why you should download it, and how to do it easily and safely.
The Final Say by Ada is a gospel song that celebrates the sovereignty and supremacy of Jesus Christ over every situation. It declares that Jesus has the final say in everything, and that nothing can stop His plans and purposes for His children. It also expresses gratitude and praise to God for His love, grace, and power.
-
The message of the song
-
The message of the song is based on the biblical truth that God is in control of everything, and that He works all things together for good for those who love Him and are called according to His purpose (Romans 8:28). It encourages believers to trust in God's promises and His faithfulness, and to not be afraid or discouraged by the challenges and trials they may face in life. It also reminds them that they are more than conquerors through Christ who loves them (Romans 8:37), and that they have victory over sin, death, and the devil through His blood and resurrection.
-
The lyrics of the song
-
The lyrics of the song are simple yet profound, using repetition and rhyme to create a catchy and memorable tune. Here are some of the lines from the chorus:
-
Jesus, You have the final say
Jesus, You have the final say
You have the final say
No matter what may come my way
You have the final say
-
You can find the full lyrics of the song on [Genius](^4^) or [GospelJingle](^3^).
-
Why you should download The Final Say by Ada mp3
-
There are many reasons why you should download The Final Say by Ada mp3, but here are some of the most important ones:
-
The benefits of listening to gospel music
-
Gospel music is not just entertainment, but also a form of worship and ministry. Listening to gospel music can help you to:
-
-
-
Strengthen your faith and relationship with God
-
Receive comfort, peace, joy, and hope from His presence
-
Learn more about His word and His character
-
Be inspired to live a godly and fruitful life
-
Share the gospel with others through music
-
-
The quality and availability of the mp3 file
-
When you download The Final Say by Ada mp3, you will get a high-quality audio file that you can enjoy on any device. You will also be able to access it anytime and anywhere, without needing an internet connection or a streaming service. You can also create your own playlist or mixtape with other songs by Ada or other gospel artists.
-
How to download The Final Say by Ada mp3
-
Downloading The Final Say by Ada mp3 is very easy and fast, as long as you follow these steps:
-
The steps to follow
-
-
Go to one of the sources that offer the mp3 file for free or for a small fee. We will recommend some of the best sources in the next section.
-
Find the song on the website or app, and click on the download button or link. You may need to sign up or log in to some of the sources before you can download.
-
Choose the format and quality of the mp3 file that you want to download. The higher the quality, the larger the file size. We suggest at least 128 kbps for good sound quality; the short check after these steps shows one way to verify the bitrate of the file you end up with.
-
Wait for the download to complete, and then save the file to your device or cloud storage. You can also transfer the file to other devices using a USB cable, Bluetooth, or Wi-Fi.
-
Enjoy listening to The Final Say by Ada mp3 anytime and anywhere!
-
-
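If you want to confirm that the file you downloaded really has the quality suggested above (at least 128 kbps), one option is to inspect it with the mutagen library. This is a small optional sketch, assuming mutagen is installed (pip install mutagen); the filename is a placeholder for your downloaded file.

```python
# Optional quality check for a downloaded MP3, assuming mutagen is installed.
from mutagen.mp3 import MP3

audio = MP3("the_final_say.mp3")            # placeholder filename
bitrate_kbps = audio.info.bitrate // 1000   # mutagen reports bitrate in bit/s
length_min = audio.info.length / 60         # duration is reported in seconds

print(f"Bitrate: {bitrate_kbps} kbps, length: {length_min:.1f} min")
if bitrate_kbps < 128:
    print("Warning: below the 128 kbps suggested for good sound quality.")
```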
The best sources to download from
-
There are many sources that offer The Final Say by Ada mp3 for download, but not all of them are reliable and safe. Some of them may contain viruses, malware, or spam that can harm your device or compromise your privacy. To avoid these risks, we recommend downloading from these trusted and verified sources:
-
-
-
| Source | Link | Price | Features |
| --- | --- | --- | --- |
| iTunes | | $0.99 | High-quality mp3 file; supports Apple devices; syncs with iCloud; supports Ada's ministry |
| Amazon Music | | $0.99 | High-quality mp3 file; supports various devices; syncs with Amazon account; supports Ada's ministry |
| GospelJingle | | Free | Medium-quality mp3 file; supports various devices; easy and fast download; no sign up required |
| NaijaMusic | | Free | Medium-quality mp3 file; supports various devices; easy and fast download; no sign up required |
-
Conclusion
-
We hope that this article has helped you to learn more about The Final Say by Ada, and how to download it as an mp3 file. This song is a wonderful way to worship God and to declare His lordship over your life. It will also bless you with peace, joy, and hope as you listen to it.
-
Summary of the main points
-
Here are the main points that we covered in this article:
-
-
The Final Say by Ada is a gospel song that celebrates the sovereignty and supremacy of Jesus Christ over every situation.
-
The song has a powerful message, based on the biblical truth that God is in control of everything, and that He works all things together for good for those who love Him.
-
The song has simple yet profound lyrics, using repetition and rhyme to create a catchy and memorable tune.
-
Downloading The Final Say by Ada mp3 has many benefits, such as strengthening your faith, receiving comfort and hope, learning more about God, and supporting Ada's ministry.
-
Downloading The Final Say by Ada mp3 is easy and fast, as long as you follow the steps and use the trusted sources that we recommended.
-
-
Call to action
-
Now that you know how to download The Final Say by Ada mp3, what are you waiting for? Go ahead and download it today, and enjoy listening to this amazing song. You can also share it with your friends and family, and let them know about the goodness and greatness of God. You will not regret it!
-
FAQs
-
Q1: Who is Ada Ehi?
-
A1: Ada Ehi is a Nigerian gospel singer, songwriter, recording and performing artist. She started her musical career at the age of 10 as a backup singer for Tosin Jegede. She later joined the Christ Embassy Church and became a member of the LoveWorld music team. She has released several albums and singles, such as Future Now, Born of God, Only You Jesus, I Testify, and many more. She is also a wife and a mother of two children.
-
Q2: What is ADA's EP (Vol.1)?
-
A2: ADA's EP (Vol.1) is a collection of five songs by Ada Ehi, released in 2019. The songs are The Final Say, Beautiful, See What The Lord Has Done, The Faithful God, and No One Like You. The EP showcases Ada's versatility and creativity as a gospel artist, as well as her passion for God and His people.
-
Q3: How can I watch the official video of The Final Say by Ada?
-
A3: You can watch the official video of The Final Say by Ada on [YouTube] or [Vimeo]. The video features Ada singing and dancing with joy and confidence, surrounded by colorful backgrounds and props. It also has some scenes of people celebrating God's goodness and faithfulness in their lives.
-
Q4: How can I support Ada's ministry?
-
A4: You can support Ada's ministry by downloading her songs, watching her videos, following her on social media, subscribing to her newsletter, attending her concerts and events, praying for her and her family, and giving generously to her projects and causes. You can also share her songs and messages with others, and encourage them to do the same.
-
Q5: Where can I find more songs by Ada?
-
A5: You can find more songs by Ada on her [official website], [Spotify], [Apple Music], [Deezer], [SoundCloud], [Boomplay], [Audiomack], [Napster], [Tidal], or any other music streaming platform. You can also buy her CDs or DVDs from online or offline stores.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download MusicHQ.net The Ultimate Source for Full HD Movies and TV Series Online.md b/spaces/1phancelerku/anime-remove-background/Download MusicHQ.net The Ultimate Source for Full HD Movies and TV Series Online.md
deleted file mode 100644
index a6c092df20c7faf19d8fe658c2fc4be9ea4ecb50..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download MusicHQ.net The Ultimate Source for Full HD Movies and TV Series Online.md
+++ /dev/null
@@ -1,216 +0,0 @@
-
-
Download MusicHQ.net: A Guide to Watch Full HD Movies Online
-
If you are a movie lover, you might have heard of MusicHQ.net, a commercial-free video streaming service that offers full HD movies and TV series online. But did you know that you can also download MusicHQ.net and watch your favorite movies offline? In this article, we will show you what MusicHQ.net is, why you should download it, how to download it, and what are some of the best alternatives to it.
MusicHQ.net is a website that provides free access to thousands of movies and TV shows in various genres and languages. You can watch them online with full subtitles and 1080p quality, or you can download them to your device and watch them anytime, anywhere. MusicHQ.net was created in 2019 and has gained popularity among movie fans around the world.
-
Features of MusicHQ.net
-
Some of the features that make MusicHQ.net stand out from other streaming sites are:
-
-
It has a simple and user-friendly interface that allows you to browse and search for movies easily.
-
It has a large and diverse collection of movies and TV shows, from classics to new releases, from Hollywood to Bollywood, from action to comedy.
-
It updates its content regularly and adds new movies and episodes as soon as they are available.
-
It supports multiple devices, such as computers, smartphones, tablets, smart TVs, etc.
-
It does not require any registration or subscription to use its service.
-
It does not show any annoying ads or pop-ups that interrupt your viewing experience.
-
-
How to access MusicHQ.net?
-
To access MusicHQ.net, you need to have a stable internet connection and a web browser. You can visit the official website of MusicHQ.net at www.musichq.net and start watching or downloading movies for free. However, you should be aware that MusicHQ.net may be blocked or restricted in some countries or regions due to legal issues or copyright infringement. In that case, you may need to use a VPN (virtual private network) service or a proxy server to bypass the geo-restrictions and access MusicHQ.net safely and anonymously.
-
-
Why download MusicHQ.net?
-
While watching movies online on MusicHQ.net is convenient and enjoyable, there are some reasons why you may want to download MusicHQ.net instead. Here are some of them:
-
Benefits of downloading MusicHQ.net
-
-
You can watch movies offline without worrying about internet speed, bandwidth, or data usage.
-
You can save movies on your device and watch them anytime, anywhere, even when you don't have access to the internet.
-
You can share movies with your friends and family without any hassle.
-
You can avoid buffering, lagging, or crashing issues that may occur when streaming movies online.
-
You can have more control over the quality, format, size, and storage of the movies you download.
-
-
Risks of downloading MusicHQ.net
-
-
You may encounter malware, viruses, or spyware that may harm your device or compromise your privacy.
-
You may violate the intellectual property rights of the movie owners or distributors and face legal consequences.
-
You may consume a lot of storage space on your device and slow down its performance.
-
You may not be able to download some movies due to technical issues or copyright restrictions.
-
-
How to download MusicHQ.net?
If you have decided to download MusicHQ.net, you need to follow some simple steps to do it successfully. Here are the instructions:
-
Step-by-step instructions
-
-
Go to the official website of MusicHQ.net at www.musichq.net and find the movie or TV show you want to download.
-
Click on the movie or TV show poster and you will be redirected to a new page with more details and options.
-
Scroll down and look for the download button below the video player. It may have different labels, such as "Download", "Download HD", "Download Full Movie", etc.
-
Click on the download button and you will see a pop-up window with different links and formats to choose from. You can select the quality, size, and format of the movie you want to download, such as 1080p, 720p, MP4, MKV, etc.
-
Click on the link that suits your preference and you will be taken to another page where you can start the download process. You may need to click on another button or link that says "Download Now", "Start Download", "Confirm Download", etc.
-
Wait for the download to finish and enjoy your movie offline.
-
-
Tips and tricks
-
Here are some tips and tricks that can help you download from MusicHQ.net more easily and safely:
-
-
Use a VPN service or a proxy server to access MusicHQ.net if it is blocked or restricted in your country or region.
-
Use an ad-blocker or a pop-up blocker to avoid annoying ads or pop-ups that may interfere with your download.
-
Use a reliable antivirus or anti-malware software to scan the downloaded files and protect your device from any potential threats.
-
Use a download manager or a downloader app to speed up the download, resume it if it is interrupted, and manage it more efficiently.
-
Check the reviews and ratings of the movies or TV shows before downloading them to make sure they are of good quality and match your expectations.
-
-
Alternatives to MusicHQ.net
-
If you are looking for some other websites or apps that can offer similar or better services than MusicHQ.net, you may want to check out these alternatives:
-
List of top 10 alternatives
-
-
Netflix: The most popular and widely used streaming service that offers original and exclusive movies and TV shows, as well as a huge library of licensed content. You can watch online or download offline with a paid subscription.
-
Amazon Prime Video: Another popular and widely used streaming service that offers original and exclusive movies and TV shows, as well as a huge library of licensed content. You can watch online or download offline with a paid subscription.
-
Hulu: A streaming service that offers original and exclusive movies and TV shows, as well as a huge library of licensed content. You can watch online or download offline with a paid subscription.
-
Disney+: A streaming service that offers original and exclusive movies and TV shows from Disney, Pixar, Marvel, Star Wars, National Geographic, and more. You can watch online or download offline with a paid subscription.
-
HBO Max: A streaming service that offers original and exclusive movies and TV shows from HBO, Warner Bros., DC, Cartoon Network, Adult Swim, and more. You can watch online or download offline with a paid subscription.
-
YouTube: The most popular and widely used video-sharing platform that offers millions of user-generated videos, as well as some original and licensed content. You can watch online for free or download offline with a paid subscription.
-
Tubi: A free streaming service that offers thousands of movies and TV shows in various genres and languages. You can watch online for free but you cannot download offline.
-
Crackle: A free streaming service that offers thousands of movies and TV shows in various genres and languages. You can watch online for free but you cannot download offline.
-
Popcornflix: A free streaming service that offers thousands of movies and TV shows in various genres and languages. You can watch online for free but you cannot download offline.
-
Vudu: A streaming service that offers thousands of movies and TV shows in various genres and languages. You can watch online for free or download offline with a paid rental or purchase.
-
-
Comparison table
-
| Name | Price | Content | Quality | Download |
| --- | --- | --- | --- | --- |
| MusicHQ.net | Free | Thousands of movies and TV shows in various genres and languages | Full HD (1080p) | Yes |
| Netflix | $8.99-$17.99 per month | Original and exclusive movies and TV shows, as well as a huge library of licensed content | Full HD (1080p) or Ultra HD (4K) | Yes |
| Amazon Prime Video | $8.99 per month or $119 per year | Original and exclusive movies and TV shows, as well as a huge library of licensed content | Full HD (1080p) or Ultra HD (4K) | Yes |
| Hulu | $5.99-$11.99 per month or $64.99-$70.99 per month with live TV | Original and exclusive movies and TV shows, as well as a huge library of licensed content | Full HD (1080p) or Ultra HD (4K) | Yes |
| Disney+ | $7.99 per month or $79.99 per year | Original and exclusive movies and TV shows from Disney, Pixar, Marvel, Star Wars, National Geographic, and more | Full HD (1080p) or Ultra HD (4K) | Yes |
| HBO Max | $9.99-$14.99 per month | Original and exclusive movies and TV shows from HBO, Warner Bros., DC, Cartoon Network, Adult Swim, and more | Full HD (1080p) or Ultra HD (4K) | Yes |
| YouTube | Free or $11.99 per month for YouTube Premium | Millions of user-generated videos, as well as some original and licensed content | Full HD (1080p) or Ultra HD (4K) | Yes |
| Tubi | Free | Thousands of movies and TV shows in various genres and languages | Full HD (1080p) | No |
| Crackle | Free | Thousands of movies and TV shows in various genres and languages | Full HD (1080p) | No |
| Popcornflix | Free | Thousands of movies and TV shows in various genres and languages | Full HD (1080p) | No |
| Vudu | Free or $3.99-$19.99 per movie or TV show | Thousands of movies and TV shows in various genres and languages | Full HD (1080p) or Ultra HD (4K) | Yes |
-
-
Conclusion
-
In conclusion, MusicHQ.net is a great website for watching full HD movies online for free. If you want to enjoy movies offline, you can also download them from MusicHQ.net and save them on your device; you just need to follow the simple steps and tips above to do it safely and easily. You should, however, be aware of the risks and legal issues that may arise from downloading copyrighted content. If you are looking for alternatives to MusicHQ.net, check out the list and comparison table above and choose the one that best suits your needs and preferences.
-
FAQs
-
Here are some of the frequently asked questions about MusicHQ.net:
-
-
Is MusicHQ.net legal?
-
The legality of MusicHQ.net depends on your country or region's laws and regulations regarding streaming and downloading copyrighted content. In some countries or regions, MusicHQ.net may be considered illegal and may be blocked or restricted by the authorities. In that case, you should use a VPN service or a proxy server to access MusicHQ.net safely and anonymously.
-
Is MusicHQ.net safe?
-
The safety of MusicHQ.net depends on the source and quality of the files you download from it. Some files may contain malware, viruses, or spyware that may harm your device or compromise your privacy. To avoid this, you should use a reliable antivirus or anti-malware software to scan the downloaded files before opening them. You should also use an ad-blocker or a pop-up blocker to avoid annoying ads or pop-ups that may interfere with your download.
-
How can I download from MusicHQ.net faster?
-
The speed of downloading from MusicHQ.net depends on several factors, such as your internet connection, bandwidth, data usage, file size, format, and quality. To download from MusicHQ.net faster, you should use a download manager or a downloader app that can speed up the download, resume it if it is interrupted, and manage it more efficiently. You should also choose the file size, format, and quality that match your device's specifications and storage capacity.
-
How can I watch MusicHQ.net on my TV?
-
To watch MusicHQ.net on your TV, you need a smart TV that supports web browsing or a streaming device that can connect your TV to the internet. You can then visit the official website of MusicHQ.net at www.musichq.net and watch your favorite movies online. Alternatively, you can download movies from MusicHQ.net on your computer or smartphone and transfer the files to a USB drive or an external hard drive, then plug the drive into your TV and watch your movies offline.
-
How can I request a movie or TV show on MusicHQ.net?
-
To request a movie or TV show on MusicHQ.net, you need to contact the website's administrators via email or social media. You can find their contact information on the website's homepage or footer. You can send them your request and they will try to add it to their collection as soon as possible. However, there is no guarantee that your request will be fulfilled, as it depends on the availability and legality of the movie or TV show you want.
-
-
\ No newline at end of file
diff --git a/spaces/1toTree/lora_test/ppdiffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_all_in_one.py b/spaces/1toTree/lora_test/ppdiffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_all_in_one.py
deleted file mode 100644
index 71e95cfe4544feb15e27e683096e850f3f8594a0..0000000000000000000000000000000000000000
--- a/spaces/1toTree/lora_test/ppdiffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_all_in_one.py
+++ /dev/null
@@ -1,1294 +0,0 @@
-# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import inspect
-import os
-import random
-import re
-import time
-from typing import Callable, List, Optional, Union
-
-import numpy as np
-import paddle
-import PIL
-import PIL.Image
-from packaging import version
-
-from paddlenlp.transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
-
-from ...configuration_utils import FrozenDict
-from ...models import AutoencoderKL, UNet2DConditionModel
-from ...pipeline_utils import DiffusionPipeline
-from ...schedulers import (
- DDIMScheduler,
- DPMSolverMultistepScheduler,
- EulerAncestralDiscreteScheduler,
- EulerDiscreteScheduler,
- LMSDiscreteScheduler,
- PNDMScheduler,
-)
-from ...utils import PIL_INTERPOLATION, deprecate, logging
-from ...utils.testing_utils import load_image
-from . import StableDiffusionPipelineOutput
-from .safety_checker import StableDiffusionSafetyChecker
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-
-def save_all(images, FORMAT="jpg", OUTDIR="./outputs/"):
- if not isinstance(images, (list, tuple)):
- images = [images]
- for image in images:
- PRECISION = "fp32"
- argument = image.argument
- os.makedirs(OUTDIR, exist_ok=True)
- epoch_time = argument["epoch_time"]
- PROMPT = argument["prompt"]
- NEGPROMPT = argument["negative_prompt"]
- HEIGHT = argument["height"]
- WIDTH = argument["width"]
- SEED = argument["seed"]
- STRENGTH = argument.get("strength", 1)
- INFERENCE_STEPS = argument["num_inference_steps"]
- GUIDANCE_SCALE = argument["guidance_scale"]
-
- filename = f"{str(epoch_time)}_scale_{GUIDANCE_SCALE}_steps_{INFERENCE_STEPS}_seed_{SEED}.{FORMAT}"
- filedir = f"{OUTDIR}/{filename}"
- image.save(filedir)
- with open(f"{OUTDIR}/{epoch_time}_prompt.txt", "w") as file:
- file.write(
- f"PROMPT: {PROMPT}\nNEG_PROMPT: {NEGPROMPT}\n\nINFERENCE_STEPS: {INFERENCE_STEPS}\nHeight: {HEIGHT}\nWidth: {WIDTH}\nSeed: {SEED}\n\nPrecision: {PRECISION}\nSTRENGTH: {STRENGTH}\nGUIDANCE_SCALE: {GUIDANCE_SCALE}"
- )
-
-
-re_attention = re.compile(
- r"""
-\\\(|
-\\\)|
-\\\[|
-\\]|
-\\\\|
-\\|
-\(|
-\[|
-:([+-]?[.\d]+)\)|
-\)|
-]|
-[^\\()\[\]:]+|
-:
-""",
- re.X,
-)
-
-
-def parse_prompt_attention(text):
- """
- Parses a string with attention tokens and returns a list of pairs: text and its associated weight.
- Accepted tokens are:
- (abc) - increases attention to abc by a multiplier of 1.1
- (abc:3.12) - increases attention to abc by a multiplier of 3.12
- [abc] - decreases attention to abc by a multiplier of 1.1
- \( - literal character '('
- \[ - literal character '['
- \) - literal character ')'
- \] - literal character ']'
- \\ - literal character '\'
- anything else - just text
- >>> parse_prompt_attention('normal text')
- [['normal text', 1.0]]
- >>> parse_prompt_attention('an (important) word')
- [['an ', 1.0], ['important', 1.1], [' word', 1.0]]
- >>> parse_prompt_attention('(unbalanced')
- [['unbalanced', 1.1]]
- >>> parse_prompt_attention('\(literal\]')
- [['(literal]', 1.0]]
- >>> parse_prompt_attention('(unnecessary)(parens)')
- [['unnecessaryparens', 1.1]]
- >>> parse_prompt_attention('a (((house:1.3)) [on] a (hill:0.5), sun, (((sky))).')
- [['a ', 1.0],
- ['house', 1.5730000000000004],
- [' ', 1.1],
- ['on', 1.0],
- [' a ', 1.1],
- ['hill', 0.55],
- [', sun, ', 1.1],
- ['sky', 1.4641000000000006],
- ['.', 1.1]]
- """
-
- res = []
- round_brackets = []
- square_brackets = []
-
- round_bracket_multiplier = 1.1
- square_bracket_multiplier = 1 / 1.1
-
- def multiply_range(start_position, multiplier):
- for p in range(start_position, len(res)):
- res[p][1] *= multiplier
-
- for m in re_attention.finditer(text):
- text = m.group(0)
- weight = m.group(1)
-
- if text.startswith("\\"):
- res.append([text[1:], 1.0])
- elif text == "(":
- round_brackets.append(len(res))
- elif text == "[":
- square_brackets.append(len(res))
- elif weight is not None and len(round_brackets) > 0:
- multiply_range(round_brackets.pop(), float(weight))
- elif text == ")" and len(round_brackets) > 0:
- multiply_range(round_brackets.pop(), round_bracket_multiplier)
- elif text == "]" and len(square_brackets) > 0:
- multiply_range(square_brackets.pop(), square_bracket_multiplier)
- else:
- res.append([text, 1.0])
-
- for pos in round_brackets:
- multiply_range(pos, round_bracket_multiplier)
-
- for pos in square_brackets:
- multiply_range(pos, square_bracket_multiplier)
-
- if len(res) == 0:
- res = [["", 1.0]]
-
- # merge runs of identical weights
- i = 0
- while i + 1 < len(res):
- if res[i][1] == res[i + 1][1]:
- res[i][0] += res[i + 1][0]
- res.pop(i + 1)
- else:
- i += 1
-
- return res
-
-
-def get_prompts_with_weights(pipe: DiffusionPipeline, prompt: List[str], max_length: int):
- r"""
- Tokenize a list of prompts and return its tokens with weights of each token.
-
- No padding, starting or ending token is included.
- """
- tokens = []
- weights = []
- for text in prompt:
- texts_and_weights = parse_prompt_attention(text)
- text_token = []
- text_weight = []
- for word, weight in texts_and_weights:
- # tokenize and discard the starting and the ending token
- token = pipe.tokenizer(word).input_ids[1:-1]
- text_token += token
-
- # copy the weight by length of token
- text_weight += [weight] * len(token)
-
- # stop if the text is too long (longer than truncation limit)
- if len(text_token) > max_length:
- break
-
- # truncate
- if len(text_token) > max_length:
- text_token = text_token[:max_length]
- text_weight = text_weight[:max_length]
-
- tokens.append(text_token)
- weights.append(text_weight)
- return tokens, weights
-
-
-def pad_tokens_and_weights(tokens, weights, max_length, bos, eos, pad, no_boseos_middle=True, chunk_length=77):
- r"""
- Pad the tokens (with starting and ending tokens) and weights (with 1.0) to max_length.
- """
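-    # e.g. with chunk_length=77, max_length=77 and a 5-token prompt, the padded token sequence is
-    # [bos] + 5 tokens + [eos] + 70 pad tokens, and (with no_boseos_middle=True) the weights become
-    # [1.0] + the 5 parsed weights + 71 trailing 1.0 entries, both of length 77.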
- max_embeddings_multiples = (max_length - 2) // (chunk_length - 2)
- weights_length = max_length if no_boseos_middle else max_embeddings_multiples * chunk_length
- for i in range(len(tokens)):
- tokens[i] = [bos] + tokens[i] + [eos] + [pad] * (max_length - 2 - len(tokens[i]))
- if no_boseos_middle:
- weights[i] = [1.0] + weights[i] + [1.0] * (max_length - 1 - len(weights[i]))
- else:
- w = []
- if len(weights[i]) == 0:
- w = [1.0] * weights_length
- else:
- for j in range((len(weights[i]) - 1) // chunk_length + 1):
- w.append(1.0) # weight for starting token in this chunk
- w += weights[i][j * chunk_length : min(len(weights[i]), (j + 1) * chunk_length)]
- w.append(1.0) # weight for ending token in this chunk
- w += [1.0] * (weights_length - len(w))
- weights[i] = w[:]
-
- return tokens, weights
-
-
-def get_unweighted_text_embeddings(
- pipe: DiffusionPipeline, text_input: paddle.Tensor, chunk_length: int, no_boseos_middle: Optional[bool] = True
-):
- """
- When the length of tokens is a multiple of the capacity of the text encoder,
- it should be split into chunks and sent to the text encoder individually.
- """
- max_embeddings_multiples = (text_input.shape[1] - 2) // (chunk_length - 2)
- if max_embeddings_multiples > 1:
- text_embeddings = []
- for i in range(max_embeddings_multiples):
- # extract the i-th chunk
- text_input_chunk = text_input[:, i * (chunk_length - 2) : (i + 1) * (chunk_length - 2) + 2].clone()
-
- # cover the head and the tail by the starting and the ending tokens
- text_input_chunk[:, 0] = text_input[0, 0]
- text_input_chunk[:, -1] = text_input[0, -1]
-
- attention_mask = paddle.ones_like(text_input_chunk)
- text_embedding = pipe.text_encoder(text_input_chunk, attention_mask=attention_mask)[0]
-
- if no_boseos_middle:
- if i == 0:
- # discard the ending token
- text_embedding = text_embedding[:, :-1]
- elif i == max_embeddings_multiples - 1:
- # discard the starting token
- text_embedding = text_embedding[:, 1:]
- else:
- # discard both starting and ending tokens
- text_embedding = text_embedding[:, 1:-1]
-
- text_embeddings.append(text_embedding)
- text_embeddings = paddle.concat(text_embeddings, axis=1)
- else:
- attention_mask = paddle.ones_like(text_input)
- text_embeddings = pipe.text_encoder(text_input, attention_mask=attention_mask)[0]
- return text_embeddings
-
-
-def get_weighted_text_embeddings(
- pipe: DiffusionPipeline,
- prompt: Union[str, List[str]],
- uncond_prompt: Optional[Union[str, List[str]]] = None,
- max_embeddings_multiples: Optional[int] = 1,
- no_boseos_middle: Optional[bool] = False,
- skip_parsing: Optional[bool] = False,
- skip_weighting: Optional[bool] = False,
- **kwargs
-):
- r"""
- Prompts can be assigned with local weights using brackets. For example,
- prompt 'A (very beautiful) masterpiece' highlights the words 'very beautiful',
- and the embedding tokens corresponding to the words get multiplied by a constant, 1.1.
-
-    Also, to regularize the embedding, the weighted embedding is scaled to preserve the original mean.
-
- Args:
- pipe (`DiffusionPipeline`):
- Pipe to provide access to the tokenizer and the text encoder.
- prompt (`str` or `List[str]`):
- The prompt or prompts to guide the image generation.
- uncond_prompt (`str` or `List[str]`):
-            The unconditional prompt or prompts to guide the image generation. If an unconditional prompt
- is provided, the embeddings of prompt and uncond_prompt are concatenated.
- max_embeddings_multiples (`int`, *optional*, defaults to `1`):
- The max multiple length of prompt embeddings compared to the max output length of text encoder.
- no_boseos_middle (`bool`, *optional*, defaults to `False`):
-            If the length of the text tokens is a multiple of the capacity of the text encoder, whether to reserve the
-            starting and ending tokens in each of the chunks in the middle.
- skip_parsing (`bool`, *optional*, defaults to `False`):
- Skip the parsing of brackets.
- skip_weighting (`bool`, *optional*, defaults to `False`):
- Skip the weighting. When the parsing is skipped, it is forced True.
- """
- max_length = (pipe.tokenizer.model_max_length - 2) * max_embeddings_multiples + 2
- if isinstance(prompt, str):
- prompt = [prompt]
-
- if not skip_parsing:
- prompt_tokens, prompt_weights = get_prompts_with_weights(pipe, prompt, max_length - 2)
- if uncond_prompt is not None:
- if isinstance(uncond_prompt, str):
- uncond_prompt = [uncond_prompt]
- uncond_tokens, uncond_weights = get_prompts_with_weights(pipe, uncond_prompt, max_length - 2)
- else:
- prompt_tokens = [
- token[1:-1] for token in pipe.tokenizer(prompt, max_length=max_length, truncation=True).input_ids
- ]
- prompt_weights = [[1.0] * len(token) for token in prompt_tokens]
- if uncond_prompt is not None:
- if isinstance(uncond_prompt, str):
- uncond_prompt = [uncond_prompt]
- uncond_tokens = [
- token[1:-1]
- for token in pipe.tokenizer(uncond_prompt, max_length=max_length, truncation=True).input_ids
- ]
- uncond_weights = [[1.0] * len(token) for token in uncond_tokens]
-
- # round up the longest length of tokens to a multiple of (model_max_length - 2)
- max_length = max([len(token) for token in prompt_tokens])
- if uncond_prompt is not None:
- max_length = max(max_length, max([len(token) for token in uncond_tokens]))
-
- max_embeddings_multiples = min(
- max_embeddings_multiples, (max_length - 1) // (pipe.tokenizer.model_max_length - 2) + 1
- )
- max_embeddings_multiples = max(1, max_embeddings_multiples)
- max_length = (pipe.tokenizer.model_max_length - 2) * max_embeddings_multiples + 2
-
- # pad the length of tokens and weights
- # support bert tokenizer
- bos = pipe.tokenizer.bos_token_id if pipe.tokenizer.bos_token_id is not None else pipe.tokenizer.cls_token_id
- eos = pipe.tokenizer.eos_token_id if pipe.tokenizer.eos_token_id is not None else pipe.tokenizer.sep_token_id
- pad = pipe.tokenizer.pad_token_id
- prompt_tokens, prompt_weights = pad_tokens_and_weights(
- prompt_tokens,
- prompt_weights,
- max_length,
- bos,
- eos,
- pad,
- no_boseos_middle=no_boseos_middle,
- chunk_length=pipe.tokenizer.model_max_length,
- )
- prompt_tokens = paddle.to_tensor(prompt_tokens)
- if uncond_prompt is not None:
- uncond_tokens, uncond_weights = pad_tokens_and_weights(
- uncond_tokens,
- uncond_weights,
- max_length,
- bos,
- eos,
- pad,
- no_boseos_middle=no_boseos_middle,
- chunk_length=pipe.tokenizer.model_max_length,
- )
- uncond_tokens = paddle.to_tensor(uncond_tokens)
-
- # get the embeddings
- text_embeddings = get_unweighted_text_embeddings(
- pipe, prompt_tokens, pipe.tokenizer.model_max_length, no_boseos_middle=no_boseos_middle
- )
- prompt_weights = paddle.to_tensor(prompt_weights, dtype=text_embeddings.dtype)
- if uncond_prompt is not None:
- uncond_embeddings = get_unweighted_text_embeddings(
- pipe, uncond_tokens, pipe.tokenizer.model_max_length, no_boseos_middle=no_boseos_middle
- )
- uncond_weights = paddle.to_tensor(uncond_weights, dtype=uncond_embeddings.dtype)
-
- # assign weights to the prompts and normalize in the sense of mean
- # TODO: should we normalize by chunk or in a whole (current implementation)?
- if (not skip_parsing) and (not skip_weighting):
- previous_mean = text_embeddings.mean(axis=[-2, -1])
- text_embeddings *= prompt_weights.unsqueeze(-1)
- text_embeddings *= previous_mean / text_embeddings.mean(axis=[-2, -1])
- if uncond_prompt is not None:
- previous_mean = uncond_embeddings.mean(axis=[-2, -1])
- uncond_embeddings *= uncond_weights.unsqueeze(-1)
- uncond_embeddings *= previous_mean / uncond_embeddings.mean(axis=[-2, -1])
-
- # For classifier free guidance, we need to do two forward passes.
- # Here we concatenate the unconditional and text embeddings into a single batch
- # to avoid doing two forward passes
- if uncond_prompt is not None:
- text_embeddings = paddle.concat([uncond_embeddings, text_embeddings])
-
- return text_embeddings
-
-
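-# A minimal usage sketch for `get_weighted_text_embeddings`; `pipe` is assumed to be an
-# already-loaded pipeline exposing `tokenizer` and `text_encoder`, and the prompt strings
-# below are placeholders.
-def _example_weighted_prompt_embeddings(pipe: DiffusionPipeline) -> paddle.Tensor:
-    # "(word:1.2)" multiplies the weight of those tokens by 1.2 and "[word]" divides it by 1.1,
-    # as described in `parse_prompt_attention` above.
-    return get_weighted_text_embeddings(
-        pipe,
-        prompt="a (very beautiful:1.2) mountain landscape, [blurry] background",
-        uncond_prompt="lowres, watermark",
-        max_embeddings_multiples=2,
-    )
-
-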
-def preprocess_image(image):
- w, h = image.size
- w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32
- image = image.resize((w, h), resample=PIL_INTERPOLATION["lanczos"])
- image = np.array(image).astype(np.float32) / 255.0
- image = image[None].transpose(0, 3, 1, 2)
- image = paddle.to_tensor(image)
- return 2.0 * image - 1.0
-
-
-def preprocess_mask(mask):
- mask = mask.convert("L")
- w, h = mask.size
- w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32
- mask = mask.resize((w // 8, h // 8), resample=PIL_INTERPOLATION["nearest"])
- mask = np.array(mask).astype(np.float32) / 255.0
- mask = np.tile(mask, (4, 1, 1))
-    mask = mask[None].transpose(0, 1, 2, 3)  # add a batch dimension; this transpose is an identity op kept for shape clarity
- mask = 1 - mask # repaint white, keep black
- mask = paddle.to_tensor(mask)
- return mask
-
-
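-# A minimal sketch of how the two preprocessing helpers above fit together when preparing
-# inpainting inputs; both arguments are assumed to be PIL images of the same size.
-def _example_preprocess_inpaint_inputs(init_image: PIL.Image.Image, mask_image: PIL.Image.Image):
-    # init image -> float tensor in [-1, 1] of shape (1, 3, H, W), with H and W floored to multiples of 32
-    image_tensor = preprocess_image(init_image)
-    # mask -> (1, 4, H // 8, W // 8) tensor at latent resolution, inverted so that white (repaint)
-    # pixels become 0 and black (keep) pixels become 1
-    mask_tensor = preprocess_mask(mask_image)
-    return image_tensor, mask_tensor
-
-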
-class StableDiffusionPipelineAllinOne(DiffusionPipeline):
- r"""
-    Pipeline for text-to-image, image-to-image and inpainting generation using Stable Diffusion.
-
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
-    library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
-
- Args:
- vae ([`AutoencoderKL`]):
- Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- text_encoder ([`CLIPTextModel`]):
- Frozen text-encoder. Stable Diffusion uses the text portion of
- [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
- the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- tokenizer (`CLIPTokenizer`):
- Tokenizer of class
- [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
- scheduler ([`SchedulerMixin`]):
- A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
- [`DDIMScheduler`], [`LMSDiscreteScheduler`], [`PNDMScheduler`], [`EulerDiscreteScheduler`], [`EulerAncestralDiscreteScheduler`]
- or [`DPMSolverMultistepScheduler`].
- safety_checker ([`StableDiffusionSafetyChecker`]):
- Classification module that estimates whether generated images could be considered offensive or harmful.
- Please, refer to the [model card](https://huggingface.co/junnyu/stable-diffusion-v1-4-paddle) for details.
- feature_extractor ([`CLIPFeatureExtractor`]):
- Model that extracts features from generated images to be used as inputs for the `safety_checker`.
- """
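-
-    # Typical usage (illustrative): the pipeline is usually loaded with
-    # `StableDiffusionPipelineAllinOne.from_pretrained(...)` and then called through
-    # `text2image(prompt=...)`, `img2img(prompt=..., image=...)` or
-    # `inpaint(prompt=..., image=..., mask_image=...)`; each returns a
-    # `StableDiffusionPipelineOutput` unless `return_dict=False` is passed.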
- _optional_components = ["safety_checker", "feature_extractor"]
-
- def __init__(
- self,
- vae: AutoencoderKL,
- text_encoder: CLIPTextModel,
- tokenizer: CLIPTokenizer,
- unet: UNet2DConditionModel,
- scheduler: Union[
- DDIMScheduler,
- PNDMScheduler,
- LMSDiscreteScheduler,
- EulerDiscreteScheduler,
- EulerAncestralDiscreteScheduler,
- DPMSolverMultistepScheduler,
- ],
- safety_checker: StableDiffusionSafetyChecker,
- feature_extractor: CLIPFeatureExtractor,
- requires_safety_checker: bool = False,
- ):
- if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
- deprecation_message = (
- f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
- f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
-                "to update the config accordingly as leaving `steps_offset` might lead to incorrect results"
- " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
- " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
- " file"
- )
- deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
- new_config = dict(scheduler.config)
- new_config["steps_offset"] = 1
- scheduler._internal_dict = FrozenDict(new_config)
-
- if hasattr(scheduler.config, "clip_sample") and scheduler.config.clip_sample is True:
- deprecation_message = (
- f"The configuration file of this scheduler: {scheduler} has not set the configuration `clip_sample`."
- " `clip_sample` should be set to False in the configuration file. Please make sure to update the"
- " config accordingly as not setting `clip_sample` in the config might lead to incorrect results in"
- " future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very"
- " nice if you could open a Pull request for the `scheduler/scheduler_config.json` file"
- )
- deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False)
- new_config = dict(scheduler.config)
- new_config["clip_sample"] = False
- scheduler._internal_dict = FrozenDict(new_config)
-
- if safety_checker is None and requires_safety_checker:
- logger.warning(
- f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
- " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
- " results in services or applications open to the public. PaddleNLP team, diffusers team and Hugging Face"
- " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
- " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
- " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
- )
- if safety_checker is not None and feature_extractor is None:
- raise ValueError(
-                f"Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety"
- " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead."
- )
- is_unet_version_less_0_9_0 = hasattr(unet.config, "_ppdiffusers_version") and version.parse(
- version.parse(unet.config._ppdiffusers_version).base_version
- ) < version.parse("0.9.0.dev0")
- is_unet_sample_size_less_64 = hasattr(unet.config, "sample_size") and unet.config.sample_size < 64
- if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64:
- deprecation_message = (
- "The configuration file of the unet has set the default `sample_size` to smaller than"
- " 64 which seems highly unlikely. If your checkpoint is a fine-tuned version of any of the"
- " following: \n- CompVis/stable-diffusion-v1-4 \n- CompVis/stable-diffusion-v1-3 \n-"
- " CompVis/stable-diffusion-v1-2 \n- CompVis/stable-diffusion-v1-1 \n- runwayml/stable-diffusion-v1-5"
- " \n- runwayml/stable-diffusion-inpainting \n you should change 'sample_size' to 64 in the"
- " configuration file. Please make sure to update the config accordingly as leaving `sample_size=32`"
- " in the config might lead to incorrect results in future versions. If you have downloaded this"
- " checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for"
- " the `unet/config.json` file"
- )
- deprecate("sample_size<64", "1.0.0", deprecation_message, standard_warn=False)
- new_config = dict(unet.config)
- new_config["sample_size"] = 64
- unet._internal_dict = FrozenDict(new_config)
-
- self.register_modules(
- vae=vae,
- text_encoder=text_encoder,
- tokenizer=tokenizer,
- unet=unet,
- scheduler=scheduler,
- safety_checker=safety_checker,
- feature_extractor=feature_extractor,
- )
- self.register_to_config(requires_safety_checker=requires_safety_checker)
-
- def __call__(self, *args, **kwargs):
- return self.text2image(*args, **kwargs)
-
- def text2img(self, *args, **kwargs):
- return self.text2image(*args, **kwargs)
-
- def _encode_prompt(
- self,
- prompt,
- negative_prompt,
- max_embeddings_multiples,
- no_boseos_middle,
- skip_parsing,
- skip_weighting,
- do_classifier_free_guidance,
- num_images_per_prompt,
- ):
- if do_classifier_free_guidance and negative_prompt is None:
- negative_prompt = ""
- text_embeddings = get_weighted_text_embeddings(
- self, prompt, negative_prompt, max_embeddings_multiples, no_boseos_middle, skip_parsing, skip_weighting
- )
-
- bs_embed, seq_len, _ = text_embeddings.shape
- text_embeddings = text_embeddings.tile([1, num_images_per_prompt, 1])
- text_embeddings = text_embeddings.reshape([bs_embed * num_images_per_prompt, seq_len, -1])
- return text_embeddings
-
- def run_safety_checker(self, image, dtype):
- if self.safety_checker is not None:
- safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pd")
- image, has_nsfw_concept = self.safety_checker(
- images=image, clip_input=safety_checker_input.pixel_values.cast(dtype)
- )
- else:
- has_nsfw_concept = None
- return image, has_nsfw_concept
-
- def decode_latents(self, latents):
- latents = 1 / 0.18215 * latents
- image = self.vae.decode(latents).sample
- image = (image / 2 + 0.5).clip(0, 1)
-        # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
- image = image.transpose([0, 2, 3, 1]).cast("float32").numpy()
- return image
-
- def prepare_extra_step_kwargs(self, eta):
- # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
- # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
- # and should be between [0, 1]
-
- accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
- extra_step_kwargs = {}
- if accepts_eta:
- extra_step_kwargs["eta"] = eta
-
- return extra_step_kwargs
-
- def check_inputs_text2img(self, prompt, height, width, callback_steps):
- if not isinstance(prompt, str) and not isinstance(prompt, list):
- raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
-
- if height % 8 != 0 or width % 8 != 0:
- raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
-
- if (callback_steps is None) or (
- callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
- ):
- raise ValueError(
- f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
- f" {type(callback_steps)}."
- )
-
- def check_inputs_img2img_inpaint(self, prompt, strength, callback_steps):
- if not isinstance(prompt, str) and not isinstance(prompt, list):
- raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
-
- if strength < 0 or strength > 1:
-            raise ValueError(f"The value of strength should be in [0.0, 1.0] but is {strength}")
-
- if (callback_steps is None) or (
- callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
- ):
- raise ValueError(
- f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
- f" {type(callback_steps)}."
- )
-
- def prepare_latents_text2img(self, batch_size, num_channels_latents, height, width, dtype, latents=None):
- shape = [batch_size, num_channels_latents, height // 8, width // 8]
- if latents is None:
- latents = paddle.randn(shape, dtype=dtype)
- else:
- if latents.shape != shape:
- raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}")
-
- # scale the initial noise by the standard deviation required by the scheduler
- latents = latents * self.scheduler.init_noise_sigma
- return latents
-
- def prepare_latents_img2img(self, image, timestep, num_images_per_prompt, dtype):
- image = image.cast(dtype=dtype)
- init_latent_dist = self.vae.encode(image).latent_dist
- init_latents = init_latent_dist.sample()
- init_latents = 0.18215 * init_latents
-
- b, c, h, w = init_latents.shape
- init_latents = init_latents.tile([1, num_images_per_prompt, 1, 1])
- init_latents = init_latents.reshape([b * num_images_per_prompt, c, h, w])
-
- # add noise to latents using the timesteps
- noise = paddle.randn(init_latents.shape, dtype=dtype)
-
- # get latents
- init_latents = self.scheduler.add_noise(init_latents, noise, timestep)
- latents = init_latents
-
- return latents
-
- def get_timesteps(self, num_inference_steps, strength):
- # get the original timestep using init_timestep
- offset = self.scheduler.config.get("steps_offset", 0)
- init_timestep = int(num_inference_steps * strength) + offset
- init_timestep = min(init_timestep, num_inference_steps)
-
- t_start = max(num_inference_steps - init_timestep + offset, 0)
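-        # e.g. num_inference_steps=50, strength=0.8 and steps_offset=1 give
-        # init_timestep = int(50 * 0.8) + 1 = 41 and t_start = max(50 - 41 + 1, 0) = 10,
-        # so the denoising loop runs over roughly the last 80% of the scheduled timesteps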
- timesteps = self.scheduler.timesteps[t_start:]
-
- return timesteps
-
- def prepare_latents_inpaint(self, image, timestep, num_images_per_prompt, dtype):
- image = image.cast(dtype)
- init_latent_dist = self.vae.encode(image).latent_dist
- init_latents = init_latent_dist.sample()
- init_latents = 0.18215 * init_latents
-
- b, c, h, w = init_latents.shape
- init_latents = init_latents.tile([1, num_images_per_prompt, 1, 1])
- init_latents = init_latents.reshape([b * num_images_per_prompt, c, h, w])
-
- init_latents_orig = init_latents
-
- # add noise to latents using the timesteps
- noise = paddle.randn(init_latents.shape, dtype=dtype)
- init_latents = self.scheduler.add_noise(init_latents, noise, timestep)
- latents = init_latents
- return latents, init_latents_orig, noise
-
- @paddle.no_grad()
- def text2image(
- self,
- prompt: Union[str, List[str]],
- height: int = 512,
- width: int = 512,
- num_inference_steps: int = 50,
- guidance_scale: float = 7.5,
- negative_prompt: Optional[Union[str, List[str]]] = None,
- num_images_per_prompt: Optional[int] = 1,
- eta: float = 0.0,
- seed: Optional[int] = None,
- latents: Optional[paddle.Tensor] = None,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- callback: Optional[Callable[[int, int, paddle.Tensor], None]] = None,
- callback_steps: Optional[int] = 1,
- # new add
- max_embeddings_multiples: Optional[int] = 1,
- no_boseos_middle: Optional[bool] = False,
- skip_parsing: Optional[bool] = False,
- skip_weighting: Optional[bool] = False,
- **kwargs,
- ):
- r"""
- Function invoked when calling the pipeline for generation.
-
- Args:
- prompt (`str` or `List[str]`):
- The prompt or prompts to guide the image generation.
- height (`int`, *optional*, defaults to 512):
- The height in pixels of the generated image.
- width (`int`, *optional*, defaults to 512):
- The width in pixels of the generated image.
- num_inference_steps (`int`, *optional*, defaults to 50):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
- guidance_scale (`float`, *optional*, defaults to 7.5):
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
- `guidance_scale` is defined as `w` of equation 2. of [Imagen
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
-                1`. A higher guidance scale encourages the model to generate images that are closely linked to the text `prompt`,
- usually at the expense of lower image quality.
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
- if `guidance_scale` is less than `1`).
- num_images_per_prompt (`int`, *optional*, defaults to 1):
- The number of images to generate per prompt.
- eta (`float`, *optional*, defaults to 0.0):
- Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
- [`schedulers.DDIMScheduler`], will be ignored for others.
- seed (`int`, *optional*):
- Random number seed.
- latents (`paddle.Tensor`, *optional*):
- Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
- generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
-                tensor will be generated by sampling using the supplied random `seed`.
- output_type (`str`, *optional*, defaults to `"pil"`):
-                The output format of the generated image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
- plain tuple.
- callback (`Callable`, *optional*):
- A function that will be called every `callback_steps` steps during inference. The function will be
- called with the following arguments: `callback(step: int, timestep: int, latents: paddle.Tensor)`.
- callback_steps (`int`, *optional*, defaults to 1):
- The frequency at which the `callback` function will be called. If not specified, the callback will be
- called at every step.
-
- Returns:
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
-            [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
- When returning a tuple, the first element is a list with the generated images, and the second element is a
- list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
- (nsfw) content, according to the `safety_checker`.
- """
- seed = random.randint(0, 2**32) if seed is None else seed
- argument = dict(
- prompt=prompt,
- negative_prompt=negative_prompt,
- height=height,
- width=width,
- num_inference_steps=num_inference_steps,
- guidance_scale=guidance_scale,
- num_images_per_prompt=num_images_per_prompt,
- eta=eta,
- seed=seed,
- latents=latents,
- max_embeddings_multiples=max_embeddings_multiples,
- no_boseos_middle=no_boseos_middle,
- skip_parsing=skip_parsing,
- skip_weighting=skip_weighting,
- epoch_time=time.time(),
- )
- paddle.seed(seed)
- # 1. Check inputs. Raise error if not correct
- self.check_inputs_text2img(prompt, height, width, callback_steps)
-
- # 2. Define call parameters
- batch_size = 1 if isinstance(prompt, str) else len(prompt)
- # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
- # corresponds to doing no classifier free guidance.
- do_classifier_free_guidance = guidance_scale > 1.0
-
- # 3. Encode input prompt
- text_embeddings = self._encode_prompt(
- prompt,
- negative_prompt,
- max_embeddings_multiples,
- no_boseos_middle,
- skip_parsing,
- skip_weighting,
- do_classifier_free_guidance,
- num_images_per_prompt,
- )
-
- # 4. Prepare timesteps
- self.scheduler.set_timesteps(num_inference_steps)
- timesteps = self.scheduler.timesteps
-
- # 5. Prepare latent variables
- num_channels_latents = self.unet.in_channels
- latents = self.prepare_latents_text2img(
- batch_size * num_images_per_prompt,
- num_channels_latents,
- height,
- width,
- text_embeddings.dtype,
- latents,
- )
-
- # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
- extra_step_kwargs = self.prepare_extra_step_kwargs(eta)
-
- # 7. Denoising loop
- for i, t in enumerate(self.progress_bar(timesteps)):
- # expand the latents if we are doing classifier free guidance
- latent_model_input = paddle.concat([latents] * 2) if do_classifier_free_guidance else latents
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
-
- # predict the noise residual
- noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
-
- # perform guidance
- if do_classifier_free_guidance:
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
-
- # compute the previous noisy sample x_t -> x_t-1
- latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
-
- # call the callback, if provided
- if callback is not None and i % callback_steps == 0:
- callback(i, t, latents)
-
- # 8. Post-processing
- image = self.decode_latents(latents)
-
- # 9. Run safety checker
- image, has_nsfw_concept = self.run_safety_checker(image, text_embeddings.dtype)
-
- # 10. Convert to PIL
- if output_type == "pil":
- image = self.numpy_to_pil(image, argument=argument)
-
- if not return_dict:
- return (image, has_nsfw_concept)
-
- return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
-
- @paddle.no_grad()
- def img2img(
- self,
- prompt: Union[str, List[str]],
- image: Union[paddle.Tensor, PIL.Image.Image],
- strength: float = 0.8,
- height=None,
- width=None,
- num_inference_steps: Optional[int] = 50,
- guidance_scale: Optional[float] = 7.5,
- negative_prompt: Optional[Union[str, List[str]]] = None,
- num_images_per_prompt: Optional[int] = 1,
- eta: Optional[float] = 0.0,
- seed: Optional[int] = None,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- callback: Optional[Callable[[int, int, paddle.Tensor], None]] = None,
- callback_steps: Optional[int] = 1,
- # new add
- max_embeddings_multiples: Optional[int] = 1,
- no_boseos_middle: Optional[bool] = False,
- skip_parsing: Optional[bool] = False,
- skip_weighting: Optional[bool] = False,
- **kwargs,
- ):
- r"""
- Function invoked when calling the pipeline for generation.
-
- Args:
- prompt (`str` or `List[str]`):
- The prompt or prompts to guide the image generation.
- image (`paddle.Tensor` or `PIL.Image.Image`):
- `Image`, or tensor representing an image batch, that will be used as the starting point for the
- process.
- strength (`float`, *optional*, defaults to 0.8):
- Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1.
- `image` will be used as a starting point, adding more noise to it the larger the `strength`. The
- number of denoising steps depends on the amount of noise initially added. When `strength` is 1, added
- noise will be maximum and the denoising process will run for the full number of iterations specified in
- `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
- num_inference_steps (`int`, *optional*, defaults to 50):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference. This parameter will be modulated by `strength`.
- guidance_scale (`float`, *optional*, defaults to 7.5):
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
- `guidance_scale` is defined as `w` of equation 2. of [Imagen
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
-                1`. A higher guidance scale encourages the model to generate images that are closely linked to the text `prompt`,
- usually at the expense of lower image quality.
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
- if `guidance_scale` is less than `1`).
- num_images_per_prompt (`int`, *optional*, defaults to 1):
- The number of images to generate per prompt.
- eta (`float`, *optional*, defaults to 0.0):
- Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
- [`schedulers.DDIMScheduler`], will be ignored for others.
- seed (`int`, *optional*):
- A random seed.
- output_type (`str`, *optional*, defaults to `"pil"`):
-                The output format of the generated image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
- plain tuple.
- callback (`Callable`, *optional*):
- A function that will be called every `callback_steps` steps during inference. The function will be
- called with the following arguments: `callback(step: int, timestep: int, latents: paddle.Tensor)`.
- callback_steps (`int`, *optional*, defaults to 1):
- The frequency at which the `callback` function will be called. If not specified, the callback will be
- called at every step.
-
- Returns:
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
-            [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
- When returning a tuple, the first element is a list with the generated images, and the second element is a
- list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
- (nsfw) content, according to the `safety_checker`.
- """
- seed = random.randint(0, 2**32) if seed is None else seed
- image_str = image
- if isinstance(image_str, str):
- image = load_image(image_str)
-
- if height is None and width is None:
- width = (image.size[0] // 8) * 8
- height = (image.size[1] // 8) * 8
- elif height is None and width is not None:
- height = (image.size[1] // 8) * 8
- elif width is None and height is not None:
- width = (image.size[0] // 8) * 8
- else:
- height = height
- width = width
-
- argument = dict(
- prompt=prompt,
- image=image_str,
- negative_prompt=negative_prompt,
- height=height,
- width=width,
- strength=strength,
- num_inference_steps=num_inference_steps,
- guidance_scale=guidance_scale,
- num_images_per_prompt=num_images_per_prompt,
- eta=eta,
- seed=seed,
- max_embeddings_multiples=max_embeddings_multiples,
- no_boseos_middle=no_boseos_middle,
- skip_parsing=skip_parsing,
- skip_weighting=skip_weighting,
- epoch_time=time.time(),
- )
- paddle.seed(seed)
-
- # 1. Check inputs
- self.check_inputs_img2img_inpaint(prompt, strength, callback_steps)
-
- # 2. Define call parameters
- batch_size = 1 if isinstance(prompt, str) else len(prompt)
- # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
- # corresponds to doing no classifier free guidance.
- do_classifier_free_guidance = guidance_scale > 1.0
-
- # 3. Encode input prompt
- text_embeddings = self._encode_prompt(
- prompt,
- negative_prompt,
- max_embeddings_multiples,
- no_boseos_middle,
- skip_parsing,
- skip_weighting,
- do_classifier_free_guidance,
- num_images_per_prompt,
- )
-
- # 4. Preprocess image
- if isinstance(image, PIL.Image.Image):
- image = image.resize((width, height))
- image = preprocess_image(image)
-
- # 5. set timesteps
- self.scheduler.set_timesteps(num_inference_steps)
- timesteps = self.get_timesteps(num_inference_steps, strength)
- latent_timestep = timesteps[:1].tile([batch_size * num_images_per_prompt])
-
- # 6. Prepare latent variables
- latents = self.prepare_latents_img2img(image, latent_timestep, num_images_per_prompt, text_embeddings.dtype)
-
- # 7. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
- extra_step_kwargs = self.prepare_extra_step_kwargs(eta)
-
- # 8. Denoising loop
- for i, t in enumerate(self.progress_bar(timesteps)):
- # expand the latents if we are doing classifier free guidance
- latent_model_input = paddle.concat([latents] * 2) if do_classifier_free_guidance else latents
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
-
- # predict the noise residual
- noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
-
- # perform guidance
- if do_classifier_free_guidance:
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
-
- # compute the previous noisy sample x_t -> x_t-1
- latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
-
- # call the callback, if provided
- if callback is not None and i % callback_steps == 0:
- callback(i, t, latents)
-
- # 9. Post-processing
- image = self.decode_latents(latents)
-
- # 10. Run safety checker
- image, has_nsfw_concept = self.run_safety_checker(image, text_embeddings.dtype)
-
- # 11. Convert to PIL
- if output_type == "pil":
- image = self.numpy_to_pil(image, argument=argument)
-
- if not return_dict:
- return (image, has_nsfw_concept)
-
- return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
-
- @paddle.no_grad()
- def inpaint(
- self,
- prompt: Union[str, List[str]],
- image: Union[paddle.Tensor, PIL.Image.Image],
- mask_image: Union[paddle.Tensor, PIL.Image.Image],
- height=None,
- width=None,
- strength: float = 0.8,
- num_inference_steps: Optional[int] = 50,
- guidance_scale: Optional[float] = 7.5,
- negative_prompt: Optional[Union[str, List[str]]] = None,
- num_images_per_prompt: Optional[int] = 1,
- eta: Optional[float] = 0.0,
- seed: Optional[int] = None,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- callback: Optional[Callable[[int, int, paddle.Tensor], None]] = None,
- callback_steps: Optional[int] = 1,
- # new add
- max_embeddings_multiples: Optional[int] = 1,
- no_boseos_middle: Optional[bool] = False,
- skip_parsing: Optional[bool] = False,
- skip_weighting: Optional[bool] = False,
- **kwargs,
- ):
- r"""
- Function invoked when calling the pipeline for generation.
-
- Args:
- prompt (`str` or `List[str]`):
- The prompt or prompts to guide the image generation.
- image (`paddle.Tensor` or `PIL.Image.Image`):
- `Image`, or tensor representing an image batch, that will be used as the starting point for the
- process. This is the image whose masked region will be inpainted.
- mask_image (`paddle.Tensor` or `PIL.Image.Image`):
- `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be
- replaced by noise and therefore repainted, while black pixels will be preserved. If `mask_image` is a
- PIL image, it will be converted to a single channel (luminance) before use. If it's a tensor, it should
- contain one color channel (L) instead of 3, so the expected shape would be `(B, H, W, 1)`.
- strength (`float`, *optional*, defaults to 0.8):
- Conceptually, indicates how much to inpaint the masked area. Must be between 0 and 1. When `strength`
- is 1, the denoising process will be run on the masked area for the full number of iterations specified
- in `num_inference_steps`. `image` will be used as a reference for the masked area, adding more
- noise to that region the larger the `strength`. If `strength` is 0, no inpainting will occur.
- num_inference_steps (`int`, *optional*, defaults to 50):
- The reference number of denoising steps. More denoising steps usually lead to a higher quality image at
- the expense of slower inference. This parameter will be modulated by `strength`, as explained above.
- guidance_scale (`float`, *optional*, defaults to 7.5):
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
- `guidance_scale` is defined as `w` of equation 2. of [Imagen
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
-                1`. A higher guidance scale encourages the model to generate images that are closely linked to the text `prompt`,
- usually at the expense of lower image quality.
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
- if `guidance_scale` is less than `1`).
- num_images_per_prompt (`int`, *optional*, defaults to 1):
- The number of images to generate per prompt.
- eta (`float`, *optional*, defaults to 0.0):
- Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
- [`schedulers.DDIMScheduler`], will be ignored for others.
- seed (`int`, *optional*):
- A random seed.
- output_type (`str`, *optional*, defaults to `"pil"`):
-                The output format of the generated image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
- plain tuple.
- callback (`Callable`, *optional*):
- A function that will be called every `callback_steps` steps during inference. The function will be
- called with the following arguments: `callback(step: int, timestep: int, latents: paddle.Tensor)`.
- callback_steps (`int`, *optional*, defaults to 1):
- The frequency at which the `callback` function will be called. If not specified, the callback will be
- called at every step.
-
- Returns:
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
-            [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
- When returning a tuple, the first element is a list with the generated images, and the second element is a
- list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
- (nsfw) content, according to the `safety_checker`.
- """
- seed = random.randint(0, 2**32) if seed is None else seed
- image_str = image
- mask_image_str = mask_image
-
- if isinstance(image_str, str):
- image = load_image(image_str)
- if isinstance(mask_image_str, str):
- mask_image = load_image(mask_image_str)
-
- if height is None and width is None:
- width = (image.size[0] // 8) * 8
- height = (image.size[1] // 8) * 8
- elif height is None and width is not None:
- height = (image.size[1] // 8) * 8
- elif width is None and height is not None:
- width = (image.size[0] // 8) * 8
- else:
- height = height
- width = width
-
- argument = dict(
- prompt=prompt,
- image=image_str,
- mask_image=mask_image_str,
- negative_prompt=negative_prompt,
- height=height,
- width=width,
- strength=strength,
- num_inference_steps=num_inference_steps,
- guidance_scale=guidance_scale,
- num_images_per_prompt=num_images_per_prompt,
- eta=eta,
- seed=seed,
- max_embeddings_multiples=max_embeddings_multiples,
- no_boseos_middle=no_boseos_middle,
- skip_parsing=skip_parsing,
- skip_weighting=skip_weighting,
- epoch_time=time.time(),
- )
- paddle.seed(seed)
-
- # 1. Check inputs
- self.check_inputs_img2img_inpaint(prompt, strength, callback_steps)
-
- # 2. Define call parameters
- batch_size = 1 if isinstance(prompt, str) else len(prompt)
- # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
- # corresponds to doing no classifier free guidance.
- do_classifier_free_guidance = guidance_scale > 1.0
-
- # 3. Encode input prompt
- text_embeddings = self._encode_prompt(
- prompt,
- negative_prompt,
- max_embeddings_multiples,
- no_boseos_middle,
- skip_parsing,
- skip_weighting,
- do_classifier_free_guidance,
- num_images_per_prompt,
- )
-
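- # 4. Preprocess the init image and mask to match the latent resolution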
- if not isinstance(image, paddle.Tensor):
- image = image.resize((width, height))
- image = preprocess_image(image)
-
- if not isinstance(mask_image, paddle.Tensor):
- mask_image = mask_image.resize((width, height))
- mask_image = preprocess_mask(mask_image)
-
- # 5. set timesteps
- self.scheduler.set_timesteps(num_inference_steps)
- timesteps = self.get_timesteps(num_inference_steps, strength)
- latent_timestep = timesteps[:1].tile([batch_size * num_images_per_prompt])
-
- # 6. Prepare latent variables
- # encode the init image into latents and scale the latents
- latents, init_latents_orig, noise = self.prepare_latents_inpaint(
- image, latent_timestep, num_images_per_prompt, text_embeddings.dtype
- )
-
- # 7. Prepare mask latent
- mask = mask_image.cast(latents.dtype)
- mask = paddle.concat([mask] * batch_size * num_images_per_prompt)
-
- # 8. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
- extra_step_kwargs = self.prepare_extra_step_kwargs(eta)
-
- # 9. Denoising loop
- for i, t in enumerate(self.progress_bar(timesteps)):
- # expand the latents if we are doing classifier free guidance
- latent_model_input = paddle.concat([latents] * 2) if do_classifier_free_guidance else latents
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
-
- # predict the noise residual
- noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
-
- # perform guidance
- if do_classifier_free_guidance:
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
-
- # compute the previous noisy sample x_t -> x_t-1
- latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
- # masking
- init_latents_proper = self.scheduler.add_noise(init_latents_orig, noise, t)
-
- latents = (init_latents_proper * mask) + (latents * (1 - mask))
-
- # call the callback, if provided
- if callback is not None and i % callback_steps == 0:
- callback(i, t, latents)
-
- # 10. Post-processing
- image = self.decode_latents(latents)
-
- # 11. Run safety checker
- image, has_nsfw_concept = self.run_safety_checker(image, text_embeddings.dtype)
-
- # 12. Convert to PIL
- if output_type == "pil":
- image = self.numpy_to_pil(image, argument=argument)
-
- if not return_dict:
- return (image, has_nsfw_concept)
-
- return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
-
- @staticmethod
- def numpy_to_pil(images, **kwargs):
- """
- Convert a numpy image or a batch of images to a PIL image.
- """
- if images.ndim == 3:
- images = images[None, ...]
- images = (images * 255).round().astype("uint8")
- pil_images = []
- argument = kwargs.pop("argument", None)
- for image in images:
- image = PIL.Image.fromarray(image)
- if argument is not None:
- image.argument = argument
- pil_images.append(image)
-
- return pil_images
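The `__call__` body above interleaves two ideas that are easy to miss in the framework plumbing: classifier-free guidance blends an unconditional and a text-conditioned noise prediction, and after every scheduler step the preserved region is overwritten with a freshly re-noised copy of the original latents, so only the rest of the image is actually synthesized. The sketch below restates just those two updates in plain NumPy; `predict_noise`, `scheduler_step` and `add_noise` are stand-ins for the UNet call, the scheduler update and the forward-noising call, not parts of the pipeline above.

```python
import numpy as np

def guided_masked_step(latents, init_latents, mask, t,
                       predict_noise, scheduler_step, add_noise,
                       guidance_scale=7.5):
    """One denoising step with classifier-free guidance and mask blending.

    `predict_noise(x, t, conditional)`, `scheduler_step(noise, t, x)` and
    `add_noise(x0, t)` are placeholders for the UNet, the scheduler update
    and the forward-noising call used in the real pipeline.
    """
    # Classifier-free guidance: extrapolate from the unconditional prediction
    # towards the text-conditioned one.
    noise_uncond = predict_noise(latents, t, conditional=False)
    noise_text = predict_noise(latents, t, conditional=True)
    noise_pred = noise_uncond + guidance_scale * (noise_text - noise_uncond)

    # Scheduler update x_t -> x_{t-1} over the whole latent.
    latents = scheduler_step(noise_pred, t, latents)

    # Re-noise the original latents to the current timestep and keep them
    # wherever mask == 1; the freshly generated latents fill the rest.
    init_proper = add_noise(init_latents, t)
    return init_proper * mask + latents * (1 - mask)

# Tiny smoke test with dummy callables standing in for the real components.
rng = np.random.default_rng(0)
lat = rng.standard_normal((4, 8, 8))
init = rng.standard_normal((4, 8, 8))
mask = np.zeros((1, 8, 8))
mask[:, :, :4] = 1.0   # left half is preserved, right half is generated
out = guided_masked_step(
    lat, init, mask, t=10,
    predict_noise=lambda x, t, conditional: 0.1 * x,
    scheduler_step=lambda noise, t, x: x - noise,
    add_noise=lambda x0, t: x0,
)
print(out.shape)  # (4, 8, 8)
```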
diff --git a/spaces/7hao/bingo/src/lib/isomorphic/browser.ts b/spaces/7hao/bingo/src/lib/isomorphic/browser.ts
deleted file mode 100644
index de125b1f1786d1618cb1ff47f403d76c6784f4ce..0000000000000000000000000000000000000000
--- a/spaces/7hao/bingo/src/lib/isomorphic/browser.ts
+++ /dev/null
@@ -1,11 +0,0 @@
-'use client'
-
-const debug = console.info.bind(console)
-
-class WebSocketAlias extends WebSocket {
- constructor(address: string | URL, ...args: any) {
- super(address)
- }
-}
-
-export default { fetch, WebSocket: WebSocketAlias, debug }
diff --git a/spaces/A00001/bingothoo/src/components/tailwind-indicator.tsx b/spaces/A00001/bingothoo/src/components/tailwind-indicator.tsx
deleted file mode 100644
index f2a1291213dd67055fcebe67fab574c8441338df..0000000000000000000000000000000000000000
--- a/spaces/A00001/bingothoo/src/components/tailwind-indicator.tsx
+++ /dev/null
@@ -1,14 +0,0 @@
-export function TailwindIndicator() {
- if (process.env.NODE_ENV === 'production') return null
-
- return (
- <div className="fixed bottom-1 left-1 z-50 flex h-6 w-6 items-center justify-center rounded-full bg-gray-800 p-3 font-mono text-xs text-white">
- <div className="block sm:hidden">xs</div>
- <div className="hidden sm:block md:hidden">sm</div>
- <div className="hidden md:block lg:hidden">md</div>
- <div className="hidden lg:block xl:hidden">lg</div>
- <div className="hidden xl:block 2xl:hidden">xl</div>
- <div className="hidden 2xl:block">2xl</div>
- </div>
- )
-}
diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/infer_pack/models_onnx.py b/spaces/AI-Hobbyist/Hoyo-RVC/infer_pack/models_onnx.py
deleted file mode 100644
index b0ed4a7847b419beef014f9afa1048400a829ebe..0000000000000000000000000000000000000000
--- a/spaces/AI-Hobbyist/Hoyo-RVC/infer_pack/models_onnx.py
+++ /dev/null
@@ -1,819 +0,0 @@
-import math, pdb, os
-from time import time as ttime
-import torch
-from torch import nn
-from torch.nn import functional as F
-from infer_pack import modules
-from infer_pack import attentions
-from infer_pack import commons
-from infer_pack.commons import init_weights, get_padding
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from infer_pack.commons import init_weights
-import numpy as np
-from infer_pack import commons
-
-
-class TextEncoder256(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
- if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class TextEncoder768(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(768, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
- if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0,
- ):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- mean_only=True,
- )
- )
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
- def remove_weight_norm(self):
- for i in range(self.n_flows):
- self.flows[i * 2].remove_weight_norm()
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class Generator(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=0,
- ):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class SineGen(torch.nn.Module):
- """Definition of sine generator
- SineGen(samp_rate, harmonic_num = 0,
- sine_amp = 0.1, noise_std = 0.003,
- voiced_threshold = 0,
- flag_for_pulse=False)
- samp_rate: sampling rate in Hz
- harmonic_num: number of harmonic overtones (default 0)
- sine_amp: amplitude of the sine waveform (default 0.1)
- noise_std: std of Gaussian noise (default 0.003)
- voiced_threshold: F0 threshold for U/V classification (default 0)
- flag_for_pulse: whether this SineGen is used inside PulseGen (default False)
- Note: when flag_for_pulse is True, the first time step of a voiced
- segment is always sin(np.pi) or cos(0)
- """
-
- def __init__(
- self,
- samp_rate,
- harmonic_num=0,
- sine_amp=0.1,
- noise_std=0.003,
- voiced_threshold=0,
- flag_for_pulse=False,
- ):
- super(SineGen, self).__init__()
- self.sine_amp = sine_amp
- self.noise_std = noise_std
- self.harmonic_num = harmonic_num
- self.dim = self.harmonic_num + 1
- self.sampling_rate = samp_rate
- self.voiced_threshold = voiced_threshold
-
- def _f02uv(self, f0):
- # generate uv signal
- uv = torch.ones_like(f0)
- uv = uv * (f0 > self.voiced_threshold)
- return uv
-
- def forward(self, f0, upp):
- """sine_tensor, uv = forward(f0)
- input F0: tensor(batchsize=1, length, dim=1)
- f0 for unvoiced steps should be 0
- output sine_tensor: tensor(batchsize=1, length, dim)
- output uv: tensor(batchsize=1, length, 1)
- """
- with torch.no_grad():
- f0 = f0[:, None].transpose(1, 2)
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
- # fundamental component
- f0_buf[:, :, 0] = f0[:, :, 0]
- for idx in np.arange(self.harmonic_num):
- f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
- idx + 2
- ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
- rad_values = (f0_buf / self.sampling_rate) % 1 # taking % 1 here means the harmonic (n_har) products can no longer be optimized in post-processing
- rand_ini = torch.rand(
- f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
- )
- rand_ini[:, 0] = 0
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
- tmp_over_one = torch.cumsum(rad_values, 1) # % 1  # applying % 1 here would mean the following cumsum could no longer be optimized
- tmp_over_one *= upp
- tmp_over_one = F.interpolate(
- tmp_over_one.transpose(2, 1),
- scale_factor=upp,
- mode="linear",
- align_corners=True,
- ).transpose(2, 1)
- rad_values = F.interpolate(
- rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(
- 2, 1
- ) #######
- tmp_over_one %= 1
- tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
- cumsum_shift = torch.zeros_like(rad_values)
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
- sine_waves = torch.sin(
- torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
- )
- sine_waves = sine_waves * self.sine_amp
- uv = self._f02uv(f0)
- uv = F.interpolate(
- uv.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(2, 1)
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
- noise = noise_amp * torch.randn_like(sine_waves)
- sine_waves = sine_waves * uv + noise
- return sine_waves, uv, noise
-
-
-class SourceModuleHnNSF(torch.nn.Module):
- """SourceModule for hn-nsf
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
- add_noise_std=0.003, voiced_threshod=0)
- sampling_rate: sampling_rate in Hz
- harmonic_num: number of harmonic above F0 (default: 0)
- sine_amp: amplitude of sine source signal (default: 0.1)
- add_noise_std: std of additive Gaussian noise (default: 0.003)
- note that amplitude of noise in unvoiced is decided
- by sine_amp
- voiced_threshold: threshold to set U/V given F0 (default: 0)
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
- F0_sampled (batchsize, length, 1)
- Sine_source (batchsize, length, 1)
- noise_source (batchsize, length 1)
- uv (batchsize, length, 1)
- """
-
- def __init__(
- self,
- sampling_rate,
- harmonic_num=0,
- sine_amp=0.1,
- add_noise_std=0.003,
- voiced_threshod=0,
- is_half=True,
- ):
- super(SourceModuleHnNSF, self).__init__()
-
- self.sine_amp = sine_amp
- self.noise_std = add_noise_std
- self.is_half = is_half
- # to produce sine waveforms
- self.l_sin_gen = SineGen(
- sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
- )
-
- # to merge source harmonics into a single excitation
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
- self.l_tanh = torch.nn.Tanh()
-
- def forward(self, x, upp=None):
- sine_wavs, uv, _ = self.l_sin_gen(x, upp)
- if self.is_half:
- sine_wavs = sine_wavs.half()
- sine_merge = self.l_tanh(self.l_linear(sine_wavs))
- return sine_merge, None, None # noise, uv
-
-
-class GeneratorNSF(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels,
- sr,
- is_half=False,
- ):
- super(GeneratorNSF, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
-
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
- self.m_source = SourceModuleHnNSF(
- sampling_rate=sr, harmonic_num=0, is_half=is_half
- )
- self.noise_convs = nn.ModuleList()
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- c_cur = upsample_initial_channel // (2 ** (i + 1))
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
- if i + 1 < len(upsample_rates):
- stride_f0 = np.prod(upsample_rates[i + 1 :])
- self.noise_convs.append(
- Conv1d(
- 1,
- c_cur,
- kernel_size=stride_f0 * 2,
- stride=stride_f0,
- padding=stride_f0 // 2,
- )
- )
- else:
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- self.upp = np.prod(upsample_rates)
-
- def forward(self, x, f0, g=None):
- har_source, noi_source, uv = self.m_source(f0, self.upp)
- har_source = har_source.transpose(1, 2)
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- x_source = self.noise_convs[i](har_source)
- x = x + x_source
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-sr2sr = {
- "32k": 32000,
- "40k": 40000,
- "48k": 48000,
-}
-
-
-class SynthesizerTrnMsNSFsidM(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- version,
- **kwargs
- ):
- super().__init__()
- if isinstance(sr, str):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- if version == "v1":
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- else:
- self.enc_p = TextEncoder768(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- self.speaker_map = None
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def construct_spkmixmap(self, n_speaker):
- self.speaker_map = torch.zeros((n_speaker, 1, 1, self.gin_channels))
- for i in range(n_speaker):
- self.speaker_map[i] = self.emb_g(torch.LongTensor([[i]]))
- self.speaker_map = self.speaker_map.unsqueeze(0)
-
- def forward(self, phone, phone_lengths, pitch, nsff0, g, rnd, max_len=None):
- if self.speaker_map is not None: # [N, S] * [S, B, 1, H]
- g = g.reshape((g.shape[0], g.shape[1], 1, 1, 1)) # [N, S, B, 1, 1]
- g = g * self.speaker_map # [N, S, B, 1, H]
- g = torch.sum(g, dim=1) # [N, 1, B, 1, H]
- g = g.transpose(0, -1).transpose(0, -2).squeeze(0) # [B, H, N]
- else:
- g = g.unsqueeze(0)
- g = self.emb_g(g).transpose(1, 2)
-
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
- return o
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11, 17]
- # periods = [3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class MultiPeriodDiscriminatorV2(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminatorV2, self).__init__()
- # periods = [2, 3, 5, 7, 11, 17]
- periods = [2, 3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ]
- )
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(
- Conv2d(
- 1,
- 32,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 32,
- 128,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 128,
- 512,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 512,
- 1024,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 1024,
- 1024,
- (kernel_size, 1),
- 1,
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- ]
- )
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
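`SineGen` and `SourceModuleHnNSF` above build the NSF excitation: a bank of harmonic sinusoids whose phase is the running integral of the (upsampled) F0, with Gaussian noise standing in for unvoiced regions. A stripped-down NumPy sketch of that idea, leaving out the upsampling, random initial phases and half-precision handling of the real module:

```python
import numpy as np

def harmonic_source(f0, sample_rate=16000, harmonic_num=0,
                    sine_amp=0.1, noise_std=0.003, voiced_threshold=0.0):
    """Build a (num_samples, harmonic_num + 1) harmonic excitation from a
    per-sample F0 contour. Unvoiced samples (f0 <= threshold) get noise only."""
    f0 = np.asarray(f0, dtype=np.float64)              # shape: (num_samples,)
    uv = (f0 > voiced_threshold).astype(np.float64)    # 1 = voiced, 0 = unvoiced

    # Instantaneous frequency of each harmonic, normalised by the sample rate.
    k = np.arange(1, harmonic_num + 2)                 # 1, 2, ..., H+1
    rad = (f0[:, None] * k[None, :]) / sample_rate     # cycles per sample

    # Phase is the running integral of frequency; sin gives the waveform.
    phase = 2.0 * np.pi * np.cumsum(rad, axis=0)
    sines = sine_amp * np.sin(phase)

    # Small noise on voiced samples, larger noise replacing unvoiced ones.
    noise_amp = uv[:, None] * noise_std + (1.0 - uv[:, None]) * sine_amp / 3.0
    noise = noise_amp * np.random.randn(*sines.shape)
    return sines * uv[:, None] + noise

# Example: 0.5 s of a 220 Hz tone followed by 0.5 s of silence (unvoiced).
f0 = np.concatenate([np.full(8000, 220.0), np.zeros(8000)])
excitation = harmonic_source(f0, sample_rate=16000, harmonic_num=2)
print(excitation.shape)  # (16000, 3)
```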
diff --git a/spaces/ASJMO/freegpt/g4f/Provider/Providers/helpers/you.py b/spaces/ASJMO/freegpt/g4f/Provider/Providers/helpers/you.py
deleted file mode 100644
index 02985ed14d4848c2de20a99b4771d208286a2558..0000000000000000000000000000000000000000
--- a/spaces/ASJMO/freegpt/g4f/Provider/Providers/helpers/you.py
+++ /dev/null
@@ -1,79 +0,0 @@
-import sys
-import json
-import urllib.parse
-
-from curl_cffi import requests
-
-config = json.loads(sys.argv[1])
-messages = config['messages']
-prompt = ''
-
-
-def transform(messages: list) -> list:
- result = []
- i = 0
-
- while i < len(messages):
- if messages[i]['role'] == 'user':
- question = messages[i]['content']
- i += 1
-
- if i < len(messages) and messages[i]['role'] == 'assistant':
- answer = messages[i]['content']
- i += 1
- else:
- answer = ''
-
- result.append({'question': question, 'answer': answer})
-
- elif messages[i]['role'] == 'assistant':
- result.append({'question': '', 'answer': messages[i]['content']})
- i += 1
-
- elif messages[i]['role'] == 'system':
- result.append({'question': messages[i]['content'], 'answer': ''})
- i += 1
-
- return result
-
-headers = {
- 'Content-Type': 'application/x-www-form-urlencoded',
- 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
- 'Sec-Fetch-Site': 'same-origin',
- 'Accept-Language': 'en-GB,en;q=0.9',
- 'Sec-Fetch-Mode': 'navigate',
- 'Host': 'you.com',
- 'Origin': 'https://you.com',
- 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.4 Safari/605.1.15',
- 'Referer': 'https://you.com/api/streamingSearch?q=nice&safeSearch=Moderate&onShoppingPage=false&mkt=&responseFilter=WebPages,Translations,TimeZone,Computation,RelatedSearches&domain=youchat&queryTraceId=7a6671f8-5881-404d-8ea3-c3f8301f85ba&chat=%5B%7B%22question%22%3A%22hi%22%2C%22answer%22%3A%22Hello!%20How%20can%20I%20assist%20you%20today%3F%22%7D%5D&chatId=7a6671f8-5881-404d-8ea3-c3f8301f85ba&__cf_chl_tk=ex2bw6vn5vbLsUm8J5rDYUC0Bjzc1XZqka6vUl6765A-1684108495-0-gaNycGzNDtA',
- 'Connection': 'keep-alive',
- 'Sec-Fetch-Dest': 'document',
- 'Priority': 'u=0, i',
-}
-
-if messages[-1]['role'] == 'user':
- prompt = messages[-1]['content']
- messages = messages[:-1]
-
-params = urllib.parse.urlencode({
- 'q': prompt,
- 'domain': 'youchat',
- 'chat': transform(messages)
-})
-
-def output(chunk):
- if b'"youChatToken"' in chunk:
- chunk_json = json.loads(chunk.decode().split('data: ')[1])
-
- print(chunk_json['youChatToken'], flush=True, end = '')
-
-while True:
- try:
- response = requests.get(f'https://you.com/api/streamingSearch?{params}',
- headers=headers, content_callback=output, impersonate='safari15_5')
-
- exit(0)
-
- except Exception as e:
- print('an error occurred, retrying... |', e, flush=True)
- continue
\ No newline at end of file
diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet101_8xb32_in1k.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet101_8xb32_in1k.py
deleted file mode 100644
index 388d2cd918ab75ec46346faa0448ef9cf2893fc8..0000000000000000000000000000000000000000
--- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet101_8xb32_in1k.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = [
- '../_base_/models/resnet101.py', '../_base_/datasets/imagenet_bs32.py',
- '../_base_/schedules/imagenet_bs256.py', '../_base_/default_runtime.py'
-]
diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_8xb32-mixup_in1k.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_8xb32-mixup_in1k.py
deleted file mode 100644
index 2a153d0e18f521f72b8beaf4cbea36d41f5b3300..0000000000000000000000000000000000000000
--- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_8xb32-mixup_in1k.py
+++ /dev/null
@@ -1,5 +0,0 @@
-_base_ = [
- '../_base_/models/resnet50_mixup.py',
- '../_base_/datasets/imagenet_bs32.py',
- '../_base_/schedules/imagenet_bs256.py', '../_base_/default_runtime.py'
-]
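Both configs above are pure composition: `_base_` pulls in a model, dataset, schedule and runtime file, and any key declared locally is merged on top of the inherited dicts. As a hedged illustration of how an override would look (the base paths and the `optimizer`/`data` keys follow the usual mm-style layout and are not taken from this repo's base files):

```python
_base_ = [
    '../_base_/models/resnet50.py', '../_base_/datasets/imagenet_bs32.py',
    '../_base_/schedules/imagenet_bs256.py', '../_base_/default_runtime.py'
]

# Locally declared keys are merged on top of the inherited config:
# only the fields named here change, everything else keeps its base value.
optimizer = dict(lr=0.05)         # assumed base key: lower the learning rate
data = dict(samples_per_gpu=64)   # assumed base key: larger per-GPU batch size
```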
diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/server/abortedGenerations.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/server/abortedGenerations.ts
deleted file mode 100644
index 575cf637bfef812c40905e35570ba3ca1a31b241..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/server/abortedGenerations.ts
+++ /dev/null
@@ -1,29 +0,0 @@
-// Shouldn't be needed if we dove into sveltekit internals, see https://github.com/huggingface/chat-ui/pull/88#issuecomment-1523173850
-
-import { setTimeout } from "node:timers/promises";
-import { collections } from "./database";
-
-let closed = false;
-process.on("SIGINT", () => {
- closed = true;
-});
-
-export let abortedGenerations: Map<string, Date> = new Map();
-
-async function maintainAbortedGenerations() {
- while (!closed) {
- await setTimeout(1000);
-
- try {
- const aborts = await collections.abortedGenerations.find({}).sort({ createdAt: 1 }).toArray();
-
- abortedGenerations = new Map(
- aborts.map(({ conversationId, createdAt }) => [conversationId.toString(), createdAt])
- );
- } catch (err) {
- console.error(err);
- }
- }
-}
-
-maintainAbortedGenerations();
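The module above keeps an in-memory map of aborted generations fresh by polling the database once per second until the process shuts down. The same polling pattern, sketched in Python with a stand-in fetch function (the tuple shape returned by `fake_fetch` is an assumption, not this project's schema):

```python
import asyncio

async def maintain_aborted(fetch_aborted, stop, interval=0.1):
    """Poll `fetch_aborted()` until `stop` is set, rebuilding the map each pass."""
    aborted = {}
    while not stop.is_set():
        await asyncio.sleep(interval)
        try:
            aborted = dict(await fetch_aborted())
        except Exception as err:          # keep polling even if one pass fails
            print("poll failed:", err)
    return aborted

async def main():
    async def fake_fetch():               # stand-in for the database query
        return [("conv-1", 1700000000.0)]

    stop = asyncio.Event()
    task = asyncio.create_task(maintain_aborted(fake_fetch, stop))
    await asyncio.sleep(0.35)             # let it poll a few times
    stop.set()
    print(await task)                     # {'conv-1': 1700000000.0}

asyncio.run(main())
```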
diff --git a/spaces/Adapter/CoAdapter/ldm/modules/diffusionmodules/util.py b/spaces/Adapter/CoAdapter/ldm/modules/diffusionmodules/util.py
deleted file mode 100644
index 637363dfe34799e70cfdbcd11445212df9d9ca1f..0000000000000000000000000000000000000000
--- a/spaces/Adapter/CoAdapter/ldm/modules/diffusionmodules/util.py
+++ /dev/null
@@ -1,270 +0,0 @@
-# adopted from
-# https://github.com/openai/improved-diffusion/blob/main/improved_diffusion/gaussian_diffusion.py
-# and
-# https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py
-# and
-# https://github.com/openai/guided-diffusion/blob/0ba878e517b276c45d1195eb29f6f5f72659a05b/guided_diffusion/nn.py
-#
-# thanks!
-
-
-import os
-import math
-import torch
-import torch.nn as nn
-import numpy as np
-from einops import repeat
-
-from ldm.util import instantiate_from_config
-
-
-def make_beta_schedule(schedule, n_timestep, linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3):
- if schedule == "linear":
- betas = (
- torch.linspace(linear_start ** 0.5, linear_end ** 0.5, n_timestep, dtype=torch.float64) ** 2
- )
-
- elif schedule == "cosine":
- timesteps = (
- torch.arange(n_timestep + 1, dtype=torch.float64) / n_timestep + cosine_s
- )
- alphas = timesteps / (1 + cosine_s) * np.pi / 2
- alphas = torch.cos(alphas).pow(2)
- alphas = alphas / alphas[0]
- betas = 1 - alphas[1:] / alphas[:-1]
- betas = np.clip(betas, a_min=0, a_max=0.999)
-
- elif schedule == "sqrt_linear":
- betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64)
- elif schedule == "sqrt":
- betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64) ** 0.5
- else:
- raise ValueError(f"schedule '{schedule}' unknown.")
- return betas.numpy()
-
-
-def make_ddim_timesteps(ddim_discr_method, num_ddim_timesteps, num_ddpm_timesteps, verbose=True):
- if ddim_discr_method == 'uniform':
- c = num_ddpm_timesteps // num_ddim_timesteps
- ddim_timesteps = np.asarray(list(range(0, num_ddpm_timesteps, c)))
- elif ddim_discr_method == 'quad':
- ddim_timesteps = ((np.linspace(0, np.sqrt(num_ddpm_timesteps * .8), num_ddim_timesteps)) ** 2).astype(int)
- else:
- raise NotImplementedError(f'There is no ddim discretization method called "{ddim_discr_method}"')
-
- # assert ddim_timesteps.shape[0] == num_ddim_timesteps
- # add one to get the final alpha values right (the ones from first scale to data during sampling)
- steps_out = ddim_timesteps + 1
- if verbose:
- print(f'Selected timesteps for ddim sampler: {steps_out}')
- return steps_out
-
-
-def make_ddim_sampling_parameters(alphacums, ddim_timesteps, eta, verbose=True):
- # select alphas for computing the variance schedule
- alphas = alphacums[ddim_timesteps]
- alphas_prev = np.asarray([alphacums[0]] + alphacums[ddim_timesteps[:-1]].tolist())
-
- # according to the formula provided in https://arxiv.org/abs/2010.02502
- sigmas = eta * np.sqrt((1 - alphas_prev) / (1 - alphas) * (1 - alphas / alphas_prev))
- if verbose:
- print(f'Selected alphas for ddim sampler: a_t: {alphas}; a_(t-1): {alphas_prev}')
- print(f'For the chosen value of eta, which is {eta}, '
- f'this results in the following sigma_t schedule for ddim sampler {sigmas}')
- return sigmas, alphas, alphas_prev
-
-
-def betas_for_alpha_bar(num_diffusion_timesteps, alpha_bar, max_beta=0.999):
- """
- Create a beta schedule that discretizes the given alpha_t_bar function,
- which defines the cumulative product of (1-beta) over time from t = [0,1].
- :param num_diffusion_timesteps: the number of betas to produce.
- :param alpha_bar: a lambda that takes an argument t from 0 to 1 and
- produces the cumulative product of (1-beta) up to that
- part of the diffusion process.
- :param max_beta: the maximum beta to use; use values lower than 1 to
- prevent singularities.
- """
- betas = []
- for i in range(num_diffusion_timesteps):
- t1 = i / num_diffusion_timesteps
- t2 = (i + 1) / num_diffusion_timesteps
- betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta))
- return np.array(betas)
-
-
-def extract_into_tensor(a, t, x_shape):
- b, *_ = t.shape
- out = a.gather(-1, t)
- return out.reshape(b, *((1,) * (len(x_shape) - 1)))
-
-
-def checkpoint(func, inputs, params, flag):
- """
- Evaluate a function without caching intermediate activations, allowing for
- reduced memory at the expense of extra compute in the backward pass.
- :param func: the function to evaluate.
- :param inputs: the argument sequence to pass to `func`.
- :param params: a sequence of parameters `func` depends on but does not
- explicitly take as arguments.
- :param flag: if False, disable gradient checkpointing.
- """
- if flag:
- args = tuple(inputs) + tuple(params)
- return CheckpointFunction.apply(func, len(inputs), *args)
- else:
- return func(*inputs)
-
-
-class CheckpointFunction(torch.autograd.Function):
- @staticmethod
- def forward(ctx, run_function, length, *args):
- ctx.run_function = run_function
- ctx.input_tensors = list(args[:length])
- ctx.input_params = list(args[length:])
- ctx.gpu_autocast_kwargs = {"enabled": torch.is_autocast_enabled(),
- "dtype": torch.get_autocast_gpu_dtype(),
- "cache_enabled": torch.is_autocast_cache_enabled()}
- with torch.no_grad():
- output_tensors = ctx.run_function(*ctx.input_tensors)
- return output_tensors
-
- @staticmethod
- def backward(ctx, *output_grads):
- ctx.input_tensors = [x.detach().requires_grad_(True) for x in ctx.input_tensors]
- with torch.enable_grad(), \
- torch.cuda.amp.autocast(**ctx.gpu_autocast_kwargs):
- # Fixes a bug where the first op in run_function modifies the
- # Tensor storage in place, which is not allowed for detach()'d
- # Tensors.
- shallow_copies = [x.view_as(x) for x in ctx.input_tensors]
- output_tensors = ctx.run_function(*shallow_copies)
- input_grads = torch.autograd.grad(
- output_tensors,
- ctx.input_tensors + ctx.input_params,
- output_grads,
- allow_unused=True,
- )
- del ctx.input_tensors
- del ctx.input_params
- del output_tensors
- return (None, None) + input_grads
-
-
-def timestep_embedding(timesteps, dim, max_period=10000, repeat_only=False):
- """
- Create sinusoidal timestep embeddings.
- :param timesteps: a 1-D Tensor of N indices, one per batch element.
- These may be fractional.
- :param dim: the dimension of the output.
- :param max_period: controls the minimum frequency of the embeddings.
- :return: an [N x dim] Tensor of positional embeddings.
- """
- if not repeat_only:
- half = dim // 2
- freqs = torch.exp(
- -math.log(max_period) * torch.arange(start=0, end=half, dtype=torch.float32) / half
- ).to(device=timesteps.device)
- args = timesteps[:, None].float() * freqs[None]
- embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1)
- if dim % 2:
- embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :1])], dim=-1)
- else:
- embedding = repeat(timesteps, 'b -> b d', d=dim)
- return embedding
-
-
-def zero_module(module):
- """
- Zero out the parameters of a module and return it.
- """
- for p in module.parameters():
- p.detach().zero_()
- return module
-
-
-def scale_module(module, scale):
- """
- Scale the parameters of a module and return it.
- """
- for p in module.parameters():
- p.detach().mul_(scale)
- return module
-
-
-def mean_flat(tensor):
- """
- Take the mean over all non-batch dimensions.
- """
- return tensor.mean(dim=list(range(1, len(tensor.shape))))
-
-
-def normalization(channels):
- """
- Make a standard normalization layer.
- :param channels: number of input channels.
- :return: an nn.Module for normalization.
- """
- return GroupNorm32(32, channels)
-
-
-# PyTorch 1.7 has SiLU, but we support PyTorch 1.5.
-class SiLU(nn.Module):
- def forward(self, x):
- return x * torch.sigmoid(x)
-
-
-class GroupNorm32(nn.GroupNorm):
- def forward(self, x):
- return super().forward(x.float()).type(x.dtype)
-
-def conv_nd(dims, *args, **kwargs):
- """
- Create a 1D, 2D, or 3D convolution module.
- """
- if dims == 1:
- return nn.Conv1d(*args, **kwargs)
- elif dims == 2:
- return nn.Conv2d(*args, **kwargs)
- elif dims == 3:
- return nn.Conv3d(*args, **kwargs)
- raise ValueError(f"unsupported dimensions: {dims}")
-
-
-def linear(*args, **kwargs):
- """
- Create a linear module.
- """
- return nn.Linear(*args, **kwargs)
-
-
-def avg_pool_nd(dims, *args, **kwargs):
- """
- Create a 1D, 2D, or 3D average pooling module.
- """
- if dims == 1:
- return nn.AvgPool1d(*args, **kwargs)
- elif dims == 2:
- return nn.AvgPool2d(*args, **kwargs)
- elif dims == 3:
- return nn.AvgPool3d(*args, **kwargs)
- raise ValueError(f"unsupported dimensions: {dims}")
-
-
-class HybridConditioner(nn.Module):
-
- def __init__(self, c_concat_config, c_crossattn_config):
- super().__init__()
- self.concat_conditioner = instantiate_from_config(c_concat_config)
- self.crossattn_conditioner = instantiate_from_config(c_crossattn_config)
-
- def forward(self, c_concat, c_crossattn):
- c_concat = self.concat_conditioner(c_concat)
- c_crossattn = self.crossattn_conditioner(c_crossattn)
- return {'c_concat': [c_concat], 'c_crossattn': [c_crossattn]}
-
-
-def noise_like(shape, device, repeat=False):
- repeat_noise = lambda: torch.randn((1, *shape[1:]), device=device).repeat(shape[0], *((1,) * (len(shape) - 1)))
- noise = lambda: torch.randn(shape, device=device)
- return repeat_noise() if repeat else noise()
\ No newline at end of file
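Several helpers above are easiest to digest numerically; `timestep_embedding`, for instance, maps each timestep to cosines and sines of `t * freq` over geometrically spaced frequencies from 1 down to roughly `1/max_period`. A NumPy restatement of that function for illustration (shapes follow the docstring above; it is not a drop-in replacement for the torch version):

```python
import math
import numpy as np

def sinusoidal_timestep_embedding(timesteps, dim, max_period=10000):
    """Map a 1-D array of N timesteps to an [N, dim] sinusoidal embedding."""
    half = dim // 2
    # Geometrically spaced frequencies from 1 down to ~1/max_period.
    freqs = np.exp(-math.log(max_period) * np.arange(half, dtype=np.float64) / half)
    args = np.asarray(timesteps, dtype=np.float64)[:, None] * freqs[None, :]
    emb = np.concatenate([np.cos(args), np.sin(args)], axis=-1)
    if dim % 2:  # pad one zero column for odd dims
        emb = np.concatenate([emb, np.zeros_like(emb[:, :1])], axis=-1)
    return emb

emb = sinusoidal_timestep_embedding(np.array([0, 10, 500]), dim=8)
print(emb.shape)   # (3, 8)
```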
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/clock.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/clock.d.ts
deleted file mode 100644
index f1575f4e1098fcc9691c8796db9e0bf3fd2ee408..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/clock.d.ts
+++ /dev/null
@@ -1,2 +0,0 @@
-import Clock from './time/clock/Clock';
-export default Clock;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/line.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/line.d.ts
deleted file mode 100644
index 57e54ba53a414dd1a2538b0abd46c028b81a5309..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/line.d.ts
+++ /dev/null
@@ -1,2 +0,0 @@
-import Line from './gameobjects/rendertexture/line/Line.js';
-export default Line;
\ No newline at end of file
diff --git a/spaces/AiMimicry/sovits-models/inference/infer_tool_grad.py b/spaces/AiMimicry/sovits-models/inference/infer_tool_grad.py
deleted file mode 100644
index b75af49c08e2e724839828bc419792ed580809bb..0000000000000000000000000000000000000000
--- a/spaces/AiMimicry/sovits-models/inference/infer_tool_grad.py
+++ /dev/null
@@ -1,160 +0,0 @@
-import hashlib
-import json
-import logging
-import os
-import time
-from pathlib import Path
-import io
-import librosa
-import maad
-import numpy as np
-from inference import slicer
-import parselmouth
-import soundfile
-import torch
-import torchaudio
-
-from hubert import hubert_model
-import utils
-from models import SynthesizerTrn
-logging.getLogger('numba').setLevel(logging.WARNING)
-logging.getLogger('matplotlib').setLevel(logging.WARNING)
-
-def resize2d_f0(x, target_len):
- source = np.array(x)
- source[source < 0.001] = np.nan
- target = np.interp(np.arange(0, len(source) * target_len, len(source)) / target_len, np.arange(0, len(source)),
- source)
- res = np.nan_to_num(target)
- return res
-
-def get_f0(x, p_len,f0_up_key=0):
-
- time_step = 160 / 16000 * 1000
- f0_min = 50
- f0_max = 1100
- f0_mel_min = 1127 * np.log(1 + f0_min / 700)
- f0_mel_max = 1127 * np.log(1 + f0_max / 700)
-
- f0 = parselmouth.Sound(x, 16000).to_pitch_ac(
- time_step=time_step / 1000, voicing_threshold=0.6,
- pitch_floor=f0_min, pitch_ceiling=f0_max).selected_array['frequency']
-
- pad_size=(p_len - len(f0) + 1) // 2
- if(pad_size>0 or p_len - len(f0) - pad_size>0):
- f0 = np.pad(f0,[[pad_size,p_len - len(f0) - pad_size]], mode='constant')
-
- f0 *= pow(2, f0_up_key / 12)
- f0_mel = 1127 * np.log(1 + f0 / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (f0_mel_max - f0_mel_min) + 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > 255] = 255
- f0_coarse = np.rint(f0_mel).astype(int)
- return f0_coarse, f0
-
-def clean_pitch(input_pitch):
- num_nan = np.sum(input_pitch == 1)
- if num_nan / len(input_pitch) > 0.9:
- input_pitch[input_pitch != 1] = 1
- return input_pitch
-
-
-def plt_pitch(input_pitch):
- input_pitch = input_pitch.astype(float)
- input_pitch[input_pitch == 1] = np.nan
- return input_pitch
-
-
-def f0_to_pitch(ff):
- f0_pitch = 69 + 12 * np.log2(ff / 440)
- return f0_pitch
-
-
-def fill_a_to_b(a, b):
- if len(a) < len(b):
- for _ in range(0, len(b) - len(a)):
- a.append(a[0])
-
-
-def mkdir(paths: list):
- for path in paths:
- if not os.path.exists(path):
- os.mkdir(path)
-
-
-class VitsSvc(object):
- def __init__(self):
- self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- self.SVCVITS = None
- self.hps = None
- self.speakers = None
- self.hubert_soft = utils.get_hubert_model()
-
- def set_device(self, device):
- self.device = torch.device(device)
- self.hubert_soft.to(self.device)
- if self.SVCVITS != None:
- self.SVCVITS.to(self.device)
-
- def loadCheckpoint(self, path):
- self.hps = utils.get_hparams_from_file(f"checkpoints/{path}/config.json")
- self.SVCVITS = SynthesizerTrn(
- self.hps.data.filter_length // 2 + 1,
- self.hps.train.segment_size // self.hps.data.hop_length,
- **self.hps.model)
- _ = utils.load_checkpoint(f"checkpoints/{path}/model.pth", self.SVCVITS, None)
- _ = self.SVCVITS.eval().to(self.device)
- self.speakers = self.hps.spk
-
- def get_units(self, source, sr):
- source = source.unsqueeze(0).to(self.device)
- with torch.inference_mode():
- units = self.hubert_soft.units(source)
- return units
-
-
- def get_unit_pitch(self, in_path, tran):
- source, sr = torchaudio.load(in_path)
- source = torchaudio.functional.resample(source, sr, 16000)
- if len(source.shape) == 2 and source.shape[1] >= 2:
- source = torch.mean(source, dim=0).unsqueeze(0)
- soft = self.get_units(source, sr).squeeze(0).cpu().numpy()
- f0_coarse, f0 = get_f0(source.cpu().numpy()[0], soft.shape[0]*2, tran)
- return soft, f0
-
- def infer(self, speaker_id, tran, raw_path):
- speaker_id = self.speakers[speaker_id]
- sid = torch.LongTensor([int(speaker_id)]).to(self.device).unsqueeze(0)
- soft, pitch = self.get_unit_pitch(raw_path, tran)
- f0 = torch.FloatTensor(clean_pitch(pitch)).unsqueeze(0).to(self.device)
- stn_tst = torch.FloatTensor(soft)
- with torch.no_grad():
- x_tst = stn_tst.unsqueeze(0).to(self.device)
- x_tst = torch.repeat_interleave(x_tst, repeats=2, dim=1).transpose(1, 2)
- audio = self.SVCVITS.infer(x_tst, f0=f0, g=sid)[0,0].data.float()
- return audio, audio.shape[-1]
-
- def inference(self,srcaudio,chara,tran,slice_db):
- sampling_rate, audio = srcaudio
- audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32)
- if len(audio.shape) > 1:
- audio = librosa.to_mono(audio.transpose(1, 0))
- if sampling_rate != 16000:
- audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)
- soundfile.write("tmpwav.wav", audio, 16000, format="wav")
- chunks = slicer.cut("tmpwav.wav", db_thresh=slice_db)
- audio_data, audio_sr = slicer.chunks2audio("tmpwav.wav", chunks)
- audio = []
- for (slice_tag, data) in audio_data:
- length = int(np.ceil(len(data) / audio_sr * self.hps.data.sampling_rate))
- raw_path = io.BytesIO()
- soundfile.write(raw_path, data, audio_sr, format="wav")
- raw_path.seek(0)
- if slice_tag:
- _audio = np.zeros(length)
- else:
- out_audio, out_sr = self.infer(chara, tran, raw_path)
- _audio = out_audio.cpu().numpy()
- audio.extend(list(_audio))
- audio = (np.array(audio) * 32768.0).astype('int16')
- return (self.hps.data.sampling_rate,audio)
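`get_f0` above does more than pitch extraction: after the semitone transpose it maps F0 in Hz onto a mel scale and quantises it into coarse bins 1–255, with bin 1 reserved for unvoiced frames. A self-contained NumPy sketch of just that quantisation step (constants mirror the ones in `get_f0`):

```python
import numpy as np

def coarse_pitch(f0_hz, f0_min=50.0, f0_max=1100.0, transpose_semitones=0):
    """Quantise an F0 contour (Hz, 0 = unvoiced) into coarse bins 1..255."""
    f0 = np.asarray(f0_hz, dtype=np.float64) * 2 ** (transpose_semitones / 12)
    f0_mel = 1127.0 * np.log(1.0 + f0 / 700.0)            # Hz -> mel
    mel_min = 1127.0 * np.log(1.0 + f0_min / 700.0)
    mel_max = 1127.0 * np.log(1.0 + f0_max / 700.0)

    # Rescale voiced frames into (1, 255]; unvoiced frames (mel == 0) stay at 1.
    f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - mel_min) * 254 / (mel_max - mel_min) + 1
    f0_mel = np.clip(f0_mel, 1, 255)
    return np.rint(f0_mel).astype(int)

print(coarse_pitch([0.0, 110.0, 220.0, 440.0]))   # bin 1 marks the unvoiced frame
```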
diff --git a/spaces/Alpaca233/ChatPDF-GUI/README.md b/spaces/Alpaca233/ChatPDF-GUI/README.md
deleted file mode 100644
index d794eb787d72b855e773d774cfe22d0f285e15c0..0000000000000000000000000000000000000000
--- a/spaces/Alpaca233/ChatPDF-GUI/README.md
+++ /dev/null
@@ -1,8 +0,0 @@
----
-sdk: gradio
-emoji: 🚀
-colorFrom: red
-colorTo: red
-pinned: false
-app_file: app.py
----
\ No newline at end of file
diff --git a/spaces/Amrrs/DragGan-Inversion/stylegan_human/training_scripts/sg3/training/dataset.py b/spaces/Amrrs/DragGan-Inversion/stylegan_human/training_scripts/sg3/training/dataset.py
deleted file mode 100644
index c4f10460a3c1d864d544fc7c9344cffd723312fe..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/DragGan-Inversion/stylegan_human/training_scripts/sg3/training/dataset.py
+++ /dev/null
@@ -1,274 +0,0 @@
-# Copyright (c) SenseTime Research. All rights reserved.
-
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Streaming images and labels from datasets created with dataset_tool.py."""
-
-import os
-import numpy as np
-import zipfile
-import PIL.Image
-import json
-import torch
-import dnnlib
-from petrel_client.client import Client
-import cv2
-
-
-try:
- import pyspng
-except ImportError:
- pyspng = None
-
-# ----------------------------------------------------------------------------
-
-
-class Dataset(torch.utils.data.Dataset):
- def __init__(self,
- name, # Name of the dataset.
- raw_shape, # Shape of the raw image data (NCHW).
- # Artificially limit the size of the dataset. None = no limit. Applied before xflip.
- max_size=None,
- # Enable conditioning labels? False = label dimension is zero.
- use_labels=False,
- # Artificially double the size of the dataset via x-flips. Applied after max_size.
- xflip=False,
- # Random seed to use when applying max_size.
- random_seed=0,
- square=False,
- ):
- print('Inside Dataset')
- self._name = name
- self._raw_shape = list(raw_shape)
- self._use_labels = use_labels
- self._raw_labels = None
- self._label_shape = None
- self._square = square
- print("inside dataset, _square: ", self._square)
-
- # Apply max_size.
- self._raw_idx = np.arange(self._raw_shape[0], dtype=np.int64)
- if (max_size is not None) and (self._raw_idx.size > max_size):
- np.random.RandomState(random_seed).shuffle(self._raw_idx)
- self._raw_idx = np.sort(self._raw_idx[:max_size])
-
- # Apply xflip.
- self._xflip = np.zeros(self._raw_idx.size, dtype=np.uint8)
- if xflip:
- self._raw_idx = np.tile(self._raw_idx, 2)
- self._xflip = np.concatenate(
- [self._xflip, np.ones_like(self._xflip)])
-
- def _get_raw_labels(self):
- if self._raw_labels is None:
- self._raw_labels = self._load_raw_labels() if self._use_labels else None
- if self._raw_labels is None:
- self._raw_labels = np.zeros(
- [self._raw_shape[0], 0], dtype=np.float32)
- assert isinstance(self._raw_labels, np.ndarray)
- assert self._raw_labels.shape[0] == self._raw_shape[0]
- assert self._raw_labels.dtype in [np.float32, np.int64]
- if self._raw_labels.dtype == np.int64:
- assert self._raw_labels.ndim == 1
- assert np.all(self._raw_labels >= 0)
- return self._raw_labels
-
- def close(self): # to be overridden by subclass
- pass
-
- def _load_raw_image(self, raw_idx): # to be overridden by subclass
- raise NotImplementedError
-
- def _load_raw_labels(self): # to be overridden by subclass
- raise NotImplementedError
-
- def __getstate__(self):
- return dict(self.__dict__, _raw_labels=None)
-
- def __del__(self):
- try:
- self.close()
- except:
- pass
-
- def __len__(self):
- return self._raw_idx.size
-
- def __getitem__(self, idx):
- image = self._load_raw_image(self._raw_idx[idx])
- assert isinstance(image, np.ndarray)
- assert list(image.shape) == self.image_shape
- assert image.dtype == np.uint8
- if self._xflip[idx]:
- assert image.ndim == 3 # CHW
- image = image[:, :, ::-1]
- return image.copy(), self.get_label(idx)
-
- def get_label(self, idx):
- label = self._get_raw_labels()[self._raw_idx[idx]]
- if label.dtype == np.int64:
- onehot = np.zeros(self.label_shape, dtype=np.float32)
- onehot[label] = 1
- label = onehot
- return label.copy()
-
- def get_details(self, idx):
- d = dnnlib.EasyDict()
- d.raw_idx = int(self._raw_idx[idx])
- d.xflip = (int(self._xflip[idx]) != 0)
- d.raw_label = self._get_raw_labels()[d.raw_idx].copy()
- return d
-
- @property
- def name(self):
- return self._name
-
- @property
- def image_shape(self):
- return list(self._raw_shape[1:])
-
- @property
- def num_channels(self):
- assert len(self.image_shape) == 3 # CHW
- return self.image_shape[0]
-
- @property
- def resolution(self):
- assert len(self.image_shape) == 3 # CHW
- if self._square:
- assert self.image_shape[1] == self.image_shape[2]
- else:
- assert self.image_shape[1] == self.image_shape[2] * 2
- return self.image_shape[1]
-
- @property
- def label_shape(self):
- if self._label_shape is None:
- raw_labels = self._get_raw_labels()
- if raw_labels.dtype == np.int64:
- self._label_shape = [int(np.max(raw_labels)) + 1]
- else:
- self._label_shape = raw_labels.shape[1:]
- return list(self._label_shape)
-
- @property
- def label_dim(self):
- assert len(self.label_shape) == 1
- return self.label_shape[0]
-
- @property
- def has_labels(self):
- return any(x != 0 for x in self.label_shape)
-
- @property
- def has_onehot_labels(self):
- return self._get_raw_labels().dtype == np.int64
-
-# ----------------------------------------------------------------------------
-
-
-class ImageFolderDataset(Dataset):
- def __init__(self,
- path, # Path to directory or zip.
- # Ensure specific resolution, None = highest available.
- resolution=None,
- ceph=False,
- square=False,
- # Additional arguments for the Dataset base class.
- **super_kwargs,
- ):
- self._path = path
- self._zipfile = None
- self._square = square
-
- if os.path.isdir(self._path):
- self._type = 'dir'
- self._all_fnames = {os.path.relpath(os.path.join(
- root, fname), start=self._path) for root, _dirs, files in os.walk(self._path) for fname in files}
- elif self._file_ext(self._path) == '.zip':
- self._type = 'zip'
- self._all_fnames = set(self._get_zipfile().namelist())
- else:
- raise IOError('Path must point to a directory or zip')
-
- PIL.Image.init()
- self._image_fnames = sorted(
- fname for fname in self._all_fnames if self._file_ext(fname) in PIL.Image.EXTENSION)
- if len(self._image_fnames) == 0:
- raise IOError('No image files found in the specified path')
-
- name = os.path.splitext(os.path.basename(self._path))[0]
- raw_shape = [len(self._image_fnames)] + \
- list(self._load_raw_image(0).shape)
- # if resolution is not None and (raw_shape[2] != resolution or raw_shape[3] != resolution):
- # raise IOError('Image files do not match the specified resolution')
- if resolution is not None:
- if self._square:
- raw_shape[2] = raw_shape[3] = resolution
- else:
- raw_shape[2] = resolution
- raw_shape[3] = resolution // 2
- # print(raw_shape)
- super().__init__(name=name, raw_shape=raw_shape, square=square, **super_kwargs)
-
- @staticmethod
- def _file_ext(fname):
- return os.path.splitext(fname)[1].lower()
-
- def _get_zipfile(self):
- assert self._type == 'zip'
- if self._zipfile is None:
- self._zipfile = zipfile.ZipFile(self._path)
- return self._zipfile
-
- def _open_file(self, fname):
- if self._type == 'dir':
- return open(os.path.join(self._path, fname), 'rb')
- if self._type == 'zip':
- return self._get_zipfile().open(fname, 'r')
- return None
-
- def close(self):
- try:
- if self._zipfile is not None:
- self._zipfile.close()
- finally:
- self._zipfile = None
-
- def __getstate__(self):
- return dict(super().__getstate__(), _zipfile=None)
-
- def _load_raw_image(self, raw_idx):
- fname = self._image_fnames[raw_idx]
- with self._open_file(fname) as f:
- if pyspng is not None and self._file_ext(fname) == '.png':
- image = pyspng.load(f.read())
- else:
- image = np.array(PIL.Image.open(f))
- if image.ndim == 2:
- image = image[:, :, np.newaxis] # HW => HWC
- image = image.transpose(2, 0, 1) # HWC => CHW
- return image
-
- def _load_raw_labels(self):
- fname = 'dataset.json'
- if fname not in self._all_fnames:
- return None
- with self._open_file(fname) as f:
- labels = json.load(f)['labels']
- if labels is None:
- return None
- labels = dict(labels)
- labels = [labels[fname.replace('\\', '/')]
- for fname in self._image_fnames]
- labels = np.array(labels)
- labels = labels.astype({1: np.int64, 2: np.float32}[labels.ndim])
- return labels
-
-# ----------------------------------------------------------------------------
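For reference, the loader above follows the common StyleGAN-style convention of returning images as uint8 CHW arrays and reading labels from a `dataset.json` manifest. A minimal standalone sketch of that convention, assuming only NumPy and Pillow (the optional `pyspng` fast path is omitted):

```python
import json
import numpy as np
import PIL.Image

def load_image_chw(path):
    """Load an image file as a uint8 CHW array; grayscale becomes 1xHxW."""
    image = np.array(PIL.Image.open(path))
    if image.ndim == 2:                  # HW -> HWC
        image = image[:, :, np.newaxis]
    return image.transpose(2, 0, 1)      # HWC -> CHW

def load_labels(json_path, image_fnames):
    """Read per-image labels from a dataset.json of the form {"labels": [[fname, label], ...]}."""
    with open(json_path) as f:
        labels = dict(json.load(f)['labels'])
    arr = np.array([labels[fname.replace('\\', '/')] for fname in image_fnames])
    # 1-D int64 arrays are class indices; 2-D float32 arrays are label vectors.
    return arr.astype({1: np.int64, 2: np.float32}[arr.ndim])
```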
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/ddim/__init__.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/ddim/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/paa/paa_r50_fpn_mstrain_3x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/paa/paa_r50_fpn_mstrain_3x_coco.py
deleted file mode 100644
index 91fa28cde470cb323f90f89a56d8acb6f9f0a22e..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/paa/paa_r50_fpn_mstrain_3x_coco.py
+++ /dev/null
@@ -1,20 +0,0 @@
-_base_ = './paa_r50_fpn_1x_coco.py'
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True),
- dict(
- type='Resize',
- img_scale=[(1333, 640), (1333, 800)],
- multiscale_mode='range',
- keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
-]
-data = dict(train=dict(pipeline=train_pipeline))
-lr_config = dict(step=[28, 34])
-runner = dict(type='EpochBasedRunner', max_epochs=36)
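Configs like this are normally consumed through mmcv's `Config` loader, which merges the `_base_` file automatically. A hedged sketch, assuming a full mmdetection checkout so the relative config paths resolve:

```python
from mmcv import Config

cfg = Config.fromfile('configs/paa/paa_r50_fpn_mstrain_3x_coco.py')

# The overrides defined in this file are visible on the merged config.
print(cfg.runner)                          # EpochBasedRunner, max_epochs=36
print(cfg.lr_config.step)                  # [28, 34]
print(cfg.data.train.pipeline[2]['type'])  # 'Resize' (multi-scale range mode)
```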
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/utils/misc.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/utils/misc.py
deleted file mode 100644
index 2c58d0d7fee9fe3d4519270ad8c1e998d0d8a18c..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/utils/misc.py
+++ /dev/null
@@ -1,377 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import collections.abc
-import functools
-import itertools
-import subprocess
-import warnings
-from collections import abc
-from importlib import import_module
-from inspect import getfullargspec
-from itertools import repeat
-
-
-# From PyTorch internals
-def _ntuple(n):
-
- def parse(x):
- if isinstance(x, collections.abc.Iterable):
- return x
- return tuple(repeat(x, n))
-
- return parse
-
-
-to_1tuple = _ntuple(1)
-to_2tuple = _ntuple(2)
-to_3tuple = _ntuple(3)
-to_4tuple = _ntuple(4)
-to_ntuple = _ntuple
-
-
-def is_str(x):
- """Whether the input is an string instance.
-
- Note: This method is deprecated since python 2 is no longer supported.
- """
- return isinstance(x, str)
-
-
-def import_modules_from_strings(imports, allow_failed_imports=False):
- """Import modules from the given list of strings.
-
- Args:
- imports (list | str | None): The given module names to be imported.
- allow_failed_imports (bool): If True, the failed imports will return
-            None. Otherwise, an ImportError is raised. Default: False.
-
- Returns:
- list[module] | module | None: The imported modules.
-
- Examples:
- >>> osp, sys = import_modules_from_strings(
- ... ['os.path', 'sys'])
- >>> import os.path as osp_
- >>> import sys as sys_
- >>> assert osp == osp_
- >>> assert sys == sys_
- """
- if not imports:
- return
- single_import = False
- if isinstance(imports, str):
- single_import = True
- imports = [imports]
- if not isinstance(imports, list):
- raise TypeError(
- f'custom_imports must be a list but got type {type(imports)}')
- imported = []
- for imp in imports:
- if not isinstance(imp, str):
- raise TypeError(
- f'{imp} is of type {type(imp)} and cannot be imported.')
- try:
- imported_tmp = import_module(imp)
- except ImportError:
- if allow_failed_imports:
- warnings.warn(f'{imp} failed to import and is ignored.',
- UserWarning)
- imported_tmp = None
- else:
- raise ImportError
- imported.append(imported_tmp)
- if single_import:
- imported = imported[0]
- return imported
-
-
-def iter_cast(inputs, dst_type, return_type=None):
- """Cast elements of an iterable object into some type.
-
- Args:
- inputs (Iterable): The input object.
- dst_type (type): Destination type.
- return_type (type, optional): If specified, the output object will be
- converted to this type, otherwise an iterator.
-
- Returns:
- iterator or specified type: The converted object.
- """
- if not isinstance(inputs, abc.Iterable):
- raise TypeError('inputs must be an iterable object')
- if not isinstance(dst_type, type):
- raise TypeError('"dst_type" must be a valid type')
-
- out_iterable = map(dst_type, inputs)
-
- if return_type is None:
- return out_iterable
- else:
- return return_type(out_iterable)
-
-
-def list_cast(inputs, dst_type):
- """Cast elements of an iterable object into a list of some type.
-
- A partial method of :func:`iter_cast`.
- """
- return iter_cast(inputs, dst_type, return_type=list)
-
-
-def tuple_cast(inputs, dst_type):
- """Cast elements of an iterable object into a tuple of some type.
-
- A partial method of :func:`iter_cast`.
- """
- return iter_cast(inputs, dst_type, return_type=tuple)
-
-
-def is_seq_of(seq, expected_type, seq_type=None):
- """Check whether it is a sequence of some type.
-
- Args:
- seq (Sequence): The sequence to be checked.
- expected_type (type): Expected type of sequence items.
- seq_type (type, optional): Expected sequence type.
-
- Returns:
- bool: Whether the sequence is valid.
- """
- if seq_type is None:
- exp_seq_type = abc.Sequence
- else:
- assert isinstance(seq_type, type)
- exp_seq_type = seq_type
- if not isinstance(seq, exp_seq_type):
- return False
- for item in seq:
- if not isinstance(item, expected_type):
- return False
- return True
-
-
-def is_list_of(seq, expected_type):
- """Check whether it is a list of some type.
-
- A partial method of :func:`is_seq_of`.
- """
- return is_seq_of(seq, expected_type, seq_type=list)
-
-
-def is_tuple_of(seq, expected_type):
- """Check whether it is a tuple of some type.
-
- A partial method of :func:`is_seq_of`.
- """
- return is_seq_of(seq, expected_type, seq_type=tuple)
-
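A quick hedged usage sketch for the three sequence checkers above (importable as `mmcv.utils` in upstream mmcv; the vendored path in this repo would be `annotator.uniformer.mmcv.utils`):

```python
from mmcv.utils import is_seq_of, is_list_of, is_tuple_of

assert is_seq_of([1, 2, 3], int)             # any Sequence is accepted by default
assert is_seq_of((1, 2, 3), int)
assert not is_seq_of([1, 'a'], int)          # mixed element types fail
assert is_list_of([[0], [1]], list)          # seq_type pinned to list
assert not is_tuple_of([1, 2], int)          # a list is not a tuple
```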
-
-def slice_list(in_list, lens):
- """Slice a list into several sub lists by a list of given length.
-
- Args:
- in_list (list): The list to be sliced.
-        lens (int or list): The expected length of each output list.
-
- Returns:
-        list: A list of sliced lists.
- """
- if isinstance(lens, int):
- assert len(in_list) % lens == 0
- lens = [lens] * int(len(in_list) / lens)
- if not isinstance(lens, list):
- raise TypeError('"indices" must be an integer or a list of integers')
- elif sum(lens) != len(in_list):
- raise ValueError('sum of lens and list length does not '
- f'match: {sum(lens)} != {len(in_list)}')
- out_list = []
- idx = 0
- for i in range(len(lens)):
- out_list.append(in_list[idx:idx + lens[i]])
- idx += lens[i]
- return out_list
-
-
-def concat_list(in_list):
- """Concatenate a list of list into a single list.
-
- Args:
-        in_list (list): The list of lists to be merged.
-
- Returns:
- list: The concatenated flat list.
- """
- return list(itertools.chain(*in_list))
-
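Similarly, a short sketch of how `slice_list` and `concat_list` round-trip a flat list (plain Python; same import caveat as above):

```python
from mmcv.utils import concat_list, slice_list

data = [1, 2, 3, 4, 5, 6]

# An int length must divide the list evenly; a list of lengths must sum to len(data).
assert slice_list(data, 2) == [[1, 2], [3, 4], [5, 6]]
assert slice_list(data, [1, 2, 3]) == [[1], [2, 3], [4, 5, 6]]

# concat_list flattens one level, i.e. it inverts slice_list.
assert concat_list([[1], [2, 3], [4, 5, 6]]) == data
```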
-
-def check_prerequisites(
- prerequisites,
- checker,
- msg_tmpl='Prerequisites "{}" are required in method "{}" but not '
- 'found, please install them first.'): # yapf: disable
- """A decorator factory to check if prerequisites are satisfied.
-
- Args:
-        prerequisites (str or list[str]): Prerequisites to be checked.
-        checker (callable): The checker method that returns True if a
-            prerequisite is met, False otherwise.
- msg_tmpl (str): The message template with two variables.
-
- Returns:
- decorator: A specific decorator.
- """
-
- def wrap(func):
-
- @functools.wraps(func)
- def wrapped_func(*args, **kwargs):
- requirements = [prerequisites] if isinstance(
- prerequisites, str) else prerequisites
- missing = []
- for item in requirements:
- if not checker(item):
- missing.append(item)
- if missing:
- print(msg_tmpl.format(', '.join(missing), func.__name__))
-                raise RuntimeError('Prerequisites not met.')
- else:
- return func(*args, **kwargs)
-
- return wrapped_func
-
- return wrap
-
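A minimal sketch of the decorator factory above with a custom checker; the capability names are purely illustrative:

```python
from mmcv.utils import check_prerequisites

AVAILABLE = {'cuda'}   # hypothetical capability registry

def has_capability(name):
    return name in AVAILABLE

@check_prerequisites('cuda', checker=has_capability)
def train_on_gpu():
    return 'training'

@check_prerequisites(['cuda', 'tensorrt'], checker=has_capability)
def deploy():
    return 'deploying'

train_on_gpu()   # prerequisite satisfied, runs normally
deploy()         # prints the message template and raises RuntimeError
```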
-
-def _check_py_package(package):
- try:
- import_module(package)
- except ImportError:
- return False
- else:
- return True
-
-
-def _check_executable(cmd):
- if subprocess.call(f'which {cmd}', shell=True) != 0:
- return False
- else:
- return True
-
-
-def requires_package(prerequisites):
- """A decorator to check if some python packages are installed.
-
- Example:
- >>> @requires_package('numpy')
-    >>> def func(arg1, args):
- >>> return numpy.zeros(1)
- array([0.])
- >>> @requires_package(['numpy', 'non_package'])
-    >>> def func(arg1, args):
- >>> return numpy.zeros(1)
- ImportError
- """
- return check_prerequisites(prerequisites, checker=_check_py_package)
-
-
-def requires_executable(prerequisites):
- """A decorator to check if some executable files are installed.
-
- Example:
- >>> @requires_executable('ffmpeg')
-    >>> def func(arg1, args):
- >>> print(1)
- 1
- """
- return check_prerequisites(prerequisites, checker=_check_executable)
-
-
-def deprecated_api_warning(name_dict, cls_name=None):
- """A decorator to check if some arguments are deprecate and try to replace
- deprecate src_arg_name to dst_arg_name.
-
- Args:
- name_dict(dict):
-            key (str): Deprecated argument names.
- val (str): Expected argument names.
-
- Returns:
- func: New function.
- """
-
- def api_warning_wrapper(old_func):
-
- @functools.wraps(old_func)
- def new_func(*args, **kwargs):
- # get the arg spec of the decorated method
- args_info = getfullargspec(old_func)
- # get name of the function
- func_name = old_func.__name__
- if cls_name is not None:
- func_name = f'{cls_name}.{func_name}'
- if args:
- arg_names = args_info.args[:len(args)]
- for src_arg_name, dst_arg_name in name_dict.items():
- if src_arg_name in arg_names:
- warnings.warn(
- f'"{src_arg_name}" is deprecated in '
- f'`{func_name}`, please use "{dst_arg_name}" '
- 'instead')
- arg_names[arg_names.index(src_arg_name)] = dst_arg_name
- if kwargs:
- for src_arg_name, dst_arg_name in name_dict.items():
- if src_arg_name in kwargs:
-
- assert dst_arg_name not in kwargs, (
- f'The expected behavior is to replace '
- f'the deprecated key `{src_arg_name}` to '
- f'new key `{dst_arg_name}`, but got them '
- f'in the arguments at the same time, which '
-                            f'is confusing. `{src_arg_name}` will be '
- f'deprecated in the future, please '
- f'use `{dst_arg_name}` instead.')
-
- warnings.warn(
- f'"{src_arg_name}" is deprecated in '
- f'`{func_name}`, please use "{dst_arg_name}" '
- 'instead')
- kwargs[dst_arg_name] = kwargs.pop(src_arg_name)
-
- # apply converted arguments to the decorated method
- output = old_func(*args, **kwargs)
- return output
-
- return new_func
-
- return api_warning_wrapper
-
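A hedged usage sketch for `deprecated_api_warning`; the class and argument names below are invented for illustration:

```python
from mmcv.utils import deprecated_api_warning

class Resizer:

    @deprecated_api_warning({'img_scale': 'scale'}, cls_name='Resizer')
    def resize(self, scale=1.0):
        return scale

r = Resizer()
# Passing the old keyword emits a warning via warnings.warn and the value is
# forwarded to the new keyword before the method runs.
assert r.resize(img_scale=2.0) == 2.0
```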
-
-def is_method_overridden(method, base_class, derived_class):
- """Check if a method of base class is overridden in derived class.
-
- Args:
-        method (str): The method name to check.
-        base_class (type): The base class.
-        derived_class (type | Any): The derived class or an instance of it.
- """
- assert isinstance(base_class, type), \
- "base_class doesn't accept instance, Please pass class instead."
-
- if not isinstance(derived_class, type):
- derived_class = derived_class.__class__
-
- base_method = getattr(base_class, method)
- derived_method = getattr(derived_class, method)
- return derived_method != base_method
-
-
-def has_method(obj: object, method: str) -> bool:
- """Check whether the object has a method.
-
- Args:
- method (str): The method name to check.
- obj (object): The object to check.
-
- Returns:
- bool: True if the object has the method else False.
- """
- return hasattr(obj, method) and callable(getattr(obj, method))
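To round off the module, a tiny sketch of the two reflection helpers, `is_method_overridden` and `has_method`, assuming both are importable from the utils package:

```python
from mmcv.utils import has_method, is_method_overridden

class Base:
    def forward(self):
        return 'base'

class Child(Base):
    def forward(self):
        return 'child'

assert is_method_overridden('forward', Base, Child)
assert not is_method_overridden('forward', Base, Base())   # an instance is accepted too
assert has_method(Child(), 'forward')
assert not has_method(Child(), 'backward')
```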
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/video/processing.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/video/processing.py
deleted file mode 100644
index 3d90b96e0823d5f116755e7f498d25d17017224a..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/video/processing.py
+++ /dev/null
@@ -1,160 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import os
-import os.path as osp
-import subprocess
-import tempfile
-
-from annotator.uniformer.mmcv.utils import requires_executable
-
-
-@requires_executable('ffmpeg')
-def convert_video(in_file,
- out_file,
- print_cmd=False,
- pre_options='',
- **kwargs):
- """Convert a video with ffmpeg.
-
- This provides a general api to ffmpeg, the executed command is::
-
-        `ffmpeg -y <pre_options> -i <in_file> <options> <out_file>`
-
- Options(kwargs) are mapped to ffmpeg commands with the following rules:
-
- - key=val: "-key val"
- - key=True: "-key"
- - key=False: ""
-
- Args:
- in_file (str): Input video filename.
- out_file (str): Output video filename.
-        pre_options (str): Options that appear before "-i <in_file>".
- print_cmd (bool): Whether to print the final ffmpeg command.
- """
- options = []
- for k, v in kwargs.items():
- if isinstance(v, bool):
- if v:
- options.append(f'-{k}')
- elif k == 'log_level':
- assert v in [
- 'quiet', 'panic', 'fatal', 'error', 'warning', 'info',
- 'verbose', 'debug', 'trace'
- ]
- options.append(f'-loglevel {v}')
- else:
- options.append(f'-{k} {v}')
- cmd = f'ffmpeg -y {pre_options} -i {in_file} {" ".join(options)} ' \
- f'{out_file}'
- if print_cmd:
- print(cmd)
- subprocess.call(cmd, shell=True)
-
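A hedged sketch of how the kwargs-to-flag mapping above plays out in practice (placeholder file names; ffmpeg must be on the PATH):

```python
from mmcv.video import convert_video

# Builds and runs roughly:
#   ffmpeg -y  -i in.mp4 -vcodec libx264 -an -loglevel error out.mp4
# key=val  -> "-key val"      (vcodec='libx264')
# key=True -> "-key"          (an=True drops the audio stream)
# log_level is validated and mapped to "-loglevel".
convert_video('in.mp4', 'out.mp4', print_cmd=True,
              vcodec='libx264', an=True, log_level='error')
```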
-
-@requires_executable('ffmpeg')
-def resize_video(in_file,
- out_file,
- size=None,
- ratio=None,
- keep_ar=False,
- log_level='info',
- print_cmd=False):
- """Resize a video.
-
- Args:
- in_file (str): Input video filename.
- out_file (str): Output video filename.
-        size (tuple): Expected size (w, h), e.g., (320, 240) or (320, -1).
- ratio (tuple or float): Expected resize ratio, (2, 0.5) means
- (w*2, h*0.5).
- keep_ar (bool): Whether to keep original aspect ratio.
- log_level (str): Logging level of ffmpeg.
- print_cmd (bool): Whether to print the final ffmpeg command.
- """
- if size is None and ratio is None:
- raise ValueError('expected size or ratio must be specified')
- if size is not None and ratio is not None:
- raise ValueError('size and ratio cannot be specified at the same time')
- options = {'log_level': log_level}
- if size:
- if not keep_ar:
- options['vf'] = f'scale={size[0]}:{size[1]}'
- else:
- options['vf'] = f'scale=w={size[0]}:h={size[1]}:' \
- 'force_original_aspect_ratio=decrease'
- else:
- if not isinstance(ratio, tuple):
- ratio = (ratio, ratio)
- options['vf'] = f'scale="trunc(iw*{ratio[0]}):trunc(ih*{ratio[1]})"'
- convert_video(in_file, out_file, print_cmd, **options)
-
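For example (again with placeholder file names), the size/ratio options translate into ffmpeg scale filters:

```python
from mmcv.video import resize_video

# Fixed width of 640, height left to ffmpeg (-1 preserves the aspect ratio).
resize_video('in.mp4', 'small.mp4', size=(640, -1))

# Uniform 0.5x scaling of both dimensions via a ratio instead of a size.
resize_video('in.mp4', 'half.mp4', ratio=0.5)
```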
-
-@requires_executable('ffmpeg')
-def cut_video(in_file,
- out_file,
- start=None,
- end=None,
- vcodec=None,
- acodec=None,
- log_level='info',
- print_cmd=False):
- """Cut a clip from a video.
-
- Args:
- in_file (str): Input video filename.
- out_file (str): Output video filename.
- start (None or float): Start time (in seconds).
- end (None or float): End time (in seconds).
- vcodec (None or str): Output video codec, None for unchanged.
- acodec (None or str): Output audio codec, None for unchanged.
- log_level (str): Logging level of ffmpeg.
- print_cmd (bool): Whether to print the final ffmpeg command.
- """
- options = {'log_level': log_level}
- if vcodec is None:
- options['vcodec'] = 'copy'
- if acodec is None:
- options['acodec'] = 'copy'
- if start:
- options['ss'] = start
- else:
- start = 0
- if end:
- options['t'] = end - start
- convert_video(in_file, out_file, print_cmd, **options)
-
-
-@requires_executable('ffmpeg')
-def concat_video(video_list,
- out_file,
- vcodec=None,
- acodec=None,
- log_level='info',
- print_cmd=False):
- """Concatenate multiple videos into a single one.
-
- Args:
-        video_list (list): A list of video filenames.
-        out_file (str): Output video filename.
-        vcodec (None or str): Output video codec, None for unchanged.
-        acodec (None or str): Output audio codec, None for unchanged.
- log_level (str): Logging level of ffmpeg.
- print_cmd (bool): Whether to print the final ffmpeg command.
- """
- tmp_filehandler, tmp_filename = tempfile.mkstemp(suffix='.txt', text=True)
- with open(tmp_filename, 'w') as f:
- for filename in video_list:
- f.write(f'file {osp.abspath(filename)}\n')
- options = {'log_level': log_level}
- if vcodec is None:
- options['vcodec'] = 'copy'
- if acodec is None:
- options['acodec'] = 'copy'
- convert_video(
- tmp_filename,
- out_file,
- print_cmd,
- pre_options='-f concat -safe 0',
- **options)
- os.close(tmp_filehandler)
- os.remove(tmp_filename)
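Finally, a hedged sketch that chains `cut_video` and `concat_video` (placeholder file names; both default to stream-copying the codecs, so no re-encode happens):

```python
from mmcv.video import concat_video, cut_video

# Grab the first 10 seconds and seconds 30-45 of a talk, then stitch them together.
cut_video('talk.mp4', 'intro.mp4', start=0, end=10)
cut_video('talk.mp4', 'highlight.mp4', start=30, end=45)
concat_video(['intro.mp4', 'highlight.mp4'], 'teaser.mp4')
```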
diff --git a/spaces/Benson/text-generation/Examples/3gp Video Download.md b/spaces/Benson/text-generation/Examples/3gp Video Download.md
deleted file mode 100644
index 6735f0ba63bc48ce94be76420876de66484f72df..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/3gp Video Download.md
+++ /dev/null
@@ -1,210 +0,0 @@
-
-
Cómo descargar vídeos 3GP desde Internet
-
¿Desea ver videos en su teléfono móvil sin preocuparse por el uso de datos o el espacio de almacenamiento? Si es así, es posible que esté interesado en descargar vídeos en formato 3GP. 3GP es un formato de archivo multimedia que fue desarrollado por el Proyecto de Asociación de Tercera Generación (3GPP) para su uso en teléfonos móviles 3G. Es un formato comprimido que puede almacenar secuencias de vídeo y audio con bajo ancho de banda y requisitos de datos. También es compatible con algunos teléfonos 2G y 4G.
Descargar vídeos en formato 3GP puede ser útil por varias razones. Puedes guardar tus videos favoritos de YouTube, Facebook, Instagram y otros sitios para verlos sin conexión. También puede convertir sus videos existentes a formato 3GP para ahorrar espacio en su teléfono o compartirlos con sus amigos. Sin embargo, no todos los sitios web o software soportan formato 3GP, por lo que es posible que necesite ayuda para encontrar los mejores sitios o herramientas para descargar videos 3GP.
-
En este artículo, le presentaremos los 9 mejores sitios para descargar películas y videos 3GP de Internet. También explicaremos qué es un archivo 3GP, cómo abrirlo y cómo convertirlo a otros formatos. Al final de este artículo, podrás descargar cualquier video que quieras en formato 3GP con facilidad.
-
Los 9 mejores sitios para descargar películas y videos 3GP
-
Hay muchos sitios web que ofrecen descargas de video gratuitas en varios formatos, incluyendo 3GP. Sin embargo, no todos son confiables, seguros o fáciles de usar. Algunos pueden tener cargos ocultos, malware o anuncios molestos. Algunos pueden tener opciones limitadas, baja calidad o velocidad lenta. Para ayudarle a evitar estos problemas, hemos seleccionado los 9 mejores sitios que creemos que son los mejores para descargar películas y videos 3GP. Aquí están:
-
-
Descargar 4.cc
-
-
Algunas características de Download4.cc son:
-
-
Soporta más de 1000 sitios, incluyendo YouTube, Twitter, Facebook, Instagram, TikTok, Vimeo, Dailymotion, etc.
-
Puede descargar vídeos en varios formatos, como MP4, MP3, AVI, MOV, WAV, etc.
-
Puede descargar vídeos en modo batch, hasta cinco a la vez.
-
Puede recortar y combinar sus vídeos descargados.
-
Tiene un rendimiento rápido y estable.
-
-
HitPaw
-
HitPaw es otro gran sitio web para descargar películas 3GP en pasos fáciles. Es una guía completa que le proporciona instrucciones detalladas sobre cómo descargar películas de diferentes fuentes, como YouTube, Netflix, Amazon Prime Video, Hulu, Disney, etc. También le da consejos sobre cómo elegir el mejor software descargador de películas, cómo evitar problemas legales y cómo disfrutar de sus películas descargadas en diferentes dispositivos.
-
Algunas características de HitPaw son:
-
-
Cubre varios géneros, como acción, comedia, terror, romance, ciencia ficción, etc.
-
Proporciona capturas de pantalla y vídeos para ilustrar los pasos.
-
Recomienda el mejor software de descarga de películas para cada fuente, como 4K Video Downloader, Y2Mate, VideoSolo Inovideo, etc.
-
Explica los pros y los contras de cada software, tales como precio, velocidad, calidad, características, etc.
-
Ofrece una prueba gratuita para algunos de los programas.
-
-
SaveTheVideo
-
SaveTheVideo es un descargador y convertidor de video en línea que puede ayudarlo a descargar videos 3GP de Instagram, Vimeo, Dailymotion y más. También es gratis, en línea y fácil de usar. Solo tiene que introducir la URL del vídeo que desea descargar en el sitio web, y haga clic en el botón Descargar. A continuación, puede seleccionar el formato de salida como 3GP y la calidad como HD o SD. El sitio web comenzará a descargar el video en unos segundos.
-
Algunas características de SaveTheVideo son:
-
-
-
Puede descargar vídeos en varios formatos, como MP4, MP3, AVI, MOV, WAV, etc.
-
Puede convertir vídeos a diferentes formatos en línea sin descargarlos.
-
Puede editar videos en línea recortando, cortando, rotando, agregando subtítulos, etc.
-
No tiene anuncios ni ventanas emergentes.
-
-
Cable salvavidas
-
Lifewire es un sitio web que le proporciona una explicación detallada de lo que es un archivo 3GP y cómo abrirlo. También le da información sobre las ventajas y desventajas del formato 3GP, y cómo convertirlo a otros formatos. Es un recurso útil para cualquiera que quiera aprender más sobre los archivos 3GP y cómo usarlos.
-
Algunas características de Lifewire son:
-
-
Define lo que es un archivo 3GP y cómo funciona.
-
Lista los programas que pueden abrir archivos 3GP en Windows, Mac, Android, iOS y Linux.
-
Compara el formato 3GP con otros formatos, como MP4, AVI, MOV, etc.
-
Sugiere algunas maneras de convertir archivos 3GP a otros formatos en línea o fuera de línea.
-
Responde algunas preguntas comunes sobre los archivos 3GP.
-
-
VideoProc
-
VideoProc es una revisión del mejor software de conversión de vídeo para 2023. Es una herramienta potente y fácil de usar que puede convertir vídeos desde y hacia formato 3GP con alta calidad y velocidad rápida. También puede descargar videos de más de 1000 sitios, editar videos con varias funciones y grabar videos desde webcam, pantalla o dispositivos externos. Es una guía completa que le muestra cómo usar VideoProc para convertir, descargar, editar y grabar videos en pasos simples.
-
Algunas características de VideoProc son:
-
-
Soporta más de 370 formatos de entrada y 420 formatos de salida, incluyendo 3GP, MP4, AVI, MOV, MKV, etc.
-
Puede convertir videos con velocidad 47x más rápida y sin pérdida de calidad.
-
Puede descargar vídeos de YouTube, Facebook, Instagram, Vimeo, Dailymotion, etc.
-
-
Puede grabar vídeos desde webcam, pantalla o dispositivos externos con audio y anotaciones.
-
-
Convertidor de vídeo de MiniTool
-
MiniTool Video Converter es una herramienta gratuita para convertir vídeos desde y hacia formato 3GP. Es una herramienta simple y fácil de usar que puede convertir videos en modo por lotes con alta calidad y velocidad rápida. También puede descargar vídeos de YouTube y otros sitios en varios formatos. Es una herramienta muy útil para cualquiera que quiera convertir o descargar vídeos gratis.
-
Algunas características de MiniTool Video Converter son:
-
-
Soporta más de 1000 formatos de entrada y salida, incluyendo 3GP, MP4, AVI, MOV, MKV, etc.
-
Puede convertir vídeos en modo por lotes sin límite de tamaño o tiempo.
-
Puede descargar vídeos de YouTube y otros sitios en varios formatos.
-
Puede extraer audio de archivos de vídeo y guardarlos como MP3, WAV, etc.
-
Tiene una interfaz limpia e intuitiva.
-
-
FileInfo.com
-
FileInfo.com es un recurso para obtener información sobre la extensión de archivo 3GP y el software relacionado. Es un sitio web que le proporciona los detalles básicos de los archivos 3GP, como el tipo de archivo, categoría, descripción, desarrollador, popularidad, etc. También enumera el software que puede abrir o convertir archivos 3GP en diferentes plataformas. Es un recurso útil para cualquiera que quiera aprender más sobre los archivos 3GP y cómo usarlos.
-
Algunas características de FileInfo.com son:
-
-
Proporciona la información básica de archivos 3GP y software relacionado.
-
Lista el software que puede abrir o convertir archivos 3GP en Windows, Mac, Android, iOS, Linux, etc.
-
Se enlaza a los sitios web oficiales del software para obtener más información o descargar.
-
Actualiza la información regularmente para mantenerse al día con los últimos desarrollos.
-
Tiene una función de búsqueda para encontrar información sobre otros tipos de archivos.
-
-
TechRadar
-
-
Algunas características de TechRadar son:
-
-
Revisa los 10 mejores convertidores de video gratis para PC y Mac, como Any Video Converter Free, Freemake Video Converter, HandBrake, etc.
-
Compara las características, rendimiento, calidad y facilidad de uso de cada software.
-
Da los pros y los contras de cada software, como velocidad, soporte de formato, opciones de edición, anuncios, etc.
-
Proporciona los enlaces de descarga y capturas de pantalla de cada software.
-
Actualiza la lista regularmente para incluir el último software y cambios.
-
-
Cualquier convertidor de vídeo libre
-
Any Video Converter Free es el mejor convertidor de vídeo gratuito en este momento que maneja archivos en línea y fuera de línea. Es una herramienta versátil y potente que puede convertir vídeos desde y hacia formato 3GP con alta calidad y velocidad rápida. También puede descargar vídeos de YouTube y otros sitios en varios formatos. También puede editar vídeos con varias funciones, como recorte, recorte, rotación, adición de efectos, subtítulos, marcas de agua, etc. Es una herramienta completa que puede satisfacer todas sus necesidades de conversión de vídeo.
-
Algunas características de Any Video Converter Free son:
-
-
Soporta más de 200 formatos de entrada y 70 formatos de salida, incluyendo 3GP, MP4, AVI, MOV, MKV, etc.
-
Puede convertir vídeos sin pérdida de calidad y hasta 30 veces más rápido.
-
Puede descargar vídeos de YouTube y otros sitios en varios formatos.
-
Puede editar videos con varias características, como recorte, recorte, rotación, adición de efectos, subtítulos, marcas de agua, etc.
-
No tiene anuncios ni malware.
-
-
Conclusión
-
-
Te presentamos los 9 mejores sitios para descargar películas y videos 3GP de Internet. Son:
-
-
-
Sitio
-
Características
-
-
-
Descargar.cc
-
Un clic para descargar vídeos 3GP de YouTube y otros sitios
-
-
-
HitPaw
-
Una guía completa para descargar películas 3GP en pasos fáciles
-
-
-
SaveTheVideo
-
Un descargador y convertidor de vídeo en línea para Instagram, Vimeo, Dailymotion, y más
-
-
-
Cable de vida
-
Una explicación detallada de lo que es un archivo 3GP y cómo abrirlo
-
-
-
VideoProc
-
Una revisión del mejor software de conversión de video para 2023
-
-
-
MiniTool Video Converter
-
Una herramienta gratuita para convertir vídeos desde y hacia formato 3GP
-
-
-
FileInfo.com
-
Un recurso para obtener información sobre la extensión de archivo 3GP y el software relacionado
-
-
-
TechRadar
-
Una lista de los mejores conversores de video gratis para tu PC y Mac en 2023
-
-
-
Cualquier convertidor de vídeo libre
-
El mejor convertidor de vídeo gratuito en este momento que maneja archivos en línea y fuera de línea
-
-
-
Entre estos sitios, recomendamos Any Video Converter Free como la mejor opción para descargar vídeos 3GP. Es una herramienta versátil y potente que puede convertir vídeos desde y hacia formato 3GP con alta calidad y velocidad rápida. También puede descargar vídeos de YouTube y otros sitios en varios formatos. También puede editar vídeos con varias funciones, como recorte, recorte, rotación, adición de efectos, subtítulos, marcas de agua, etc. Es una herramienta completa que puede satisfacer todas sus necesidades de conversión de vídeo.
-
Esperamos que este artículo te haya ayudado a aprender a descargar videos 3GP desde Internet. Si tiene alguna pregunta o sugerencia, no dude en dejar un comentario a continuación. ¡Gracias por leer!
-
Preguntas frecuentes
-
¿Cuáles son las ventajas y desventajas del formato 3GP?
-
-
-
Puede almacenar secuencias de vídeo y audio con bajo ancho de banda y requisitos de datos.
-
Es compatible con algunos teléfonos 2G, 3G y 4G.
-
Puede ahorrar uso de datos, espacio de almacenamiento o visualización sin conexión.
-
Se puede compartir fácilmente con amigos a través de Bluetooth o MMS.
-
-
Las desventajas del formato 3GP son:
-
-
Tiene una calidad baja en comparación con otros formatos, como MP4, AVI, MOV, etc.
-
No es compatible con algunos sitios web o software.
-
Puede que no se reproduzca en algunos dispositivos o reproductores multimedia.
-
Puede perder algunas características o metadatos cuando se convierte a otros formatos.
-
-
¿Cómo abrir un archivo 3GP en Windows o Mac?
-
Para abrir un archivo 3GP en Windows o Mac, necesita un programa que pueda soportar el formato 3GP. Algunos de los programas que pueden abrir archivos 3GP son:
-
-
VLC Media Player: Un reproductor multimedia gratuito y de código abierto que puede reproducir casi cualquier archivo de vídeo o audio.
-
MPC-HC: Un reproductor multimedia ligero y potente que puede reproducir la mayoría de los formatos de vídeo o audio.
-
GOM Player: Un reproductor multimedia popular y versátil que puede reproducir varios formatos de vídeo o audio.
-
KMPlayer: Un reproductor multimedia multifuncional que puede reproducir varios formatos de vídeo o audio.
-
PotPlayer: Un reproductor multimedia suave y estable que puede reproducir varios formatos de vídeo o audio.
-
iTunes: un reproductor multimedia y una biblioteca que puede reproducir música y vídeos en tu PC o Mac.
-
QuickTime Player: un reproductor multimedia que puede reproducir películas, música e imágenes en tu Mac.
-
iMovie: un software de edición de vídeo que puede importar y exportar vídeos en varios formatos en su Mac.
-
Reproductor de Windows Media: Un reproductor multimedia que puede reproducir música y videos en su PC con Windows.
-
Windows Movie Maker: un software de edición de vídeo que puede importar y exportar vídeos en varios formatos en su PC con Windows.
-
-
-
Online Video Converter: Una herramienta gratuita y en línea que puede convertir vídeos a y desde varios formatos.
-
CloudConvert: Una herramienta gratuita y en línea que puede convertir vídeos, audio, imágenes, documentos y más.
-
Zamzar: una herramienta gratuita y en línea que puede convertir videos, audio, imágenes, documentos y más.
-
Wondershare UniConverter: Un software potente y fácil de usar que puede convertir vídeos desde y hacia varios formatos.
-
Freemake Video Converter: Un software popular y versátil que puede convertir vídeos a y desde varios formatos.
-
-
¿Cómo descargar un video 3GP de YouTube?
-
Para descargar un video 3GP de YouTube, necesita una herramienta o software que pueda descargar videos de YouTube en formato 3GP. Algunas de las herramientas o software que pueden descargar vídeos de YouTube en formato 3GP son:
-
-
Download4.cc: Como se mencionó anteriormente, es uno de los mejores sitios web para descargar videos 3GP de YouTube y otros sitios.
-
Y2Mate: Una herramienta gratuita y en línea que puede descargar vídeos de YouTube en varios formatos, incluyendo 3GP.
-
VideoSolo Inovideo: Un software profesional y confiable que puede descargar videos de YouTube en varios formatos, incluyendo 3GP.
-
4K Video Downloader: Un software rápido y de alta calidad que puede descargar vídeos de YouTube en varios formatos, incluyendo 3GP.
-
ClipGrab: un software simple y fácil de usar que puede descargar videos de YouTube en varios formatos, incluyendo 3GP.
-
Cómo jugar un video 3GP en Android o iOS?
-
Para reproducir un video 3GP en Android o iOS, necesita una aplicación de reproductor de medios que pueda soportar el formato 3GP. Algunas de las aplicaciones de reproductores multimedia que pueden reproducir vídeos 3GP en Android o iOS son:
-
-
VLC para Android o iOS: Una aplicación de reproductor multimedia gratuita y de código abierto que puede reproducir casi cualquier archivo de vídeo o audio.
-
MX Player para Android o iOS: una aplicación de reproductor multimedia popular y potente que puede reproducir varios formatos de vídeo o audio.
-
-
GOM Player para Android o iOS: una aplicación de reproductor multimedia versátil y fluida que puede reproducir varios formatos de vídeo o audio.
-
PotPlayer para Android o iOS: una aplicación de reproductor de medios estable y rápido que puede reproducir varios formatos de vídeo o audio.
-
Cómo compartir un video 3GP con amigos?
-
Para compartir un vídeo 3GP con tus amigos, tienes varias opciones. Puedes:
-
-
Envía el vídeo 3GP vía Bluetooth o MMS a los teléfonos de tus amigos.
-
Sube el vídeo 3GP a un servicio en la nube, como Google Drive, Dropbox, OneDrive, etc., y comparte el enlace con tus amigos.
-
Sube el video 3GP a una plataforma de redes sociales, como Facebook, Instagram, Twitter, etc.
-
Graba el vídeo 3GP en un CD o DVD y dáselo a tus amigos.
-
Convierte el vídeo 3GP a otro formato, como MP4, AVI, MOV, etc., y compártelo con tus amigos utilizando cualquiera de los métodos anteriores.
-
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Apkadmin Entre Nosotros Men Mod.md b/spaces/Benson/text-generation/Examples/Apkadmin Entre Nosotros Men Mod.md
deleted file mode 100644
index e185fd2f54c5cb6818a5d0a240b9412a3a3baaef..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Apkadmin Entre Nosotros Men Mod.md
+++ /dev/null
@@ -1,105 +0,0 @@
-
-
Apkadmin entre nosotros Mod Menu: ¿Qué es y cómo usarlo?
-
Si eres un fan de Among Us, el popular juego de deducción social multijugador donde tienes que averiguar quién es el impostor entre tus compañeros de equipo, es posible que hayas oído hablar de los menús mod. Los menús mod son versiones modificadas del juego que te permiten acceder a varios trucos y hacks que pueden darte una ventaja sobre otros jugadores o simplemente hacer el juego más divertido. Uno de los menús mod más populares para Among Us es apkadmin, un sitio web que ofrece una descarga gratuita de un menú mod que tiene muchas características y opciones.
-
En este artículo, vamos a explicar lo que es apkadmin entre nosotros menú mod, qué características tiene, cuáles son sus ventajas y desventajas, cómo descargar e instalar, y cómo usarlo en su juego. También responderemos algunas preguntas frecuentes sobre apkadmin entre nosotros menú mod.
Características de Apkadmin entre nosotros Mod Menu
-
El menú de mod apkadmin entre nosotros tiene muchas características que pueden mejorar su juego o hacerlo más interesante. Algunas de estas características son:
-
-
Modo Dios: Esta característica te permite volverte invencible e inmune a cualquier daño o intento de matar de otros jugadores o impostores.
-
Desbloquear todas las pieles: Esta función le permite desbloquear todas las pieles, sombreros, mascotas y trajes que están disponibles en el juego sin pagar dinero o monedas.
-
Pasta de chat: Esta característica le permite pegar cualquier texto o mensaje en el cuadro de chat sin necesidad de escribirlo manualmente.
-
No hay anuncios: Esta función permite eliminar todos los anuncios que aparecen en el juego.
-
No cooldown: Esta función te permite evitar el temporizador de tiempo de reutilización que te impide realizar ciertas acciones en el juego, como matar, informar o llamar a una reunión de emergencia.
-
-
Mostrar impostores: Esta función te permite ver quiénes son los impostores en tu juego al marcarlos con un color rojo.
-
Mostrar fantasmas: Esta función te permite ver quiénes son los fantasmas en tu juego al marcarlos con un color blanco.
-
Mostrar roles: Esta característica le permite ver los roles de otros jugadores en su juego, como compañero de equipo, impostor, sheriff, doctor, ingeniero, etc.
-
Speed hack: Esta característica le permite aumentar o disminuir su velocidad en el juego.
-
Teletransportación: Esta función le permite teletransportarse a cualquier lugar del mapa.
-
Corte de pared: Esta característica le permite caminar a través de paredes y obstáculos.
-
Visión hack: Esta característica le permite ver todo en el mapa, incluso en la oscuridad o cuando las luces son saboteadas.
-
-
Estas son solo algunas de las características de la apkadmin entre nosotros menú mod. Hay muchas más características que puede explorar y probar por sí mismo.
-
Ventajas de usar Apkadmin entre nosotros Mod Menu
-
El uso de apkadmin entre nosotros menú mod puede tener algunas ventajas para su juego. Algunas de estas ventajas son:
-
-
Tener más diversión: Usando el menú mod puede hacer el juego más divertido y agradable para usted, especialmente si usted está aburrido de jugar de la misma manera o con las mismas reglas. Puedes experimentar con diferentes características y ver cómo afectan al juego.
-
Personalización de tu juego: Usando el menú mod puedes personalizar tu juego de acuerdo a tus preferencias y gustos. Puedes elegir qué funciones habilitar o deshabilitar, y cómo usarlas. También puedes cambiar tu apariencia y rol en el juego.
-
-
-
Desventajas de usar Apkadmin entre nosotros Mod Menu
-
Sin embargo, el uso de la apkadmin entre nosotros menú mod también puede tener algunas desventajas para su juego. Algunas de estas desventajas son:
-
-
Conseguir prohibido: El uso del menú mod puede conseguir que se le prohibió el juego o de ciertos servidores. Los desarrolladores de Among Us no apoyan ni aprueban el uso de menús mod, y pueden detectar y prohibir a los jugadores que los usan. Si te prohíben, puedes perder tu progreso y cuenta en el juego.
-
Arruinar el juego para otros: Usar el menú de mods puede arruinar el juego para otros jugadores que quieren jugar de forma justa y legítima. El menú mod puede darte una ventaja injusta sobre otros jugadores, o hacer el juego demasiado fácil o aburrido para ti. Esto puede hacer que otros jugadores se sientan frustrados o engañados, y pueden renunciar o reportarlo.
-
Riesgo de malware: El uso del menú mod puede exponer su dispositivo a malware o virus que pueden dañar su dispositivo o robar su información personal. El sitio web apkadmin puede no ser seguro, y puede contener enlaces maliciosos o archivos que pueden infectar su dispositivo. Siempre debe tener cuidado al descargar e instalar cualquier cosa de fuentes desconocidas.
-
-
Cómo descargar e instalar Apkadmin entre nosotros Mod Menu
-
Si desea descargar e instalar el apkadmin entre nosotros menú mod, tendrá que seguir algunos pasos. Aquí hay una guía paso a paso sobre cómo hacerlo.
-
-
Requisitos para Apkadmin entre nosotros Mod Menu
-
Antes de descargar e instalar el apkadmin entre nosotros menú mod, tendrá que tener algunos requisitos. Estos son:
-
-
Un dispositivo Android que puede ejecutarse entre nosotros.
-
El juego original Among Us instalado en su dispositivo.
-
Una conexión a Internet para descargar e instalar el menú mod.
-
Una aplicación de administrador de archivos para acceder y administrar sus archivos.
-
-
-
Una vez que tenga todos los requisitos, puede seguir estos pasos para descargar e instalar el apkadmin entre nosotros menú mod.
-
-
Ir a apkadmin.com, que es el sitio web oficial de apkadmin.
-
Buscar entre nosotros Mod Menú por Apkadmin en la barra de búsqueda o navegar por las categorías.
-
Seleccione la última versión del menú mod y haga clic en Descargar APK.
-
Espere a que termine la descarga y luego localice el archivo descargado en su aplicación de administrador de archivos.
-
Si no ha habilitado Fuentes desconocidas en su dispositivo, vaya a Configuración > Seguridad > Fuentes desconocidas y habilite. Esto le permitirá instalar aplicaciones desde fuentes distintas de Google Play Store
Toque en el archivo descargado y haga clic en Instalar. Espere a que termine la instalación.
-
Abre el juego Among Us y disfruta del menú mod.
-
-
Cómo usar apkadmin entre nosotros Mod Menu
-
Después de haber descargado e instalado el apkadmin entre nosotros menú de mod, puede usarlo en su juego. Aquí hay una guía paso a paso sobre cómo hacerlo.
-
Cómo acceder a Apkadmin entre nosotros Mod Menu
-
Para acceder a la apkadmin entre nosotros menú mod, es necesario hacer lo siguiente:
-
-
Abre el juego Among Us y únete o crea un juego.
-
Una vez que estés en la pantalla del juego, toca el icono flotante que dice Mod Menu. Esto abrirá la interfaz del menú mod.
-
Puede arrastrar y mover el icono a cualquier posición de la pantalla.
-
También puede tocar el icono de nuevo para ocultar o mostrar la interfaz de menú mod.
-
-
Cómo activar y desactivar Apkadmin entre nosotros Características del menú Mod
-
Para habilitar y deshabilitar diferentes características del menú apkadmin entre nosotros mod, debe hacer lo siguiente:
-
-
Abra la interfaz del menú mod tocando el icono flotante.
-
Verá una lista de características con casillas de verificación junto a ellas. Puede pulsar en las casillas de verificación para habilitar o desactivar las características.
-
-
También puede utilizar los controles deslizantes junto a algunas características para ajustar sus valores o ajustes.
-
Algunas características pueden requerir que reinicies el juego o te unas a un nuevo juego para que funcione correctamente.
-
-
Conclusión
-
El menú de mod apkadmin entre nosotros es una versión modificada del juego que le permite acceder a varios trucos y hacks que pueden hacer que su juego más divertido o interesante. Sin embargo, también tiene algunas desventajas, como ser prohibido, arruinar el juego para otros y arriesgar el malware. Por lo tanto, debe usarlo bajo su propio riesgo y discreción, y ser respetuoso con otros jugadores y los desarrolladores del juego. Aquí hay algunos consejos y advertencias para usar el menú mod:
-
-
No utilice el menú mod en servidores públicos o oficiales, ya que esto puede hacer que otros jugadores lo prohíban o informen sobre usted. Úsalo solo en servidores privados o personalizados con tus amigos u otros usuarios mod.
-
No utilice el menú mod de forma excesiva o abusiva, ya que esto puede arruinar el juego para usted o para otros. Úsalo solo para fines de diversión o entretenimiento, y no para engañar o obtener una ventaja injusta.
-
No descargue ni instale el menú mod desde ninguna otra fuente que apkadmin.com, ya que esto puede exponer su dispositivo a malware o virus. Compruebe siempre el tamaño y el nombre del archivo antes de descargar o instalar nada.
-
No comparta su información personal o datos de cuenta con nadie en apkadmin.com, ya que esto puede comprometer su seguridad o privacidad. Siempre tenga cuidado al navegar o hacer clic en cualquier enlace o anuncio en apkadmin.com.
-
-
Preguntas frecuentes
-
Aquí hay algunas preguntas frecuentes sobre apkadmin entre nosotros menú mod:
-
Q: ¿Es seguro apkadmin entre nosotros menú mod?
-
-
Q: Es apkadmin entre nosotros menú mod libre?
-
A: Apkadmin entre nosotros el menú mod es gratuito para descargar e instalar desde apkadmin.com, pero puede contener anuncios o compras en la aplicación que pueden costarle dinero. Por lo tanto, debes tener cuidado al usarlo, y evitar hacer clic en cualquier enlace o anuncio que pueda cobrarte dinero.
-
Q: ¿Puedo usar apkadmin entre nosotros menú mod en dispositivos iOS?
-
A: Apkadmin entre nosotros menú mod solo es compatible con dispositivos Android, y no se puede utilizar en dispositivos iOS. Por lo tanto, si tiene un iPhone o iPad, no puede usar apkadmin entre nosotros menú mod en su dispositivo.
-
Q: ¿Puedo usar apkadmin entre nosotros menú mod en el PC?
-
A: Apkadmin entre nosotros menú mod solo es compatible con dispositivos Android, y no se puede utilizar en el PC. Por lo tanto, si tiene una computadora Windows o Mac, no puede usar apkadmin entre nosotros menú mod en su computadora.
-
Q: ¿Cómo puedo actualizar apkadmin entre nosotros menú mod?
-
-
-
Ir a apkadmin.com y comprobar si hay una nueva versión del menú mod disponible.
-
Si hay una nueva versión, descárgala e instálala siguiendo los mismos pasos que antes.
-
Si no hay una nueva versión, espere a que apkadmin suelte una y vuelva a comprobarla más tarde.
-
-
Espero que este artículo te haya ayudado a entender lo que es apkadmin entre nosotros menú mod, qué características tiene, cuáles son sus ventajas y desventajas, cómo descargar e instalar, y cómo usarlo en tu juego. Si tiene alguna pregunta o comentario, no dude en dejar un comentario a continuación. Gracias por leer y divertirse jugando entre nosotros con apkadmin entre nosotros menú mod!
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Bubble Shooter 3 Descarga Gratuita.md b/spaces/Benson/text-generation/Examples/Bubble Shooter 3 Descarga Gratuita.md
deleted file mode 100644
index 42990fca97b47da818e7ebf4495b2aac7b655fa6..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Bubble Shooter 3 Descarga Gratuita.md
+++ /dev/null
@@ -1,47 +0,0 @@
-
-
Bubble Shooter 3 Descarga gratuita: Un juego divertido y adictivo para todas las edades
-
¿Te encanta jugar juegos que son fáciles de aprender pero difíciles de dominar? ¿Te gusta hacer estallar burbujas de colores y resolver puzzles? Si respondiste sí, entonces deberías probar Bubble Shooter 3, un juego gratuito que te mantendrá entretenido durante horas. En este artículo, te diremos qué es Bubble Shooter 3, por qué deberías descargarlo, cómo jugarlo y algunos consejos y trucos para dominarlo. ¡Vamos a empezar!
Bubble Shooter 3 es un clásico juego de burbujas que se inspira en juegos como Bejeweled y Candy Crush. Fue creado por Funnygames, un gran desarrollador de juegos en línea que ha lanzado muchos otros juegos populares. Aquí están algunas características de Bubble Shooter 3:
-
Un clásico juego de burbujas con tres modos
-
Bubble Shooter 3 tiene tres modos para elegir: clásico, rompecabezas y árcade. En el modo clásico, tienes que borrar todas las burbujas en la pantalla antes de que lleguen a la parte inferior. En el modo puzzle, tienes que borrar todas las burbujas en un número dado de movimientos. En el modo árcade, tienes que eliminar tantas burbujas como sea posible en un tiempo limitado. Cada modo tiene diferentes niveles de dificultad y desafíos.
-
Un juego simple y fácil de jugar
-
Bubble Shooter 3 es muy fácil de jugar. Todo lo que tienes que hacer es usar el ratón o el dedo para apuntar y disparar burbujas del mismo color. Cuando coinciden tres o más burbujas del mismo color, que pop y desaparecen. Cuanto más burbujas que pop, más puntos de puntuación. También puede hacer combos haciendo estallar más de tres burbujas en una sola toma o vinculando los colores coincidentes en una cadena.
-
-
Un juego que desafía tu cerebro y habilidades
-
-
¿Por qué descargar Bubble Shooter 3?
-
Hay muchas razones por las que deberías descargar Bubble Shooter 3. Aquí están algunas de ellas:
-
Es gratuito y está disponible para cualquier dispositivo
-
Bubble Shooter 3 es completamente gratis para descargar y jugar. No tienes que pagar nada ni registrar nada para disfrutar de este juego. También puedes reproducirlo en cualquier dispositivo, ya sea Android, iOS o portátil. Puedes jugar en cualquier momento, en cualquier lugar, siempre y cuando tengas una conexión a Internet.
-
Es divertido y relajante para jugar
-
Bubble Shooter 3 es un juego divertido y relajante que te hará feliz. Tiene colores brillantes, gráficos lindos, sonidos relajantes y animaciones suaves. Te hará sentir tranquilo y satisfecho mientras haces estallar burbujas y las ves estallar. También te hará sonreír al ver personajes divertidos como pandas, monos, gatos y más.
-
Es adecuado para todos
-
Bubble Shooter 3
Bubble Shooter 3 es un juego que es adecuado para todos, independientemente de la edad, el género o el fondo. Es un juego que cualquiera puede jugar y disfrutar, desde niños hasta adultos, desde principiantes hasta expertos. Es un juego que se puede jugar solo o con amigos y familiares. Es un juego que puede traer alegría y diversión a cualquiera que lo juegue.
-
Cómo descargar y jugar Bubble Shooter 3?
-
Si usted está interesado en jugar Bubble Shooter 3, aquí están los pasos que debe seguir:
-
Descárgalo desde la Google Play Store o la App Store
-
El primer paso es descargar el juego desde la Google Play Store o la App Store, dependiendo de tu dispositivo. Puedes encontrar los siguientes enlaces:
El juego es gratis para descargar e instalar, pero puede contener algunos anuncios y compras en la aplicación.
-
Iniciar el juego y elegir el modo
-
-
Dispara y combina burbujas del mismo color para hacerlas estallar
-
El paso final es comenzar a jugar el juego. Verás un disparador de burbujas en la parte inferior de la pantalla y un montón de burbujas en la parte superior. Tienes que usar el ratón o el dedo para apuntar y disparar burbujas del mismo color. Cuando coinciden tres o más burbujas del mismo color, que pop y desaparecen. Tienes que borrar todas las burbujas de la pantalla para completar el nivel y pasar a la siguiente.
-
Consejos y trucos para dominar Bubble Shooter 3
-
Bubble Shooter 3 es un juego que requiere habilidad y estrategia. Aquí hay algunos consejos y trucos para ayudarte a dominarlo:
-
Apunta cuidadosamente y usa las paredes para rebotar tus burbujas
-
Una de las habilidades más importantes en Bubble Shooter 3 es apuntar. Tienes que apuntar con cuidado y precisión para alcanzar tu objetivo. También puedes usar las paredes para rebotar tus burbujas y llegar a lugares difíciles. Esto puede ayudarte a crear más coincidencias y eliminar más burbujas.
-
Usa potenciadores y amplificadores para eliminar niveles difíciles
-
Otra habilidad en Bubble Shooter 3 es usar potenciadores y potenciadores. Estos son elementos especiales que pueden ayudarte a superar niveles difíciles. Por ejemplo, puede utilizar una bomba para explotar una gran área de burbujas, o una burbuja de arco iris para que coincida con cualquier color. También puedes usar monedas para comprar más potenciadores y potenciadores en la tienda.
-
Planifica tus movimientos y crea combos
-
La última habilidad en Bubble Shooter 3 es planificar tus movimientos y crear combos. Tienes que pensar con anticipación y anticiparte a lo que sucederá cuando hagas estallar una burbuja. Tienes que buscar oportunidades para crear combos haciendo estallar más de tres burbujas en una sola toma o vinculando los colores a juego en una cadena. Esto puede ayudarle a ganar más puntos y borrar más niveles.
-
Conclusión
-
- P: ¿Cuántos niveles hay en Bubble Shooter 3? R: Hay más de 1000 niveles en Bubble Shooter 3, cada uno con diferentes diseños, obstáculos y objetivos. P: ¿Cómo puedo obtener más monedas en Bubble Shooter 3? R: Puedes obtener más monedas completando niveles, viendo anuncios o comprándolos con dinero real. P: ¿Cómo puedo desbloquear nuevos tiradores de burbujas en Bubble Shooter 3? R: Puedes desbloquear nuevos tiradores de burbujas recogiendo estrellas de completar niveles. P: ¿Cómo cambio entre modos en Bubble Shooter 3? R: Puedes cambiar entre modos tocando el icono del menú en la esquina superior izquierda de la pantalla. P: ¿Cómo hago una pausa o reanudo el juego en Bubble Shooter 3? R: Puede hacer una pausa o reanudar el juego tocando el icono de pausa en la esquina superior derecha de la pantalla. 64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Entre Nosotros Blackpink.md b/spaces/Benson/text-generation/Examples/Descargar Entre Nosotros Blackpink.md
deleted file mode 100644
index cecc78d1503f9b230fbc94cd5d00919ea71e426c..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Entre Nosotros Blackpink.md
+++ /dev/null
@@ -1,106 +0,0 @@
-
-
Cómo descargar entre nosotros Blackpink: Una guía para parpadeos y jugadores
-
¿Eres un fan de BLACKPINK, la sensación global de K-pop? ¿Te encanta jugar entre nosotros, el popular juego multijugador de engaño y traición? Si respondiste sí a ambas preguntas, ¡estás de suerte! Hay un mod hecho por fans de Among Us que presenta miembros y temas de BLACKPINK, y se llama Among Us Blackpink. En este artículo, te mostraremos cómo descargar y jugar a este increíble mod, así como darte algunos consejos y trucos para hacer que tu experiencia de juego sea más divertida y agradable.
-
¿Qué hay entre nosotros Blackpink?
-
Un mod hecho por fans de Among Us con miembros y temas de BLACKPINK
-
Entre nosotros Blackpink es un mod o modificación de Among Us, un juego en el que tienes que trabajar junto a otros jugadores para completar tareas en una nave espacial, evitando ser asesinado por un impostor que se esconde entre vosotros. El mod fue creado por tres fans de BLACKPINK, también conocidos como Blinks, y fue lanzado el 23 de octubre de 2020. El mod cambia el juego original añadiendo miembros BLACKPINK como personajes jugables, así como pieles personalizadas, sombreros, mascotas, mapas y sonidos relacionados con BLACKPINK.
Las características y beneficios de jugar entre nosotros Blackpink
-
Jugar entre nosotros Blackpink tiene muchas características y beneficios que lo hacen más divertido y emocionante que el juego original. Estos son algunos de ellos:
-
-
Puedes elegir tu miembro BLACKPINK favorito como personaje, como Jisoo, Jennie, Rosé o Lisa.
-
Puedes personalizar tu personaje con diferentes pieles, sombreros y mascotas que se inspiran en los trajes, accesorios y canciones de BLACKPINK.
-
Puedes jugar en dos nuevos mapas que se basan en los videos musicales de BLACKPINK, como Kill This Love y How You Like That.
-
Puede disfrutar del juego con nuevos efectos de sonido y música que se toman de canciones y álbumes de BLACKPINK.
-
-
-
Cómo descargar Among Us Blackpink para diferentes dispositivos
-
Para PC
-
Descargar WinRAR y el archivo mod de los enlaces oficiales
-
Para jugar entre nosotros Blackpink en su PC, tendrá que descargar dos cosas: WinRAR y el archivo mod. WinRAR es un software que le permite extraer archivos comprimidos, como el archivo mod. El archivo mod es un archivo zip que contiene todos los datos y archivos necesarios para ejecutar el mod. Puede descargar WinRAR desde [22 this](https://www.win-rar.com/download.html?&L=0) y el archivo mod desde [this](https://drive.google.com/file/d/1f7lZy0aXQw9wGx6u8w2L5Z4WQX0ZnYi/view) enlace. Asegúrate de tener la última versión de Among Us instalada en tu PC antes de descargar el archivo mod.
-
Extraer el archivo mod y ejecutar el juego
-
Después de descargar WinRAR y el archivo mod, necesitará extraer el archivo mod usando WinRAR. Para hacer esto, siga estos pasos:
-
-
Haga clic derecho en el archivo mod y seleccione "Extraer aquí".
-
Una carpeta llamada "Among Us Blackpink" aparecerá en la misma ubicación que el archivo mod.
-
Abra la carpeta y haga doble clic en el archivo "Entre nosotros.exe" para ejecutar el juego.
-
Verá un mensaje que dice "Entre nosotros Blackpink Mod por @blackpinkmod". Haga clic en "OK" para continuar.
-
Ahora estás listo para jugar entre nosotros Blackpink en su PC!
-
-
Para Android
-
Descargar el archivo mod de los enlaces oficiales
-
Para jugar entre nosotros Blackpink en su dispositivo Android, solo tendrá que descargar una cosa: el archivo mod. El archivo mod es un archivo apk que contiene todos los datos y archivos necesarios para ejecutar el mod. Puede descargar el archivo mod de [this](https://drive.google.com/file/d/1f7lZy0aXQ9wGx6u8w2L5Z4WQX0ZnYiYb/view) enlace. Asegúrate de tener suficiente espacio de almacenamiento en tu dispositivo antes de descargar el archivo mod.
-
Instalar el archivo mod y ejecutar el juego
-
-
-
Vaya a la configuración de su dispositivo y habilite "Fuentes desconocidas" en las opciones de seguridad o privacidad. Esto le permitirá instalar aplicaciones desde fuentes distintas de Google Play Store.
-
Busque el archivo mod en la carpeta de descargas de su dispositivo y toque en él para instalarlo.
-
Verá un mensaje que dice "¿Desea instalar esta aplicación?" Toque en "Instalar" para continuar.
-
Verá un mensaje que dice "App instalado". Toque en "Abrir" para ejecutar el juego.
-
Ahora está listo para jugar entre nosotros Blackpink en su dispositivo Android!
-
-
For iOS
-
Wait for the developers to release the mod for iOS devices
-
Unfortunately, there is no official version of Among Us Blackpink for iOS devices yet. The mod developers are working hard to make it compatible with iOS devices, but they have not announced a release date yet. However, they have assured fans that they will release it as soon as possible, so stay tuned!
-
Follow the official social media accounts for updates
-
If you want to be notified when Among Us Blackpink becomes available for iOS devices, you can follow the developers' official social media accounts. They post regular updates and news about the mod, as well as screenshots and videos of the gameplay. You can also interact with other Blinks who are playing or waiting for the mod, and share your thoughts and feedback. Here are some of their social media accounts:
How to play Among Us Blackpink with your friends
-
Create or join a private room with a code
-
-
-
Launch Among Us Blackpink on your device and tap "Online".
-
Enter your desired nickname and select your preferred server region.
-
If you want to create a room, tap "Create Game" and choose the game settings, such as the number of impostors, the map, the chat language, and the game rules. Tap "Confirm" to create the room and get the code.
-
If you want to join a room, tap "Enter Code" and type in the code your friend has given you. Tap the arrow button to join the room.
-
Once you are in the room, you can invite more friends by sharing the code with them. You can also chat with other players, change your character's appearance, and customize the game settings.
-
When everyone is ready, tap "Start" to begin the game.
-
-
Choose your favorite BLACKPINK member as your character
-
One of the best features of Among Us Blackpink is that you can choose your favorite BLACKPINK member as your character. You can do this by tapping the "BLACKPINK" button in the bottom right corner of the screen. You will see four options: Jisoo, Jennie, Rosé, and Lisa. Tap the one you want to play as and confirm your choice. You will then see your character's face change to match the BLACKPINK member you selected. You can also change your character's color by tapping the color palette in the bottom left corner of the screen.
-
Enjoy the game with custom skins, hats, pets, maps, and sounds
-
Another great feature of Among Us Blackpink is that you can enjoy the game with custom BLACKPINK-themed skins, hats, pets, maps, and sounds. You can access these features by tapping the buttons at the bottom center of the screen. Here are some examples of what you can find:
-
-
-
Skins: You can choose from different outfits inspired by BLACKPINK music videos, such as Kill This Love, How You Like That, Ice Cream, and Lovesick Girls.
-
-
Pets: You can choose from different animals associated with the BLACKPINK members, such as a panda for Jisoo, a dog for Jennie, a cat for Rosé, and a hamster for Lisa.
-
Maps: You can play on two new maps based on BLACKPINK music videos, Kill This Love and How You Like That. The maps have different layouts, tasks, vents, and sabotages that are unique to each theme.
-
Sounds: You can enjoy the game with new sound effects and music taken from BLACKPINK songs and albums. The sounds include kill animations, emergency meetings, voting results, victory and defeat screens, and background music.
-
-
Tips and tricks for playing Among Us Blackpink
-
Use BLACKPINK lyrics as chat messages
-
A fun way to play Among Us Blackpink is to use BLACKPINK lyrics as your chat messages. This will make your communication more interesting and creative, and it also shows your love for BLACKPINK. For example, you can use lines like these:
-
-
If you are an impostor and want to lie about your location or alibi: "I'm so sorry but it's fake love"
-
If you are a crewmate and want to accuse someone of being an impostor: "You're a bad boy and you're bad for me"
-
If you are a crewmate and want to vent your frustration or anger: "Hit you with that ddu-du ddu-du du du"
-
If you are a crewmate and want to cheer someone on or compliment them: "You're my favorite type of visual"
-
If you are a crewmate and want to flirt with or tease someone: "You're like ice cream in this scorching weather"
-
-
Watch out for the different animations and sound effects
-
-
Have fun and be respectful to other players
-
The most important tip for playing Among Us Blackpink is to have fun and be respectful to other players. Remember that this is a game meant to entertain and connect people who share a common interest in BLACKPINK and Among Us. Therefore, you should not take the game too seriously or personally, and you should not be rude or offensive to other players. Instead, enjoy the game with a positive attitude and a friendly spirit, and appreciate the efforts and talents of the mod developers and the BLACKPINK members.
-
Conclusion
-
Summarize the main points of the article
-
In conclusion, Among Us Blackpink is a fan-made mod of Among Us that features BLACKPINK members and themes. It is a fun and exciting way to play Among Us with your friends and other Blinks, as well as to show your love and support for BLACKPINK. To play Among Us Blackpink, you will need to download and install the mod file on your device, depending on whether you are using a PC, an Android device, or an iOS device. You will also need to create or join a private room with a code, choose your favorite BLACKPINK member as your character, and enjoy the game with custom skins, hats, pets, maps, and sounds. You can also use some tips and tricks to make your gaming experience more fun and enjoyable, such as using BLACKPINK lyrics as your chat messages, watching out for the different animations and sound effects, and having fun and being respectful to other players.
-
Invite readers to try out the mod and share their feedback
-
-
Frequently Asked Questions
-
Is Among Us Blackpink safe to download?
-
Yes, Among Us Blackpink is safe to download as long as you use the official links we have provided in this article. The mod file does not contain any viruses or malware that could harm your device or compromise your privacy. However, you should always be careful when downloading any file from the internet, and scan it with antivirus software before installing it.
-
Is Among Us Blackpink free to play?
-
Yes, Among Us Blackpink is free to play as long as you have the original version of Among Us installed on your device. You do not need to pay any money to download or play this mod. However, you may need to watch some ads or make some in-app purchases if you want to access certain features or items in the original game.
-
Can I play Among Us Blackpink with people who do not have the mod?
-
No, you cannot play Among Us Blackpink with people who do not have the mod installed on their devices. This is because the mod changes some aspects of the game that are incompatible with the original version. Therefore, you can only play Among Us Blackpink with people who have the same mod installed on their devices.
-
Can I go back to the original version of Among Us after playing Among Us Blackpink?
-
Yes, you can go back to the original version of Among Us after playing Among Us Blackpink. To do this, you will need to uninstall or delete the mod file from your device and then launch the original game from your device's app store or library. You can also keep both versions of the game on your device if you have enough storage space.
-
How can I support the developers of Among Us Blackpink?
-
-
\ No newline at end of file
diff --git a/spaces/BetterAPI/BetterChat/src/routes/settings/+server.ts b/spaces/BetterAPI/BetterChat/src/routes/settings/+server.ts
deleted file mode 100644
index 8073a482cb1b0ae89ce1cf2b372b6939f596e935..0000000000000000000000000000000000000000
--- a/spaces/BetterAPI/BetterChat/src/routes/settings/+server.ts
+++ /dev/null
@@ -1,34 +0,0 @@
-import { collections } from "$lib/server/database.js";
-import { subMinutes } from "date-fns";
-import { z } from "zod";
-
-export async function PATCH({ locals, request }) {
- const json = await request.json();
-
- const settings = z
- .object({
- shareConversationsWithModelAuthors: z.boolean().default(true),
- ethicsModalAcceptedAt: z.optional(z.date({ coerce: true }).min(subMinutes(new Date(), 5))),
- })
- .parse(json);
-
- await collections.settings.updateOne(
- {
- sessionId: locals.sessionId,
- },
- {
- $set: {
- ...settings,
- updatedAt: new Date(),
- },
- $setOnInsert: {
- createdAt: new Date(),
- },
- },
- {
- upsert: true,
- }
- );
-
- return new Response();
-}
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/enums.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/enums.py
deleted file mode 100644
index 5e3e198233698f2b007489dd299cecb87d971067..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/enums.py
+++ /dev/null
@@ -1,85 +0,0 @@
-"""
-All of the Enums that are used throughout the chardet package.
-
-:author: Dan Blanchard (dan.blanchard@gmail.com)
-"""
-
-from enum import Enum, Flag
-
-
-class InputState:
- """
- This enum represents the different states a universal detector can be in.
- """
-
- PURE_ASCII = 0
- ESC_ASCII = 1
- HIGH_BYTE = 2
-
-
-class LanguageFilter(Flag):
- """
- This enum represents the different language filters we can apply to a
- ``UniversalDetector``.
- """
-
- NONE = 0x00
- CHINESE_SIMPLIFIED = 0x01
- CHINESE_TRADITIONAL = 0x02
- JAPANESE = 0x04
- KOREAN = 0x08
- NON_CJK = 0x10
- ALL = 0x1F
- CHINESE = CHINESE_SIMPLIFIED | CHINESE_TRADITIONAL
- CJK = CHINESE | JAPANESE | KOREAN
-
-
-class ProbingState(Enum):
- """
- This enum represents the different states a prober can be in.
- """
-
- DETECTING = 0
- FOUND_IT = 1
- NOT_ME = 2
-
-
-class MachineState:
- """
- This enum represents the different states a state machine can be in.
- """
-
- START = 0
- ERROR = 1
- ITS_ME = 2
-
-
-class SequenceLikelihood:
- """
- This enum represents the likelihood of a character following the previous one.
- """
-
- NEGATIVE = 0
- UNLIKELY = 1
- LIKELY = 2
- POSITIVE = 3
-
- @classmethod
- def get_num_categories(cls) -> int:
- """:returns: The number of likelihood categories in the enum."""
- return 4
-
-
-class CharacterCategory:
- """
- This enum represents the different categories language models for
- ``SingleByteCharsetProber`` put characters into.
-
- Anything less than CONTROL is considered a letter.
- """
-
- UNDEFINED = 255
- LINE_BREAK = 254
- SYMBOL = 253
- DIGIT = 252
- CONTROL = 251
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/_windows_renderer.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/_windows_renderer.py
deleted file mode 100644
index 5ece05649e7268a75c82de6ced552619ffc093ab..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/_windows_renderer.py
+++ /dev/null
@@ -1,56 +0,0 @@
-from typing import Iterable, Sequence, Tuple, cast
-
-from pip._vendor.rich._win32_console import LegacyWindowsTerm, WindowsCoordinates
-from pip._vendor.rich.segment import ControlCode, ControlType, Segment
-
-
-def legacy_windows_render(buffer: Iterable[Segment], term: LegacyWindowsTerm) -> None:
- """Makes appropriate Windows Console API calls based on the segments in the buffer.
-
- Args:
- buffer (Iterable[Segment]): Iterable of Segments to convert to Win32 API calls.
- term (LegacyWindowsTerm): Used to call the Windows Console API.
- """
- for text, style, control in buffer:
- if not control:
- if style:
- term.write_styled(text, style)
- else:
- term.write_text(text)
- else:
- control_codes: Sequence[ControlCode] = control
- for control_code in control_codes:
- control_type = control_code[0]
- if control_type == ControlType.CURSOR_MOVE_TO:
- _, x, y = cast(Tuple[ControlType, int, int], control_code)
- term.move_cursor_to(WindowsCoordinates(row=y - 1, col=x - 1))
- elif control_type == ControlType.CARRIAGE_RETURN:
- term.write_text("\r")
- elif control_type == ControlType.HOME:
- term.move_cursor_to(WindowsCoordinates(0, 0))
- elif control_type == ControlType.CURSOR_UP:
- term.move_cursor_up()
- elif control_type == ControlType.CURSOR_DOWN:
- term.move_cursor_down()
- elif control_type == ControlType.CURSOR_FORWARD:
- term.move_cursor_forward()
- elif control_type == ControlType.CURSOR_BACKWARD:
- term.move_cursor_backward()
- elif control_type == ControlType.CURSOR_MOVE_TO_COLUMN:
- _, column = cast(Tuple[ControlType, int], control_code)
- term.move_cursor_to_column(column - 1)
- elif control_type == ControlType.HIDE_CURSOR:
- term.hide_cursor()
- elif control_type == ControlType.SHOW_CURSOR:
- term.show_cursor()
- elif control_type == ControlType.ERASE_IN_LINE:
- _, mode = cast(Tuple[ControlType, int], control_code)
- if mode == 0:
- term.erase_end_of_line()
- elif mode == 1:
- term.erase_start_of_line()
- elif mode == 2:
- term.erase_line()
- elif control_type == ControlType.SET_WINDOW_TITLE:
- _, title = cast(Tuple[ControlType, str], control_code)
- term.set_title(title)
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/console.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/console.py
deleted file mode 100644
index 7c363dfdc5e8aa344c26f285cb2000c632bcce49..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/console.py
+++ /dev/null
@@ -1,2633 +0,0 @@
-import inspect
-import os
-import platform
-import sys
-import threading
-import zlib
-from abc import ABC, abstractmethod
-from dataclasses import dataclass, field
-from datetime import datetime
-from functools import wraps
-from getpass import getpass
-from html import escape
-from inspect import isclass
-from itertools import islice
-from math import ceil
-from time import monotonic
-from types import FrameType, ModuleType, TracebackType
-from typing import (
- IO,
- TYPE_CHECKING,
- Any,
- Callable,
- Dict,
- Iterable,
- List,
- Mapping,
- NamedTuple,
- Optional,
- TextIO,
- Tuple,
- Type,
- Union,
- cast,
-)
-
-from pip._vendor.rich._null_file import NULL_FILE
-
-if sys.version_info >= (3, 8):
- from typing import Literal, Protocol, runtime_checkable
-else:
- from pip._vendor.typing_extensions import (
- Literal,
- Protocol,
- runtime_checkable,
- ) # pragma: no cover
-
-from . import errors, themes
-from ._emoji_replace import _emoji_replace
-from ._export_format import CONSOLE_HTML_FORMAT, CONSOLE_SVG_FORMAT
-from ._fileno import get_fileno
-from ._log_render import FormatTimeCallable, LogRender
-from .align import Align, AlignMethod
-from .color import ColorSystem, blend_rgb
-from .control import Control
-from .emoji import EmojiVariant
-from .highlighter import NullHighlighter, ReprHighlighter
-from .markup import render as render_markup
-from .measure import Measurement, measure_renderables
-from .pager import Pager, SystemPager
-from .pretty import Pretty, is_expandable
-from .protocol import rich_cast
-from .region import Region
-from .scope import render_scope
-from .screen import Screen
-from .segment import Segment
-from .style import Style, StyleType
-from .styled import Styled
-from .terminal_theme import DEFAULT_TERMINAL_THEME, SVG_EXPORT_THEME, TerminalTheme
-from .text import Text, TextType
-from .theme import Theme, ThemeStack
-
-if TYPE_CHECKING:
- from ._windows import WindowsConsoleFeatures
- from .live import Live
- from .status import Status
-
-JUPYTER_DEFAULT_COLUMNS = 115
-JUPYTER_DEFAULT_LINES = 100
-WINDOWS = platform.system() == "Windows"
-
-HighlighterType = Callable[[Union[str, "Text"]], "Text"]
-JustifyMethod = Literal["default", "left", "center", "right", "full"]
-OverflowMethod = Literal["fold", "crop", "ellipsis", "ignore"]
-
-
-class NoChange:
- pass
-
-
-NO_CHANGE = NoChange()
-
-try:
- _STDIN_FILENO = sys.__stdin__.fileno()
-except Exception:
- _STDIN_FILENO = 0
-try:
- _STDOUT_FILENO = sys.__stdout__.fileno()
-except Exception:
- _STDOUT_FILENO = 1
-try:
- _STDERR_FILENO = sys.__stderr__.fileno()
-except Exception:
- _STDERR_FILENO = 2
-
-_STD_STREAMS = (_STDIN_FILENO, _STDOUT_FILENO, _STDERR_FILENO)
-_STD_STREAMS_OUTPUT = (_STDOUT_FILENO, _STDERR_FILENO)
-
-
-_TERM_COLORS = {
- "kitty": ColorSystem.EIGHT_BIT,
- "256color": ColorSystem.EIGHT_BIT,
- "16color": ColorSystem.STANDARD,
-}
-
-
-class ConsoleDimensions(NamedTuple):
- """Size of the terminal."""
-
- width: int
- """The width of the console in 'cells'."""
- height: int
- """The height of the console in lines."""
-
-
-@dataclass
-class ConsoleOptions:
- """Options for __rich_console__ method."""
-
- size: ConsoleDimensions
- """Size of console."""
- legacy_windows: bool
- """legacy_windows: flag for legacy windows."""
- min_width: int
- """Minimum width of renderable."""
- max_width: int
- """Maximum width of renderable."""
- is_terminal: bool
- """True if the target is a terminal, otherwise False."""
- encoding: str
- """Encoding of terminal."""
- max_height: int
- """Height of container (starts as terminal)"""
- justify: Optional[JustifyMethod] = None
- """Justify value override for renderable."""
- overflow: Optional[OverflowMethod] = None
- """Overflow value override for renderable."""
- no_wrap: Optional[bool] = False
- """Disable wrapping for text."""
- highlight: Optional[bool] = None
- """Highlight override for render_str."""
- markup: Optional[bool] = None
- """Enable markup when rendering strings."""
- height: Optional[int] = None
-
- @property
- def ascii_only(self) -> bool:
- """Check if renderables should use ascii only."""
- return not self.encoding.startswith("utf")
-
- def copy(self) -> "ConsoleOptions":
- """Return a copy of the options.
-
- Returns:
- ConsoleOptions: a copy of self.
- """
- options: ConsoleOptions = ConsoleOptions.__new__(ConsoleOptions)
- options.__dict__ = self.__dict__.copy()
- return options
-
- def update(
- self,
- *,
- width: Union[int, NoChange] = NO_CHANGE,
- min_width: Union[int, NoChange] = NO_CHANGE,
- max_width: Union[int, NoChange] = NO_CHANGE,
- justify: Union[Optional[JustifyMethod], NoChange] = NO_CHANGE,
- overflow: Union[Optional[OverflowMethod], NoChange] = NO_CHANGE,
- no_wrap: Union[Optional[bool], NoChange] = NO_CHANGE,
- highlight: Union[Optional[bool], NoChange] = NO_CHANGE,
- markup: Union[Optional[bool], NoChange] = NO_CHANGE,
- height: Union[Optional[int], NoChange] = NO_CHANGE,
- ) -> "ConsoleOptions":
- """Update values, return a copy."""
- options = self.copy()
- if not isinstance(width, NoChange):
- options.min_width = options.max_width = max(0, width)
- if not isinstance(min_width, NoChange):
- options.min_width = min_width
- if not isinstance(max_width, NoChange):
- options.max_width = max_width
- if not isinstance(justify, NoChange):
- options.justify = justify
- if not isinstance(overflow, NoChange):
- options.overflow = overflow
- if not isinstance(no_wrap, NoChange):
- options.no_wrap = no_wrap
- if not isinstance(highlight, NoChange):
- options.highlight = highlight
- if not isinstance(markup, NoChange):
- options.markup = markup
- if not isinstance(height, NoChange):
- if height is not None:
- options.max_height = height
- options.height = None if height is None else max(0, height)
- return options
-
- def update_width(self, width: int) -> "ConsoleOptions":
- """Update just the width, return a copy.
-
- Args:
- width (int): New width (sets both min_width and max_width)
-
- Returns:
- ~ConsoleOptions: New console options instance.
- """
- options = self.copy()
- options.min_width = options.max_width = max(0, width)
- return options
-
- def update_height(self, height: int) -> "ConsoleOptions":
- """Update the height, and return a copy.
-
- Args:
- height (int): New height
-
- Returns:
- ~ConsoleOptions: New Console options instance.
- """
- options = self.copy()
- options.max_height = options.height = height
- return options
-
- def reset_height(self) -> "ConsoleOptions":
- """Return a copy of the options with height set to ``None``.
-
- Returns:
- ~ConsoleOptions: New console options instance.
- """
- options = self.copy()
- options.height = None
- return options
-
- def update_dimensions(self, width: int, height: int) -> "ConsoleOptions":
- """Update the width and height, and return a copy.
-
- Args:
- width (int): New width (sets both min_width and max_width).
- height (int): New height.
-
- Returns:
- ~ConsoleOptions: New console options instance.
- """
- options = self.copy()
- options.min_width = options.max_width = max(0, width)
- options.height = options.max_height = height
- return options
-
-
-@runtime_checkable
-class RichCast(Protocol):
- """An object that may be 'cast' to a console renderable."""
-
- def __rich__(
- self,
- ) -> Union["ConsoleRenderable", "RichCast", str]: # pragma: no cover
- ...
-
-
-@runtime_checkable
-class ConsoleRenderable(Protocol):
- """An object that supports the console protocol."""
-
- def __rich_console__(
- self, console: "Console", options: "ConsoleOptions"
- ) -> "RenderResult": # pragma: no cover
- ...
-
-
-# A type that may be rendered by Console.
-RenderableType = Union[ConsoleRenderable, RichCast, str]
-
-# The result of calling a __rich_console__ method.
-RenderResult = Iterable[Union[RenderableType, Segment]]
-
-_null_highlighter = NullHighlighter()
-
-
-class CaptureError(Exception):
- """An error in the Capture context manager."""
-
-
-class NewLine:
- """A renderable to generate new line(s)"""
-
- def __init__(self, count: int = 1) -> None:
- self.count = count
-
- def __rich_console__(
- self, console: "Console", options: "ConsoleOptions"
- ) -> Iterable[Segment]:
- yield Segment("\n" * self.count)
-
-
-class ScreenUpdate:
- """Render a list of lines at a given offset."""
-
- def __init__(self, lines: List[List[Segment]], x: int, y: int) -> None:
- self._lines = lines
- self.x = x
- self.y = y
-
- def __rich_console__(
- self, console: "Console", options: ConsoleOptions
- ) -> RenderResult:
- x = self.x
- move_to = Control.move_to
- for offset, line in enumerate(self._lines, self.y):
- yield move_to(x, offset)
- yield from line
-
-
-class Capture:
- """Context manager to capture the result of printing to the console.
- See :meth:`~rich.console.Console.capture` for how to use.
-
- Args:
- console (Console): A console instance to capture output.
- """
-
- def __init__(self, console: "Console") -> None:
- self._console = console
- self._result: Optional[str] = None
-
- def __enter__(self) -> "Capture":
- self._console.begin_capture()
- return self
-
- def __exit__(
- self,
- exc_type: Optional[Type[BaseException]],
- exc_val: Optional[BaseException],
- exc_tb: Optional[TracebackType],
- ) -> None:
- self._result = self._console.end_capture()
-
- def get(self) -> str:
- """Get the result of the capture."""
- if self._result is None:
- raise CaptureError(
- "Capture result is not available until context manager exits."
- )
- return self._result
-
-
-class ThemeContext:
- """A context manager to use a temporary theme. See :meth:`~rich.console.Console.use_theme` for usage."""
-
- def __init__(self, console: "Console", theme: Theme, inherit: bool = True) -> None:
- self.console = console
- self.theme = theme
- self.inherit = inherit
-
- def __enter__(self) -> "ThemeContext":
- self.console.push_theme(self.theme)
- return self
-
- def __exit__(
- self,
- exc_type: Optional[Type[BaseException]],
- exc_val: Optional[BaseException],
- exc_tb: Optional[TracebackType],
- ) -> None:
- self.console.pop_theme()
-
-
-class PagerContext:
- """A context manager that 'pages' content. See :meth:`~rich.console.Console.pager` for usage."""
-
- def __init__(
- self,
- console: "Console",
- pager: Optional[Pager] = None,
- styles: bool = False,
- links: bool = False,
- ) -> None:
- self._console = console
- self.pager = SystemPager() if pager is None else pager
- self.styles = styles
- self.links = links
-
- def __enter__(self) -> "PagerContext":
- self._console._enter_buffer()
- return self
-
- def __exit__(
- self,
- exc_type: Optional[Type[BaseException]],
- exc_val: Optional[BaseException],
- exc_tb: Optional[TracebackType],
- ) -> None:
- if exc_type is None:
- with self._console._lock:
- buffer: List[Segment] = self._console._buffer[:]
- del self._console._buffer[:]
- segments: Iterable[Segment] = buffer
- if not self.styles:
- segments = Segment.strip_styles(segments)
- elif not self.links:
- segments = Segment.strip_links(segments)
- content = self._console._render_buffer(segments)
- self.pager.show(content)
- self._console._exit_buffer()
-
-
-class ScreenContext:
- """A context manager that enables an alternative screen. See :meth:`~rich.console.Console.screen` for usage."""
-
- def __init__(
- self, console: "Console", hide_cursor: bool, style: StyleType = ""
- ) -> None:
- self.console = console
- self.hide_cursor = hide_cursor
- self.screen = Screen(style=style)
- self._changed = False
-
- def update(
- self, *renderables: RenderableType, style: Optional[StyleType] = None
- ) -> None:
- """Update the screen.
-
- Args:
- renderable (RenderableType, optional): Optional renderable to replace current renderable,
- or None for no change. Defaults to None.
- style: (Style, optional): Replacement style, or None for no change. Defaults to None.
- """
- if renderables:
- self.screen.renderable = (
- Group(*renderables) if len(renderables) > 1 else renderables[0]
- )
- if style is not None:
- self.screen.style = style
- self.console.print(self.screen, end="")
-
- def __enter__(self) -> "ScreenContext":
- self._changed = self.console.set_alt_screen(True)
- if self._changed and self.hide_cursor:
- self.console.show_cursor(False)
- return self
-
- def __exit__(
- self,
- exc_type: Optional[Type[BaseException]],
- exc_val: Optional[BaseException],
- exc_tb: Optional[TracebackType],
- ) -> None:
- if self._changed:
- self.console.set_alt_screen(False)
- if self.hide_cursor:
- self.console.show_cursor(True)
-
-
-class Group:
- """Takes a group of renderables and returns a renderable object that renders the group.
-
- Args:
- renderables (Iterable[RenderableType]): An iterable of renderable objects.
- fit (bool, optional): Fit dimension of group to contents, or fill available space. Defaults to True.
- """
-
- def __init__(self, *renderables: "RenderableType", fit: bool = True) -> None:
- self._renderables = renderables
- self.fit = fit
- self._render: Optional[List[RenderableType]] = None
-
- @property
- def renderables(self) -> List["RenderableType"]:
- if self._render is None:
- self._render = list(self._renderables)
- return self._render
-
- def __rich_measure__(
- self, console: "Console", options: "ConsoleOptions"
- ) -> "Measurement":
- if self.fit:
- return measure_renderables(console, options, self.renderables)
- else:
- return Measurement(options.max_width, options.max_width)
-
- def __rich_console__(
- self, console: "Console", options: "ConsoleOptions"
- ) -> RenderResult:
- yield from self.renderables
-
-
-def group(fit: bool = True) -> Callable[..., Callable[..., Group]]:
- """A decorator that turns an iterable of renderables in to a group.
-
- Args:
- fit (bool, optional): Fit dimension of group to contents, or fill available space. Defaults to True.
- """
-
- def decorator(
- method: Callable[..., Iterable[RenderableType]]
- ) -> Callable[..., Group]:
- """Convert a method that returns an iterable of renderables in to a Group."""
-
- @wraps(method)
- def _replace(*args: Any, **kwargs: Any) -> Group:
- renderables = method(*args, **kwargs)
- return Group(*renderables, fit=fit)
-
- return _replace
-
- return decorator
-
-
-def _is_jupyter() -> bool: # pragma: no cover
- """Check if we're running in a Jupyter notebook."""
- try:
- get_ipython # type: ignore[name-defined]
- except NameError:
- return False
- ipython = get_ipython() # type: ignore[name-defined]
- shell = ipython.__class__.__name__
- if (
- "google.colab" in str(ipython.__class__)
- or os.getenv("DATABRICKS_RUNTIME_VERSION")
- or shell == "ZMQInteractiveShell"
- ):
- return True # Jupyter notebook or qtconsole
- elif shell == "TerminalInteractiveShell":
- return False # Terminal running IPython
- else:
- return False # Other type (?)
-
-
-COLOR_SYSTEMS = {
- "standard": ColorSystem.STANDARD,
- "256": ColorSystem.EIGHT_BIT,
- "truecolor": ColorSystem.TRUECOLOR,
- "windows": ColorSystem.WINDOWS,
-}
-
-_COLOR_SYSTEMS_NAMES = {system: name for name, system in COLOR_SYSTEMS.items()}
-
-
-@dataclass
-class ConsoleThreadLocals(threading.local):
- """Thread local values for Console context."""
-
- theme_stack: ThemeStack
- buffer: List[Segment] = field(default_factory=list)
- buffer_index: int = 0
-
-
-class RenderHook(ABC):
- """Provides hooks in to the render process."""
-
- @abstractmethod
- def process_renderables(
- self, renderables: List[ConsoleRenderable]
- ) -> List[ConsoleRenderable]:
- """Called with a list of objects to render.
-
- This method can return a new list of renderables, or modify and return the same list.
-
- Args:
- renderables (List[ConsoleRenderable]): A number of renderable objects.
-
- Returns:
- List[ConsoleRenderable]: A replacement list of renderables.
- """
-
-
-_windows_console_features: Optional["WindowsConsoleFeatures"] = None
-
-
-def get_windows_console_features() -> "WindowsConsoleFeatures": # pragma: no cover
- global _windows_console_features
- if _windows_console_features is not None:
- return _windows_console_features
- from ._windows import get_windows_console_features
-
- _windows_console_features = get_windows_console_features()
- return _windows_console_features
-
-
-def detect_legacy_windows() -> bool:
- """Detect legacy Windows."""
- return WINDOWS and not get_windows_console_features().vt
-
-
-class Console:
- """A high level console interface.
-
- Args:
- color_system (str, optional): The color system supported by your terminal,
- either ``"standard"``, ``"256"`` or ``"truecolor"``. Leave as ``"auto"`` to autodetect.
- force_terminal (Optional[bool], optional): Enable/disable terminal control codes, or None to auto-detect terminal. Defaults to None.
- force_jupyter (Optional[bool], optional): Enable/disable Jupyter rendering, or None to auto-detect Jupyter. Defaults to None.
- force_interactive (Optional[bool], optional): Enable/disable interactive mode, or None to auto detect. Defaults to None.
- soft_wrap (Optional[bool], optional): Set soft wrap default on print method. Defaults to False.
- theme (Theme, optional): An optional style theme object, or ``None`` for default theme.
- stderr (bool, optional): Use stderr rather than stdout if ``file`` is not specified. Defaults to False.
- file (IO, optional): A file object where the console should write to. Defaults to stdout.
- quiet (bool, Optional): Boolean to suppress all output. Defaults to False.
- width (int, optional): The width of the terminal. Leave as default to auto-detect width.
- height (int, optional): The height of the terminal. Leave as default to auto-detect height.
- style (StyleType, optional): Style to apply to all output, or None for no style. Defaults to None.
- no_color (Optional[bool], optional): Enabled no color mode, or None to auto detect. Defaults to None.
- tab_size (int, optional): Number of spaces used to replace a tab character. Defaults to 8.
- record (bool, optional): Boolean to enable recording of terminal output,
- required to call :meth:`export_html`, :meth:`export_svg`, and :meth:`export_text`. Defaults to False.
- markup (bool, optional): Boolean to enable :ref:`console_markup`. Defaults to True.
- emoji (bool, optional): Enable emoji code. Defaults to True.
- emoji_variant (str, optional): Optional emoji variant, either "text" or "emoji". Defaults to None.
- highlight (bool, optional): Enable automatic highlighting. Defaults to True.
- log_time (bool, optional): Boolean to enable logging of time by :meth:`log` methods. Defaults to True.
- log_path (bool, optional): Boolean to enable the logging of the caller by :meth:`log`. Defaults to True.
- log_time_format (Union[str, TimeFormatterCallable], optional): If ``log_time`` is enabled, either string for strftime or callable that formats the time. Defaults to "[%X] ".
- highlighter (HighlighterType, optional): Default highlighter.
- legacy_windows (bool, optional): Enable legacy Windows mode, or ``None`` to auto detect. Defaults to ``None``.
- safe_box (bool, optional): Restrict box options that don't render on legacy Windows.
- get_datetime (Callable[[], datetime], optional): Callable that gets the current time as a datetime.datetime object (used by Console.log),
- or None for datetime.now.
- get_time (Callable[[], time], optional): Callable that gets the current time in seconds, default uses time.monotonic.
- """
-
- _environ: Mapping[str, str] = os.environ
-
- def __init__(
- self,
- *,
- color_system: Optional[
- Literal["auto", "standard", "256", "truecolor", "windows"]
- ] = "auto",
- force_terminal: Optional[bool] = None,
- force_jupyter: Optional[bool] = None,
- force_interactive: Optional[bool] = None,
- soft_wrap: bool = False,
- theme: Optional[Theme] = None,
- stderr: bool = False,
- file: Optional[IO[str]] = None,
- quiet: bool = False,
- width: Optional[int] = None,
- height: Optional[int] = None,
- style: Optional[StyleType] = None,
- no_color: Optional[bool] = None,
- tab_size: int = 8,
- record: bool = False,
- markup: bool = True,
- emoji: bool = True,
- emoji_variant: Optional[EmojiVariant] = None,
- highlight: bool = True,
- log_time: bool = True,
- log_path: bool = True,
- log_time_format: Union[str, FormatTimeCallable] = "[%X]",
- highlighter: Optional["HighlighterType"] = ReprHighlighter(),
- legacy_windows: Optional[bool] = None,
- safe_box: bool = True,
- get_datetime: Optional[Callable[[], datetime]] = None,
- get_time: Optional[Callable[[], float]] = None,
- _environ: Optional[Mapping[str, str]] = None,
- ):
- # Copy of os.environ allows us to replace it for testing
- if _environ is not None:
- self._environ = _environ
-
- self.is_jupyter = _is_jupyter() if force_jupyter is None else force_jupyter
- if self.is_jupyter:
- if width is None:
- jupyter_columns = self._environ.get("JUPYTER_COLUMNS")
- if jupyter_columns is not None and jupyter_columns.isdigit():
- width = int(jupyter_columns)
- else:
- width = JUPYTER_DEFAULT_COLUMNS
- if height is None:
- jupyter_lines = self._environ.get("JUPYTER_LINES")
- if jupyter_lines is not None and jupyter_lines.isdigit():
- height = int(jupyter_lines)
- else:
- height = JUPYTER_DEFAULT_LINES
-
- self.tab_size = tab_size
- self.record = record
- self._markup = markup
- self._emoji = emoji
- self._emoji_variant: Optional[EmojiVariant] = emoji_variant
- self._highlight = highlight
- self.legacy_windows: bool = (
- (detect_legacy_windows() and not self.is_jupyter)
- if legacy_windows is None
- else legacy_windows
- )
-
- if width is None:
- columns = self._environ.get("COLUMNS")
- if columns is not None and columns.isdigit():
- width = int(columns) - self.legacy_windows
- if height is None:
- lines = self._environ.get("LINES")
- if lines is not None and lines.isdigit():
- height = int(lines)
-
- self.soft_wrap = soft_wrap
- self._width = width
- self._height = height
-
- self._color_system: Optional[ColorSystem]
-
- self._force_terminal = None
- if force_terminal is not None:
- self._force_terminal = force_terminal
-
- self._file = file
- self.quiet = quiet
- self.stderr = stderr
-
- if color_system is None:
- self._color_system = None
- elif color_system == "auto":
- self._color_system = self._detect_color_system()
- else:
- self._color_system = COLOR_SYSTEMS[color_system]
-
- self._lock = threading.RLock()
- self._log_render = LogRender(
- show_time=log_time,
- show_path=log_path,
- time_format=log_time_format,
- )
- self.highlighter: HighlighterType = highlighter or _null_highlighter
- self.safe_box = safe_box
- self.get_datetime = get_datetime or datetime.now
- self.get_time = get_time or monotonic
- self.style = style
- self.no_color = (
- no_color if no_color is not None else "NO_COLOR" in self._environ
- )
- self.is_interactive = (
- (self.is_terminal and not self.is_dumb_terminal)
- if force_interactive is None
- else force_interactive
- )
-
- self._record_buffer_lock = threading.RLock()
- self._thread_locals = ConsoleThreadLocals(
- theme_stack=ThemeStack(themes.DEFAULT if theme is None else theme)
- )
- self._record_buffer: List[Segment] = []
- self._render_hooks: List[RenderHook] = []
- self._live: Optional["Live"] = None
- self._is_alt_screen = False
-
- def __repr__(self) -> str:
- return f"<console width={self.width} {self._color_system!s}>"
-
- @property
- def file(self) -> IO[str]:
- """Get the file object to write to."""
- file = self._file or (sys.stderr if self.stderr else sys.stdout)
- file = getattr(file, "rich_proxied_file", file)
- if file is None:
- file = NULL_FILE
- return file
-
- @file.setter
- def file(self, new_file: IO[str]) -> None:
- """Set a new file object."""
- self._file = new_file
-
- @property
- def _buffer(self) -> List[Segment]:
- """Get a thread local buffer."""
- return self._thread_locals.buffer
-
- @property
- def _buffer_index(self) -> int:
- """Get a thread local buffer."""
- return self._thread_locals.buffer_index
-
- @_buffer_index.setter
- def _buffer_index(self, value: int) -> None:
- self._thread_locals.buffer_index = value
-
- @property
- def _theme_stack(self) -> ThemeStack:
- """Get the thread local theme stack."""
- return self._thread_locals.theme_stack
-
- def _detect_color_system(self) -> Optional[ColorSystem]:
- """Detect color system from env vars."""
- if self.is_jupyter:
- return ColorSystem.TRUECOLOR
- if not self.is_terminal or self.is_dumb_terminal:
- return None
- if WINDOWS: # pragma: no cover
- if self.legacy_windows: # pragma: no cover
- return ColorSystem.WINDOWS
- windows_console_features = get_windows_console_features()
- return (
- ColorSystem.TRUECOLOR
- if windows_console_features.truecolor
- else ColorSystem.EIGHT_BIT
- )
- else:
- color_term = self._environ.get("COLORTERM", "").strip().lower()
- if color_term in ("truecolor", "24bit"):
- return ColorSystem.TRUECOLOR
- term = self._environ.get("TERM", "").strip().lower()
- _term_name, _hyphen, colors = term.rpartition("-")
- color_system = _TERM_COLORS.get(colors, ColorSystem.STANDARD)
- return color_system
-
- def _enter_buffer(self) -> None:
- """Enter in to a buffer context, and buffer all output."""
- self._buffer_index += 1
-
- def _exit_buffer(self) -> None:
- """Leave buffer context, and render content if required."""
- self._buffer_index -= 1
- self._check_buffer()
-
- def set_live(self, live: "Live") -> None:
- """Set Live instance. Used by Live context manager.
-
- Args:
- live (Live): Live instance using this Console.
-
- Raises:
- errors.LiveError: If this Console has a Live context currently active.
- """
- with self._lock:
- if self._live is not None:
- raise errors.LiveError("Only one live display may be active at once")
- self._live = live
-
- def clear_live(self) -> None:
- """Clear the Live instance."""
- with self._lock:
- self._live = None
-
- def push_render_hook(self, hook: RenderHook) -> None:
- """Add a new render hook to the stack.
-
- Args:
- hook (RenderHook): Render hook instance.
- """
- with self._lock:
- self._render_hooks.append(hook)
-
- def pop_render_hook(self) -> None:
- """Pop the last renderhook from the stack."""
- with self._lock:
- self._render_hooks.pop()
-
- def __enter__(self) -> "Console":
- """Own context manager to enter buffer context."""
- self._enter_buffer()
- return self
-
- def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> None:
- """Exit buffer context."""
- self._exit_buffer()
-
- def begin_capture(self) -> None:
- """Begin capturing console output. Call :meth:`end_capture` to exit capture mode and return output."""
- self._enter_buffer()
-
- def end_capture(self) -> str:
- """End capture mode and return captured string.
-
- Returns:
- str: Console output.
- """
- render_result = self._render_buffer(self._buffer)
- del self._buffer[:]
- self._exit_buffer()
- return render_result
-
- def push_theme(self, theme: Theme, *, inherit: bool = True) -> None:
- """Push a new theme on to the top of the stack, replacing the styles from the previous theme.
- Generally speaking, you should call :meth:`~rich.console.Console.use_theme` to get a context manager, rather
- than calling this method directly.
-
- Args:
- theme (Theme): A theme instance.
- inherit (bool, optional): Inherit existing styles. Defaults to True.
- """
- self._theme_stack.push_theme(theme, inherit=inherit)
-
- def pop_theme(self) -> None:
- """Remove theme from top of stack, restoring previous theme."""
- self._theme_stack.pop_theme()
-
- def use_theme(self, theme: Theme, *, inherit: bool = True) -> ThemeContext:
- """Use a different theme for the duration of the context manager.
-
- Args:
- theme (Theme): Theme instance to user.
- inherit (bool, optional): Inherit existing console styles. Defaults to True.
-
- Returns:
- ThemeContext: [description]
- """
- return ThemeContext(self, theme, inherit)
-
- @property
- def color_system(self) -> Optional[str]:
- """Get color system string.
-
- Returns:
- Optional[str]: "standard", "256" or "truecolor".
- """
-
- if self._color_system is not None:
- return _COLOR_SYSTEMS_NAMES[self._color_system]
- else:
- return None
-
- @property
- def encoding(self) -> str:
- """Get the encoding of the console file, e.g. ``"utf-8"``.
-
- Returns:
- str: A standard encoding string.
- """
- return (getattr(self.file, "encoding", "utf-8") or "utf-8").lower()
-
- @property
- def is_terminal(self) -> bool:
- """Check if the console is writing to a terminal.
-
- Returns:
- bool: True if the console writing to a device capable of
- understanding terminal codes, otherwise False.
- """
- if self._force_terminal is not None:
- return self._force_terminal
-
- if hasattr(sys.stdin, "__module__") and sys.stdin.__module__.startswith(
- "idlelib"
- ):
- # Return False for Idle which claims to be a tty but can't handle ansi codes
- return False
-
- if self.is_jupyter:
- # return False for Jupyter, which may have FORCE_COLOR set
- return False
-
- # If FORCE_COLOR env var has any value at all, we assume a terminal.
- force_color = self._environ.get("FORCE_COLOR")
- if force_color is not None:
- self._force_terminal = True
-
- isatty: Optional[Callable[[], bool]] = getattr(self.file, "isatty", None)
- try:
- return False if isatty is None else isatty()
- except ValueError:
- # in some situation (at the end of a pytest run for example) isatty() can raise
- # ValueError: I/O operation on closed file
- # return False because we aren't in a terminal anymore
- return False
-
- @property
- def is_dumb_terminal(self) -> bool:
- """Detect dumb terminal.
-
- Returns:
- bool: True if writing to a dumb terminal, otherwise False.
-
- """
- _term = self._environ.get("TERM", "")
- is_dumb = _term.lower() in ("dumb", "unknown")
- return self.is_terminal and is_dumb
-
- @property
- def options(self) -> ConsoleOptions:
- """Get default console options."""
- return ConsoleOptions(
- max_height=self.size.height,
- size=self.size,
- legacy_windows=self.legacy_windows,
- min_width=1,
- max_width=self.width,
- encoding=self.encoding,
- is_terminal=self.is_terminal,
- )
-
- @property
- def size(self) -> ConsoleDimensions:
- """Get the size of the console.
-
- Returns:
- ConsoleDimensions: A named tuple containing the dimensions.
- """
-
- if self._width is not None and self._height is not None:
- return ConsoleDimensions(self._width - self.legacy_windows, self._height)
-
- if self.is_dumb_terminal:
- return ConsoleDimensions(80, 25)
-
- width: Optional[int] = None
- height: Optional[int] = None
-
- if WINDOWS: # pragma: no cover
- try:
- width, height = os.get_terminal_size()
- except (AttributeError, ValueError, OSError): # Probably not a terminal
- pass
- else:
- for file_descriptor in _STD_STREAMS:
- try:
- width, height = os.get_terminal_size(file_descriptor)
- except (AttributeError, ValueError, OSError):
- pass
- else:
- break
-
- columns = self._environ.get("COLUMNS")
- if columns is not None and columns.isdigit():
- width = int(columns)
- lines = self._environ.get("LINES")
- if lines is not None and lines.isdigit():
- height = int(lines)
-
- # get_terminal_size can report 0, 0 if run from pseudo-terminal
- width = width or 80
- height = height or 25
- return ConsoleDimensions(
- width - self.legacy_windows if self._width is None else self._width,
- height if self._height is None else self._height,
- )
-
- @size.setter
- def size(self, new_size: Tuple[int, int]) -> None:
- """Set a new size for the terminal.
-
- Args:
- new_size (Tuple[int, int]): New width and height.
- """
- width, height = new_size
- self._width = width
- self._height = height
-
- @property
- def width(self) -> int:
- """Get the width of the console.
-
- Returns:
- int: The width (in characters) of the console.
- """
- return self.size.width
-
- @width.setter
- def width(self, width: int) -> None:
- """Set width.
-
- Args:
- width (int): New width.
- """
- self._width = width
-
- @property
- def height(self) -> int:
- """Get the height of the console.
-
- Returns:
- int: The height (in lines) of the console.
- """
- return self.size.height
-
- @height.setter
- def height(self, height: int) -> None:
- """Set height.
-
- Args:
- height (int): new height.
- """
- self._height = height
-
- def bell(self) -> None:
- """Play a 'bell' sound (if supported by the terminal)."""
- self.control(Control.bell())
-
- def capture(self) -> Capture:
- """A context manager to *capture* the result of print() or log() in a string,
- rather than writing it to the console.
-
- Example:
- >>> from rich.console import Console
- >>> console = Console()
- >>> with console.capture() as capture:
- ... console.print("[bold magenta]Hello World[/]")
- >>> print(capture.get())
-
- Returns:
- Capture: Context manager with disables writing to the terminal.
- """
- capture = Capture(self)
- return capture
-
- def pager(
- self, pager: Optional[Pager] = None, styles: bool = False, links: bool = False
- ) -> PagerContext:
- """A context manager to display anything printed within a "pager". The pager application
- is defined by the system and will typically support at least pressing a key to scroll.
-
- Args:
- pager (Pager, optional): A pager object, or None to use :class:`~rich.pager.SystemPager`. Defaults to None.
- styles (bool, optional): Show styles in pager. Defaults to False.
- links (bool, optional): Show links in pager. Defaults to False.
-
- Example:
- >>> from rich.console import Console
- >>> from rich.__main__ import make_test_card
- >>> console = Console()
- >>> with console.pager():
- console.print(make_test_card())
-
- Returns:
- PagerContext: A context manager.
- """
- return PagerContext(self, pager=pager, styles=styles, links=links)
-
- def line(self, count: int = 1) -> None:
- """Write new line(s).
-
- Args:
- count (int, optional): Number of new lines. Defaults to 1.
- """
-
- assert count >= 0, "count must be >= 0"
- self.print(NewLine(count))
-
- def clear(self, home: bool = True) -> None:
- """Clear the screen.
-
- Args:
- home (bool, optional): Also move the cursor to 'home' position. Defaults to True.
- """
- if home:
- self.control(Control.clear(), Control.home())
- else:
- self.control(Control.clear())
-
- def status(
- self,
- status: RenderableType,
- *,
- spinner: str = "dots",
- spinner_style: StyleType = "status.spinner",
- speed: float = 1.0,
- refresh_per_second: float = 12.5,
- ) -> "Status":
- """Display a status and spinner.
-
- Args:
- status (RenderableType): A status renderable (str or Text typically).
- spinner (str, optional): Name of spinner animation (see python -m rich.spinner). Defaults to "dots".
- spinner_style (StyleType, optional): Style of spinner. Defaults to "status.spinner".
- speed (float, optional): Speed factor for spinner animation. Defaults to 1.0.
- refresh_per_second (float, optional): Number of refreshes per second. Defaults to 12.5.
-
- Returns:
- Status: A Status object that may be used as a context manager.
- """
- from .status import Status
-
- status_renderable = Status(
- status,
- console=self,
- spinner=spinner,
- spinner_style=spinner_style,
- speed=speed,
- refresh_per_second=refresh_per_second,
- )
- return status_renderable
-
- def show_cursor(self, show: bool = True) -> bool:
- """Show or hide the cursor.
-
- Args:
- show (bool, optional): Set visibility of the cursor.
- """
- if self.is_terminal:
- self.control(Control.show_cursor(show))
- return True
- return False
-
- def set_alt_screen(self, enable: bool = True) -> bool:
- """Enables alternative screen mode.
-
- Note, if you enable this mode, you should ensure that is disabled before
- the application exits. See :meth:`~rich.Console.screen` for a context manager
- that handles this for you.
-
- Args:
- enable (bool, optional): Enable (True) or disable (False) alternate screen. Defaults to True.
-
- Returns:
- bool: True if the control codes were written.
-
- """
- changed = False
- if self.is_terminal and not self.legacy_windows:
- self.control(Control.alt_screen(enable))
- changed = True
- self._is_alt_screen = enable
- return changed
-
- @property
- def is_alt_screen(self) -> bool:
- """Check if the alt screen was enabled.
-
- Returns:
- bool: True if the alt screen was enabled, otherwise False.
- """
- return self._is_alt_screen
-
- def set_window_title(self, title: str) -> bool:
- """Set the title of the console terminal window.
-
- Warning: There is no means within Rich of "resetting" the window title to its
- previous value, meaning the title you set will persist even after your application
- exits.
-
- ``fish`` shell resets the window title before and after each command by default,
- negating this issue. Windows Terminal and command prompt will also reset the title for you.
- Most other shells and terminals, however, do not do this.
-
- Some terminals may require configuration changes before you can set the title.
- Some terminals may not support setting the title at all.
-
- Other software (including the terminal itself, the shell, custom prompts, plugins, etc.)
- may also set the terminal window title. This could result in whatever value you write
- using this method being overwritten.
-
- Args:
- title (str): The new title of the terminal window.
-
- Returns:
- bool: True if the control code to change the terminal title was
- written, otherwise False. Note that a return value of True
- does not guarantee that the window title has actually changed,
- since the feature may be unsupported/disabled in some terminals.
- """
- if self.is_terminal:
- self.control(Control.title(title))
- return True
- return False
-
- def screen(
- self, hide_cursor: bool = True, style: Optional[StyleType] = None
- ) -> "ScreenContext":
- """Context manager to enable and disable 'alternative screen' mode.
-
- Args:
- hide_cursor (bool, optional): Also hide the cursor. Defaults to False.
- style (Style, optional): Optional style for screen. Defaults to None.
-
- Returns:
- ~ScreenContext: Context which enables alternate screen on enter, and disables it on exit.
- """
- return ScreenContext(self, hide_cursor=hide_cursor, style=style or "")
-
- def measure(
- self, renderable: RenderableType, *, options: Optional[ConsoleOptions] = None
- ) -> Measurement:
- """Measure a renderable. Returns a :class:`~rich.measure.Measurement` object which contains
- information regarding the number of characters required to print the renderable.
-
- Args:
- renderable (RenderableType): Any renderable or string.
- options (Optional[ConsoleOptions], optional): Options to use when measuring, or None
- to use default options. Defaults to None.
-
- Returns:
- Measurement: A measurement of the renderable.
- """
- measurement = Measurement.get(self, options or self.options, renderable)
- return measurement
-
- def render(
- self, renderable: RenderableType, options: Optional[ConsoleOptions] = None
- ) -> Iterable[Segment]:
- """Render an object in to an iterable of `Segment` instances.
-
- This method contains the logic for rendering objects with the console protocol.
- You are unlikely to need to use it directly, unless you are extending the library.
-
- Args:
- renderable (RenderableType): An object supporting the console protocol, or
- an object that may be converted to a string.
- options (ConsoleOptions, optional): An options object, or None to use self.options. Defaults to None.
-
- Returns:
- Iterable[Segment]: An iterable of segments that may be rendered.
- """
-
- _options = options or self.options
- if _options.max_width < 1:
- # No space to render anything. This prevents potential recursion errors.
- return
- render_iterable: RenderResult
-
- renderable = rich_cast(renderable)
- if hasattr(renderable, "__rich_console__") and not isclass(renderable):
- render_iterable = renderable.__rich_console__(self, _options) # type: ignore[union-attr]
- elif isinstance(renderable, str):
- text_renderable = self.render_str(
- renderable, highlight=_options.highlight, markup=_options.markup
- )
- render_iterable = text_renderable.__rich_console__(self, _options)
- else:
- raise errors.NotRenderableError(
- f"Unable to render {renderable!r}; "
- "A str, Segment or object with __rich_console__ method is required"
- )
-
- try:
- iter_render = iter(render_iterable)
- except TypeError:
- raise errors.NotRenderableError(
- f"object {render_iterable!r} is not renderable"
- )
- _Segment = Segment
- _options = _options.reset_height()
- for render_output in iter_render:
- if isinstance(render_output, _Segment):
- yield render_output
- else:
- yield from self.render(render_output, _options)
-
- def render_lines(
- self,
- renderable: RenderableType,
- options: Optional[ConsoleOptions] = None,
- *,
- style: Optional[Style] = None,
- pad: bool = True,
- new_lines: bool = False,
- ) -> List[List[Segment]]:
- """Render objects in to a list of lines.
-
- The output of render_lines is useful when further formatting of rendered console text
- is required, such as the Panel class which draws a border around any renderable object.
-
- Args:
- renderable (RenderableType): Any object renderable in the console.
- options (Optional[ConsoleOptions], optional): Console options, or None to use self.options. Default to ``None``.
- style (Style, optional): Optional style to apply to renderables. Defaults to ``None``.
- pad (bool, optional): Pad lines shorter than render width. Defaults to ``True``.
- new_lines (bool, optional): Include "\n" characters at end of lines.
-
- Returns:
- List[List[Segment]]: A list of lines, where a line is a list of Segment objects.
- """
- with self._lock:
- render_options = options or self.options
- _rendered = self.render(renderable, render_options)
- if style:
- _rendered = Segment.apply_style(_rendered, style)
-
- render_height = render_options.height
- if render_height is not None:
- render_height = max(0, render_height)
-
- lines = list(
- islice(
- Segment.split_and_crop_lines(
- _rendered,
- render_options.max_width,
- include_new_lines=new_lines,
- pad=pad,
- style=style,
- ),
- None,
- render_height,
- )
- )
- if render_options.height is not None:
- extra_lines = render_options.height - len(lines)
- if extra_lines > 0:
- pad_line = [
- [Segment(" " * render_options.max_width, style), Segment("\n")]
- if new_lines
- else [Segment(" " * render_options.max_width, style)]
- ]
- lines.extend(pad_line * extra_lines)
-
- return lines
-
- def render_str(
- self,
- text: str,
- *,
- style: Union[str, Style] = "",
- justify: Optional[JustifyMethod] = None,
- overflow: Optional[OverflowMethod] = None,
- emoji: Optional[bool] = None,
- markup: Optional[bool] = None,
- highlight: Optional[bool] = None,
- highlighter: Optional[HighlighterType] = None,
- ) -> "Text":
- """Convert a string to a Text instance. This is called automatically if
- you print or log a string.
-
- Args:
- text (str): Text to render.
- style (Union[str, Style], optional): Style to apply to rendered text.
- justify (str, optional): Justify method: "default", "left", "center", "full", or "right". Defaults to ``None``.
- overflow (str, optional): Overflow method: "crop", "fold", or "ellipsis". Defaults to ``None``.
- emoji (Optional[bool], optional): Enable emoji, or ``None`` to use Console default.
- markup (Optional[bool], optional): Enable markup, or ``None`` to use Console default.
- highlight (Optional[bool], optional): Enable highlighting, or ``None`` to use Console default.
- highlighter (HighlighterType, optional): Optional highlighter to apply.
- Returns:
- ConsoleRenderable: Renderable object.
-
- """
- emoji_enabled = emoji or (emoji is None and self._emoji)
- markup_enabled = markup or (markup is None and self._markup)
- highlight_enabled = highlight or (highlight is None and self._highlight)
-
- if markup_enabled:
- rich_text = render_markup(
- text,
- style=style,
- emoji=emoji_enabled,
- emoji_variant=self._emoji_variant,
- )
- rich_text.justify = justify
- rich_text.overflow = overflow
- else:
- rich_text = Text(
- _emoji_replace(text, default_variant=self._emoji_variant)
- if emoji_enabled
- else text,
- justify=justify,
- overflow=overflow,
- style=style,
- )
-
- _highlighter = (highlighter or self.highlighter) if highlight_enabled else None
- if _highlighter is not None:
- highlight_text = _highlighter(str(rich_text))
- highlight_text.copy_styles(rich_text)
- return highlight_text
-
- return rich_text
-
- def get_style(
- self, name: Union[str, Style], *, default: Optional[Union[Style, str]] = None
- ) -> Style:
- """Get a Style instance by its theme name or parse a definition.
-
- Args:
- name (str): The name of a style or a style definition.
-
- Returns:
- Style: A Style object.
-
- Raises:
- MissingStyle: If no style could be parsed from name.
-
- """
- if isinstance(name, Style):
- return name
-
- try:
- style = self._theme_stack.get(name)
- if style is None:
- style = Style.parse(name)
- return style.copy() if style.link else style
- except errors.StyleSyntaxError as error:
- if default is not None:
- return self.get_style(default)
- raise errors.MissingStyle(
- f"Failed to get style {name!r}; {error}"
- ) from None
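
For illustration, a sketch of how `get_style` resolves names (assuming the public `rich` package; the style names are examples only):

```python
from rich.console import Console

console = Console()
console.get_style("bold red")                    # parsed as an inline style definition
console.get_style("repr.number")                 # looked up in the active theme
console.get_style("not-a-style", default="dim")  # unparseable name falls back to the default
```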
-
- def _collect_renderables(
- self,
- objects: Iterable[Any],
- sep: str,
- end: str,
- *,
- justify: Optional[JustifyMethod] = None,
- emoji: Optional[bool] = None,
- markup: Optional[bool] = None,
- highlight: Optional[bool] = None,
- ) -> List[ConsoleRenderable]:
- """Combine a number of renderables and text into one renderable.
-
- Args:
- objects (Iterable[Any]): Anything that Rich can render.
- sep (str): String to write between print data.
- end (str): String to write at end of print data.
- justify (str, optional): One of "left", "right", "center", or "full". Defaults to ``None``.
- emoji (Optional[bool], optional): Enable emoji code, or ``None`` to use console default.
- markup (Optional[bool], optional): Enable markup, or ``None`` to use console default.
- highlight (Optional[bool], optional): Enable automatic highlighting, or ``None`` to use console default.
-
- Returns:
- List[ConsoleRenderable]: A list of things to render.
- """
- renderables: List[ConsoleRenderable] = []
- _append = renderables.append
- text: List[Text] = []
- append_text = text.append
-
- append = _append
- if justify in ("left", "center", "right"):
-
- def align_append(renderable: RenderableType) -> None:
- _append(Align(renderable, cast(AlignMethod, justify)))
-
- append = align_append
-
- _highlighter: HighlighterType = _null_highlighter
- if highlight or (highlight is None and self._highlight):
- _highlighter = self.highlighter
-
- def check_text() -> None:
- if text:
- sep_text = Text(sep, justify=justify, end=end)
- append(sep_text.join(text))
- text.clear()
-
- for renderable in objects:
- renderable = rich_cast(renderable)
- if isinstance(renderable, str):
- append_text(
- self.render_str(
- renderable, emoji=emoji, markup=markup, highlighter=_highlighter
- )
- )
- elif isinstance(renderable, Text):
- append_text(renderable)
- elif isinstance(renderable, ConsoleRenderable):
- check_text()
- append(renderable)
- elif is_expandable(renderable):
- check_text()
- append(Pretty(renderable, highlighter=_highlighter))
- else:
- append_text(_highlighter(str(renderable)))
-
- check_text()
-
- if self.style is not None:
- style = self.get_style(self.style)
- renderables = [Styled(renderable, style) for renderable in renderables]
-
- return renderables
-
- def rule(
- self,
- title: TextType = "",
- *,
- characters: str = "─",
- style: Union[str, Style] = "rule.line",
- align: AlignMethod = "center",
- ) -> None:
- """Draw a line with optional centered title.
-
- Args:
- title (str, optional): Text to render over the rule. Defaults to "".
- characters (str, optional): Character(s) to form the line. Defaults to "─".
- style (str, optional): Style of line. Defaults to "rule.line".
- align (str, optional): How to align the title, one of "left", "center", or "right". Defaults to "center".
- """
- from .rule import Rule
-
- rule = Rule(title=title, characters=characters, style=style, align=align)
- self.print(rule)
-
- def control(self, *control: Control) -> None:
- """Insert non-printing control codes.
-
- Args:
-            control (Control): Control codes, such as those that may move the cursor.
- """
- if not self.is_dumb_terminal:
- with self:
- self._buffer.extend(_control.segment for _control in control)
-
- def out(
- self,
- *objects: Any,
- sep: str = " ",
- end: str = "\n",
- style: Optional[Union[str, Style]] = None,
- highlight: Optional[bool] = None,
- ) -> None:
- """Output to the terminal. This is a low-level way of writing to the terminal which unlike
- :meth:`~rich.console.Console.print` won't pretty print, wrap text, or apply markup, but will
- optionally apply highlighting and a basic style.
-
- Args:
- sep (str, optional): String to write between print data. Defaults to " ".
- end (str, optional): String to write at end of print data. Defaults to "\\\\n".
- style (Union[str, Style], optional): A style to apply to output. Defaults to None.
- highlight (Optional[bool], optional): Enable automatic highlighting, or ``None`` to use
- console default. Defaults to ``None``.
- """
- raw_output: str = sep.join(str(_object) for _object in objects)
- self.print(
- raw_output,
- style=style,
- highlight=highlight,
- emoji=False,
- markup=False,
- no_wrap=True,
- overflow="ignore",
- crop=False,
- end=end,
- )
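
A short sketch of how `out` differs from `print` (assuming the public `rich` package): markup, emoji, and wrapping are bypassed, so tags are written literally.

```python
from rich.console import Console

console = Console()
console.print("[bold]hello[/bold]")  # markup interpreted: prints "hello" in bold
console.out("[bold]hello[/bold]")    # markup bypassed: prints the tags verbatim
```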
-
- def print(
- self,
- *objects: Any,
- sep: str = " ",
- end: str = "\n",
- style: Optional[Union[str, Style]] = None,
- justify: Optional[JustifyMethod] = None,
- overflow: Optional[OverflowMethod] = None,
- no_wrap: Optional[bool] = None,
- emoji: Optional[bool] = None,
- markup: Optional[bool] = None,
- highlight: Optional[bool] = None,
- width: Optional[int] = None,
- height: Optional[int] = None,
- crop: bool = True,
- soft_wrap: Optional[bool] = None,
- new_line_start: bool = False,
- ) -> None:
- """Print to the console.
-
- Args:
- objects (positional args): Objects to log to the terminal.
- sep (str, optional): String to write between print data. Defaults to " ".
- end (str, optional): String to write at end of print data. Defaults to "\\\\n".
- style (Union[str, Style], optional): A style to apply to output. Defaults to None.
- justify (str, optional): Justify method: "default", "left", "right", "center", or "full". Defaults to ``None``.
- overflow (str, optional): Overflow method: "ignore", "crop", "fold", or "ellipsis". Defaults to None.
- no_wrap (Optional[bool], optional): Disable word wrapping. Defaults to None.
- emoji (Optional[bool], optional): Enable emoji code, or ``None`` to use console default. Defaults to ``None``.
- markup (Optional[bool], optional): Enable markup, or ``None`` to use console default. Defaults to ``None``.
- highlight (Optional[bool], optional): Enable automatic highlighting, or ``None`` to use console default. Defaults to ``None``.
- width (Optional[int], optional): Width of output, or ``None`` to auto-detect. Defaults to ``None``.
- crop (Optional[bool], optional): Crop output to width of terminal. Defaults to True.
- soft_wrap (bool, optional): Enable soft wrap mode which disables word wrapping and cropping of text or ``None`` for
- Console default. Defaults to ``None``.
- new_line_start (bool, False): Insert a new line at the start if the output contains more than one line. Defaults to ``False``.
- """
- if not objects:
- objects = (NewLine(),)
-
- if soft_wrap is None:
- soft_wrap = self.soft_wrap
- if soft_wrap:
- if no_wrap is None:
- no_wrap = True
- if overflow is None:
- overflow = "ignore"
- crop = False
- render_hooks = self._render_hooks[:]
- with self:
- renderables = self._collect_renderables(
- objects,
- sep,
- end,
- justify=justify,
- emoji=emoji,
- markup=markup,
- highlight=highlight,
- )
- for hook in render_hooks:
- renderables = hook.process_renderables(renderables)
- render_options = self.options.update(
- justify=justify,
- overflow=overflow,
- width=min(width, self.width) if width is not None else NO_CHANGE,
- height=height,
- no_wrap=no_wrap,
- markup=markup,
- highlight=highlight,
- )
-
- new_segments: List[Segment] = []
- extend = new_segments.extend
- render = self.render
- if style is None:
- for renderable in renderables:
- extend(render(renderable, render_options))
- else:
- for renderable in renderables:
- extend(
- Segment.apply_style(
- render(renderable, render_options), self.get_style(style)
- )
- )
- if new_line_start:
- if (
- len("".join(segment.text for segment in new_segments).splitlines())
- > 1
- ):
- new_segments.insert(0, Segment.line())
- if crop:
- buffer_extend = self._buffer.extend
- for line in Segment.split_and_crop_lines(
- new_segments, self.width, pad=False
- ):
- buffer_extend(line)
- else:
- self._buffer.extend(new_segments)
-
- def print_json(
- self,
- json: Optional[str] = None,
- *,
- data: Any = None,
- indent: Union[None, int, str] = 2,
- highlight: bool = True,
- skip_keys: bool = False,
- ensure_ascii: bool = False,
- check_circular: bool = True,
- allow_nan: bool = True,
- default: Optional[Callable[[Any], Any]] = None,
- sort_keys: bool = False,
- ) -> None:
- """Pretty prints JSON. Output will be valid JSON.
-
- Args:
- json (Optional[str]): A string containing JSON.
- data (Any): If json is not supplied, then encode this data.
- indent (Union[None, int, str], optional): Number of spaces to indent. Defaults to 2.
- highlight (bool, optional): Enable highlighting of output: Defaults to True.
- skip_keys (bool, optional): Skip keys not of a basic type. Defaults to False.
- ensure_ascii (bool, optional): Escape all non-ascii characters. Defaults to False.
- check_circular (bool, optional): Check for circular references. Defaults to True.
- allow_nan (bool, optional): Allow NaN and Infinity values. Defaults to True.
- default (Callable, optional): A callable that converts values that can not be encoded
- in to something that can be JSON encoded. Defaults to None.
- sort_keys (bool, optional): Sort dictionary keys. Defaults to False.
- """
- from pip._vendor.rich.json import JSON
-
- if json is None:
- json_renderable = JSON.from_data(
- data,
- indent=indent,
- highlight=highlight,
- skip_keys=skip_keys,
- ensure_ascii=ensure_ascii,
- check_circular=check_circular,
- allow_nan=allow_nan,
- default=default,
- sort_keys=sort_keys,
- )
- else:
- if not isinstance(json, str):
- raise TypeError(
- f"json must be str. Did you mean print_json(data={json!r}) ?"
- )
- json_renderable = JSON(
- json,
- indent=indent,
- highlight=highlight,
- skip_keys=skip_keys,
- ensure_ascii=ensure_ascii,
- check_circular=check_circular,
- allow_nan=allow_nan,
- default=default,
- sort_keys=sort_keys,
- )
- self.print(json_renderable, soft_wrap=True)
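
A usage sketch for `print_json` (assuming the public `rich` package; the payloads are illustrative):

```python
from rich.console import Console

console = Console()
# Pretty print an existing JSON string...
console.print_json('{"name": "rich", "vendored": true}')
# ...or let print_json encode Python data directly.
console.print_json(data={"name": "rich", "items": [1, 2, 3]}, indent=4)
```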
-
- def update_screen(
- self,
- renderable: RenderableType,
- *,
- region: Optional[Region] = None,
- options: Optional[ConsoleOptions] = None,
- ) -> None:
- """Update the screen at a given offset.
-
- Args:
- renderable (RenderableType): A Rich renderable.
- region (Region, optional): Region of screen to update, or None for entire screen. Defaults to None.
-            options (Optional[ConsoleOptions], optional): Console options, or ``None`` to use self.options. Defaults to ``None``.
-
- Raises:
- errors.NoAltScreen: If the Console isn't in alt screen mode.
-
- """
- if not self.is_alt_screen:
- raise errors.NoAltScreen("Alt screen must be enabled to call update_screen")
- render_options = options or self.options
- if region is None:
- x = y = 0
- render_options = render_options.update_dimensions(
- render_options.max_width, render_options.height or self.height
- )
- else:
- x, y, width, height = region
- render_options = render_options.update_dimensions(width, height)
-
- lines = self.render_lines(renderable, options=render_options)
- self.update_screen_lines(lines, x, y)
-
- def update_screen_lines(
- self, lines: List[List[Segment]], x: int = 0, y: int = 0
- ) -> None:
- """Update lines of the screen at a given offset.
-
- Args:
- lines (List[List[Segment]]): Rendered lines (as produced by :meth:`~rich.Console.render_lines`).
- x (int, optional): x offset (column no). Defaults to 0.
-            y (int, optional): y offset (row no). Defaults to 0.
-
- Raises:
- errors.NoAltScreen: If the Console isn't in alt screen mode.
- """
- if not self.is_alt_screen:
- raise errors.NoAltScreen("Alt screen must be enabled to call update_screen")
- screen_update = ScreenUpdate(lines, x, y)
- segments = self.render(screen_update)
- self._buffer.extend(segments)
- self._check_buffer()
-
- def print_exception(
- self,
- *,
- width: Optional[int] = 100,
- extra_lines: int = 3,
- theme: Optional[str] = None,
- word_wrap: bool = False,
- show_locals: bool = False,
- suppress: Iterable[Union[str, ModuleType]] = (),
- max_frames: int = 100,
- ) -> None:
- """Prints a rich render of the last exception and traceback.
-
- Args:
- width (Optional[int], optional): Number of characters used to render code. Defaults to 100.
- extra_lines (int, optional): Additional lines of code to render. Defaults to 3.
- theme (str, optional): Override pygments theme used in traceback
- word_wrap (bool, optional): Enable word wrapping of long lines. Defaults to False.
- show_locals (bool, optional): Enable display of local variables. Defaults to False.
- suppress (Iterable[Union[str, ModuleType]]): Optional sequence of modules or paths to exclude from traceback.
- max_frames (int): Maximum number of frames to show in a traceback, 0 for no maximum. Defaults to 100.
- """
- from .traceback import Traceback
-
- traceback = Traceback(
- width=width,
- extra_lines=extra_lines,
- theme=theme,
- word_wrap=word_wrap,
- show_locals=show_locals,
- suppress=suppress,
- max_frames=max_frames,
- )
- self.print(traceback)
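
A sketch of typical use (assuming the public `rich` package; the failing expression is a placeholder):

```python
from rich.console import Console

console = Console()
try:
    1 / 0  # placeholder failure
except ZeroDivisionError:
    # Render the active exception with local variables shown.
    console.print_exception(show_locals=True, extra_lines=3)
```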
-
- @staticmethod
- def _caller_frame_info(
- offset: int,
- currentframe: Callable[[], Optional[FrameType]] = inspect.currentframe,
- ) -> Tuple[str, int, Dict[str, Any]]:
- """Get caller frame information.
-
- Args:
- offset (int): the caller offset within the current frame stack.
- currentframe (Callable[[], Optional[FrameType]], optional): the callable to use to
- retrieve the current frame. Defaults to ``inspect.currentframe``.
-
- Returns:
- Tuple[str, int, Dict[str, Any]]: A tuple containing the filename, the line number and
- the dictionary of local variables associated with the caller frame.
-
- Raises:
- RuntimeError: If the stack offset is invalid.
- """
- # Ignore the frame of this local helper
- offset += 1
-
- frame = currentframe()
- if frame is not None:
- # Use the faster currentframe where implemented
- while offset and frame is not None:
- frame = frame.f_back
- offset -= 1
- assert frame is not None
- return frame.f_code.co_filename, frame.f_lineno, frame.f_locals
- else:
- # Fallback to the slower stack
- frame_info = inspect.stack()[offset]
- return frame_info.filename, frame_info.lineno, frame_info.frame.f_locals
-
- def log(
- self,
- *objects: Any,
- sep: str = " ",
- end: str = "\n",
- style: Optional[Union[str, Style]] = None,
- justify: Optional[JustifyMethod] = None,
- emoji: Optional[bool] = None,
- markup: Optional[bool] = None,
- highlight: Optional[bool] = None,
- log_locals: bool = False,
- _stack_offset: int = 1,
- ) -> None:
- """Log rich content to the terminal.
-
- Args:
- objects (positional args): Objects to log to the terminal.
- sep (str, optional): String to write between print data. Defaults to " ".
- end (str, optional): String to write at end of print data. Defaults to "\\\\n".
- style (Union[str, Style], optional): A style to apply to output. Defaults to None.
- justify (str, optional): One of "left", "right", "center", or "full". Defaults to ``None``.
- emoji (Optional[bool], optional): Enable emoji code, or ``None`` to use console default. Defaults to None.
- markup (Optional[bool], optional): Enable markup, or ``None`` to use console default. Defaults to None.
- highlight (Optional[bool], optional): Enable automatic highlighting, or ``None`` to use console default. Defaults to None.
- log_locals (bool, optional): Boolean to enable logging of locals where ``log()``
- was called. Defaults to False.
- _stack_offset (int, optional): Offset of caller from end of call stack. Defaults to 1.
- """
- if not objects:
- objects = (NewLine(),)
-
- render_hooks = self._render_hooks[:]
-
- with self:
- renderables = self._collect_renderables(
- objects,
- sep,
- end,
- justify=justify,
- emoji=emoji,
- markup=markup,
- highlight=highlight,
- )
- if style is not None:
- renderables = [Styled(renderable, style) for renderable in renderables]
-
- filename, line_no, locals = self._caller_frame_info(_stack_offset)
- link_path = None if filename.startswith("<") else os.path.abspath(filename)
- path = filename.rpartition(os.sep)[-1]
- if log_locals:
- locals_map = {
- key: value
- for key, value in locals.items()
- if not key.startswith("__")
- }
- renderables.append(render_scope(locals_map, title="[i]locals"))
-
- renderables = [
- self._log_render(
- self,
- renderables,
- log_time=self.get_datetime(),
- path=path,
- line_no=line_no,
- link_path=link_path,
- )
- ]
- for hook in render_hooks:
- renderables = hook.process_renderables(renderables)
- new_segments: List[Segment] = []
- extend = new_segments.extend
- render = self.render
- render_options = self.options
- for renderable in renderables:
- extend(render(renderable, render_options))
- buffer_extend = self._buffer.extend
- for line in Segment.split_and_crop_lines(
- new_segments, self.width, pad=False
- ):
- buffer_extend(line)
-
- def _check_buffer(self) -> None:
- """Check if the buffer may be rendered. Render it if it can (e.g. Console.quiet is False)
- Rendering is supported on Windows, Unix and Jupyter environments. For
- legacy Windows consoles, the win32 API is called directly.
- This method will also record what it renders if recording is enabled via Console.record.
- """
- if self.quiet:
- del self._buffer[:]
- return
- with self._lock:
- if self.record:
- with self._record_buffer_lock:
- self._record_buffer.extend(self._buffer[:])
-
- if self._buffer_index == 0:
-
- if self.is_jupyter: # pragma: no cover
- from .jupyter import display
-
- display(self._buffer, self._render_buffer(self._buffer[:]))
- del self._buffer[:]
- else:
- if WINDOWS:
- use_legacy_windows_render = False
- if self.legacy_windows:
- fileno = get_fileno(self.file)
- if fileno is not None:
- use_legacy_windows_render = (
- fileno in _STD_STREAMS_OUTPUT
- )
-
- if use_legacy_windows_render:
- from pip._vendor.rich._win32_console import LegacyWindowsTerm
- from pip._vendor.rich._windows_renderer import legacy_windows_render
-
- buffer = self._buffer[:]
- if self.no_color and self._color_system:
- buffer = list(Segment.remove_color(buffer))
-
- legacy_windows_render(buffer, LegacyWindowsTerm(self.file))
- else:
- # Either a non-std stream on legacy Windows, or modern Windows.
- text = self._render_buffer(self._buffer[:])
- # https://bugs.python.org/issue37871
- # https://github.com/python/cpython/issues/82052
- # We need to avoid writing more than 32Kb in a single write, due to the above bug
- write = self.file.write
-                            # Worst case scenario, every character is 4 bytes of utf-8
- MAX_WRITE = 32 * 1024 // 4
- try:
- if len(text) <= MAX_WRITE:
- write(text)
- else:
- batch: List[str] = []
- batch_append = batch.append
- size = 0
- for line in text.splitlines(True):
- if size + len(line) > MAX_WRITE and batch:
- write("".join(batch))
- batch.clear()
- size = 0
- batch_append(line)
- size += len(line)
- if batch:
- write("".join(batch))
- batch.clear()
- except UnicodeEncodeError as error:
- error.reason = f"{error.reason}\n*** You may need to add PYTHONIOENCODING=utf-8 to your environment ***"
- raise
- else:
- text = self._render_buffer(self._buffer[:])
- try:
- self.file.write(text)
- except UnicodeEncodeError as error:
- error.reason = f"{error.reason}\n*** You may need to add PYTHONIOENCODING=utf-8 to your environment ***"
- raise
-
- self.file.flush()
- del self._buffer[:]
-
- def _render_buffer(self, buffer: Iterable[Segment]) -> str:
-        """Render buffered output to a string (the caller is responsible for clearing the buffer)."""
- output: List[str] = []
- append = output.append
- color_system = self._color_system
- legacy_windows = self.legacy_windows
- not_terminal = not self.is_terminal
- if self.no_color and color_system:
- buffer = Segment.remove_color(buffer)
- for text, style, control in buffer:
- if style:
- append(
- style.render(
- text,
- color_system=color_system,
- legacy_windows=legacy_windows,
- )
- )
- elif not (not_terminal and control):
- append(text)
-
- rendered = "".join(output)
- return rendered
-
- def input(
- self,
- prompt: TextType = "",
- *,
- markup: bool = True,
- emoji: bool = True,
- password: bool = False,
- stream: Optional[TextIO] = None,
- ) -> str:
- """Displays a prompt and waits for input from the user. The prompt may contain color / style.
-
- It works in the same way as Python's builtin :func:`input` function and provides elaborate line editing and history features if Python's builtin :mod:`readline` module is previously loaded.
-
- Args:
- prompt (Union[str, Text]): Text to render in the prompt.
- markup (bool, optional): Enable console markup (requires a str prompt). Defaults to True.
- emoji (bool, optional): Enable emoji (requires a str prompt). Defaults to True.
- password: (bool, optional): Hide typed text. Defaults to False.
- stream: (TextIO, optional): Optional file to read input from (rather than stdin). Defaults to None.
-
- Returns:
- str: Text read from stdin.
- """
- if prompt:
- self.print(prompt, markup=markup, emoji=emoji, end="")
- if password:
- result = getpass("", stream=stream)
- else:
- if stream:
- result = stream.readline()
- else:
- result = input()
- return result
-
- def export_text(self, *, clear: bool = True, styles: bool = False) -> str:
- """Generate text from console contents (requires record=True argument in constructor).
-
- Args:
- clear (bool, optional): Clear record buffer after exporting. Defaults to ``True``.
- styles (bool, optional): If ``True``, ansi escape codes will be included. ``False`` for plain text.
- Defaults to ``False``.
-
- Returns:
- str: String containing console contents.
-
- """
- assert (
- self.record
- ), "To export console contents set record=True in the constructor or instance"
-
- with self._record_buffer_lock:
- if styles:
- text = "".join(
- (style.render(text) if style else text)
- for text, style, _ in self._record_buffer
- )
- else:
- text = "".join(
- segment.text
- for segment in self._record_buffer
- if not segment.control
- )
- if clear:
- del self._record_buffer[:]
- return text
-
- def save_text(self, path: str, *, clear: bool = True, styles: bool = False) -> None:
- """Generate text from console and save to a given location (requires record=True argument in constructor).
-
- Args:
- path (str): Path to write text files.
- clear (bool, optional): Clear record buffer after exporting. Defaults to ``True``.
- styles (bool, optional): If ``True``, ansi style codes will be included. ``False`` for plain text.
- Defaults to ``False``.
-
- """
- text = self.export_text(clear=clear, styles=styles)
- with open(path, "wt", encoding="utf-8") as write_file:
- write_file.write(text)
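
Both exporters rely on the console having been created with `record=True`; a sketch (assuming the public `rich` package, with `output.txt` as a hypothetical path):

```python
from rich.console import Console

console = Console(record=True)                 # recording required for export
console.print("[green]recorded output[/green]")
plain = console.export_text()                  # plain text; clears the record buffer
console.print("[red]more output[/red]")
console.save_text("output.txt", styles=True)   # hypothetical path, ANSI codes included
```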
-
- def export_html(
- self,
- *,
- theme: Optional[TerminalTheme] = None,
- clear: bool = True,
- code_format: Optional[str] = None,
- inline_styles: bool = False,
- ) -> str:
- """Generate HTML from console contents (requires record=True argument in constructor).
-
- Args:
- theme (TerminalTheme, optional): TerminalTheme object containing console colors.
- clear (bool, optional): Clear record buffer after exporting. Defaults to ``True``.
- code_format (str, optional): Format string to render HTML. In addition to '{foreground}',
- '{background}', and '{code}', should contain '{stylesheet}' if inline_styles is ``False``.
- inline_styles (bool, optional): If ``True`` styles will be inlined in to spans, which makes files
- larger but easier to cut and paste markup. If ``False``, styles will be embedded in a style tag.
- Defaults to False.
-
- Returns:
- str: String containing console contents as HTML.
- """
- assert (
- self.record
- ), "To export console contents set record=True in the constructor or instance"
- fragments: List[str] = []
- append = fragments.append
- _theme = theme or DEFAULT_TERMINAL_THEME
- stylesheet = ""
-
- render_code_format = CONSOLE_HTML_FORMAT if code_format is None else code_format
-
- with self._record_buffer_lock:
- if inline_styles:
- for text, style, _ in Segment.filter_control(
- Segment.simplify(self._record_buffer)
- ):
- text = escape(text)
- if style:
- rule = style.get_html_style(_theme)
- if style.link:
-                            text = f'<a href="{style.link}">{text}</a>'
-                        text = f'<span style="{rule}">{text}</span>' if rule else text
- append(text)
- else:
- styles: Dict[str, int] = {}
- for text, style, _ in Segment.filter_control(
- Segment.simplify(self._record_buffer)
- ):
- text = escape(text)
- if style:
- rule = style.get_html_style(_theme)
- style_number = styles.setdefault(rule, len(styles) + 1)
- if style.link:
-                            text = f'<a class="r{style_number}" href="{style.link}">{text}</a>'
- else:
-                            text = f'<span class="r{style_number}">{text}</span>'
- append(text)
- stylesheet_rules: List[str] = []
- stylesheet_append = stylesheet_rules.append
- for style_rule, style_number in styles.items():
- if style_rule:
- stylesheet_append(f".r{style_number} {{{style_rule}}}")
- stylesheet = "\n".join(stylesheet_rules)
-
- rendered_code = render_code_format.format(
- code="".join(fragments),
- stylesheet=stylesheet,
- foreground=_theme.foreground_color.hex,
- background=_theme.background_color.hex,
- )
- if clear:
- del self._record_buffer[:]
- return rendered_code
-
- def save_html(
- self,
- path: str,
- *,
- theme: Optional[TerminalTheme] = None,
- clear: bool = True,
- code_format: str = CONSOLE_HTML_FORMAT,
- inline_styles: bool = False,
- ) -> None:
- """Generate HTML from console contents and write to a file (requires record=True argument in constructor).
-
- Args:
- path (str): Path to write html file.
- theme (TerminalTheme, optional): TerminalTheme object containing console colors.
- clear (bool, optional): Clear record buffer after exporting. Defaults to ``True``.
- code_format (str, optional): Format string to render HTML. In addition to '{foreground}',
- '{background}', and '{code}', should contain '{stylesheet}' if inline_styles is ``False``.
- inline_styles (bool, optional): If ``True`` styles will be inlined in to spans, which makes files
- larger but easier to cut and paste markup. If ``False``, styles will be embedded in a style tag.
- Defaults to False.
-
- """
- html = self.export_html(
- theme=theme,
- clear=clear,
- code_format=code_format,
- inline_styles=inline_styles,
- )
- with open(path, "wt", encoding="utf-8") as write_file:
- write_file.write(html)
-
- def export_svg(
- self,
- *,
- title: str = "Rich",
- theme: Optional[TerminalTheme] = None,
- clear: bool = True,
- code_format: str = CONSOLE_SVG_FORMAT,
- font_aspect_ratio: float = 0.61,
- unique_id: Optional[str] = None,
- ) -> str:
- """
- Generate an SVG from the console contents (requires record=True in Console constructor).
-
- Args:
- title (str, optional): The title of the tab in the output image
- theme (TerminalTheme, optional): The ``TerminalTheme`` object to use to style the terminal
- clear (bool, optional): Clear record buffer after exporting. Defaults to ``True``
- code_format (str, optional): Format string used to generate the SVG. Rich will inject a number of variables
- into the string in order to form the final SVG output. The default template used and the variables
- injected by Rich can be found by inspecting the ``console.CONSOLE_SVG_FORMAT`` variable.
- font_aspect_ratio (float, optional): The width to height ratio of the font used in the ``code_format``
- string. Defaults to 0.61, which is the width to height ratio of Fira Code (the default font).
- If you aren't specifying a different font inside ``code_format``, you probably don't need this.
- unique_id (str, optional): unique id that is used as the prefix for various elements (CSS styles, node
- ids). If not set, this defaults to a computed value based on the recorded content.
- """
-
- from pip._vendor.rich.cells import cell_len
-
- style_cache: Dict[Style, str] = {}
-
- def get_svg_style(style: Style) -> str:
- """Convert a Style to CSS rules for SVG."""
- if style in style_cache:
- return style_cache[style]
- css_rules = []
- color = (
- _theme.foreground_color
- if (style.color is None or style.color.is_default)
- else style.color.get_truecolor(_theme)
- )
- bgcolor = (
- _theme.background_color
- if (style.bgcolor is None or style.bgcolor.is_default)
- else style.bgcolor.get_truecolor(_theme)
- )
- if style.reverse:
- color, bgcolor = bgcolor, color
- if style.dim:
- color = blend_rgb(color, bgcolor, 0.4)
- css_rules.append(f"fill: {color.hex}")
- if style.bold:
- css_rules.append("font-weight: bold")
- if style.italic:
- css_rules.append("font-style: italic;")
- if style.underline:
- css_rules.append("text-decoration: underline;")
- if style.strike:
- css_rules.append("text-decoration: line-through;")
-
- css = ";".join(css_rules)
- style_cache[style] = css
- return css
-
- _theme = theme or SVG_EXPORT_THEME
-
- width = self.width
- char_height = 20
- char_width = char_height * font_aspect_ratio
- line_height = char_height * 1.22
-
- margin_top = 1
- margin_right = 1
- margin_bottom = 1
- margin_left = 1
-
- padding_top = 40
- padding_right = 8
- padding_bottom = 8
- padding_left = 8
-
- padding_width = padding_left + padding_right
- padding_height = padding_top + padding_bottom
- margin_width = margin_left + margin_right
- margin_height = margin_top + margin_bottom
-
- text_backgrounds: List[str] = []
- text_group: List[str] = []
- classes: Dict[str, int] = {}
- style_no = 1
-
- def escape_text(text: str) -> str:
- """HTML escape text and replace spaces with nbsp."""
- return escape(text).replace(" ", " ")
-
- def make_tag(
- name: str, content: Optional[str] = None, **attribs: object
- ) -> str:
- """Make a tag from name, content, and attributes."""
-
- def stringify(value: object) -> str:
- if isinstance(value, (float)):
- return format(value, "g")
- return str(value)
-
- tag_attribs = " ".join(
- f'{k.lstrip("_").replace("_", "-")}="{stringify(v)}"'
- for k, v in attribs.items()
- )
- return (
-                f"<{name} {tag_attribs}>{content}</{name}>"
- if content
- else f"<{name} {tag_attribs}/>"
- )
-
- with self._record_buffer_lock:
- segments = list(Segment.filter_control(self._record_buffer))
- if clear:
- self._record_buffer.clear()
-
- if unique_id is None:
- unique_id = "terminal-" + str(
- zlib.adler32(
- ("".join(repr(segment) for segment in segments)).encode(
- "utf-8",
- "ignore",
- )
- + title.encode("utf-8", "ignore")
- )
- )
- y = 0
- for y, line in enumerate(Segment.split_and_crop_lines(segments, length=width)):
- x = 0
- for text, style, _control in line:
- style = style or Style()
- rules = get_svg_style(style)
- if rules not in classes:
- classes[rules] = style_no
- style_no += 1
- class_name = f"r{classes[rules]}"
-
- if style.reverse:
- has_background = True
- background = (
- _theme.foreground_color.hex
- if style.color is None
- else style.color.get_truecolor(_theme).hex
- )
- else:
- bgcolor = style.bgcolor
- has_background = bgcolor is not None and not bgcolor.is_default
- background = (
- _theme.background_color.hex
- if style.bgcolor is None
- else style.bgcolor.get_truecolor(_theme).hex
- )
-
- text_length = cell_len(text)
- if has_background:
- text_backgrounds.append(
- make_tag(
- "rect",
- fill=background,
- x=x * char_width,
- y=y * line_height + 1.5,
- width=char_width * text_length,
- height=line_height + 0.25,
- shape_rendering="crispEdges",
- )
- )
-
- if text != " " * len(text):
- text_group.append(
- make_tag(
- "text",
- escape_text(text),
- _class=f"{unique_id}-{class_name}",
- x=x * char_width,
- y=y * line_height + char_height,
- textLength=char_width * len(text),
- clip_path=f"url(#{unique_id}-line-{y})",
- )
- )
- x += cell_len(text)
-
- line_offsets = [line_no * line_height + 1.5 for line_no in range(y)]
- lines = "\n".join(
-            f"""<clipPath id="{unique_id}-line-{line_no}">
-    {make_tag("rect", x=0, y=offset, width=char_width * width, height=line_height + 0.25)}
-            </clipPath>"""
- for line_no, offset in enumerate(line_offsets)
- )
-
- styles = "\n".join(
- f".{unique_id}-r{rule_no} {{ {css} }}" for css, rule_no in classes.items()
- )
- backgrounds = "".join(text_backgrounds)
- matrix = "".join(text_group)
-
- terminal_width = ceil(width * char_width + padding_width)
- terminal_height = (y + 1) * line_height + padding_height
- chrome = make_tag(
- "rect",
- fill=_theme.background_color.hex,
- stroke="rgba(255,255,255,0.35)",
- stroke_width="1",
- x=margin_left,
- y=margin_top,
- width=terminal_width,
- height=terminal_height,
- rx=8,
- )
-
- title_color = _theme.foreground_color.hex
- if title:
- chrome += make_tag(
- "text",
- escape_text(title),
- _class=f"{unique_id}-title",
- fill=title_color,
- text_anchor="middle",
- x=terminal_width // 2,
- y=margin_top + char_height + 6,
- )
- chrome += f"""
-            <g transform="translate(26,22)">
-            <circle cx="0" cy="0" r="7" fill="#ff5f57"/>
-            <circle cx="22" cy="0" r="7" fill="#febc2e"/>
-            <circle cx="44" cy="0" r="7" fill="#28c840"/>
-            </g>
- """
-
- svg = code_format.format(
- unique_id=unique_id,
- char_width=char_width,
- char_height=char_height,
- line_height=line_height,
- terminal_width=char_width * width - 1,
- terminal_height=(y + 1) * line_height - 1,
- width=terminal_width + margin_width,
- height=terminal_height + margin_height,
- terminal_x=margin_left + padding_left,
- terminal_y=margin_top + padding_top,
- styles=styles,
- chrome=chrome,
- backgrounds=backgrounds,
- matrix=matrix,
- lines=lines,
- )
- return svg
-
- def save_svg(
- self,
- path: str,
- *,
- title: str = "Rich",
- theme: Optional[TerminalTheme] = None,
- clear: bool = True,
- code_format: str = CONSOLE_SVG_FORMAT,
- font_aspect_ratio: float = 0.61,
- unique_id: Optional[str] = None,
- ) -> None:
- """Generate an SVG file from the console contents (requires record=True in Console constructor).
-
- Args:
- path (str): The path to write the SVG to.
- title (str, optional): The title of the tab in the output image
- theme (TerminalTheme, optional): The ``TerminalTheme`` object to use to style the terminal
- clear (bool, optional): Clear record buffer after exporting. Defaults to ``True``
- code_format (str, optional): Format string used to generate the SVG. Rich will inject a number of variables
- into the string in order to form the final SVG output. The default template used and the variables
- injected by Rich can be found by inspecting the ``console.CONSOLE_SVG_FORMAT`` variable.
- font_aspect_ratio (float, optional): The width to height ratio of the font used in the ``code_format``
- string. Defaults to 0.61, which is the width to height ratio of Fira Code (the default font).
- If you aren't specifying a different font inside ``code_format``, you probably don't need this.
- unique_id (str, optional): unique id that is used as the prefix for various elements (CSS styles, node
- ids). If not set, this defaults to a computed value based on the recorded content.
- """
- svg = self.export_svg(
- title=title,
- theme=theme,
- clear=clear,
- code_format=code_format,
- font_aspect_ratio=font_aspect_ratio,
- unique_id=unique_id,
- )
- with open(path, "wt", encoding="utf-8") as write_file:
- write_file.write(svg)
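
The SVG exporter follows the same recording pattern; a sketch (assuming the public `rich` package, with `demo.svg` as a hypothetical path):

```python
from rich.console import Console
from rich.table import Table

console = Console(record=True)
table = Table(title="Demo")
table.add_column("key")
table.add_column("value")
table.add_row("answer", "42")
console.print(table)
console.save_svg("demo.svg", title="Demo terminal")  # hypothetical output path
```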
-
-
-def _svg_hash(svg_main_code: str) -> str:
- """Returns a unique hash for the given SVG main code.
-
- Args:
- svg_main_code (str): The content we're going to inject in the SVG envelope.
-
- Returns:
- str: a hash of the given content
- """
- return str(zlib.adler32(svg_main_code.encode()))
-
-
-if __name__ == "__main__": # pragma: no cover
- console = Console(record=True)
-
- console.log(
- "JSONRPC [i]request[/i]",
- 5,
- 1.3,
- True,
- False,
- None,
- {
- "jsonrpc": "2.0",
- "method": "subtract",
- "params": {"minuend": 42, "subtrahend": 23},
- "id": 3,
- },
- )
-
- console.log("Hello, World!", "{'a': 1}", repr(console))
-
- console.print(
- {
- "name": None,
- "empty": [],
- "quiz": {
- "sport": {
- "answered": True,
- "q1": {
- "question": "Which one is correct team name in NBA?",
- "options": [
- "New York Bulls",
- "Los Angeles Kings",
- "Golden State Warriors",
- "Huston Rocket",
- ],
- "answer": "Huston Rocket",
- },
- },
- "maths": {
- "answered": False,
- "q1": {
- "question": "5 + 7 = ?",
- "options": [10, 11, 12, 13],
- "answer": 12,
- },
- "q2": {
- "question": "12 - 8 = ?",
- "options": [1, 2, 3, 4],
- "answer": 4,
- },
- },
- },
- }
- )
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/tomli/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/tomli/__init__.py
deleted file mode 100644
index 4c6ec97ec6961bcf184b6e0b2437b9924db0b9de..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/tomli/__init__.py
+++ /dev/null
@@ -1,11 +0,0 @@
-# SPDX-License-Identifier: MIT
-# SPDX-FileCopyrightText: 2021 Taneli Hukkinen
-# Licensed to PSF under a Contributor Agreement.
-
-__all__ = ("loads", "load", "TOMLDecodeError")
-__version__ = "2.0.1" # DO NOT EDIT THIS LINE MANUALLY. LET bump2version UTILITY DO IT
-
-from ._parser import TOMLDecodeError, load, loads
-
-# Pretend this exception was created here.
-TOMLDecodeError.__module__ = __name__
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/densepose/dataset.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/densepose/dataset.py
deleted file mode 100644
index ee72ea6d82c5a1ea55b497809f8d730791d56692..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/densepose/dataset.py
+++ /dev/null
@@ -1,49 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-import os
-
-from detectron2.data import DatasetCatalog, MetadataCatalog
-from detectron2.data.datasets import load_coco_json
-
-_URL_PREFIX = "https://dl.fbaipublicfiles.com/densepose/data/"
-
-
-def get_densepose_metadata():
- meta = {
- "thing_classes": ["person"],
- "densepose_transform_src": _URL_PREFIX + "UV_symmetry_transforms.mat",
- "densepose_smpl_subdiv": _URL_PREFIX + "SMPL_subdiv.mat",
- "densepose_smpl_subdiv_transform": _URL_PREFIX + "SMPL_SUBDIV_TRANSFORM.mat",
- }
- return meta
-
-
-SPLITS = {
- "densepose_coco_2014_train": ("coco/train2014", "coco/annotations/densepose_train2014.json"),
- "densepose_coco_2014_minival": ("coco/val2014", "coco/annotations/densepose_minival2014.json"),
- "densepose_coco_2014_minival_100": (
- "coco/val2014",
- "coco/annotations/densepose_minival2014_100.json",
- ),
- "densepose_coco_2014_valminusminival": (
- "coco/val2014",
- "coco/annotations/densepose_valminusminival2014.json",
- ),
-}
-
-DENSEPOSE_KEYS = ["dp_x", "dp_y", "dp_I", "dp_U", "dp_V", "dp_masks"]
-
-for key, (image_root, json_file) in SPLITS.items():
- # Assume pre-defined datasets live in `./datasets`.
- json_file = os.path.join("datasets", json_file)
- image_root = os.path.join("datasets", image_root)
-
- DatasetCatalog.register(
- key,
- lambda key=key, json_file=json_file, image_root=image_root: load_coco_json(
- json_file, image_root, key, extra_annotation_keys=DENSEPOSE_KEYS
- ),
- )
-
- MetadataCatalog.get(key).set(
- json_file=json_file, image_root=image_root, **get_densepose_metadata()
- )
diff --git a/spaces/CVPR/WALT/mmdet/models/dense_heads/paa_head.py b/spaces/CVPR/WALT/mmdet/models/dense_heads/paa_head.py
deleted file mode 100644
index e067b0121cf8b8230c0c9c6b8cfd41f56be4e298..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/models/dense_heads/paa_head.py
+++ /dev/null
@@ -1,671 +0,0 @@
-import numpy as np
-import torch
-from mmcv.runner import force_fp32
-
-from mmdet.core import multi_apply, multiclass_nms
-from mmdet.core.bbox.iou_calculators import bbox_overlaps
-from mmdet.models import HEADS
-from mmdet.models.dense_heads import ATSSHead
-
-EPS = 1e-12
-try:
- import sklearn.mixture as skm
-except ImportError:
- skm = None
-
-
-def levels_to_images(mlvl_tensor):
- """Concat multi-level feature maps by image.
-
- [feature_level0, feature_level1...] -> [feature_image0, feature_image1...]
- Convert the shape of each element in mlvl_tensor from (N, C, H, W) to
- (N, H*W , C), then split the element to N elements with shape (H*W, C), and
- concat elements in same image of all level along first dimension.
-
- Args:
- mlvl_tensor (list[torch.Tensor]): list of Tensor which collect from
- corresponding level. Each element is of shape (N, C, H, W)
-
- Returns:
- list[torch.Tensor]: A list that contains N tensors and each tensor is
- of shape (num_elements, C)
- """
- batch_size = mlvl_tensor[0].size(0)
- batch_list = [[] for _ in range(batch_size)]
- channels = mlvl_tensor[0].size(1)
- for t in mlvl_tensor:
- t = t.permute(0, 2, 3, 1)
- t = t.view(batch_size, -1, channels).contiguous()
- for img in range(batch_size):
- batch_list[img].append(t[img])
- return [torch.cat(item, 0) for item in batch_list]
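
A small shape check may make the reshaping easier to follow (a sketch; tensor sizes are arbitrary and `levels_to_images` is assumed to be in scope):

```python
import torch

# Two feature levels for a batch of N=2 images with C=8 channels.
mlvl = [torch.zeros(2, 8, 4, 4), torch.zeros(2, 8, 2, 2)]
per_image = levels_to_images(mlvl)
assert len(per_image) == 2                        # one tensor per image
assert per_image[0].shape == (4 * 4 + 2 * 2, 8)   # rows = H*W summed over levels
```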
-
-
-@HEADS.register_module()
-class PAAHead(ATSSHead):
- """Head of PAAAssignment: Probabilistic Anchor Assignment with IoU
- Prediction for Object Detection.
-
- Code is modified from the `official github repo
-    <https://github.com/kkhoot/PAA>`_.
-
- More details can be found in the `paper
-    <https://arxiv.org/abs/2007.08103>`_ .
-
- Args:
- topk (int): Select topk samples with smallest loss in
- each level.
- score_voting (bool): Whether to use score voting in post-process.
- covariance_type : String describing the type of covariance parameters
- to be used in :class:`sklearn.mixture.GaussianMixture`.
- It must be one of:
-
- - 'full': each component has its own general covariance matrix
- - 'tied': all components share the same general covariance matrix
- - 'diag': each component has its own diagonal covariance matrix
- - 'spherical': each component has its own single variance
- Default: 'diag'. From 'full' to 'spherical', the gmm fitting
- process is faster yet the performance could be influenced. For most
- cases, 'diag' should be a good choice.
- """
-
- def __init__(self,
- *args,
- topk=9,
- score_voting=True,
- covariance_type='diag',
- **kwargs):
- # topk used in paa reassign process
- self.topk = topk
- self.with_score_voting = score_voting
- self.covariance_type = covariance_type
- super(PAAHead, self).__init__(*args, **kwargs)
-
- @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'iou_preds'))
- def loss(self,
- cls_scores,
- bbox_preds,
- iou_preds,
- gt_bboxes,
- gt_labels,
- img_metas,
- gt_bboxes_ignore=None):
- """Compute losses of the head.
-
- Args:
- cls_scores (list[Tensor]): Box scores for each scale level
- Has shape (N, num_anchors * num_classes, H, W)
- bbox_preds (list[Tensor]): Box energies / deltas for each scale
- level with shape (N, num_anchors * 4, H, W)
- iou_preds (list[Tensor]): iou_preds for each scale
- level with shape (N, num_anchors * 1, H, W)
- gt_bboxes (list[Tensor]): Ground truth bboxes for each image with
- shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (list[Tensor]): class indices corresponding to each box
- img_metas (list[dict]): Meta information of each image, e.g.,
- image size, scaling factor, etc.
- gt_bboxes_ignore (list[Tensor] | None): Specify which bounding
- boxes can be ignored when are computing the loss.
-
- Returns:
-            dict[str, Tensor]: A dictionary of loss components.
- """
-
- featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]
- assert len(featmap_sizes) == self.anchor_generator.num_levels
-
- device = cls_scores[0].device
- anchor_list, valid_flag_list = self.get_anchors(
- featmap_sizes, img_metas, device=device)
- label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1
- cls_reg_targets = self.get_targets(
- anchor_list,
- valid_flag_list,
- gt_bboxes,
- img_metas,
- gt_bboxes_ignore_list=gt_bboxes_ignore,
- gt_labels_list=gt_labels,
- label_channels=label_channels,
- )
- (labels, labels_weight, bboxes_target, bboxes_weight, pos_inds,
- pos_gt_index) = cls_reg_targets
- cls_scores = levels_to_images(cls_scores)
- cls_scores = [
- item.reshape(-1, self.cls_out_channels) for item in cls_scores
- ]
- bbox_preds = levels_to_images(bbox_preds)
- bbox_preds = [item.reshape(-1, 4) for item in bbox_preds]
- iou_preds = levels_to_images(iou_preds)
- iou_preds = [item.reshape(-1, 1) for item in iou_preds]
- pos_losses_list, = multi_apply(self.get_pos_loss, anchor_list,
- cls_scores, bbox_preds, labels,
- labels_weight, bboxes_target,
- bboxes_weight, pos_inds)
-
- with torch.no_grad():
- reassign_labels, reassign_label_weight, \
- reassign_bbox_weights, num_pos = multi_apply(
- self.paa_reassign,
- pos_losses_list,
- labels,
- labels_weight,
- bboxes_weight,
- pos_inds,
- pos_gt_index,
- anchor_list)
- num_pos = sum(num_pos)
- # convert all tensor list to a flatten tensor
- cls_scores = torch.cat(cls_scores, 0).view(-1, cls_scores[0].size(-1))
- bbox_preds = torch.cat(bbox_preds, 0).view(-1, bbox_preds[0].size(-1))
- iou_preds = torch.cat(iou_preds, 0).view(-1, iou_preds[0].size(-1))
- labels = torch.cat(reassign_labels, 0).view(-1)
- flatten_anchors = torch.cat(
- [torch.cat(item, 0) for item in anchor_list])
- labels_weight = torch.cat(reassign_label_weight, 0).view(-1)
- bboxes_target = torch.cat(bboxes_target,
- 0).view(-1, bboxes_target[0].size(-1))
-
- pos_inds_flatten = ((labels >= 0)
- &
- (labels < self.num_classes)).nonzero().reshape(-1)
-
- losses_cls = self.loss_cls(
- cls_scores,
- labels,
- labels_weight,
- avg_factor=max(num_pos, len(img_metas))) # avoid num_pos=0
- if num_pos:
- pos_bbox_pred = self.bbox_coder.decode(
- flatten_anchors[pos_inds_flatten],
- bbox_preds[pos_inds_flatten])
- pos_bbox_target = bboxes_target[pos_inds_flatten]
- iou_target = bbox_overlaps(
- pos_bbox_pred.detach(), pos_bbox_target, is_aligned=True)
- losses_iou = self.loss_centerness(
- iou_preds[pos_inds_flatten],
- iou_target.unsqueeze(-1),
- avg_factor=num_pos)
- losses_bbox = self.loss_bbox(
- pos_bbox_pred,
- pos_bbox_target,
- iou_target.clamp(min=EPS),
- avg_factor=iou_target.sum())
- else:
- losses_iou = iou_preds.sum() * 0
- losses_bbox = bbox_preds.sum() * 0
-
- return dict(
- loss_cls=losses_cls, loss_bbox=losses_bbox, loss_iou=losses_iou)
-
- def get_pos_loss(self, anchors, cls_score, bbox_pred, label, label_weight,
- bbox_target, bbox_weight, pos_inds):
-        """Calculate the loss of all potential positive samples obtained from
-        the first matching process.
-
- Args:
- anchors (list[Tensor]): Anchors of each scale.
- cls_score (Tensor): Box scores of single image with shape
- (num_anchors, num_classes)
- bbox_pred (Tensor): Box energies / deltas of single image
- with shape (num_anchors, 4)
- label (Tensor): classification target of each anchor with
- shape (num_anchors,)
- label_weight (Tensor): Classification loss weight of each
- anchor with shape (num_anchors).
- bbox_target (dict): Regression target of each anchor with
- shape (num_anchors, 4).
- bbox_weight (Tensor): Bbox weight of each anchor with shape
- (num_anchors, 4).
- pos_inds (Tensor): Index of all positive samples got from
- first assign process.
-
- Returns:
- Tensor: Losses of all positive samples in single image.
- """
- if not len(pos_inds):
- return cls_score.new([]),
- anchors_all_level = torch.cat(anchors, 0)
- pos_scores = cls_score[pos_inds]
- pos_bbox_pred = bbox_pred[pos_inds]
- pos_label = label[pos_inds]
- pos_label_weight = label_weight[pos_inds]
- pos_bbox_target = bbox_target[pos_inds]
- pos_bbox_weight = bbox_weight[pos_inds]
- pos_anchors = anchors_all_level[pos_inds]
- pos_bbox_pred = self.bbox_coder.decode(pos_anchors, pos_bbox_pred)
-
- # to keep loss dimension
- loss_cls = self.loss_cls(
- pos_scores,
- pos_label,
- pos_label_weight,
- avg_factor=self.loss_cls.loss_weight,
- reduction_override='none')
-
- loss_bbox = self.loss_bbox(
- pos_bbox_pred,
- pos_bbox_target,
- pos_bbox_weight,
- avg_factor=self.loss_cls.loss_weight,
- reduction_override='none')
-
- loss_cls = loss_cls.sum(-1)
- pos_loss = loss_bbox + loss_cls
- return pos_loss,
-
- def paa_reassign(self, pos_losses, label, label_weight, bbox_weight,
- pos_inds, pos_gt_inds, anchors):
- """Fit loss to GMM distribution and separate positive, ignore, negative
- samples again with GMM model.
-
- Args:
- pos_losses (Tensor): Losses of all positive samples in
- single image.
- label (Tensor): classification target of each anchor with
- shape (num_anchors,)
- label_weight (Tensor): Classification loss weight of each
- anchor with shape (num_anchors).
- bbox_weight (Tensor): Bbox weight of each anchor with shape
- (num_anchors, 4).
- pos_inds (Tensor): Index of all positive samples got from
- first assign process.
- pos_gt_inds (Tensor): Gt_index of all positive samples got
- from first assign process.
- anchors (list[Tensor]): Anchors of each scale.
-
- Returns:
- tuple: Usually returns a tuple containing learning targets.
-
- - label (Tensor): classification target of each anchor after
- paa assign, with shape (num_anchors,)
- - label_weight (Tensor): Classification loss weight of each
- anchor after paa assign, with shape (num_anchors).
- - bbox_weight (Tensor): Bbox weight of each anchor with shape
- (num_anchors, 4).
- - num_pos (int): The number of positive samples after paa
- assign.
- """
- if not len(pos_inds):
- return label, label_weight, bbox_weight, 0
- label = label.clone()
- label_weight = label_weight.clone()
- bbox_weight = bbox_weight.clone()
- num_gt = pos_gt_inds.max() + 1
- num_level = len(anchors)
- num_anchors_each_level = [item.size(0) for item in anchors]
- num_anchors_each_level.insert(0, 0)
- inds_level_interval = np.cumsum(num_anchors_each_level)
- pos_level_mask = []
- for i in range(num_level):
- mask = (pos_inds >= inds_level_interval[i]) & (
- pos_inds < inds_level_interval[i + 1])
- pos_level_mask.append(mask)
- pos_inds_after_paa = [label.new_tensor([])]
- ignore_inds_after_paa = [label.new_tensor([])]
- for gt_ind in range(num_gt):
- pos_inds_gmm = []
- pos_loss_gmm = []
- gt_mask = pos_gt_inds == gt_ind
- for level in range(num_level):
- level_mask = pos_level_mask[level]
- level_gt_mask = level_mask & gt_mask
- value, topk_inds = pos_losses[level_gt_mask].topk(
- min(level_gt_mask.sum(), self.topk), largest=False)
- pos_inds_gmm.append(pos_inds[level_gt_mask][topk_inds])
- pos_loss_gmm.append(value)
- pos_inds_gmm = torch.cat(pos_inds_gmm)
- pos_loss_gmm = torch.cat(pos_loss_gmm)
-            # GMM fitting needs at least two samples
- if len(pos_inds_gmm) < 2:
- continue
- device = pos_inds_gmm.device
- pos_loss_gmm, sort_inds = pos_loss_gmm.sort()
- pos_inds_gmm = pos_inds_gmm[sort_inds]
- pos_loss_gmm = pos_loss_gmm.view(-1, 1).cpu().numpy()
- min_loss, max_loss = pos_loss_gmm.min(), pos_loss_gmm.max()
- means_init = np.array([min_loss, max_loss]).reshape(2, 1)
- weights_init = np.array([0.5, 0.5])
- precisions_init = np.array([1.0, 1.0]).reshape(2, 1, 1) # full
- if self.covariance_type == 'spherical':
- precisions_init = precisions_init.reshape(2)
- elif self.covariance_type == 'diag':
- precisions_init = precisions_init.reshape(2, 1)
- elif self.covariance_type == 'tied':
- precisions_init = np.array([[1.0]])
- if skm is None:
- raise ImportError('Please run "pip install sklearn" '
- 'to install sklearn first.')
- gmm = skm.GaussianMixture(
- 2,
- weights_init=weights_init,
- means_init=means_init,
- precisions_init=precisions_init,
- covariance_type=self.covariance_type)
- gmm.fit(pos_loss_gmm)
- gmm_assignment = gmm.predict(pos_loss_gmm)
- scores = gmm.score_samples(pos_loss_gmm)
- gmm_assignment = torch.from_numpy(gmm_assignment).to(device)
- scores = torch.from_numpy(scores).to(device)
-
- pos_inds_temp, ignore_inds_temp = self.gmm_separation_scheme(
- gmm_assignment, scores, pos_inds_gmm)
- pos_inds_after_paa.append(pos_inds_temp)
- ignore_inds_after_paa.append(ignore_inds_temp)
-
- pos_inds_after_paa = torch.cat(pos_inds_after_paa)
- ignore_inds_after_paa = torch.cat(ignore_inds_after_paa)
- reassign_mask = (pos_inds.unsqueeze(1) != pos_inds_after_paa).all(1)
- reassign_ids = pos_inds[reassign_mask]
- label[reassign_ids] = self.num_classes
- label_weight[ignore_inds_after_paa] = 0
- bbox_weight[reassign_ids] = 0
- num_pos = len(pos_inds_after_paa)
- return label, label_weight, bbox_weight, num_pos
-
- def gmm_separation_scheme(self, gmm_assignment, scores, pos_inds_gmm):
- """A general separation scheme for gmm model.
-
-        It separates a GMM distribution of candidate samples into three
-        parts: component 0, component 1, and an uncertain area. Other
-        separation schemes can be implemented by overriding this function.
-
- Args:
- gmm_assignment (Tensor): The prediction of GMM which is of shape
- (num_samples,). The 0/1 value indicates the distribution
- that each sample comes from.
- scores (Tensor): The probability of sample coming from the
- fit GMM distribution. The tensor is of shape (num_samples,).
- pos_inds_gmm (Tensor): All the indexes of samples which are used
- to fit GMM model. The tensor is of shape (num_samples,)
-
- Returns:
- tuple[Tensor]: The indices of positive and ignored samples.
-
- - pos_inds_temp (Tensor): Indices of positive samples.
- - ignore_inds_temp (Tensor): Indices of ignore samples.
- """
-        # The implementation is (c) in Fig. 3 of the original paper instead of (b).
- # You can refer to issues such as
- # https://github.com/kkhoot/PAA/issues/8 and
- # https://github.com/kkhoot/PAA/issues/9.
- fgs = gmm_assignment == 0
- pos_inds_temp = fgs.new_tensor([], dtype=torch.long)
- ignore_inds_temp = fgs.new_tensor([], dtype=torch.long)
- if fgs.nonzero().numel():
- _, pos_thr_ind = scores[fgs].topk(1)
- pos_inds_temp = pos_inds_gmm[fgs][:pos_thr_ind + 1]
- ignore_inds_temp = pos_inds_gmm.new_tensor([])
- return pos_inds_temp, ignore_inds_temp
-
- def get_targets(
- self,
- anchor_list,
- valid_flag_list,
- gt_bboxes_list,
- img_metas,
- gt_bboxes_ignore_list=None,
- gt_labels_list=None,
- label_channels=1,
- unmap_outputs=True,
- ):
- """Get targets for PAA head.
-
-        This method is almost the same as `AnchorHead.get_targets()`, except
-        that it returns the results from `_get_targets_single` directly
-        instead of mapping them to levels with the `images_to_levels` function.
-
- Args:
- anchor_list (list[list[Tensor]]): Multi level anchors of each
- image. The outer list indicates images, and the inner list
- corresponds to feature levels of the image. Each element of
- the inner list is a tensor of shape (num_anchors, 4).
- valid_flag_list (list[list[Tensor]]): Multi level valid flags of
- each image. The outer list indicates images, and the inner list
- corresponds to feature levels of the image. Each element of
- the inner list is a tensor of shape (num_anchors, )
- gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image.
- img_metas (list[dict]): Meta info of each image.
- gt_bboxes_ignore_list (list[Tensor]): Ground truth bboxes to be
- ignored.
- gt_labels_list (list[Tensor]): Ground truth labels of each box.
- label_channels (int): Channel of label.
- unmap_outputs (bool): Whether to map outputs back to the original
- set of anchors.
-
- Returns:
- tuple: Usually returns a tuple containing learning targets.
-
- - labels (list[Tensor]): Labels of all anchors, each with
- shape (num_anchors,).
- - label_weights (list[Tensor]): Label weights of all anchor.
- each with shape (num_anchors,).
- - bbox_targets (list[Tensor]): BBox targets of all anchors.
- each with shape (num_anchors, 4).
- - bbox_weights (list[Tensor]): BBox weights of all anchors.
- each with shape (num_anchors, 4).
- - pos_inds (list[Tensor]): Contains all index of positive
- sample in all anchor.
- - gt_inds (list[Tensor]): Contains all gt_index of positive
- sample in all anchor.
- """
-
- num_imgs = len(img_metas)
- assert len(anchor_list) == len(valid_flag_list) == num_imgs
- concat_anchor_list = []
- concat_valid_flag_list = []
- for i in range(num_imgs):
- assert len(anchor_list[i]) == len(valid_flag_list[i])
- concat_anchor_list.append(torch.cat(anchor_list[i]))
- concat_valid_flag_list.append(torch.cat(valid_flag_list[i]))
-
- # compute targets for each image
- if gt_bboxes_ignore_list is None:
- gt_bboxes_ignore_list = [None for _ in range(num_imgs)]
- if gt_labels_list is None:
- gt_labels_list = [None for _ in range(num_imgs)]
- results = multi_apply(
- self._get_targets_single,
- concat_anchor_list,
- concat_valid_flag_list,
- gt_bboxes_list,
- gt_bboxes_ignore_list,
- gt_labels_list,
- img_metas,
- label_channels=label_channels,
- unmap_outputs=unmap_outputs)
-
- (labels, label_weights, bbox_targets, bbox_weights, valid_pos_inds,
- valid_neg_inds, sampling_result) = results
-
-        # Due to the valid flags of anchors, we have to calculate the real
-        # pos_inds in the original anchor set.
- pos_inds = []
- for i, single_labels in enumerate(labels):
- pos_mask = (0 <= single_labels) & (
- single_labels < self.num_classes)
- pos_inds.append(pos_mask.nonzero().view(-1))
-
- gt_inds = [item.pos_assigned_gt_inds for item in sampling_result]
- return (labels, label_weights, bbox_targets, bbox_weights, pos_inds,
- gt_inds)
-
- def _get_targets_single(self,
- flat_anchors,
- valid_flags,
- gt_bboxes,
- gt_bboxes_ignore,
- gt_labels,
- img_meta,
- label_channels=1,
- unmap_outputs=True):
- """Compute regression and classification targets for anchors in a
- single image.
-
-        This method is the same as `AnchorHead._get_targets_single()`.
- """
-        assert unmap_outputs, 'We must map outputs back to the original ' \
-            'set of anchors in PAAHead'
- return super(ATSSHead, self)._get_targets_single(
- flat_anchors,
- valid_flags,
- gt_bboxes,
- gt_bboxes_ignore,
- gt_labels,
- img_meta,
- label_channels=1,
- unmap_outputs=True)
-
- def _get_bboxes(self,
- cls_scores,
- bbox_preds,
- iou_preds,
- mlvl_anchors,
- img_shapes,
- scale_factors,
- cfg,
- rescale=False,
- with_nms=True):
- """Transform outputs for a single batch item into labeled boxes.
-
-        This method is almost the same as `ATSSHead._get_bboxes()`.
-        We use sqrt(iou_preds * cls_scores) in the NMS process instead of just
-        cls_scores. Besides, score voting is used when ``score_voting``
-        is set to True.
- """
- assert with_nms, 'PAA only supports "with_nms=True" now'
- assert len(cls_scores) == len(bbox_preds) == len(mlvl_anchors)
- batch_size = cls_scores[0].shape[0]
-
- mlvl_bboxes = []
- mlvl_scores = []
- mlvl_iou_preds = []
- for cls_score, bbox_pred, iou_preds, anchors in zip(
- cls_scores, bbox_preds, iou_preds, mlvl_anchors):
- assert cls_score.size()[-2:] == bbox_pred.size()[-2:]
-
- scores = cls_score.permute(0, 2, 3, 1).reshape(
- batch_size, -1, self.cls_out_channels).sigmoid()
- bbox_pred = bbox_pred.permute(0, 2, 3,
- 1).reshape(batch_size, -1, 4)
- iou_preds = iou_preds.permute(0, 2, 3, 1).reshape(batch_size,
- -1).sigmoid()
-
- nms_pre = cfg.get('nms_pre', -1)
- if nms_pre > 0 and scores.shape[1] > nms_pre:
- max_scores, _ = (scores * iou_preds[..., None]).sqrt().max(-1)
- _, topk_inds = max_scores.topk(nms_pre)
- batch_inds = torch.arange(batch_size).view(
- -1, 1).expand_as(topk_inds).long()
- anchors = anchors[topk_inds, :]
- bbox_pred = bbox_pred[batch_inds, topk_inds, :]
- scores = scores[batch_inds, topk_inds, :]
- iou_preds = iou_preds[batch_inds, topk_inds]
- else:
- anchors = anchors.expand_as(bbox_pred)
-
- bboxes = self.bbox_coder.decode(
- anchors, bbox_pred, max_shape=img_shapes)
- mlvl_bboxes.append(bboxes)
- mlvl_scores.append(scores)
- mlvl_iou_preds.append(iou_preds)
-
- batch_mlvl_bboxes = torch.cat(mlvl_bboxes, dim=1)
- if rescale:
- batch_mlvl_bboxes /= batch_mlvl_bboxes.new_tensor(
- scale_factors).unsqueeze(1)
- batch_mlvl_scores = torch.cat(mlvl_scores, dim=1)
- # Add a dummy background class to the backend when using sigmoid
- # remind that we set FG labels to [0, num_class-1] since mmdet v2.0
- # BG cat_id: num_class
- padding = batch_mlvl_scores.new_zeros(batch_size,
- batch_mlvl_scores.shape[1], 1)
- batch_mlvl_scores = torch.cat([batch_mlvl_scores, padding], dim=-1)
- batch_mlvl_iou_preds = torch.cat(mlvl_iou_preds, dim=1)
- batch_mlvl_nms_scores = (batch_mlvl_scores *
- batch_mlvl_iou_preds[..., None]).sqrt()
-
- det_results = []
- for (mlvl_bboxes, mlvl_scores) in zip(batch_mlvl_bboxes,
- batch_mlvl_nms_scores):
- det_bbox, det_label = multiclass_nms(
- mlvl_bboxes,
- mlvl_scores,
- cfg.score_thr,
- cfg.nms,
- cfg.max_per_img,
- score_factors=None)
- if self.with_score_voting and len(det_bbox) > 0:
- det_bbox, det_label = self.score_voting(
- det_bbox, det_label, mlvl_bboxes, mlvl_scores,
- cfg.score_thr)
- det_results.append(tuple([det_bbox, det_label]))
-
- return det_results
-
- def score_voting(self, det_bboxes, det_labels, mlvl_bboxes,
- mlvl_nms_scores, score_thr):
-        """Implementation of the score voting method, which works on the
-        remaining boxes after the NMS procedure.
-
- Args:
- det_bboxes (Tensor): Remaining boxes after NMS procedure,
- with shape (k, 5), each dimension means
- (x1, y1, x2, y2, score).
- det_labels (Tensor): The label of remaining boxes, with shape
-                (k, 1). Labels are 0-based.
- mlvl_bboxes (Tensor): All boxes before the NMS procedure,
- with shape (num_anchors,4).
- mlvl_nms_scores (Tensor): The scores of all boxes which is used
- in the NMS procedure, with shape (num_anchors, num_class)
- score_thr (float): The score threshold of bboxes.
-
- Returns:
- tuple: Usually returns a tuple containing voting results.
-
- - det_bboxes_voted (Tensor): Remaining boxes after
- score voting procedure, with shape (k, 5), each
- dimension means (x1, y1, x2, y2, score).
- - det_labels_voted (Tensor): Label of remaining bboxes
- after voting, with shape (num_anchors,).
- """
- candidate_mask = mlvl_nms_scores > score_thr
- candidate_mask_nonzeros = candidate_mask.nonzero()
- candidate_inds = candidate_mask_nonzeros[:, 0]
- candidate_labels = candidate_mask_nonzeros[:, 1]
- candidate_bboxes = mlvl_bboxes[candidate_inds]
- candidate_scores = mlvl_nms_scores[candidate_mask]
- det_bboxes_voted = []
- det_labels_voted = []
- for cls in range(self.cls_out_channels):
- candidate_cls_mask = candidate_labels == cls
- if not candidate_cls_mask.any():
- continue
- candidate_cls_scores = candidate_scores[candidate_cls_mask]
- candidate_cls_bboxes = candidate_bboxes[candidate_cls_mask]
- det_cls_mask = det_labels == cls
- det_cls_bboxes = det_bboxes[det_cls_mask].view(
- -1, det_bboxes.size(-1))
- det_candidate_ious = bbox_overlaps(det_cls_bboxes[:, :4],
- candidate_cls_bboxes)
- for det_ind in range(len(det_cls_bboxes)):
- single_det_ious = det_candidate_ious[det_ind]
- pos_ious_mask = single_det_ious > 0.01
- pos_ious = single_det_ious[pos_ious_mask]
- pos_bboxes = candidate_cls_bboxes[pos_ious_mask]
- pos_scores = candidate_cls_scores[pos_ious_mask]
- pis = (torch.exp(-(1 - pos_ious)**2 / 0.025) *
- pos_scores)[:, None]
- voted_box = torch.sum(
- pis * pos_bboxes, dim=0) / torch.sum(
- pis, dim=0)
- voted_score = det_cls_bboxes[det_ind][-1:][None, :]
- det_bboxes_voted.append(
- torch.cat((voted_box[None, :], voted_score), dim=1))
- det_labels_voted.append(cls)
-
- det_bboxes_voted = torch.cat(det_bboxes_voted, dim=0)
- det_labels_voted = det_labels.new_tensor(det_labels_voted)
- return det_bboxes_voted, det_labels_voted
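The voting weight above is `pi = exp(-(1 - IoU)**2 / 0.025) * score`, so each kept detection is replaced by an IoU- and score-weighted average of the pre-NMS boxes around it. Below is a hedged, standalone sketch of that update for a single detection; the boxes and scores are made up, and `torchvision.ops.box_iou` stands in for mmdet's `bbox_overlaps`.

```python
# Standalone sketch of the score-voting update for one kept detection: candidate boxes
# that overlap it vote on its coordinates, weighted by exp(-(1 - IoU)^2 / 0.025) * score.
import torch
from torchvision.ops import box_iou  # stand-in for mmdet's bbox_overlaps

det_box = torch.tensor([[10., 10., 50., 50.]])            # kept detection (x1, y1, x2, y2)
cand_boxes = torch.tensor([[12., 11., 52., 49.],
                           [ 8., 12., 48., 51.],
                           [60., 60., 90., 90.]])          # pre-NMS candidates
cand_scores = torch.tensor([0.8, 0.6, 0.9])

ious = box_iou(det_box, cand_boxes)[0]                     # IoU of each candidate with the detection
mask = ious > 0.01                                         # same threshold as in score_voting
pis = torch.exp(-(1 - ious[mask]) ** 2 / 0.025) * cand_scores[mask]
voted_box = (pis[:, None] * cand_boxes[mask]).sum(dim=0) / pis.sum()
print(voted_box)                                           # IoU/score-weighted refinement of det_box
```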
diff --git a/spaces/CVPR/regionclip-demo/detectron2/data/datasets/clip_prompt_utils.py b/spaces/CVPR/regionclip-demo/detectron2/data/datasets/clip_prompt_utils.py
deleted file mode 100644
index 665f4adaf7093bb50dfc90c686a20d7a646a53d2..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/data/datasets/clip_prompt_utils.py
+++ /dev/null
@@ -1,441 +0,0 @@
-import gzip
-import html
-import os
-from functools import lru_cache
-
-import ftfy
-import regex as re
-import torch
-import numpy as np
-from typing import Union, List
-
-from .lvis_v1_categories import LVIS_CATEGORIES as LVIS_V1_CATEGORIES
-from .coco_zeroshot_categories import COCO_UNSEEN_CLS, COCO_SEEN_CLS, COCO_OVD_ALL_CLS, COCO_80_ALL_CLS
-
-# https://github.com/openai/CLIP/blob/main/clip/simple_tokenizer.py
-@lru_cache()
-def default_bpe():
- return os.path.join(os.path.dirname(os.path.abspath(__file__)), "bpe_simple_vocab_16e6.txt.gz")
-
-
-@lru_cache()
-def bytes_to_unicode():
- """
-    Returns a list of utf-8 bytes and a corresponding list of unicode strings.
- The reversible bpe codes work on unicode strings.
- This means you need a large # of unicode characters in your vocab if you want to avoid UNKs.
- When you're at something like a 10B token dataset you end up needing around 5K for decent coverage.
-    This is a significant percentage of your normal, say, 32K bpe vocab.
- To avoid that, we want lookup tables between utf-8 bytes and unicode strings.
-    This also avoids mapping to whitespace/control characters that the bpe code barfs on.
- """
- bs = list(range(ord("!"), ord("~")+1))+list(range(ord("¡"), ord("¬")+1))+list(range(ord("®"), ord("ÿ")+1))
- cs = bs[:]
- n = 0
- for b in range(2**8):
- if b not in bs:
- bs.append(b)
- cs.append(2**8+n)
- n += 1
- cs = [chr(n) for n in cs]
- return dict(zip(bs, cs))
-
-
-def get_pairs(word):
- """Return set of symbol pairs in a word.
- Word is represented as tuple of symbols (symbols being variable-length strings).
- """
- pairs = set()
- prev_char = word[0]
- for char in word[1:]:
- pairs.add((prev_char, char))
- prev_char = char
- return pairs
-
-
-def basic_clean(text):
- text = ftfy.fix_text(text)
- text = html.unescape(html.unescape(text))
- return text.strip()
-
-
-def whitespace_clean(text):
- text = re.sub(r'\s+', ' ', text)
- text = text.strip()
- return text
-
-
-class SimpleTokenizer(object):
- def __init__(self, bpe_path: str = default_bpe()):
- self.byte_encoder = bytes_to_unicode()
- self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
- merges = gzip.open(bpe_path).read().decode("utf-8").split('\n')
- merges = merges[1:49152-256-2+1]
- merges = [tuple(merge.split()) for merge in merges]
- vocab = list(bytes_to_unicode().values())
-        vocab = vocab + [v+'</w>' for v in vocab]
- self.vocab = vocab
- for merge in merges:
- vocab.append(''.join(merge))
- vocab.extend(['<|startoftext|>', '<|endoftext|>'])
- self.encoder = dict(zip(vocab, range(len(vocab))))
- self.decoder = {v: k for k, v in self.encoder.items()}
- self.bpe_ranks = dict(zip(merges, range(len(merges))))
- self.cache = {'<|startoftext|>': '<|startoftext|>', '<|endoftext|>': '<|endoftext|>'}
- self.pat = re.compile(r"""<\|startoftext\|>|<\|endoftext\|>|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""", re.IGNORECASE)
-
- def bpe(self, token):
- if token in self.cache:
- return self.cache[token]
-        word = tuple(token[:-1]) + ( token[-1] + '</w>',)
- pairs = get_pairs(word)
-
- if not pairs:
-            return token+'</w>'
-
- while True:
- bigram = min(pairs, key = lambda pair: self.bpe_ranks.get(pair, float('inf')))
- if bigram not in self.bpe_ranks:
- break
- first, second = bigram
- new_word = []
- i = 0
- while i < len(word):
- try:
- j = word.index(first, i)
- new_word.extend(word[i:j])
- i = j
- except:
- new_word.extend(word[i:])
- break
-
- if word[i] == first and i < len(word)-1 and word[i+1] == second:
- new_word.append(first+second)
- i += 2
- else:
- new_word.append(word[i])
- i += 1
- new_word = tuple(new_word)
- word = new_word
- if len(word) == 1:
- break
- else:
- pairs = get_pairs(word)
- word = ' '.join(word)
- self.cache[token] = word
- return word
-
- def encode(self, text, return_link=False):
- bpe_tokens = []
- text = whitespace_clean(basic_clean(text)).lower()
- str2id_links = [] # link original sentence word to the tokenized ids of its subwords
- for token in re.findall(self.pat, text):
- this_link = [token]
- token = ''.join(self.byte_encoder[b] for b in token.encode('utf-8'))
- ids = [self.encoder[bpe_token] for bpe_token in self.bpe(token).split(' ')]
- bpe_tokens.extend(ids)
- this_link.append(ids)
- str2id_links.append(this_link)
- if return_link:
- return bpe_tokens, str2id_links
- return bpe_tokens
-
- def decode(self, tokens):
- text = ''.join([self.decoder[token] for token in tokens])
-        text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', errors="replace").replace('</w>', ' ')
- return text
-
-
-# https://github.com/openai/CLIP/blob/main/clip/clip.py
-_tokenizer = SimpleTokenizer()  # module-level tokenizer required by tokenize() below
-
-def tokenize(texts: Union[str, List[str]], context_length: int = 77):
- if isinstance(texts, str):
- texts = [texts]
-
- sot_token = _tokenizer.encoder["<|startoftext|>"]
- eot_token = _tokenizer.encoder["<|endoftext|>"]
- all_tokens = [[sot_token] + _tokenizer.encode(text) + [eot_token] for text in texts]
- result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
-
- for i, tokens in enumerate(all_tokens):
- if len(tokens) > context_length:
- raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
- result[i, :len(tokens)] = torch.tensor(tokens)
-
- return result
-
-
-# prompt_engineering.py
-def get_prompt_templates():
- # prompt_templates = [
- # 'There is a {} in the scene.',
- # 'There is the {} in the scene.',
- # 'a photo of a {} in the scene.',
- # 'a photo of the {} in the scene.',
- # 'a photo of one {} in the scene.',
-
- # 'itap of a {}.',
- # 'itap of my {}.', # itap: I took a picture of
- # 'itap of the {}.',
- # 'a photo of a {}.',
- # 'a photo of my {}.',
- # 'a photo of the {}.',
- # 'a photo of one {}.',
- # 'a photo of many {}.',
-
- # 'a good photo of a {}.',
- # 'a good photo of the {}.',
- # 'a bad photo of a {}.',
- # 'a bad photo of the {}.',
- # 'a photo of a nice {}.',
- # 'a photo of the nice {}.',
- # 'a photo of a cool {}.',
- # 'a photo of the cool {}.',
- # 'a photo of a weird {}.',
- # 'a photo of the weird {}.',
-
- # 'a photo of a small {}.',
- # 'a photo of the small {}.',
- # 'a photo of a large {}.',
- # 'a photo of the large {}.',
-
- # 'a photo of a clean {}.',
- # 'a photo of the clean {}.',
- # 'a photo of a dirty {}.',
- # 'a photo of the dirty {}.',
-
- # 'a bright photo of a {}.',
- # 'a bright photo of the {}.',
- # 'a dark photo of a {}.',
- # 'a dark photo of the {}.',
-
- # 'a photo of a hard to see {}.',
- # 'a photo of the hard to see {}.',
- # 'a low resolution photo of a {}.',
- # 'a low resolution photo of the {}.',
- # 'a cropped photo of a {}.',
- # 'a cropped photo of the {}.',
- # 'a close-up photo of a {}.',
- # 'a close-up photo of the {}.',
- # 'a jpeg corrupted photo of a {}.',
- # 'a jpeg corrupted photo of the {}.',
- # 'a blurry photo of a {}.',
- # 'a blurry photo of the {}.',
- # 'a pixelated photo of a {}.',
- # 'a pixelated photo of the {}.',
-
- # 'a black and white photo of the {}.',
- # 'a black and white photo of a {}.',
-
- # 'a plastic {}.',
- # 'the plastic {}.',
-
- # 'a toy {}.',
- # 'the toy {}.',
- # 'a plushie {}.',
- # 'the plushie {}.',
- # 'a cartoon {}.',
- # 'the cartoon {}.',
-
- # 'an embroidered {}.',
- # 'the embroidered {}.',
-
- # 'a painting of the {}.',
- # 'a painting of a {}.',
- # ]
-
- prompt_templates = [
- '{}.',
- 'a photo of a {}.',
- 'a bad photo of a {}.',
- 'a photo of many {}.',
- 'a sculpture of a {}.',
- 'a photo of the hard to see {}.',
- 'a low resolution photo of the {}.',
- 'a rendering of a {}.',
- 'graffiti of a {}.',
- 'a bad photo of the {}.',
- 'a cropped photo of the {}.',
- 'a tattoo of a {}.',
- 'the embroidered {}.',
- 'a photo of a hard to see {}.',
- 'a bright photo of a {}.',
- 'a photo of a clean {}.',
- 'a photo of a dirty {}.',
- 'a dark photo of the {}.',
- 'a drawing of a {}.',
- 'a photo of my {}.',
- 'the plastic {}.',
- 'a photo of the cool {}.',
- 'a close-up photo of a {}.',
- 'a black and white photo of the {}.',
- 'a painting of the {}.',
- 'a painting of a {}.',
- 'a pixelated photo of the {}.',
- 'a sculpture of the {}.',
- 'a bright photo of the {}.',
- 'a cropped photo of a {}.',
- 'a plastic {}.',
- 'a photo of the dirty {}.',
- 'a jpeg corrupted photo of a {}.',
- 'a blurry photo of the {}.',
- 'a photo of the {}.',
- 'a good photo of the {}.',
- 'a rendering of the {}.',
- 'a {} in a video game.',
- 'a photo of one {}.',
- 'a doodle of a {}.',
- 'a close-up photo of the {}.',
- 'the origami {}.',
- 'the {} in a video game.',
- 'a sketch of a {}.',
- 'a doodle of the {}.',
- 'a origami {}.',
- 'a low resolution photo of a {}.',
- 'the toy {}.',
- 'a rendition of the {}.',
- 'a photo of the clean {}.',
- 'a photo of a large {}.',
- 'a rendition of a {}.',
- 'a photo of a nice {}.',
- 'a photo of a weird {}.',
- 'a blurry photo of a {}.',
- 'a cartoon {}.',
- 'art of a {}.',
- 'a sketch of the {}.',
- 'a embroidered {}.',
- 'a pixelated photo of a {}.',
- 'itap of the {}.',
- 'a jpeg corrupted photo of the {}.',
- 'a good photo of a {}.',
- 'a plushie {}.',
- 'a photo of the nice {}.',
- 'a photo of the small {}.',
- 'a photo of the weird {}.',
- 'the cartoon {}.',
- 'art of the {}.',
- 'a drawing of the {}.',
- 'a photo of the large {}.',
- 'a black and white photo of a {}.',
- 'the plushie {}.',
- 'a dark photo of a {}.',
- 'itap of a {}.',
- 'graffiti of the {}.',
- 'a toy {}.',
- 'itap of my {}.',
- 'a photo of a cool {}.',
- 'a photo of a small {}.',
- 'a tattoo of the {}.',
- ]
- return prompt_templates
-
-def prompt_engineering(classnames, template=""):
- return template.replace('{}', classnames.replace(',', '').replace('+', ' '))
-
-# clip_img_tsv.py
-def convert_example_to_features_bpe(text, tokenizer, sot_token, eot_token, context_length=77):
- """
-    Convert a raw text string into a padded sequence of token ids.
-    :param tokenizer: SimpleTokenizer used for BPE encoding
-    :return: np.ndarray of length context_length, containing the token ids padded with 0
- """
- assert isinstance(text, str)
- input_ids = [sot_token] + tokenizer.encode(text) + [eot_token]
- if len(input_ids) > context_length:
- input_ids = input_ids[:context_length]
- input_ids = np.array(input_ids)
-
- pad_input_ids = np.zeros(context_length)
- pad_input_ids[:input_ids.shape[0]] = input_ids
-
- return pad_input_ids
-
-def get_cls_names(filter_novel=False, coco=None, from_file=False):
-    """Return a list of strings, each being the name of a class.
- """
- # the names are stored in a txt file
- if from_file:
- # coco_det_cls = {COCO_80_ALL_CLS[key]: key for key in COCO_80_ALL_CLS}
- # # not found in nouns {'skis': 31, 'sports ball': 33, 'hot dog': 53, 'potted plant': 59, 'scissors': 77, 'hair drier': 79}
- # coco_det_cls['ski'] = 81
- # coco_det_cls['scissor'] = 82
- # with open('/home/v-yiwuzhong/projects/azureblobs/vyiwuzhong_phillytools/trained_models/concept_pool/COCO_Caption_nouns_4688.txt','w') as g:
- # with open(from_file, 'r') as f:
- # cnt = 0
- # for row in f:
- # if row.split(",")[0] not in coco_det_cls:
- # g.write(row)
- # cnt += 1
- # else:
- # coco_det_cls.pop(row.split(",")[0])
- names = []
- with open(from_file, 'r') as f:
- for row in f:
- names.append(row.split(",")[0])
- return names
- # classes' names
- if coco == 'target':
- return COCO_UNSEEN_CLS
- elif coco == 'base':
- return COCO_SEEN_CLS
- elif coco == 'all':
- return COCO_OVD_ALL_CLS
- elif coco == 'all_80':
- return [COCO_80_ALL_CLS[i+1] for i in range(80)]
- assert len(LVIS_V1_CATEGORIES) == 1203
- cat_ids = [k["id"] for k in LVIS_V1_CATEGORIES]
- assert min(cat_ids) == 1 and max(cat_ids) == len(
- cat_ids
- ), "Category ids are not in [1, #categories], as expected"
- # Ensure that the category list is sorted by id
- lvis_categories = sorted(LVIS_V1_CATEGORIES, key=lambda x: x["id"])
- if filter_novel:
- class_names = [cls_meta['name'] for cls_meta in lvis_categories if cls_meta['frequency'] != 'r']
- else:
- class_names = [cls_meta['name'] for cls_meta in lvis_categories]
-
- # remove or replace special symbols
- class_names = [cls_n.replace("_", " ") for cls_n in class_names]
- class_names = [cls_n.replace("(", "") for cls_n in class_names]
- class_names = [cls_n.replace(")", "") for cls_n in class_names]
- return class_names
-
-def pre_tokenize(class_names):
- """
- pre-tokenize class names
- :param class_names: List, a list of class names
- :return: Tensor, containing all prompts for all classes, [#cls, #prompts, context_length]
- """
- # tokenizer
- tokenizer = SimpleTokenizer()
- sot_token = tokenizer.encoder["<|startoftext|>"]
- eot_token = tokenizer.encoder["<|endoftext|>"]
-
- # prompt engineering
- prompt_templates = get_prompt_templates()
- input_ids_all = []
- for k in range(len(class_names)):
- v = class_names[k]
- if isinstance(v, str):
- vs = [v]
- elif isinstance(v, list):
- vs = v
- t1s = []
- for v in vs:
- for pt in prompt_templates:
- t1s.append(prompt_engineering(v, template=pt))
- input_ids = []
- for t1 in t1s:
- this_input_ids = convert_example_to_features_bpe(t1, tokenizer, sot_token, eot_token)
- input_ids.append(torch.tensor(this_input_ids, dtype=torch.long))
-
- input_ids_all.append(torch.stack(input_ids, 0))
-
- input_ids_all_classes = torch.stack(input_ids_all, 0)
- return input_ids_all_classes
-
-
-if __name__ == "__main__":
-    flatten_input_ids = pre_tokenize(get_cls_names())  # pre_tokenize requires a list of class names
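Taken together, the utilities above turn a list of class names into a `(#classes, #templates, 77)` tensor of padded token ids. A hedged usage sketch follows, assuming the BPE vocab file ships next to this module; the class names are made up.

```python
# Usage sketch of the prompt pipeline above; class names are illustrative.
class_names = ["person", "traffic light", "dog"]
input_ids = pre_tokenize(class_names)
print(input_ids.shape)   # (3, len(get_prompt_templates()), 77)

# A single templated prompt round-trips through the BPE tokenizer:
tok = SimpleTokenizer()
ids = tok.encode(prompt_engineering("traffic light", template="a photo of a {}."))
print(tok.decode(ids))   # roughly recovers "a photo of a traffic light ."
```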
diff --git a/spaces/CVPR/regionclip-demo/detectron2/data/transforms/transform.py b/spaces/CVPR/regionclip-demo/detectron2/data/transforms/transform.py
deleted file mode 100644
index de44b991d7ab0d920ffb769e1402f08e358d37f7..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/data/transforms/transform.py
+++ /dev/null
@@ -1,351 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-"""
-See "Data Augmentation" tutorial for an overview of the system:
-https://detectron2.readthedocs.io/tutorials/augmentation.html
-"""
-
-import numpy as np
-import torch
-import torch.nn.functional as F
-from fvcore.transforms.transform import (
- CropTransform,
- HFlipTransform,
- NoOpTransform,
- Transform,
- TransformList,
-)
-from PIL import Image
-
-try:
- import cv2 # noqa
-except ImportError:
- # OpenCV is an optional dependency at the moment
- pass
-
-__all__ = [
- "ExtentTransform",
- "ResizeTransform",
- "RotationTransform",
- "ColorTransform",
- "PILColorTransform",
-]
-
-
-class ExtentTransform(Transform):
- """
- Extracts a subregion from the source image and scales it to the output size.
-
- The fill color is used to map pixels from the source rect that fall outside
- the source image.
-
- See: https://pillow.readthedocs.io/en/latest/PIL.html#PIL.ImageTransform.ExtentTransform
- """
-
- def __init__(self, src_rect, output_size, interp=Image.LINEAR, fill=0):
- """
- Args:
- src_rect (x0, y0, x1, y1): src coordinates
- output_size (h, w): dst image size
- interp: PIL interpolation methods
- fill: Fill color used when src_rect extends outside image
- """
- super().__init__()
- self._set_attributes(locals())
-
- def apply_image(self, img, interp=None):
- h, w = self.output_size
- if len(img.shape) > 2 and img.shape[2] == 1:
- pil_image = Image.fromarray(img[:, :, 0], mode="L")
- else:
- pil_image = Image.fromarray(img)
- pil_image = pil_image.transform(
- size=(w, h),
- method=Image.EXTENT,
- data=self.src_rect,
- resample=interp if interp else self.interp,
- fill=self.fill,
- )
- ret = np.asarray(pil_image)
- if len(img.shape) > 2 and img.shape[2] == 1:
- ret = np.expand_dims(ret, -1)
- return ret
-
- def apply_coords(self, coords):
- # Transform image center from source coordinates into output coordinates
- # and then map the new origin to the corner of the output image.
- h, w = self.output_size
- x0, y0, x1, y1 = self.src_rect
- new_coords = coords.astype(np.float32)
- new_coords[:, 0] -= 0.5 * (x0 + x1)
- new_coords[:, 1] -= 0.5 * (y0 + y1)
- new_coords[:, 0] *= w / (x1 - x0)
- new_coords[:, 1] *= h / (y1 - y0)
- new_coords[:, 0] += 0.5 * w
- new_coords[:, 1] += 0.5 * h
- return new_coords
-
- def apply_segmentation(self, segmentation):
- segmentation = self.apply_image(segmentation, interp=Image.NEAREST)
- return segmentation
-
-
-class ResizeTransform(Transform):
- """
- Resize the image to a target size.
- """
-
- def __init__(self, h, w, new_h, new_w, interp=None):
- """
- Args:
- h, w (int): original image size
- new_h, new_w (int): new image size
- interp: PIL interpolation methods, defaults to bilinear.
- """
- # TODO decide on PIL vs opencv
- super().__init__()
- if interp is None:
- interp = Image.BILINEAR
- self._set_attributes(locals())
-
- def apply_image(self, img, interp=None):
- assert img.shape[:2] == (self.h, self.w)
- assert len(img.shape) <= 4
- interp_method = interp if interp is not None else self.interp
-
- if img.dtype == np.uint8:
- if len(img.shape) > 2 and img.shape[2] == 1:
- pil_image = Image.fromarray(img[:, :, 0], mode="L")
- else:
- pil_image = Image.fromarray(img)
- pil_image = pil_image.resize((self.new_w, self.new_h), interp_method)
- ret = np.asarray(pil_image)
- if len(img.shape) > 2 and img.shape[2] == 1:
- ret = np.expand_dims(ret, -1)
- else:
- # PIL only supports uint8
- if any(x < 0 for x in img.strides):
- img = np.ascontiguousarray(img)
- img = torch.from_numpy(img)
- shape = list(img.shape)
- shape_4d = shape[:2] + [1] * (4 - len(shape)) + shape[2:]
- img = img.view(shape_4d).permute(2, 3, 0, 1) # hw(c) -> nchw
- _PIL_RESIZE_TO_INTERPOLATE_MODE = {
- Image.NEAREST: "nearest",
- Image.BILINEAR: "bilinear",
- Image.BICUBIC: "bicubic",
- }
- mode = _PIL_RESIZE_TO_INTERPOLATE_MODE[interp_method]
- align_corners = None if mode == "nearest" else False
- img = F.interpolate(
- img, (self.new_h, self.new_w), mode=mode, align_corners=align_corners
- )
- shape[:2] = (self.new_h, self.new_w)
- ret = img.permute(2, 3, 0, 1).view(shape).numpy() # nchw -> hw(c)
-
- return ret
-
- def apply_coords(self, coords):
- coords[:, 0] = coords[:, 0] * (self.new_w * 1.0 / self.w)
- coords[:, 1] = coords[:, 1] * (self.new_h * 1.0 / self.h)
- return coords
-
- def apply_segmentation(self, segmentation):
- segmentation = self.apply_image(segmentation, interp=Image.NEAREST)
- return segmentation
-
- def inverse(self):
- return ResizeTransform(self.new_h, self.new_w, self.h, self.w, self.interp)
-
-
-class RotationTransform(Transform):
- """
-    Rotates the image by the given number of degrees counter-clockwise
-    around its center and returns the transformed copy.
- """
-
- def __init__(self, h, w, angle, expand=True, center=None, interp=None):
- """
- Args:
- h, w (int): original image size
- angle (float): degrees for rotation
- expand (bool): choose if the image should be resized to fit the whole
- rotated image (default), or simply cropped
- center (tuple (width, height)): coordinates of the rotation center
- if left to None, the center will be fit to the center of each image
- center has no effect if expand=True because it only affects shifting
- interp: cv2 interpolation method, default cv2.INTER_LINEAR
- """
- super().__init__()
- image_center = np.array((w / 2, h / 2))
- if center is None:
- center = image_center
- if interp is None:
- interp = cv2.INTER_LINEAR
- abs_cos, abs_sin = (abs(np.cos(np.deg2rad(angle))), abs(np.sin(np.deg2rad(angle))))
- if expand:
- # find the new width and height bounds
- bound_w, bound_h = np.rint(
- [h * abs_sin + w * abs_cos, h * abs_cos + w * abs_sin]
- ).astype(int)
- else:
- bound_w, bound_h = w, h
-
- self._set_attributes(locals())
- self.rm_coords = self.create_rotation_matrix()
- # Needed because of this problem https://github.com/opencv/opencv/issues/11784
- self.rm_image = self.create_rotation_matrix(offset=-0.5)
-
- def apply_image(self, img, interp=None):
- """
- img should be a numpy array, formatted as Height * Width * Nchannels
- """
- if len(img) == 0 or self.angle % 360 == 0:
- return img
- assert img.shape[:2] == (self.h, self.w)
- interp = interp if interp is not None else self.interp
- return cv2.warpAffine(img, self.rm_image, (self.bound_w, self.bound_h), flags=interp)
-
- def apply_coords(self, coords):
- """
- coords should be a N * 2 array-like, containing N couples of (x, y) points
- """
- coords = np.asarray(coords, dtype=float)
- if len(coords) == 0 or self.angle % 360 == 0:
- return coords
- return cv2.transform(coords[:, np.newaxis, :], self.rm_coords)[:, 0, :]
-
- def apply_segmentation(self, segmentation):
- segmentation = self.apply_image(segmentation, interp=cv2.INTER_NEAREST)
- return segmentation
-
- def create_rotation_matrix(self, offset=0):
- center = (self.center[0] + offset, self.center[1] + offset)
- rm = cv2.getRotationMatrix2D(tuple(center), self.angle, 1)
- if self.expand:
- # Find the coordinates of the center of rotation in the new image
- # The only point for which we know the future coordinates is the center of the image
- rot_im_center = cv2.transform(self.image_center[None, None, :] + offset, rm)[0, 0, :]
- new_center = np.array([self.bound_w / 2, self.bound_h / 2]) + offset - rot_im_center
- # shift the rotation center to the new coordinates
- rm[:, 2] += new_center
- return rm
-
- def inverse(self):
- """
- The inverse is to rotate it back with expand, and crop to get the original shape.
- """
- if not self.expand: # Not possible to inverse if a part of the image is lost
- raise NotImplementedError()
- rotation = RotationTransform(
- self.bound_h, self.bound_w, -self.angle, True, None, self.interp
- )
- crop = CropTransform(
- (rotation.bound_w - self.w) // 2, (rotation.bound_h - self.h) // 2, self.w, self.h
- )
- return TransformList([rotation, crop])
-
-
-class ColorTransform(Transform):
- """
- Generic wrapper for any photometric transforms.
- These transformations should only affect the color space and
- not the coordinate space of the image (e.g. annotation
- coordinates such as bounding boxes should not be changed)
- """
-
- def __init__(self, op):
- """
- Args:
- op (Callable): operation to be applied to the image,
- which takes in an ndarray and returns an ndarray.
- """
- if not callable(op):
- raise ValueError("op parameter should be callable")
- super().__init__()
- self._set_attributes(locals())
-
- def apply_image(self, img):
- return self.op(img)
-
- def apply_coords(self, coords):
- return coords
-
- def inverse(self):
- return NoOpTransform()
-
- def apply_segmentation(self, segmentation):
- return segmentation
-
-
-class PILColorTransform(ColorTransform):
- """
- Generic wrapper for PIL Photometric image transforms,
- which affect the color space and not the coordinate
- space of the image
- """
-
- def __init__(self, op):
- """
- Args:
- op (Callable): operation to be applied to the image,
- which takes in a PIL Image and returns a transformed
- PIL Image.
- For reference on possible operations see:
- - https://pillow.readthedocs.io/en/stable/
- """
- if not callable(op):
- raise ValueError("op parameter should be callable")
- super().__init__(op)
-
- def apply_image(self, img):
- img = Image.fromarray(img)
- return np.asarray(super().apply_image(img))
-
-
-def HFlip_rotated_box(transform, rotated_boxes):
- """
- Apply the horizontal flip transform on rotated boxes.
-
- Args:
- rotated_boxes (ndarray): Nx5 floating point array of
- (x_center, y_center, width, height, angle_degrees) format
- in absolute coordinates.
- """
- # Transform x_center
- rotated_boxes[:, 0] = transform.width - rotated_boxes[:, 0]
- # Transform angle
- rotated_boxes[:, 4] = -rotated_boxes[:, 4]
- return rotated_boxes
-
-
-def Resize_rotated_box(transform, rotated_boxes):
- """
- Apply the resizing transform on rotated boxes. For details of how these (approximation)
- formulas are derived, please refer to :meth:`RotatedBoxes.scale`.
-
- Args:
- rotated_boxes (ndarray): Nx5 floating point array of
- (x_center, y_center, width, height, angle_degrees) format
- in absolute coordinates.
- """
- scale_factor_x = transform.new_w * 1.0 / transform.w
- scale_factor_y = transform.new_h * 1.0 / transform.h
- rotated_boxes[:, 0] *= scale_factor_x
- rotated_boxes[:, 1] *= scale_factor_y
- theta = rotated_boxes[:, 4] * np.pi / 180.0
- c = np.cos(theta)
- s = np.sin(theta)
- rotated_boxes[:, 2] *= np.sqrt(np.square(scale_factor_x * c) + np.square(scale_factor_y * s))
- rotated_boxes[:, 3] *= np.sqrt(np.square(scale_factor_x * s) + np.square(scale_factor_y * c))
- rotated_boxes[:, 4] = np.arctan2(scale_factor_x * s, scale_factor_y * c) * 180 / np.pi
-
- return rotated_boxes
-
-
-HFlipTransform.register_type("rotated_box", HFlip_rotated_box)
-ResizeTransform.register_type("rotated_box", Resize_rotated_box)
-
-# not necessary any more with latest fvcore
-NoOpTransform.register_type("rotated_box", lambda t, x: x)
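As a usage note, the rotated-box handlers registered above make `apply_rotated_box` available on the corresponding transforms alongside the usual image and coordinate methods. Below is a hedged sketch with illustrative sizes and boxes, assuming detectron2/fvcore are importable.

```python
# Hedged usage sketch of ResizeTransform with the "rotated_box" handler registered above.
import numpy as np

t = ResizeTransform(h=480, w=640, new_h=240, new_w=480)             # 640x480 -> 480x240
img = np.zeros((480, 640, 3), dtype=np.uint8)
resized = t.apply_image(img)                                         # shape (240, 480, 3)

pts = t.apply_coords(np.array([[320.0, 240.0]], dtype=np.float32))   # image center stays centered
rboxes = np.array([[320.0, 240.0, 100.0, 40.0, 30.0]], dtype=np.float32)
rboxes = t.apply_rotated_box(rboxes)                                  # rescales center, size and angle
```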
diff --git a/spaces/CVPR/regionclip-demo/detectron2/modeling/poolers.py b/spaces/CVPR/regionclip-demo/detectron2/modeling/poolers.py
deleted file mode 100644
index e5d72abf462ebc9c2ac9ad9dd7c6cc39eac4054c..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/modeling/poolers.py
+++ /dev/null
@@ -1,250 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import math
-from typing import List
-import torch
-from torch import nn
-from torchvision.ops import RoIPool
-
-from detectron2.layers import ROIAlign, ROIAlignRotated, cat, nonzero_tuple
-from detectron2.structures import Boxes
-
-"""
-To export ROIPooler to torchscript, in this file, variables that should be annotated with
-`Union[List[Boxes], List[RotatedBoxes]]` are only annotated with `List[Boxes]`.
-
-TODO: Correct these annotations when torchscript support `Union`.
-https://github.com/pytorch/pytorch/issues/41412
-"""
-
-__all__ = ["ROIPooler"]
-
-
-def assign_boxes_to_levels(
- box_lists: List[Boxes],
- min_level: int,
- max_level: int,
- canonical_box_size: int,
- canonical_level: int,
-):
- """
- Map each box in `box_lists` to a feature map level index and return the assignment
- vector.
-
- Args:
- box_lists (list[Boxes] | list[RotatedBoxes]): A list of N Boxes or N RotatedBoxes,
- where N is the number of images in the batch.
- min_level (int): Smallest feature map level index. The input is considered index 0,
-            the output of stage 1 is index 1, and so on.
- max_level (int): Largest feature map level index.
- canonical_box_size (int): A canonical box size in pixels (sqrt(box area)).
- canonical_level (int): The feature map level index on which a canonically-sized box
- should be placed.
-
- Returns:
- A tensor of length M, where M is the total number of boxes aggregated over all
- N batch images. The memory layout corresponds to the concatenation of boxes
- from all images. Each element is the feature map index, as an offset from
- `self.min_level`, for the corresponding box (so value i means the box is at
- `self.min_level + i`).
- """
- box_sizes = torch.sqrt(cat([boxes.area() for boxes in box_lists]))
- # Eqn.(1) in FPN paper
- level_assignments = torch.floor(
- canonical_level + torch.log2(box_sizes / canonical_box_size + 1e-8)
- )
- # clamp level to (min, max), in case the box size is too large or too small
- # for the available feature maps
- level_assignments = torch.clamp(level_assignments, min=min_level, max=max_level)
- return level_assignments.to(torch.int64) - min_level
-
-
-def _fmt_box_list(box_tensor, batch_index: int):
- repeated_index = torch.full_like(
- box_tensor[:, :1], batch_index, dtype=box_tensor.dtype, device=box_tensor.device
- )
- return cat((repeated_index, box_tensor), dim=1)
-
-
-def convert_boxes_to_pooler_format(box_lists: List[Boxes]):
- """
- Convert all boxes in `box_lists` to the low-level format used by ROI pooling ops
- (see description under Returns).
-
- Args:
- box_lists (list[Boxes] | list[RotatedBoxes]):
- A list of N Boxes or N RotatedBoxes, where N is the number of images in the batch.
-
- Returns:
- When input is list[Boxes]:
- A tensor of shape (M, 5), where M is the total number of boxes aggregated over all
- N batch images.
- The 5 columns are (batch index, x0, y0, x1, y1), where batch index
- is the index in [0, N) identifying which batch image the box with corners at
- (x0, y0, x1, y1) comes from.
- When input is list[RotatedBoxes]:
- A tensor of shape (M, 6), where M is the total number of boxes aggregated over all
- N batch images.
- The 6 columns are (batch index, x_ctr, y_ctr, width, height, angle_degrees),
- where batch index is the index in [0, N) identifying which batch image the
- rotated box (x_ctr, y_ctr, width, height, angle_degrees) comes from.
- """
- pooler_fmt_boxes = cat(
- [_fmt_box_list(box_list.tensor, i) for i, box_list in enumerate(box_lists)], dim=0
- )
-
- return pooler_fmt_boxes
-
-
-class ROIPooler(nn.Module):
- """
- Region of interest feature map pooler that supports pooling from one or more
- feature maps.
- """
-
- def __init__(
- self,
- output_size,
- scales,
- sampling_ratio,
- pooler_type,
- canonical_box_size=224,
- canonical_level=4,
- ):
- """
- Args:
- output_size (int, tuple[int] or list[int]): output size of the pooled region,
- e.g., 14 x 14. If tuple or list is given, the length must be 2.
- scales (list[float]): The scale for each low-level pooling op relative to
- the input image. For a feature map with stride s relative to the input
- image, scale is defined as 1/s. The stride must be power of 2.
- When there are multiple scales, they must form a pyramid, i.e. they must be
-                a monotonically decreasing geometric sequence with a factor of 1/2.
- sampling_ratio (int): The `sampling_ratio` parameter for the ROIAlign op.
- pooler_type (string): Name of the type of pooling operation that should be applied.
- For instance, "ROIPool" or "ROIAlignV2".
- canonical_box_size (int): A canonical box size in pixels (sqrt(box area)). The default
- is heuristically defined as 224 pixels in the FPN paper (based on ImageNet
- pre-training).
-            canonical_level (int): The feature map level index on which a canonically-sized box
- should be placed. The default is defined as level 4 (stride=16) in the FPN paper,
- i.e., a box of size 224x224 will be placed on the feature with stride=16.
- The box placement for all boxes will be determined from their sizes w.r.t
- canonical_box_size. For example, a box whose area is 4x that of a canonical box
- should be used to pool features from feature level ``canonical_level+1``.
-
- Note that the actual input feature maps given to this module may not have
- sufficiently many levels for the input boxes. If the boxes are too large or too
- small for the input feature maps, the closest level will be used.
- """
- super().__init__()
-
- if isinstance(output_size, int):
- output_size = (output_size, output_size)
- assert len(output_size) == 2
- assert isinstance(output_size[0], int) and isinstance(output_size[1], int)
- self.output_size = output_size
-
- if pooler_type == "ROIAlign":
- self.level_poolers = nn.ModuleList(
- ROIAlign(
- output_size, spatial_scale=scale, sampling_ratio=sampling_ratio, aligned=False
- )
- for scale in scales
- )
- elif pooler_type == "ROIAlignV2":
- self.level_poolers = nn.ModuleList(
- ROIAlign(
- output_size, spatial_scale=scale, sampling_ratio=sampling_ratio, aligned=True
- )
- for scale in scales
- )
- elif pooler_type == "ROIPool":
- self.level_poolers = nn.ModuleList(
- RoIPool(output_size, spatial_scale=scale) for scale in scales
- )
- elif pooler_type == "ROIAlignRotated":
- self.level_poolers = nn.ModuleList(
- ROIAlignRotated(output_size, spatial_scale=scale, sampling_ratio=sampling_ratio)
- for scale in scales
- )
- else:
- raise ValueError("Unknown pooler type: {}".format(pooler_type))
-
- # Map scale (defined as 1 / stride) to its feature map level under the
- # assumption that stride is a power of 2.
- min_level = -(math.log2(scales[0]))
- max_level = -(math.log2(scales[-1]))
- assert math.isclose(min_level, int(min_level)) and math.isclose(
- max_level, int(max_level)
- ), "Featuremap stride is not power of 2!"
- self.min_level = int(min_level)
- self.max_level = int(max_level)
- assert (
- len(scales) == self.max_level - self.min_level + 1
- ), "[ROIPooler] Sizes of input featuremaps do not form a pyramid!"
- assert 0 <= self.min_level and self.min_level <= self.max_level
- self.canonical_level = canonical_level
- assert canonical_box_size > 0
- self.canonical_box_size = canonical_box_size
-
- def forward(self, x: List[torch.Tensor], box_lists: List[Boxes]):
- """
- Args:
- x (list[Tensor]): A list of feature maps of NCHW shape, with scales matching those
- used to construct this module.
- box_lists (list[Boxes] | list[RotatedBoxes]):
- A list of N Boxes or N RotatedBoxes, where N is the number of images in the batch.
- The box coordinates are defined on the original image and
- will be scaled by the `scales` argument of :class:`ROIPooler`.
-
- Returns:
- Tensor:
- A tensor of shape (M, C, output_size, output_size) where M is the total number of
- boxes aggregated over all N batch images and C is the number of channels in `x`.
- """
- num_level_assignments = len(self.level_poolers)
-
- assert isinstance(x, list) and isinstance(
- box_lists, list
- ), "Arguments to pooler must be lists"
- assert (
- len(x) == num_level_assignments
- ), "unequal value, num_level_assignments={}, but x is list of {} Tensors".format(
- num_level_assignments, len(x)
- )
-
- assert len(box_lists) == x[0].size(
- 0
- ), "unequal value, x[0] batch dim 0 is {}, but box_list has length {}".format(
- x[0].size(0), len(box_lists)
- )
- if len(box_lists) == 0:
- return torch.zeros(
- (0, x[0].shape[1]) + self.output_size, device=x[0].device, dtype=x[0].dtype
- )
-
- pooler_fmt_boxes = convert_boxes_to_pooler_format(box_lists)
-
- if num_level_assignments == 1:
- return self.level_poolers[0](x[0], pooler_fmt_boxes)
-
- level_assignments = assign_boxes_to_levels(
- box_lists, self.min_level, self.max_level, self.canonical_box_size, self.canonical_level
- )
-
- num_boxes = pooler_fmt_boxes.size(0)
- num_channels = x[0].shape[1]
- output_size = self.output_size[0]
-
- dtype, device = x[0].dtype, x[0].device
- output = torch.zeros(
- (num_boxes, num_channels, output_size, output_size), dtype=dtype, device=device
- )
-
- for level, pooler in enumerate(self.level_poolers):
- inds = nonzero_tuple(level_assignments == level)[0]
- pooler_fmt_boxes_level = pooler_fmt_boxes[inds]
- # Use index_put_ instead of advance indexing, to avoid pytorch/issues/49852
- output.index_put_((inds,), pooler(x[level], pooler_fmt_boxes_level))
-
- return output
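The level assignment above is Eqn. (1) of the FPN paper, `level = floor(canonical_level + log2(sqrt(area) / canonical_box_size))`, clamped to the available levels. A quick numeric check follows; the level bounds are chosen only for illustration.

```python
# Numeric check of the FPN level-assignment rule used in assign_boxes_to_levels.
import math

def fpn_level(box_area, canonical_box_size=224, canonical_level=4, min_level=2, max_level=5):
    level = math.floor(canonical_level + math.log2(math.sqrt(box_area) / canonical_box_size + 1e-8))
    return min(max(level, min_level), max_level)

print(fpn_level(224 * 224))   # 4: the canonical box lands on the canonical level
print(fpn_level(448 * 448))   # 5: doubling the box size moves up one level
print(fpn_level(112 * 112))   # 3: halving it moves down one level
print(fpn_level(32 * 32))     # 2: very small boxes are clamped to min_level
```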
diff --git a/spaces/CVPR/unicl-zero-shot-img-recog/README.md b/spaces/CVPR/unicl-zero-shot-img-recog/README.md
deleted file mode 100644
index c1c4a5472fc06d9fe5dd84e335fb948567fc8778..0000000000000000000000000000000000000000
--- a/spaces/CVPR/unicl-zero-shot-img-recog/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Unicl Zero-Shot Image Recognition Demo
-emoji: 🏢
-colorFrom: red
-colorTo: purple
-sdk: gradio
-sdk_version: 3.0.13
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/datasets/transforms.py b/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/datasets/transforms.py
deleted file mode 100644
index 91cf9269e4b31008a3ddca34a19b038a9b399991..0000000000000000000000000000000000000000
--- a/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/datasets/transforms.py
+++ /dev/null
@@ -1,311 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-"""
-Transforms and data augmentation for both image + bbox.
-"""
-import os
-import random
-
-import PIL
-import torch
-import torchvision.transforms as T
-import torchvision.transforms.functional as F
-
-from groundingdino.util.box_ops import box_xyxy_to_cxcywh
-from groundingdino.util.misc import interpolate
-
-
-def crop(image, target, region):
- cropped_image = F.crop(image, *region)
-
- target = target.copy()
- i, j, h, w = region
-
- # should we do something wrt the original size?
- target["size"] = torch.tensor([h, w])
-
- fields = ["labels", "area", "iscrowd", "positive_map"]
-
- if "boxes" in target:
- boxes = target["boxes"]
- max_size = torch.as_tensor([w, h], dtype=torch.float32)
- cropped_boxes = boxes - torch.as_tensor([j, i, j, i])
- cropped_boxes = torch.min(cropped_boxes.reshape(-1, 2, 2), max_size)
- cropped_boxes = cropped_boxes.clamp(min=0)
- area = (cropped_boxes[:, 1, :] - cropped_boxes[:, 0, :]).prod(dim=1)
- target["boxes"] = cropped_boxes.reshape(-1, 4)
- target["area"] = area
- fields.append("boxes")
-
- if "masks" in target:
- # FIXME should we update the area here if there are no boxes?
- target["masks"] = target["masks"][:, i : i + h, j : j + w]
- fields.append("masks")
-
- # remove elements for which the boxes or masks that have zero area
- if "boxes" in target or "masks" in target:
- # favor boxes selection when defining which elements to keep
- # this is compatible with previous implementation
- if "boxes" in target:
- cropped_boxes = target["boxes"].reshape(-1, 2, 2)
- keep = torch.all(cropped_boxes[:, 1, :] > cropped_boxes[:, 0, :], dim=1)
- else:
- keep = target["masks"].flatten(1).any(1)
-
- for field in fields:
- if field in target:
- target[field] = target[field][keep]
-
- if os.environ.get("IPDB_SHILONG_DEBUG", None) == "INFO":
- # for debug and visualization only.
- if "strings_positive" in target:
- target["strings_positive"] = [
- _i for _i, _j in zip(target["strings_positive"], keep) if _j
- ]
-
- return cropped_image, target
-
-
-def hflip(image, target):
- flipped_image = F.hflip(image)
-
- w, h = image.size
-
- target = target.copy()
- if "boxes" in target:
- boxes = target["boxes"]
- boxes = boxes[:, [2, 1, 0, 3]] * torch.as_tensor([-1, 1, -1, 1]) + torch.as_tensor(
- [w, 0, w, 0]
- )
- target["boxes"] = boxes
-
- if "masks" in target:
- target["masks"] = target["masks"].flip(-1)
-
- return flipped_image, target
-
-
-def resize(image, target, size, max_size=None):
- # size can be min_size (scalar) or (w, h) tuple
-
- def get_size_with_aspect_ratio(image_size, size, max_size=None):
- w, h = image_size
- if max_size is not None:
- min_original_size = float(min((w, h)))
- max_original_size = float(max((w, h)))
- if max_original_size / min_original_size * size > max_size:
- size = int(round(max_size * min_original_size / max_original_size))
-
- if (w <= h and w == size) or (h <= w and h == size):
- return (h, w)
-
- if w < h:
- ow = size
- oh = int(size * h / w)
- else:
- oh = size
- ow = int(size * w / h)
-
- return (oh, ow)
-
- def get_size(image_size, size, max_size=None):
- if isinstance(size, (list, tuple)):
- return size[::-1]
- else:
- return get_size_with_aspect_ratio(image_size, size, max_size)
-
- size = get_size(image.size, size, max_size)
- rescaled_image = F.resize(image, size)
-
- if target is None:
- return rescaled_image, None
-
- ratios = tuple(float(s) / float(s_orig) for s, s_orig in zip(rescaled_image.size, image.size))
- ratio_width, ratio_height = ratios
-
- target = target.copy()
- if "boxes" in target:
- boxes = target["boxes"]
- scaled_boxes = boxes * torch.as_tensor(
- [ratio_width, ratio_height, ratio_width, ratio_height]
- )
- target["boxes"] = scaled_boxes
-
- if "area" in target:
- area = target["area"]
- scaled_area = area * (ratio_width * ratio_height)
- target["area"] = scaled_area
-
- h, w = size
- target["size"] = torch.tensor([h, w])
-
- if "masks" in target:
- target["masks"] = (
- interpolate(target["masks"][:, None].float(), size, mode="nearest")[:, 0] > 0.5
- )
-
- return rescaled_image, target
-
-
-def pad(image, target, padding):
- # assumes that we only pad on the bottom right corners
- padded_image = F.pad(image, (0, 0, padding[0], padding[1]))
- if target is None:
- return padded_image, None
- target = target.copy()
- # should we do something wrt the original size?
- target["size"] = torch.tensor(padded_image.size[::-1])
- if "masks" in target:
- target["masks"] = torch.nn.functional.pad(target["masks"], (0, padding[0], 0, padding[1]))
- return padded_image, target
-
-
-class ResizeDebug(object):
- def __init__(self, size):
- self.size = size
-
- def __call__(self, img, target):
- return resize(img, target, self.size)
-
-
-class RandomCrop(object):
- def __init__(self, size):
- self.size = size
-
- def __call__(self, img, target):
- region = T.RandomCrop.get_params(img, self.size)
- return crop(img, target, region)
-
-
-class RandomSizeCrop(object):
- def __init__(self, min_size: int, max_size: int, respect_boxes: bool = False):
- # respect_boxes: True to keep all boxes
-        # False to allow boxes to be filtered out by the crop
- self.min_size = min_size
- self.max_size = max_size
- self.respect_boxes = respect_boxes
-
- def __call__(self, img: PIL.Image.Image, target: dict):
- init_boxes = len(target["boxes"])
- max_patience = 10
- for i in range(max_patience):
- w = random.randint(self.min_size, min(img.width, self.max_size))
- h = random.randint(self.min_size, min(img.height, self.max_size))
- region = T.RandomCrop.get_params(img, [h, w])
- result_img, result_target = crop(img, target, region)
- if (
- not self.respect_boxes
- or len(result_target["boxes"]) == init_boxes
- or i == max_patience - 1
- ):
- return result_img, result_target
- return result_img, result_target
-
-
-class CenterCrop(object):
- def __init__(self, size):
- self.size = size
-
- def __call__(self, img, target):
- image_width, image_height = img.size
- crop_height, crop_width = self.size
- crop_top = int(round((image_height - crop_height) / 2.0))
- crop_left = int(round((image_width - crop_width) / 2.0))
- return crop(img, target, (crop_top, crop_left, crop_height, crop_width))
-
-
-class RandomHorizontalFlip(object):
- def __init__(self, p=0.5):
- self.p = p
-
- def __call__(self, img, target):
- if random.random() < self.p:
- return hflip(img, target)
- return img, target
-
-
-class RandomResize(object):
- def __init__(self, sizes, max_size=None):
- assert isinstance(sizes, (list, tuple))
- self.sizes = sizes
- self.max_size = max_size
-
- def __call__(self, img, target=None):
- size = random.choice(self.sizes)
- return resize(img, target, size, self.max_size)
-
-
-class RandomPad(object):
- def __init__(self, max_pad):
- self.max_pad = max_pad
-
- def __call__(self, img, target):
- pad_x = random.randint(0, self.max_pad)
- pad_y = random.randint(0, self.max_pad)
- return pad(img, target, (pad_x, pad_y))
-
-
-class RandomSelect(object):
- """
- Randomly selects between transforms1 and transforms2,
- with probability p for transforms1 and (1 - p) for transforms2
- """
-
- def __init__(self, transforms1, transforms2, p=0.5):
- self.transforms1 = transforms1
- self.transforms2 = transforms2
- self.p = p
-
- def __call__(self, img, target):
- if random.random() < self.p:
- return self.transforms1(img, target)
- return self.transforms2(img, target)
-
-
-class ToTensor(object):
- def __call__(self, img, target):
- return F.to_tensor(img), target
-
-
-class RandomErasing(object):
- def __init__(self, *args, **kwargs):
- self.eraser = T.RandomErasing(*args, **kwargs)
-
- def __call__(self, img, target):
- return self.eraser(img), target
-
-
-class Normalize(object):
- def __init__(self, mean, std):
- self.mean = mean
- self.std = std
-
- def __call__(self, image, target=None):
- image = F.normalize(image, mean=self.mean, std=self.std)
- if target is None:
- return image, None
- target = target.copy()
- h, w = image.shape[-2:]
- if "boxes" in target:
- boxes = target["boxes"]
- boxes = box_xyxy_to_cxcywh(boxes)
- boxes = boxes / torch.tensor([w, h, w, h], dtype=torch.float32)
- target["boxes"] = boxes
- return image, target
-
-
-class Compose(object):
- def __init__(self, transforms):
- self.transforms = transforms
-
- def __call__(self, image, target):
- for t in self.transforms:
- image, target = t(image, target)
- return image, target
-
- def __repr__(self):
- format_string = self.__class__.__name__ + "("
- for t in self.transforms:
- format_string += "\n"
- format_string += " {0}".format(t)
- format_string += "\n)"
- return format_string
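These transforms all operate on `(PIL image, target dict)` pairs, so a training pipeline is just a `Compose` over them. A hedged usage sketch with an illustrative image and target (the sizes and boxes are made up):

```python
# Hedged usage sketch of the (image, target) transform pipeline defined above.
import torch
from PIL import Image

transform = Compose([
    RandomResize([800], max_size=1333),
    ToTensor(),
    Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

img = Image.new("RGB", (640, 480))
target = {
    "boxes": torch.tensor([[100.0, 100.0, 300.0, 200.0]]),  # absolute xyxy
    "labels": torch.tensor([1]),
    "size": torch.tensor([480, 640]),
}
img_t, target_t = transform(img, target)
# img_t is a normalized CHW tensor; target_t["boxes"] is normalized cxcywh in [0, 1]
```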
diff --git a/spaces/CarlDennis/Lovelive-VITS-JPZH/transforms.py b/spaces/CarlDennis/Lovelive-VITS-JPZH/transforms.py
deleted file mode 100644
index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000
--- a/spaces/CarlDennis/Lovelive-VITS-JPZH/transforms.py
+++ /dev/null
@@ -1,193 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import numpy as np
-
-
-DEFAULT_MIN_BIN_WIDTH = 1e-3
-DEFAULT_MIN_BIN_HEIGHT = 1e-3
-DEFAULT_MIN_DERIVATIVE = 1e-3
-
-
-def piecewise_rational_quadratic_transform(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails=None,
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
-
- if tails is None:
- spline_fn = rational_quadratic_spline
- spline_kwargs = {}
- else:
- spline_fn = unconstrained_rational_quadratic_spline
- spline_kwargs = {
- 'tails': tails,
- 'tail_bound': tail_bound
- }
-
- outputs, logabsdet = spline_fn(
- inputs=inputs,
- unnormalized_widths=unnormalized_widths,
- unnormalized_heights=unnormalized_heights,
- unnormalized_derivatives=unnormalized_derivatives,
- inverse=inverse,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- **spline_kwargs
- )
- return outputs, logabsdet
-
-
-def searchsorted(bin_locations, inputs, eps=1e-6):
- bin_locations[..., -1] += eps
- return torch.sum(
- inputs[..., None] >= bin_locations,
- dim=-1
- ) - 1
-
-
-def unconstrained_rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails='linear',
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
- outside_interval_mask = ~inside_interval_mask
-
- outputs = torch.zeros_like(inputs)
- logabsdet = torch.zeros_like(inputs)
-
- if tails == 'linear':
- unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
- constant = np.log(np.exp(1 - min_derivative) - 1)
- unnormalized_derivatives[..., 0] = constant
- unnormalized_derivatives[..., -1] = constant
-
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
- logabsdet[outside_interval_mask] = 0
- else:
- raise RuntimeError('{} tails are not implemented.'.format(tails))
-
- outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline(
- inputs=inputs[inside_interval_mask],
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
- inverse=inverse,
- left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative
- )
-
- return outputs, logabsdet
-
-def rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- left=0., right=1., bottom=0., top=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- if torch.min(inputs) < left or torch.max(inputs) > right:
- raise ValueError('Input to a transform is not within its domain')
-
- num_bins = unnormalized_widths.shape[-1]
-
- if min_bin_width * num_bins > 1.0:
- raise ValueError('Minimal bin width too large for the number of bins')
- if min_bin_height * num_bins > 1.0:
- raise ValueError('Minimal bin height too large for the number of bins')
-
- widths = F.softmax(unnormalized_widths, dim=-1)
- widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
- cumwidths = torch.cumsum(widths, dim=-1)
- cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0)
- cumwidths = (right - left) * cumwidths + left
- cumwidths[..., 0] = left
- cumwidths[..., -1] = right
- widths = cumwidths[..., 1:] - cumwidths[..., :-1]
-
- derivatives = min_derivative + F.softplus(unnormalized_derivatives)
-
- heights = F.softmax(unnormalized_heights, dim=-1)
- heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
- cumheights = torch.cumsum(heights, dim=-1)
- cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0)
- cumheights = (top - bottom) * cumheights + bottom
- cumheights[..., 0] = bottom
- cumheights[..., -1] = top
- heights = cumheights[..., 1:] - cumheights[..., :-1]
-
- if inverse:
- bin_idx = searchsorted(cumheights, inputs)[..., None]
- else:
- bin_idx = searchsorted(cumwidths, inputs)[..., None]
-
- input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
- input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
-
- input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
- delta = heights / widths
- input_delta = delta.gather(-1, bin_idx)[..., 0]
-
- input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
- input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
-
- input_heights = heights.gather(-1, bin_idx)[..., 0]
-
- if inverse:
- a = (((inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta)
- + input_heights * (input_delta - input_derivatives)))
- b = (input_heights * input_derivatives
- - (inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta))
- c = - input_delta * (inputs - input_cumheights)
-
- discriminant = b.pow(2) - 4 * a * c
- assert (discriminant >= 0).all()
-
- root = (2 * c) / (-b - torch.sqrt(discriminant))
- outputs = root * input_bin_widths + input_cumwidths
-
- theta_one_minus_theta = root * (1 - root)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - root).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, -logabsdet
- else:
- theta = (inputs - input_cumwidths) / input_bin_widths
- theta_one_minus_theta = theta * (1 - theta)
-
- numerator = input_heights * (input_delta * theta.pow(2)
- + input_derivatives * theta_one_minus_theta)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- outputs = input_cumheights + numerator / denominator
-
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - theta).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, logabsdet
diff --git a/spaces/CikeyQI/meme-api/Dockerfile b/spaces/CikeyQI/meme-api/Dockerfile
deleted file mode 100644
index 9a787703454d32b47309d7a125c40c2bfec248d7..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/meme-api/Dockerfile
+++ /dev/null
@@ -1,51 +0,0 @@
-FROM python:3.10 as tmp
-
-WORKDIR /tmp
-
-ENV PATH="${PATH}:/root/.local/bin"
-
-COPY ./pyproject.toml ./poetry.lock* /tmp/
-RUN pip install poetry \
- && poetry config virtualenvs.in-project true \
- && poetry install --only main --no-interaction --no-ansi
-
-FROM python:3.10-slim as app
-
-WORKDIR /app
-
-EXPOSE 7860
-
-VOLUME /data
-
-COPY --from=tmp /tmp/.venv /app/.venv
-
-COPY ./resources/fonts/* /usr/share/fonts/meme-fonts/
-RUN apt-get update \
- && apt-get install -y --no-install-recommends locales fontconfig fonts-noto-cjk fonts-noto-color-emoji gettext \
- && localedef -i zh_CN -c -f UTF-8 -A /usr/share/locale/locale.alias zh_CN.UTF-8 \
- && fc-cache -fv \
- && apt-get purge -y --auto-remove \
- && rm -rf /var/lib/apt/lists/*
-
-ENV TZ=Asia/Shanghai \
- LC_ALL=zh_CN.UTF-8 \
- PATH="/app/.venv/bin:${PATH}" \
- VIRTUAL_ENV="/app/.venv" \
- LOAD_BUILTIN_MEMES=true \
- MEME_DIRS="[\"/data/memes\"]" \
- MEME_DISABLED_LIST="[]" \
- GIF_MAX_SIZE=10.0 \
- GIF_MAX_FRAMES=100 \
- BAIDU_TRANS_APPID="" \
- BAIDU_TRANS_APIKEY="" \
- LOG_LEVEL="INFO"
-
-COPY ./meme_generator /app/meme_generator
-
-COPY ./docker/config.toml.template /app/config.toml.template
-COPY ./docker/start.sh /app/start.sh
-RUN mkdir -p /.config
-RUN chmod -R 777 /.config
-RUN chmod +x /app/start.sh
-
-CMD ["/app/start.sh"]
diff --git a/spaces/Codecooker/rvcapi/src/rvc.py b/spaces/Codecooker/rvcapi/src/rvc.py
deleted file mode 100644
index c1b288d82f9254a043be0c4454cdfe2149e6fd0f..0000000000000000000000000000000000000000
--- a/spaces/Codecooker/rvcapi/src/rvc.py
+++ /dev/null
@@ -1,148 +0,0 @@
-from multiprocessing import cpu_count
-from pathlib import Path
-
-import torch
-from fairseq import checkpoint_utils
-from scipy.io import wavfile
-
-from infer_pack.models import (
- SynthesizerTrnMs256NSFsid,
- SynthesizerTrnMs256NSFsid_nono,
- SynthesizerTrnMs768NSFsid,
- SynthesizerTrnMs768NSFsid_nono,
-)
-from my_utils import load_audio
-from vc_infer_pipeline import VC
-
-BASE_DIR = Path(__file__).resolve().parent.parent
-
-
-class Config:
- def __init__(self, device, is_half):
- self.device = device
- self.is_half = is_half
- self.n_cpu = 0
- self.gpu_name = None
- self.gpu_mem = None
- self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config()
-
- def device_config(self) -> tuple:
- if torch.cuda.is_available():
- i_device = int(self.device.split(":")[-1])
- self.gpu_name = torch.cuda.get_device_name(i_device)
- if (
- ("16" in self.gpu_name and "V100" not in self.gpu_name.upper())
- or "P40" in self.gpu_name.upper()
- or "1060" in self.gpu_name
- or "1070" in self.gpu_name
- or "1080" in self.gpu_name
- ):
- print("16 series/10 series P40 forced single precision")
- self.is_half = False
- for config_file in ["32k.json", "40k.json", "48k.json"]:
- with open(BASE_DIR / "src" / "configs" / config_file, "r") as f:
- strr = f.read().replace("true", "false")
- with open(BASE_DIR / "src" / "configs" / config_file, "w") as f:
- f.write(strr)
- with open(BASE_DIR / "src" / "trainset_preprocess_pipeline_print.py", "r") as f:
- strr = f.read().replace("3.7", "3.0")
- with open(BASE_DIR / "src" / "trainset_preprocess_pipeline_print.py", "w") as f:
- f.write(strr)
- else:
- self.gpu_name = None
- self.gpu_mem = int(
- torch.cuda.get_device_properties(i_device).total_memory
- / 1024
- / 1024
- / 1024
- + 0.4
- )
- if self.gpu_mem <= 4:
- with open(BASE_DIR / "src" / "trainset_preprocess_pipeline_print.py", "r") as f:
- strr = f.read().replace("3.7", "3.0")
- with open(BASE_DIR / "src" / "trainset_preprocess_pipeline_print.py", "w") as f:
- f.write(strr)
- elif torch.backends.mps.is_available():
- print("No supported N-card found, use MPS for inference")
- self.device = "mps"
- else:
- print("No supported N-card found, use CPU for inference")
- self.device = "cpu"
- self.is_half = True
-
- if self.n_cpu == 0:
- self.n_cpu = cpu_count()
-
- if self.is_half:
- # 6G memory config
- x_pad = 3
- x_query = 10
- x_center = 60
- x_max = 65
- else:
- # 5G memory config
- x_pad = 1
- x_query = 6
- x_center = 38
- x_max = 41
-
- if self.gpu_mem != None and self.gpu_mem <= 4:
- x_pad = 1
- x_query = 5
- x_center = 30
- x_max = 32
-
- return x_pad, x_query, x_center, x_max
-
-
-def load_hubert(device, is_half, model_path):
- models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task([model_path], suffix='', )
- hubert = models[0]
- hubert = hubert.to(device)
-
- if is_half:
- hubert = hubert.half()
- else:
- hubert = hubert.float()
-
- hubert.eval()
- return hubert
-
-
-def get_vc(device, is_half, config, model_path):
- cpt = torch.load(model_path, map_location='cpu')
- tgt_sr = cpt["config"][-1]
- cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0]
- if_f0 = cpt.get("f0", 1)
- version = cpt.get("version", "v1")
-
- if version == "v1":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=is_half)
- else:
- net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- elif version == "v2":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=is_half)
- else:
- net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
-
- del net_g.enc_q
- print(net_g.load_state_dict(cpt["weight"], strict=False))
- net_g.eval().to(device)
-
- if is_half:
- net_g = net_g.half()
- else:
- net_g = net_g.float()
-
- vc = VC(tgt_sr, config)
- return cpt, version, net_g, tgt_sr, vc
-
-
-def rvc_infer(index_path, index_rate, input_path, output_path, pitch_change, cpt, version, net_g, filter_radius, tgt_sr, rms_mix_rate, protect, vc, hubert_model):
- audio = load_audio(input_path, 16000)
- times = [0, 0, 0]
- if_f0 = cpt.get('f0', 1)
- audio_opt = vc.pipeline(hubert_model, net_g, 0, audio, input_path, times, pitch_change, 'rmvpe', index_path, index_rate, if_f0, filter_radius, tgt_sr, 0, rms_mix_rate, version, protect, None)
- wavfile.write(output_path, tgt_sr, audio_opt)
diff --git a/spaces/Cong723/gpt-academic-public/crazy_functions/test_project/cpp/cppipc/queue.h b/spaces/Cong723/gpt-academic-public/crazy_functions/test_project/cpp/cppipc/queue.h
deleted file mode 100644
index a21f3446e06b5826af7b554c8a7d9c5d80848b62..0000000000000000000000000000000000000000
--- a/spaces/Cong723/gpt-academic-public/crazy_functions/test_project/cpp/cppipc/queue.h
+++ /dev/null
@@ -1,216 +0,0 @@
-#pragma once
-
-#include <type_traits>
-#include <new>
-#include <utility>        // [[since C++14]]: std::exchange
-#include <algorithm>
-#include <atomic>
-#include <tuple>
-#include <thread>
-#include <chrono>
-#include <string>
-#include <cassert>        // assert
-
-#include "libipc/def.h"
-#include "libipc/shm.h"
-#include "libipc/rw_lock.h"
-
-#include "libipc/utility/log.h"
-#include "libipc/platform/detail.h"
-#include "libipc/circ/elem_def.h"
-
-namespace ipc {
-namespace detail {
-
-class queue_conn {
-protected:
- circ::cc_t connected_ = 0;
- shm::handle elems_h_;
-
- template <typename Elems>
- Elems* open(char const * name) {
- if (name == nullptr || name[0] == '\0') {
- ipc::error("fail open waiter: name is empty!\n");
- return nullptr;
- }
- if (!elems_h_.acquire(name, sizeof(Elems))) {
- return nullptr;
- }
- auto elems = static_cast<Elems*>(elems_h_.get());
- if (elems == nullptr) {
- ipc::error("fail acquire elems: %s\n", name);
- return nullptr;
- }
- elems->init();
- return elems;
- }
-
- void close() {
- elems_h_.release();
- }
-
-public:
- queue_conn() = default;
- queue_conn(const queue_conn&) = delete;
- queue_conn& operator=(const queue_conn&) = delete;
-
- bool connected() const noexcept {
- return connected_ != 0;
- }
-
- circ::cc_t connected_id() const noexcept {
- return connected_;
- }
-
- template <typename Elems>
- auto connect(Elems* elems) noexcept
- /*needs 'optional' here*/
- -> std::tuple<bool, bool, decltype(std::declval<Elems>().cursor())> {
- if (elems == nullptr) return {};
- // if it's already connected, just return
- if (connected()) return {connected(), false, 0};
- connected_ = elems->connect_receiver();
- return {connected(), true, elems->cursor()};
- }
-
- template <typename Elems>
- bool disconnect(Elems* elems) noexcept {
- if (elems == nullptr) return false;
- // if it's already disconnected, just return false
- if (!connected()) return false;
- elems->disconnect_receiver(std::exchange(connected_, 0));
- return true;
- }
-};
-
-template <typename Elems>
-class queue_base : public queue_conn {
- using base_t = queue_conn;
-
-public:
- using elems_t = Elems;
- using policy_t = typename elems_t::policy_t;
-
-protected:
- elems_t * elems_ = nullptr;
- decltype(std::declval<elems_t>().cursor()) cursor_ = 0;
- bool sender_flag_ = false;
-
-public:
- using base_t::base_t;
-
- queue_base() = default;
-
- explicit queue_base(char const * name)
- : queue_base{} {
- elems_ = open<elems_t>(name);
- }
-
- explicit queue_base(elems_t * elems) noexcept
- : queue_base{} {
- assert(elems != nullptr);
- elems_ = elems;
- }
-
- /* not virtual */ ~queue_base() {
- base_t::close();
- }
-
- elems_t * elems() noexcept { return elems_; }
- elems_t const * elems() const noexcept { return elems_; }
-
- bool ready_sending() noexcept {
- if (elems_ == nullptr) return false;
- return sender_flag_ || (sender_flag_ = elems_->connect_sender());
- }
-
- void shut_sending() noexcept {
- if (elems_ == nullptr) return;
- if (!sender_flag_) return;
- elems_->disconnect_sender();
- }
-
- bool connect() noexcept {
- auto tp = base_t::connect(elems_);
- if (std::get<0>(tp) && std::get<1>(tp)) {
- cursor_ = std::get<2>(tp);
- return true;
- }
- return std::get<0>(tp);
- }
-
- bool disconnect() noexcept {
- return base_t::disconnect(elems_);
- }
-
- std::size_t conn_count() const noexcept {
- return (elems_ == nullptr) ? static_cast<std::size_t>(invalid_value) : elems_->conn_count();
- }
-
- bool valid() const noexcept {
- return elems_ != nullptr;
- }
-
- bool empty() const noexcept {
- return !valid() || (cursor_ == elems_->cursor());
- }
-
- template <typename T, typename F, typename... P>
- bool push(F&& prep, P&&... params) {
- if (elems_ == nullptr) return false;
- return elems_->push(this, [&](void* p) {
- if (prep(p)) ::new (p) T(std::forward<P>(params)...);
-In this notebook we present a prototype news sentiment detector, an exciting application of artificial intelligence to the financial domain. Our main goal is to carry out an exploratory text analysis using natural language processing (NLP) techniques to classify news items by their emotional tone.
-
-The news sentiment detector prototype uses several libraries for different tasks. In summary:
-
-os: provides functions for interacting with the operating system, such as manipulating file and directory paths.
-pathlib: offers an object-oriented interface for working with file and directory paths in a more intuitive way.
-matplotlib: used to visualize data and generate plots such as histograms and bar charts; in the prototype it renders the results of the exploratory text analysis.
-pandas: used for analysing and manipulating tabular data; in the prototype it loads and manipulates the news data.
-spacy: a powerful natural language processing (NLP) toolkit; in the prototype it supplies the Spanish stop words, common words that add no meaningful information for sentiment analysis.
-wordcloud: generates word clouds, visualizations of the most frequent words in a text; in the prototype it shows the most frequent words in the collected news items.
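For reference, a minimal sketch of the imports this list implies. The notebook's actual import cell is not part of this excerpt, and taking the Spanish stop words from spaCy's `STOP_WORDS` set (rather than from a loaded pipeline) is an assumption:

```python
import os                  # operating-system interaction
from pathlib import Path   # object-oriented file-system paths

import matplotlib.pyplot as plt
import pandas as pd
from spacy.lang.es.stop_words import STOP_WORDS as es_stopwords  # Spanish stop words
from wordcloud import WordCloud
```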
-# Dataset and project configuration
-
-# Configuration
-path = Path().cwd().parent / "Dataset"
-
-The helper functions below are used for the text analysis; the key point is that we use them to plot word clouds.
-
-# Functions used to plot the data
-def plots_world_cloud(df, title, figsize=(10, 10)):
-    """Plot a word cloud for the given series of texts."""
-    text = " ".join(df)
-    plt.figure(figsize=figsize)
-    wordcloud = WordCloud(background_color="white", stopwords=es_stopwords).generate(text)
-    plt.imshow(wordcloud, interpolation='bilinear')
-    plt.axis("off")
-    plt.title(title)
-    plt.show()
-
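A hypothetical call to the helper above, assuming the `text` column and the `target_sentiment` label column described later in the notebook; the actual invocation cell is not shown in this excerpt:

```python
# One word cloud per sentiment label of the target variable (illustrative only).
for label in df_train["target_sentiment"].dropna().unique():
    subset = df_train.loc[df_train["target_sentiment"] == label, "text"]
    plots_world_cloud(subset, title=f"Most frequent words - {label}")
```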
-Text length is crucial when selecting suitable models. The average length and its variability determine whether transformer-based approaches are required and what the computational cost will be, which in turn shapes the performance of the sentiment analysis and the text processing.
-
-# Load the datasets and use them to explore the data
-df_train = pd.read_csv(path / "train.csv")
-df_test = pd.read_csv(path / "test.csv")
-
-df_train["len"] = df_train.text.apply(len)
-df_train["tag"] = "train"
-df_test["len"] = df_test.text.apply(len)
-df_test["tag"] = "test"
-df_train = pd.concat([df_train, df_test], axis=0)
-
-The model's goal is to predict the sentiment for the variables "target", "companies" and "consumer", each of which splits into three categories: positive, negative and neutral. The target vector therefore has nine components, since each variable has three possible categories. The classification task becomes a multi-label problem, meaning that a single row in the dataset can belong to several categories at once; the model must predict the labels for each of the nine possible sentiment combinations across the three variables.
-
-According to the feature-vector diagram, the neural network is expected to produce values close to 1 to indicate that a text is associated with a given feature or sentiment category: the closer the value is to 1, the higher the probability that the text relates to that specific feature.
-
-In a multi-label classification problem, where several categories are predicted simultaneously, the values of the output vector can be read as the probability of belonging to each category, so a value close to 1 indicates a high probability that the text belongs to that particular category.
-
-The exact interpretation of these values depends on the approach and configuration of the model. Some models output probabilities directly through an output activation such as the sigmoid function, while others produce continuous scores that are converted into labels using a decision threshold.
-
-In short, the network should produce values close to 1 in the output vector to signal a strong association with a specific feature or sentiment category in this multi-label setting, and those values can be interpreted as per-category membership probabilities.
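As a sketch of the nine-output multi-label head described above (not the notebook's actual architecture), a Keras-style dense classifier with sigmoid outputs and a binary cross-entropy loss could look roughly like this; the input embedding size and layer widths are placeholders:

```python
from tensorflow import keras

NUM_LABELS = 9  # 3 variables (target, companies, consumer) x 3 sentiments

model = keras.Sequential([
    keras.layers.Input(shape=(768,)),                       # placeholder text-embedding size
    keras.layers.Dense(256, activation="relu"),
    keras.layers.Dense(NUM_LABELS, activation="sigmoid"),   # one independent probability per label
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# At inference time, a threshold (e.g. 0.5) turns the per-label probabilities into labels.
```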
-Image 3: Label frequencies for target_sentiment
-
-Image 3 above shows that the dataset is imbalanced, especially for the "target_sentiment" variable, where there are considerably more records with positive labels than with negative or neutral ones. Class imbalance can affect the performance and generalization ability of machine-learning models.
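One common mitigation, not shown in the notebook itself, is to weight each class inversely to its frequency, for example with scikit-learn; the column name follows the text above:

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

labels = df_train["target_sentiment"].to_numpy()
weights = compute_class_weight(class_weight="balanced",
                               classes=np.unique(labels),
                               y=labels)
class_weights = dict(zip(np.unique(labels), weights))
print(class_weights)  # e.g. pass as fit(..., class_weight=class_weights) in Keras
```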
-Image 4: Boxplot of text length by target_sentiment label
-
-Based on Image 4, sentence length is another important aspect of the analysis. According to the data, the maximum sentence length is 160 characters, which helps determine the maximum sequence length to use when training the models.
-
-Headlines with neutral sentiment also tend to be shorter than headlines with positive sentiment. This is relevant because sentence length can influence how sentiment is expressed and how it relates to specific features of the text.
-
-If all records of the "target_sentiment" variable are positive and no outliers are observed, this may indicate an imbalanced class distribution for that category. The imbalance must be kept in mind when building and evaluating the sentiment models, since it can hurt generalization and prediction accuracy.
-
-In short, analysing sentence length together with the characteristics of the "target_sentiment" variable gives a more complete picture of the data and supports informed decisions when developing the sentiment-analysis models.
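A minimal sketch of how the boxplot in Image 4 could be reproduced with pandas, assuming the `len` column computed earlier and a `target_sentiment` column:

```python
# Text length by label; pandas adds a "Boxplot grouped by ..." super-title, which we clear.
df_train.boxplot(column="len", by="target_sentiment", figsize=(8, 5))
plt.suptitle("")
plt.title("Text length by target_sentiment label")
plt.xlabel("target_sentiment")
plt.ylabel("characters")
plt.show()
```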
-Image 5 shows that text length ranges from a minimum of 69 characters to a maximum of 160 characters. This is relevant for choosing an appropriate sequence length for the model.
-
-Given this variability in text length, padding can be applied to standardize the size so that every text has the same length. Padding adds special tokens (such as zeros) at the start or end of the shorter texts to match the longer ones. This is especially useful when working with transformer-based models, which generally require all input sequences to have the same length.
-
-Applying padding guarantees that the model can process every text uniformly and keeps the input structure consistent. It also makes the most of language models, which are designed to capture patterns and relations in sequential data.
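A hedged example of this padding step using a Hugging Face tokenizer; the checkpoint name and `max_length` are illustrative assumptions, not values taken from the notebook:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-cased")

encoded = tokenizer(
    df_train["text"].tolist(),
    padding="max_length",   # pad every sequence to the same length
    truncation=True,
    max_length=64,          # chosen here as a plausible bound for short headlines
    return_tensors="np",
)
print(encoded["input_ids"].shape)  # (num_texts, 64)
```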
-In conclusion, the analysis of the data highlights the following points:
-
-Data imbalance: the labels of the "target_sentiment" variable are imbalanced, with many more records labelled positive than negative or neutral. This may require class-imbalance handling techniques during model training.
-
-Data size: the texts in the dataset are relatively short, with a maximum length of 160 characters, which helps determine the maximum sequence length for the sentiment-analysis models.
-
-Word analysis: an exploratory analysis can identify the most commonly used words in the text and examine the characteristics associated with each sentiment label.
-
-Padding: since text length varies, padding can be applied to standardize the size so that every text has the same length, which makes processing and analysis in the sentiment model uniform.
-Beyond the points above, there are some additional observations about the context of the news items:
-
-European context: the news is set in a European context, as can be inferred from mentions of the euro zone and various European companies. This is relevant for understanding the economic focus and outlook present in the news.
-
-Economic focus: all of the news items appear to centre on economic topics, which suggests the content mainly concerns events, trends and news related to the economy and financial markets.
-
-Temporal bias: some news items appear to date from the Trump era, which introduces a temporal bias into the data. This should be kept in mind when interpreting and generalizing the sentiment-analysis results, since economic conditions and events may have changed since then.
-
-Words tied to negative and positive sentiment: words appearing in a negative context, such as "huelga" (strike), "coronavirus" and "crisis", can point to unfavourable aspects or problematic events mentioned in the news, while words tied to positive sentiment, such as "empleo" (employment), "subida" (rise) and "recuperación" (recovery), suggest a more optimistic view and positive outlook in the content.
-
deleted file mode 100644
index c8ebe8b08c068876d8903fb5b1cc861e71e9095c..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/model/all_atom.py
+++ /dev/null
@@ -1,1141 +0,0 @@
-# Copyright 2021 DeepMind Technologies Limited
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Ops for all atom representations.
-
-Generally we employ two different representations for all atom coordinates,
-one is atom37 where each heavy atom corresponds to a given position in a 37
-dimensional array, This mapping is non amino acid specific, but each slot
-corresponds to an atom of a given name, for example slot 12 always corresponds
-to 'C delta 1', positions that are not present for a given amino acid are
-zeroed out and denoted by a mask.
-The other representation we employ is called atom14, this is a more dense way
-of representing atoms with 14 slots. Here a given slot will correspond to a
-different kind of atom depending on amino acid type, for example slot 5
-corresponds to 'N delta 2' for Aspargine, but to 'C delta 1' for Isoleucine.
-14 is chosen because it is the maximum number of heavy atoms for any standard
-amino acid.
-The order of slots can be found in 'residue_constants.residue_atoms'.
-Internally the model uses the atom14 representation because it is
-computationally more efficient.
-The internal atom14 representation is turned into the atom37 at the output of
-the network to facilitate easier conversion to existing protein datastructures.
-"""
-
-from typing import Dict, Optional
-from alphafold.common import residue_constants
-
-from alphafold.model import r3
-from alphafold.model import utils
-import jax
-import jax.numpy as jnp
-import numpy as np
-
-
-def squared_difference(x, y):
- return jnp.square(x - y)
-
-
-def get_chi_atom_indices():
- """Returns atom indices needed to compute chi angles for all residue types.
-
- Returns:
- A tensor of shape [residue_types=21, chis=4, atoms=4]. The residue types are
- in the order specified in residue_constants.restypes + unknown residue type
- at the end. For chi angles which are not defined on the residue, the
- positions indices are by default set to 0.
- """
- chi_atom_indices = []
- for residue_name in residue_constants.restypes:
- residue_name = residue_constants.restype_1to3[residue_name]
- residue_chi_angles = residue_constants.chi_angles_atoms[residue_name]
- atom_indices = []
- for chi_angle in residue_chi_angles:
- atom_indices.append(
- [residue_constants.atom_order[atom] for atom in chi_angle])
- for _ in range(4 - len(atom_indices)):
- atom_indices.append([0, 0, 0, 0]) # For chi angles not defined on the AA.
- chi_atom_indices.append(atom_indices)
-
- chi_atom_indices.append([[0, 0, 0, 0]] * 4) # For UNKNOWN residue.
-
- return jnp.asarray(chi_atom_indices)
-
-
-def atom14_to_atom37(atom14_data: jnp.ndarray, # (N, 14, ...)
- batch: Dict[str, jnp.ndarray]
- ) -> jnp.ndarray: # (N, 37, ...)
- """Convert atom14 to atom37 representation."""
- assert len(atom14_data.shape) in [2, 3]
- assert 'residx_atom37_to_atom14' in batch
- assert 'atom37_atom_exists' in batch
-
- atom37_data = utils.batched_gather(atom14_data,
- batch['residx_atom37_to_atom14'],
- batch_dims=1)
- if len(atom14_data.shape) == 2:
- atom37_data *= batch['atom37_atom_exists']
- elif len(atom14_data.shape) == 3:
- atom37_data *= batch['atom37_atom_exists'][:, :,
- None].astype(atom37_data.dtype)
- return atom37_data
-
-
-def atom37_to_atom14(
- atom37_data: jnp.ndarray, # (N, 37, ...)
- batch: Dict[str, jnp.ndarray]) -> jnp.ndarray: # (N, 14, ...)
- """Convert atom14 to atom37 representation."""
- assert len(atom37_data.shape) in [2, 3]
- assert 'residx_atom14_to_atom37' in batch
- assert 'atom14_atom_exists' in batch
-
- atom14_data = utils.batched_gather(atom37_data,
- batch['residx_atom14_to_atom37'],
- batch_dims=1)
- if len(atom37_data.shape) == 2:
- atom14_data *= batch['atom14_atom_exists'].astype(atom14_data.dtype)
- elif len(atom37_data.shape) == 3:
- atom14_data *= batch['atom14_atom_exists'][:, :,
- None].astype(atom14_data.dtype)
- return atom14_data
-
-
-def atom37_to_frames(
- aatype: jnp.ndarray, # (...)
- all_atom_positions: jnp.ndarray, # (..., 37, 3)
- all_atom_mask: jnp.ndarray, # (..., 37)
-) -> Dict[str, jnp.ndarray]:
- """Computes the frames for the up to 8 rigid groups for each residue.
-
- The rigid groups are defined by the possible torsions in a given amino acid.
- We group the atoms according to their dependence on the torsion angles into
- "rigid groups". E.g., the position of atoms in the chi2-group depend on
- chi1 and chi2, but do not depend on chi3 or chi4.
- Jumper et al. (2021) Suppl. Table 2 and corresponding text.
-
- Args:
- aatype: Amino acid type, given as array with integers.
- all_atom_positions: atom37 representation of all atom coordinates.
- all_atom_mask: atom37 representation of mask on all atom coordinates.
- Returns:
- Dictionary containing:
- * 'rigidgroups_gt_frames': 8 Frames corresponding to 'all_atom_positions'
- represented as flat 12 dimensional array.
- * 'rigidgroups_gt_exists': Mask denoting whether the atom positions for
- the given frame are available in the ground truth, e.g. if they were
- resolved in the experiment.
- * 'rigidgroups_group_exists': Mask denoting whether given group is in
- principle present for given amino acid type.
- * 'rigidgroups_group_is_ambiguous': Mask denoting whether frame is
- affected by naming ambiguity.
- * 'rigidgroups_alt_gt_frames': 8 Frames with alternative atom renaming
- corresponding to 'all_atom_positions' represented as flat
- 12 dimensional array.
- """
- # 0: 'backbone group',
- # 1: 'pre-omega-group', (empty)
- # 2: 'phi-group', (currently empty, because it defines only hydrogens)
- # 3: 'psi-group',
- # 4,5,6,7: 'chi1,2,3,4-group'
- aatype_in_shape = aatype.shape
-
- # If there is a batch axis, just flatten it away, and reshape everything
- # back at the end of the function.
- aatype = jnp.reshape(aatype, [-1])
- all_atom_positions = jnp.reshape(all_atom_positions, [-1, 37, 3])
- all_atom_mask = jnp.reshape(all_atom_mask, [-1, 37])
-
- # Create an array with the atom names.
- # shape (num_restypes, num_rigidgroups, 3_atoms): (21, 8, 3)
- restype_rigidgroup_base_atom_names = np.full([21, 8, 3], '', dtype=object)
-
- # 0: backbone frame
- restype_rigidgroup_base_atom_names[:, 0, :] = ['C', 'CA', 'N']
-
- # 3: 'psi-group'
- restype_rigidgroup_base_atom_names[:, 3, :] = ['CA', 'C', 'O']
-
- # 4,5,6,7: 'chi1,2,3,4-group'
- for restype, restype_letter in enumerate(residue_constants.restypes):
- resname = residue_constants.restype_1to3[restype_letter]
- for chi_idx in range(4):
- if residue_constants.chi_angles_mask[restype][chi_idx]:
- atom_names = residue_constants.chi_angles_atoms[resname][chi_idx]
- restype_rigidgroup_base_atom_names[
- restype, chi_idx + 4, :] = atom_names[1:]
-
- # Create mask for existing rigid groups.
- restype_rigidgroup_mask = np.zeros([21, 8], dtype=np.float32)
- restype_rigidgroup_mask[:, 0] = 1
- restype_rigidgroup_mask[:, 3] = 1
- restype_rigidgroup_mask[:20, 4:] = residue_constants.chi_angles_mask
-
- # Translate atom names into atom37 indices.
- lookuptable = residue_constants.atom_order.copy()
- lookuptable[''] = 0
- restype_rigidgroup_base_atom37_idx = np.vectorize(lambda x: lookuptable[x])(
- restype_rigidgroup_base_atom_names)
-
- # Compute the gather indices for all residues in the chain.
- # shape (N, 8, 3)
- residx_rigidgroup_base_atom37_idx = utils.batched_gather(
- restype_rigidgroup_base_atom37_idx, aatype)
-
- # Gather the base atom positions for each rigid group.
- base_atom_pos = utils.batched_gather(
- all_atom_positions,
- residx_rigidgroup_base_atom37_idx,
- batch_dims=1)
-
- # Compute the Rigids.
- gt_frames = r3.rigids_from_3_points(
- point_on_neg_x_axis=r3.vecs_from_tensor(base_atom_pos[:, :, 0, :]),
- origin=r3.vecs_from_tensor(base_atom_pos[:, :, 1, :]),
- point_on_xy_plane=r3.vecs_from_tensor(base_atom_pos[:, :, 2, :])
- )
-
- # Compute a mask whether the group exists.
- # (N, 8)
- group_exists = utils.batched_gather(restype_rigidgroup_mask, aatype)
-
- # Compute a mask whether ground truth exists for the group
- gt_atoms_exist = utils.batched_gather( # shape (N, 8, 3)
- all_atom_mask.astype(jnp.float32),
- residx_rigidgroup_base_atom37_idx,
- batch_dims=1)
- gt_exists = jnp.min(gt_atoms_exist, axis=-1) * group_exists # (N, 8)
-
- # Adapt backbone frame to old convention (mirror x-axis and z-axis).
- rots = np.tile(np.eye(3, dtype=np.float32), [8, 1, 1])
- rots[0, 0, 0] = -1
- rots[0, 2, 2] = -1
- gt_frames = r3.rigids_mul_rots(gt_frames, r3.rots_from_tensor3x3(rots))
-
- # The frames for ambiguous rigid groups are just rotated by 180 degree around
- # the x-axis. The ambiguous group is always the last chi-group.
- restype_rigidgroup_is_ambiguous = np.zeros([21, 8], dtype=np.float32)
- restype_rigidgroup_rots = np.tile(np.eye(3, dtype=np.float32), [21, 8, 1, 1])
-
- for resname, _ in residue_constants.residue_atom_renaming_swaps.items():
- restype = residue_constants.restype_order[
- residue_constants.restype_3to1[resname]]
- chi_idx = int(sum(residue_constants.chi_angles_mask[restype]) - 1)
- restype_rigidgroup_is_ambiguous[restype, chi_idx + 4] = 1
- restype_rigidgroup_rots[restype, chi_idx + 4, 1, 1] = -1
- restype_rigidgroup_rots[restype, chi_idx + 4, 2, 2] = -1
-
- # Gather the ambiguity information for each residue.
- residx_rigidgroup_is_ambiguous = utils.batched_gather(
- restype_rigidgroup_is_ambiguous, aatype)
- residx_rigidgroup_ambiguity_rot = utils.batched_gather(
- restype_rigidgroup_rots, aatype)
-
- # Create the alternative ground truth frames.
- alt_gt_frames = r3.rigids_mul_rots(
- gt_frames, r3.rots_from_tensor3x3(residx_rigidgroup_ambiguity_rot))
-
- gt_frames_flat12 = r3.rigids_to_tensor_flat12(gt_frames)
- alt_gt_frames_flat12 = r3.rigids_to_tensor_flat12(alt_gt_frames)
-
- # reshape back to original residue layout
- gt_frames_flat12 = jnp.reshape(gt_frames_flat12, aatype_in_shape + (8, 12))
- gt_exists = jnp.reshape(gt_exists, aatype_in_shape + (8,))
- group_exists = jnp.reshape(group_exists, aatype_in_shape + (8,))
- gt_frames_flat12 = jnp.reshape(gt_frames_flat12, aatype_in_shape + (8, 12))
- residx_rigidgroup_is_ambiguous = jnp.reshape(residx_rigidgroup_is_ambiguous,
- aatype_in_shape + (8,))
- alt_gt_frames_flat12 = jnp.reshape(alt_gt_frames_flat12,
- aatype_in_shape + (8, 12,))
-
- return {
- 'rigidgroups_gt_frames': gt_frames_flat12, # (..., 8, 12)
- 'rigidgroups_gt_exists': gt_exists, # (..., 8)
- 'rigidgroups_group_exists': group_exists, # (..., 8)
- 'rigidgroups_group_is_ambiguous':
- residx_rigidgroup_is_ambiguous, # (..., 8)
- 'rigidgroups_alt_gt_frames': alt_gt_frames_flat12, # (..., 8, 12)
- }
-
-
-def atom37_to_torsion_angles(
- aatype: jnp.ndarray, # (B, N)
- all_atom_pos: jnp.ndarray, # (B, N, 37, 3)
- all_atom_mask: jnp.ndarray, # (B, N, 37)
- placeholder_for_undefined=False,
-) -> Dict[str, jnp.ndarray]:
- """Computes the 7 torsion angles (in sin, cos encoding) for each residue.
-
- The 7 torsion angles are in the order
- '[pre_omega, phi, psi, chi_1, chi_2, chi_3, chi_4]',
- here pre_omega denotes the omega torsion angle between the given amino acid
- and the previous amino acid.
-
- Args:
- aatype: Amino acid type, given as array with integers.
- all_atom_pos: atom37 representation of all atom coordinates.
- all_atom_mask: atom37 representation of mask on all atom coordinates.
- placeholder_for_undefined: flag denoting whether to set masked torsion
- angles to zero.
- Returns:
- Dict containing:
- * 'torsion_angles_sin_cos': Array with shape (B, N, 7, 2) where the final
- 2 dimensions denote sin and cos respectively
- * 'alt_torsion_angles_sin_cos': same as 'torsion_angles_sin_cos', but
- with the angle shifted by pi for all chi angles affected by the naming
- ambiguities.
- * 'torsion_angles_mask': Mask for which chi angles are present.
- """
-
- # Map aatype > 20 to 'Unknown' (20).
- aatype = jnp.minimum(aatype, 20)
-
- # Compute the backbone angles.
- num_batch, num_res = aatype.shape
-
- pad = jnp.zeros([num_batch, 1, 37, 3], jnp.float32)
- prev_all_atom_pos = jnp.concatenate([pad, all_atom_pos[:, :-1, :, :]], axis=1)
-
- pad = jnp.zeros([num_batch, 1, 37], jnp.float32)
- prev_all_atom_mask = jnp.concatenate([pad, all_atom_mask[:, :-1, :]], axis=1)
-
- # For each torsion angle collect the 4 atom positions that define this angle.
- # shape (B, N, atoms=4, xyz=3)
- pre_omega_atom_pos = jnp.concatenate(
- [prev_all_atom_pos[:, :, 1:3, :], # prev CA, C
- all_atom_pos[:, :, 0:2, :] # this N, CA
- ], axis=-2)
- phi_atom_pos = jnp.concatenate(
- [prev_all_atom_pos[:, :, 2:3, :], # prev C
- all_atom_pos[:, :, 0:3, :] # this N, CA, C
- ], axis=-2)
- psi_atom_pos = jnp.concatenate(
- [all_atom_pos[:, :, 0:3, :], # this N, CA, C
- all_atom_pos[:, :, 4:5, :] # this O
- ], axis=-2)
-
- # Collect the masks from these atoms.
- # Shape [batch, num_res]
- pre_omega_mask = (
- jnp.prod(prev_all_atom_mask[:, :, 1:3], axis=-1) # prev CA, C
- * jnp.prod(all_atom_mask[:, :, 0:2], axis=-1)) # this N, CA
- phi_mask = (
- prev_all_atom_mask[:, :, 2] # prev C
- * jnp.prod(all_atom_mask[:, :, 0:3], axis=-1)) # this N, CA, C
- psi_mask = (
- jnp.prod(all_atom_mask[:, :, 0:3], axis=-1) * # this N, CA, C
- all_atom_mask[:, :, 4]) # this O
-
- # Collect the atoms for the chi-angles.
- # Compute the table of chi angle indices. Shape: [restypes, chis=4, atoms=4].
- chi_atom_indices = get_chi_atom_indices()
- # Select atoms to compute chis. Shape: [batch, num_res, chis=4, atoms=4].
- atom_indices = utils.batched_gather(
- params=chi_atom_indices, indices=aatype, axis=0, batch_dims=0)
- # Gather atom positions. Shape: [batch, num_res, chis=4, atoms=4, xyz=3].
- chis_atom_pos = utils.batched_gather(
- params=all_atom_pos, indices=atom_indices, axis=-2,
- batch_dims=2)
-
- # Copy the chi angle mask, add the UNKNOWN residue. Shape: [restypes, 4].
- chi_angles_mask = list(residue_constants.chi_angles_mask)
- chi_angles_mask.append([0.0, 0.0, 0.0, 0.0])
- chi_angles_mask = jnp.asarray(chi_angles_mask)
-
- # Compute the chi angle mask. I.e. which chis angles exist according to the
- # aatype. Shape [batch, num_res, chis=4].
- chis_mask = utils.batched_gather(params=chi_angles_mask, indices=aatype,
- axis=0, batch_dims=0)
-
- # Constrain the chis_mask to those chis, where the ground truth coordinates of
- # all defining four atoms are available.
- # Gather the chi angle atoms mask. Shape: [batch, num_res, chis=4, atoms=4].
- chi_angle_atoms_mask = utils.batched_gather(
- params=all_atom_mask, indices=atom_indices, axis=-1,
- batch_dims=2)
- # Check if all 4 chi angle atoms were set. Shape: [batch, num_res, chis=4].
- chi_angle_atoms_mask = jnp.prod(chi_angle_atoms_mask, axis=[-1])
- chis_mask = chis_mask * (chi_angle_atoms_mask).astype(jnp.float32)
-
- # Stack all torsion angle atom positions.
- # Shape (B, N, torsions=7, atoms=4, xyz=3)
- torsions_atom_pos = jnp.concatenate(
- [pre_omega_atom_pos[:, :, None, :, :],
- phi_atom_pos[:, :, None, :, :],
- psi_atom_pos[:, :, None, :, :],
- chis_atom_pos
- ], axis=2)
-
- # Stack up masks for all torsion angles.
- # shape (B, N, torsions=7)
- torsion_angles_mask = jnp.concatenate(
- [pre_omega_mask[:, :, None],
- phi_mask[:, :, None],
- psi_mask[:, :, None],
- chis_mask
- ], axis=2)
-
- # Create a frame from the first three atoms:
- # First atom: point on x-y-plane
- # Second atom: point on negative x-axis
- # Third atom: origin
- # r3.Rigids (B, N, torsions=7)
- torsion_frames = r3.rigids_from_3_points(
- point_on_neg_x_axis=r3.vecs_from_tensor(torsions_atom_pos[:, :, :, 1, :]),
- origin=r3.vecs_from_tensor(torsions_atom_pos[:, :, :, 2, :]),
- point_on_xy_plane=r3.vecs_from_tensor(torsions_atom_pos[:, :, :, 0, :]))
-
- # Compute the position of the forth atom in this frame (y and z coordinate
- # define the chi angle)
- # r3.Vecs (B, N, torsions=7)
- forth_atom_rel_pos = r3.rigids_mul_vecs(
- r3.invert_rigids(torsion_frames),
- r3.vecs_from_tensor(torsions_atom_pos[:, :, :, 3, :]))
-
- # Normalize to have the sin and cos of the torsion angle.
- # jnp.ndarray (B, N, torsions=7, sincos=2)
- torsion_angles_sin_cos = jnp.stack(
- [forth_atom_rel_pos.z, forth_atom_rel_pos.y], axis=-1)
- torsion_angles_sin_cos /= jnp.sqrt(
- jnp.sum(jnp.square(torsion_angles_sin_cos), axis=-1, keepdims=True)
- + 1e-8)
-
- # Mirror psi, because we computed it from the Oxygen-atom.
- torsion_angles_sin_cos *= jnp.asarray(
- [1., 1., -1., 1., 1., 1., 1.])[None, None, :, None]
-
- # Create alternative angles for ambiguous atom names.
- chi_is_ambiguous = utils.batched_gather(
- jnp.asarray(residue_constants.chi_pi_periodic), aatype)
- mirror_torsion_angles = jnp.concatenate(
- [jnp.ones([num_batch, num_res, 3]),
- 1.0 - 2.0 * chi_is_ambiguous], axis=-1)
- alt_torsion_angles_sin_cos = (
- torsion_angles_sin_cos * mirror_torsion_angles[:, :, :, None])
-
- if placeholder_for_undefined:
- # Add placeholder torsions in place of undefined torsion angles
- # (e.g. N-terminus pre-omega)
- placeholder_torsions = jnp.stack([
- jnp.ones(torsion_angles_sin_cos.shape[:-1]),
- jnp.zeros(torsion_angles_sin_cos.shape[:-1])
- ], axis=-1)
- torsion_angles_sin_cos = torsion_angles_sin_cos * torsion_angles_mask[
- ..., None] + placeholder_torsions * (1 - torsion_angles_mask[..., None])
- alt_torsion_angles_sin_cos = alt_torsion_angles_sin_cos * torsion_angles_mask[
- ..., None] + placeholder_torsions * (1 - torsion_angles_mask[..., None])
-
- return {
- 'torsion_angles_sin_cos': torsion_angles_sin_cos, # (B, N, 7, 2)
- 'alt_torsion_angles_sin_cos': alt_torsion_angles_sin_cos, # (B, N, 7, 2)
- 'torsion_angles_mask': torsion_angles_mask # (B, N, 7)
- }
-
-
-def torsion_angles_to_frames(
- aatype: jnp.ndarray, # (N)
- backb_to_global: r3.Rigids, # (N)
- torsion_angles_sin_cos: jnp.ndarray # (N, 7, 2)
-) -> r3.Rigids: # (N, 8)
- """Compute rigid group frames from torsion angles.
-
- Jumper et al. (2021) Suppl. Alg. 24 "computeAllAtomCoordinates" lines 2-10
- Jumper et al. (2021) Suppl. Alg. 25 "makeRotX"
-
- Args:
- aatype: aatype for each residue
- backb_to_global: Rigid transformations describing transformation from
- backbone frame to global frame.
- torsion_angles_sin_cos: sin and cosine of the 7 torsion angles
- Returns:
- Frames corresponding to all the Sidechain Rigid Transforms
- """
- assert len(aatype.shape) == 1
- assert len(backb_to_global.rot.xx.shape) == 1
- assert len(torsion_angles_sin_cos.shape) == 3
- assert torsion_angles_sin_cos.shape[1] == 7
- assert torsion_angles_sin_cos.shape[2] == 2
-
- # Gather the default frames for all rigid groups.
- # r3.Rigids with shape (N, 8)
- m = utils.batched_gather(residue_constants.restype_rigid_group_default_frame,
- aatype)
- default_frames = r3.rigids_from_tensor4x4(m)
-
- # Create the rotation matrices according to the given angles (each frame is
- # defined such that its rotation is around the x-axis).
- sin_angles = torsion_angles_sin_cos[..., 0]
- cos_angles = torsion_angles_sin_cos[..., 1]
-
- # insert zero rotation for backbone group.
- num_residues, = aatype.shape
- sin_angles = jnp.concatenate([jnp.zeros([num_residues, 1]), sin_angles],
- axis=-1)
- cos_angles = jnp.concatenate([jnp.ones([num_residues, 1]), cos_angles],
- axis=-1)
- zeros = jnp.zeros_like(sin_angles)
- ones = jnp.ones_like(sin_angles)
-
- # all_rots are r3.Rots with shape (N, 8)
- all_rots = r3.Rots(ones, zeros, zeros,
- zeros, cos_angles, -sin_angles,
- zeros, sin_angles, cos_angles)
-
- # Apply rotations to the frames.
- all_frames = r3.rigids_mul_rots(default_frames, all_rots)
-
- # chi2, chi3, and chi4 frames do not transform to the backbone frame but to
- # the previous frame. So chain them up accordingly.
- chi2_frame_to_frame = jax.tree_map(lambda x: x[:, 5], all_frames)
- chi3_frame_to_frame = jax.tree_map(lambda x: x[:, 6], all_frames)
- chi4_frame_to_frame = jax.tree_map(lambda x: x[:, 7], all_frames)
-
- chi1_frame_to_backb = jax.tree_map(lambda x: x[:, 4], all_frames)
- chi2_frame_to_backb = r3.rigids_mul_rigids(chi1_frame_to_backb,
- chi2_frame_to_frame)
- chi3_frame_to_backb = r3.rigids_mul_rigids(chi2_frame_to_backb,
- chi3_frame_to_frame)
- chi4_frame_to_backb = r3.rigids_mul_rigids(chi3_frame_to_backb,
- chi4_frame_to_frame)
-
- # Recombine them to a r3.Rigids with shape (N, 8).
- def _concat_frames(xall, x5, x6, x7):
- return jnp.concatenate(
- [xall[:, 0:5], x5[:, None], x6[:, None], x7[:, None]], axis=-1)
-
- all_frames_to_backb = jax.tree_map(
- _concat_frames,
- all_frames,
- chi2_frame_to_backb,
- chi3_frame_to_backb,
- chi4_frame_to_backb)
-
- # Create the global frames.
- # shape (N, 8)
- all_frames_to_global = r3.rigids_mul_rigids(
- jax.tree_map(lambda x: x[:, None], backb_to_global),
- all_frames_to_backb)
-
- return all_frames_to_global
-
-
-def frames_and_literature_positions_to_atom14_pos(
- aatype: jnp.ndarray, # (N)
- all_frames_to_global: r3.Rigids # (N, 8)
-) -> r3.Vecs: # (N, 14)
- """Put atom literature positions (atom14 encoding) in each rigid group.
-
- Jumper et al. (2021) Suppl. Alg. 24 "computeAllAtomCoordinates" line 11
-
- Args:
- aatype: aatype for each residue.
- all_frames_to_global: All per residue coordinate frames.
- Returns:
- Positions of all atom coordinates in global frame.
- """
-
- # Pick the appropriate transform for every atom.
- residx_to_group_idx = utils.batched_gather(
- residue_constants.restype_atom14_to_rigid_group, aatype)
- group_mask = jax.nn.one_hot(
- residx_to_group_idx, num_classes=8) # shape (N, 14, 8)
-
- # r3.Rigids with shape (N, 14)
- map_atoms_to_global = jax.tree_map(
- lambda x: jnp.sum(x[:, None, :] * group_mask, axis=-1),
- all_frames_to_global)
-
- # Gather the literature atom positions for each residue.
- # r3.Vecs with shape (N, 14)
- lit_positions = r3.vecs_from_tensor(
- utils.batched_gather(
- residue_constants.restype_atom14_rigid_group_positions, aatype))
-
- # Transform each atom from its local frame to the global frame.
- # r3.Vecs with shape (N, 14)
- pred_positions = r3.rigids_mul_vecs(map_atoms_to_global, lit_positions)
-
- # Mask out non-existing atoms.
- mask = utils.batched_gather(residue_constants.restype_atom14_mask, aatype)
- pred_positions = jax.tree_map(lambda x: x * mask, pred_positions)
-
- return pred_positions
-
-
-def extreme_ca_ca_distance_violations(
- pred_atom_positions: jnp.ndarray, # (N, 37(14), 3)
- pred_atom_mask: jnp.ndarray, # (N, 37(14))
- residue_index: jnp.ndarray, # (N)
- max_angstrom_tolerance=1.5
- ) -> jnp.ndarray:
- """Counts residues whose Ca is a large distance from its neighbour.
-
- Measures the fraction of CA-CA pairs between consecutive amino acids that are
- more than 'max_angstrom_tolerance' apart.
-
- Args:
- pred_atom_positions: Atom positions in atom37/14 representation
- pred_atom_mask: Atom mask in atom37/14 representation
- residue_index: Residue index for given amino acid, this is assumed to be
- monotonically increasing.
- max_angstrom_tolerance: Maximum distance allowed to not count as violation.
- Returns:
- Fraction of consecutive CA-CA pairs with violation.
- """
- this_ca_pos = pred_atom_positions[:-1, 1, :] # (N - 1, 3)
- this_ca_mask = pred_atom_mask[:-1, 1] # (N - 1)
- next_ca_pos = pred_atom_positions[1:, 1, :] # (N - 1, 3)
- next_ca_mask = pred_atom_mask[1:, 1] # (N - 1)
- has_no_gap_mask = ((residue_index[1:] - residue_index[:-1]) == 1.0).astype(
- jnp.float32)
- ca_ca_distance = jnp.sqrt(
- 1e-6 + jnp.sum(squared_difference(this_ca_pos, next_ca_pos), axis=-1))
- violations = (ca_ca_distance -
- residue_constants.ca_ca) > max_angstrom_tolerance
- mask = this_ca_mask * next_ca_mask * has_no_gap_mask
- return utils.mask_mean(mask=mask, value=violations)
-
-
-def between_residue_bond_loss(
- pred_atom_positions: jnp.ndarray, # (N, 37(14), 3)
- pred_atom_mask: jnp.ndarray, # (N, 37(14))
- residue_index: jnp.ndarray, # (N)
- aatype: jnp.ndarray, # (N)
- tolerance_factor_soft=12.0,
- tolerance_factor_hard=12.0
-) -> Dict[str, jnp.ndarray]:
- """Flat-bottom loss to penalize structural violations between residues.
-
- This is a loss penalizing any violation of the geometry around the peptide
- bond between consecutive amino acids. This loss corresponds to
- Jumper et al. (2021) Suppl. Sec. 1.9.11, eq 44, 45.
-
- Args:
- pred_atom_positions: Atom positions in atom37/14 representation
- pred_atom_mask: Atom mask in atom37/14 representation
- residue_index: Residue index for given amino acid, this is assumed to be
- monotonically increasing.
- aatype: Amino acid type of given residue
- tolerance_factor_soft: soft tolerance factor measured in standard deviations
- of pdb distributions
- tolerance_factor_hard: hard tolerance factor measured in standard deviations
- of pdb distributions
-
- Returns:
- Dict containing:
- * 'c_n_loss_mean': Loss for peptide bond length violations
- * 'ca_c_n_loss_mean': Loss for violations of bond angle around C spanned
- by CA, C, N
- * 'c_n_ca_loss_mean': Loss for violations of bond angle around N spanned
- by C, N, CA
- * 'per_residue_loss_sum': sum of all losses for each residue
- * 'per_residue_violation_mask': mask denoting all residues with violation
- present.
- """
- assert len(pred_atom_positions.shape) == 3
- assert len(pred_atom_mask.shape) == 2
- assert len(residue_index.shape) == 1
- assert len(aatype.shape) == 1
-
- # Get the positions of the relevant backbone atoms.
- this_ca_pos = pred_atom_positions[:-1, 1, :] # (N - 1, 3)
- this_ca_mask = pred_atom_mask[:-1, 1] # (N - 1)
- this_c_pos = pred_atom_positions[:-1, 2, :] # (N - 1, 3)
- this_c_mask = pred_atom_mask[:-1, 2] # (N - 1)
- next_n_pos = pred_atom_positions[1:, 0, :] # (N - 1, 3)
- next_n_mask = pred_atom_mask[1:, 0] # (N - 1)
- next_ca_pos = pred_atom_positions[1:, 1, :] # (N - 1, 3)
- next_ca_mask = pred_atom_mask[1:, 1] # (N - 1)
- has_no_gap_mask = ((residue_index[1:] - residue_index[:-1]) == 1.0).astype(
- jnp.float32)
-
- # Compute loss for the C--N bond.
- c_n_bond_length = jnp.sqrt(
- 1e-6 + jnp.sum(squared_difference(this_c_pos, next_n_pos), axis=-1))
-
- # The C-N bond to proline has slightly different length because of the ring.
- next_is_proline = (
- aatype[1:] == residue_constants.resname_to_idx['PRO']).astype(jnp.float32)
- gt_length = (
- (1. - next_is_proline) * residue_constants.between_res_bond_length_c_n[0]
- + next_is_proline * residue_constants.between_res_bond_length_c_n[1])
- gt_stddev = (
- (1. - next_is_proline) *
- residue_constants.between_res_bond_length_stddev_c_n[0] +
- next_is_proline * residue_constants.between_res_bond_length_stddev_c_n[1])
- c_n_bond_length_error = jnp.sqrt(1e-6 +
- jnp.square(c_n_bond_length - gt_length))
- c_n_loss_per_residue = jax.nn.relu(
- c_n_bond_length_error - tolerance_factor_soft * gt_stddev)
- mask = this_c_mask * next_n_mask * has_no_gap_mask
- c_n_loss = jnp.sum(mask * c_n_loss_per_residue) / (jnp.sum(mask) + 1e-6)
- c_n_violation_mask = mask * (
- c_n_bond_length_error > (tolerance_factor_hard * gt_stddev))
-
- # Compute loss for the angles.
- ca_c_bond_length = jnp.sqrt(1e-6 + jnp.sum(
- squared_difference(this_ca_pos, this_c_pos), axis=-1))
- n_ca_bond_length = jnp.sqrt(1e-6 + jnp.sum(
- squared_difference(next_n_pos, next_ca_pos), axis=-1))
-
- c_ca_unit_vec = (this_ca_pos - this_c_pos) / ca_c_bond_length[:, None]
- c_n_unit_vec = (next_n_pos - this_c_pos) / c_n_bond_length[:, None]
- n_ca_unit_vec = (next_ca_pos - next_n_pos) / n_ca_bond_length[:, None]
-
- ca_c_n_cos_angle = jnp.sum(c_ca_unit_vec * c_n_unit_vec, axis=-1)
- gt_angle = residue_constants.between_res_cos_angles_ca_c_n[0]
- gt_stddev = residue_constants.between_res_bond_length_stddev_c_n[0]
- ca_c_n_cos_angle_error = jnp.sqrt(
- 1e-6 + jnp.square(ca_c_n_cos_angle - gt_angle))
- ca_c_n_loss_per_residue = jax.nn.relu(
- ca_c_n_cos_angle_error - tolerance_factor_soft * gt_stddev)
- mask = this_ca_mask * this_c_mask * next_n_mask * has_no_gap_mask
- ca_c_n_loss = jnp.sum(mask * ca_c_n_loss_per_residue) / (jnp.sum(mask) + 1e-6)
- ca_c_n_violation_mask = mask * (ca_c_n_cos_angle_error >
- (tolerance_factor_hard * gt_stddev))
-
- c_n_ca_cos_angle = jnp.sum((-c_n_unit_vec) * n_ca_unit_vec, axis=-1)
- gt_angle = residue_constants.between_res_cos_angles_c_n_ca[0]
- gt_stddev = residue_constants.between_res_cos_angles_c_n_ca[1]
- c_n_ca_cos_angle_error = jnp.sqrt(
- 1e-6 + jnp.square(c_n_ca_cos_angle - gt_angle))
- c_n_ca_loss_per_residue = jax.nn.relu(
- c_n_ca_cos_angle_error - tolerance_factor_soft * gt_stddev)
- mask = this_c_mask * next_n_mask * next_ca_mask * has_no_gap_mask
- c_n_ca_loss = jnp.sum(mask * c_n_ca_loss_per_residue) / (jnp.sum(mask) + 1e-6)
- c_n_ca_violation_mask = mask * (
- c_n_ca_cos_angle_error > (tolerance_factor_hard * gt_stddev))
-
- # Compute a per residue loss (equally distribute the loss to both
- # neighbouring residues).
- per_residue_loss_sum = (c_n_loss_per_residue +
- ca_c_n_loss_per_residue +
- c_n_ca_loss_per_residue)
- per_residue_loss_sum = 0.5 * (jnp.pad(per_residue_loss_sum, [[0, 1]]) +
- jnp.pad(per_residue_loss_sum, [[1, 0]]))
-
- # Compute hard violations.
- violation_mask = jnp.max(
- jnp.stack([c_n_violation_mask,
- ca_c_n_violation_mask,
- c_n_ca_violation_mask]), axis=0)
- violation_mask = jnp.maximum(
- jnp.pad(violation_mask, [[0, 1]]),
- jnp.pad(violation_mask, [[1, 0]]))
-
- return {'c_n_loss_mean': c_n_loss, # shape ()
- 'ca_c_n_loss_mean': ca_c_n_loss, # shape ()
- 'c_n_ca_loss_mean': c_n_ca_loss, # shape ()
- 'per_residue_loss_sum': per_residue_loss_sum, # shape (N)
- 'per_residue_violation_mask': violation_mask # shape (N)
- }
-
-
-def between_residue_clash_loss(
- atom14_pred_positions: jnp.ndarray, # (N, 14, 3)
- atom14_atom_exists: jnp.ndarray, # (N, 14)
- atom14_atom_radius: jnp.ndarray, # (N, 14)
- residue_index: jnp.ndarray, # (N)
- overlap_tolerance_soft=1.5,
- overlap_tolerance_hard=1.5
-) -> Dict[str, jnp.ndarray]:
- """Loss to penalize steric clashes between residues.
-
- This is a loss penalizing any steric clashes due to non bonded atoms in
- different peptides coming too close. This loss corresponds to the part with
- different residues of
- Jumper et al. (2021) Suppl. Sec. 1.9.11, eq 46.
-
- Args:
- atom14_pred_positions: Predicted positions of atoms in
- global prediction frame
- atom14_atom_exists: Mask denoting whether atom at positions exists for given
- amino acid type
- atom14_atom_radius: Van der Waals radius for each atom.
- residue_index: Residue index for given amino acid.
- overlap_tolerance_soft: Soft tolerance factor.
- overlap_tolerance_hard: Hard tolerance factor.
-
- Returns:
- Dict containing:
- * 'mean_loss': average clash loss
- * 'per_atom_loss_sum': sum of all clash losses per atom, shape (N, 14)
- * 'per_atom_clash_mask': mask whether atom clashes with any other atom
- shape (N, 14)
- """
- assert len(atom14_pred_positions.shape) == 3
- assert len(atom14_atom_exists.shape) == 2
- assert len(atom14_atom_radius.shape) == 2
- assert len(residue_index.shape) == 1
-
- # Create the distance matrix.
- # (N, N, 14, 14)
- dists = jnp.sqrt(1e-10 + jnp.sum(
- squared_difference(
- atom14_pred_positions[:, None, :, None, :],
- atom14_pred_positions[None, :, None, :, :]),
- axis=-1))
-
- # Create the mask for valid distances.
- # shape (N, N, 14, 14)
- dists_mask = (atom14_atom_exists[:, None, :, None] *
- atom14_atom_exists[None, :, None, :])
-
- # Mask out all the duplicate entries in the lower triangular matrix.
- # Also mask out the diagonal (atom-pairs from the same residue) -- these atoms
- # are handled separately.
- dists_mask *= (
- residue_index[:, None, None, None] < residue_index[None, :, None, None])
-
- # Backbone C--N bond between subsequent residues is no clash.
- c_one_hot = jax.nn.one_hot(2, num_classes=14)
- n_one_hot = jax.nn.one_hot(0, num_classes=14)
- neighbour_mask = ((residue_index[:, None, None, None] +
- 1) == residue_index[None, :, None, None])
- c_n_bonds = neighbour_mask * c_one_hot[None, None, :,
- None] * n_one_hot[None, None, None, :]
- dists_mask *= (1. - c_n_bonds)
-
- # Disulfide bridge between two cysteines is no clash.
- cys_sg_idx = residue_constants.restype_name_to_atom14_names['CYS'].index('SG')
- cys_sg_one_hot = jax.nn.one_hot(cys_sg_idx, num_classes=14)
- disulfide_bonds = (cys_sg_one_hot[None, None, :, None] *
- cys_sg_one_hot[None, None, None, :])
- dists_mask *= (1. - disulfide_bonds)
-
- # Compute the lower bound for the allowed distances.
- # shape (N, N, 14, 14)
- dists_lower_bound = dists_mask * (atom14_atom_radius[:, None, :, None] +
- atom14_atom_radius[None, :, None, :])
-
- # Compute the error.
- # shape (N, N, 14, 14)
- dists_to_low_error = dists_mask * jax.nn.relu(
- dists_lower_bound - overlap_tolerance_soft - dists)
-
- # Compute the mean loss.
- # shape ()
- mean_loss = (jnp.sum(dists_to_low_error)
- / (1e-6 + jnp.sum(dists_mask)))
-
- # Compute the per atom loss sum.
- # shape (N, 14)
- per_atom_loss_sum = (jnp.sum(dists_to_low_error, axis=[0, 2]) +
- jnp.sum(dists_to_low_error, axis=[1, 3]))
-
- # Compute the hard clash mask.
- # shape (N, N, 14, 14)
- clash_mask = dists_mask * (
- dists < (dists_lower_bound - overlap_tolerance_hard))
-
- # Compute the per atom clash.
- # shape (N, 14)
- per_atom_clash_mask = jnp.maximum(
- jnp.max(clash_mask, axis=[0, 2]),
- jnp.max(clash_mask, axis=[1, 3]))
-
- return {'mean_loss': mean_loss, # shape ()
- 'per_atom_loss_sum': per_atom_loss_sum, # shape (N, 14)
- 'per_atom_clash_mask': per_atom_clash_mask # shape (N, 14)
- }
-
-
-def within_residue_violations(
- atom14_pred_positions: jnp.ndarray, # (N, 14, 3)
- atom14_atom_exists: jnp.ndarray, # (N, 14)
- atom14_dists_lower_bound: jnp.ndarray, # (N, 14, 14)
- atom14_dists_upper_bound: jnp.ndarray, # (N, 14, 14)
- tighten_bounds_for_loss=0.0,
-) -> Dict[str, jnp.ndarray]:
- """Loss to penalize steric clashes within residues.
-
-  This is a loss penalizing any steric violations or clashes of non-bonded atoms
-  within each residue of a peptide. It corresponds to the within-residue term of
-  Jumper et al. (2021) Suppl. Sec. 1.9.11, eq 46.
-
- Args:
- atom14_pred_positions: Predicted positions of atoms in
- global prediction frame
- atom14_atom_exists: Mask denoting whether atom at positions exists for given
- amino acid type
- atom14_dists_lower_bound: Lower bound on allowed distances.
- atom14_dists_upper_bound: Upper bound on allowed distances
- tighten_bounds_for_loss: Extra factor to tighten loss
-
- Returns:
- Dict containing:
-      * 'per_atom_loss_sum': sum of all violation losses per atom, shape (N, 14)
-      * 'per_atom_violations': mask whether atom violates a distance bound,
-          shape (N, 14)
- """
- assert len(atom14_pred_positions.shape) == 3
- assert len(atom14_atom_exists.shape) == 2
- assert len(atom14_dists_lower_bound.shape) == 3
- assert len(atom14_dists_upper_bound.shape) == 3
-
- # Compute the mask for each residue.
- # shape (N, 14, 14)
- dists_masks = (1. - jnp.eye(14, 14)[None])
- dists_masks *= (atom14_atom_exists[:, :, None] *
- atom14_atom_exists[:, None, :])
-
- # Distance matrix
- # shape (N, 14, 14)
- dists = jnp.sqrt(1e-10 + jnp.sum(
- squared_difference(
- atom14_pred_positions[:, :, None, :],
- atom14_pred_positions[:, None, :, :]),
- axis=-1))
-
- # Compute the loss.
- # shape (N, 14, 14)
- dists_to_low_error = jax.nn.relu(
- atom14_dists_lower_bound + tighten_bounds_for_loss - dists)
- dists_to_high_error = jax.nn.relu(
- dists - (atom14_dists_upper_bound - tighten_bounds_for_loss))
- loss = dists_masks * (dists_to_low_error + dists_to_high_error)
-
- # Compute the per atom loss sum.
- # shape (N, 14)
- per_atom_loss_sum = (jnp.sum(loss, axis=1) +
- jnp.sum(loss, axis=2))
-
- # Compute the violations mask.
- # shape (N, 14, 14)
- violations = dists_masks * ((dists < atom14_dists_lower_bound) |
- (dists > atom14_dists_upper_bound))
-
- # Compute the per atom violations.
- # shape (N, 14)
- per_atom_violations = jnp.maximum(
- jnp.max(violations, axis=1), jnp.max(violations, axis=2))
-
- return {'per_atom_loss_sum': per_atom_loss_sum, # shape (N, 14)
- 'per_atom_violations': per_atom_violations # shape (N, 14)
- }
-
-
-def find_optimal_renaming(
- atom14_gt_positions: jnp.ndarray, # (N, 14, 3)
- atom14_alt_gt_positions: jnp.ndarray, # (N, 14, 3)
- atom14_atom_is_ambiguous: jnp.ndarray, # (N, 14)
- atom14_gt_exists: jnp.ndarray, # (N, 14)
- atom14_pred_positions: jnp.ndarray, # (N, 14, 3)
- atom14_atom_exists: jnp.ndarray, # (N, 14)
-) -> jnp.ndarray: # (N):
- """Find optimal renaming for ground truth that maximizes LDDT.
-
- Jumper et al. (2021) Suppl. Alg. 26
- "renameSymmetricGroundTruthAtoms" lines 1-5
-
- Args:
- atom14_gt_positions: Ground truth positions in global frame of ground truth.
- atom14_alt_gt_positions: Alternate ground truth positions in global frame of
- ground truth with coordinates of ambiguous atoms swapped relative to
- 'atom14_gt_positions'.
- atom14_atom_is_ambiguous: Mask denoting whether atom is among ambiguous
- atoms, see Jumper et al. (2021) Suppl. Table 3
- atom14_gt_exists: Mask denoting whether atom at positions exists in ground
- truth.
- atom14_pred_positions: Predicted positions of atoms in
- global prediction frame
- atom14_atom_exists: Mask denoting whether atom at positions exists for given
- amino acid type
-
- Returns:
- Float array of shape [N] with 1. where atom14_alt_gt_positions is closer to
- prediction and 0. otherwise
- """
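-  # Added summary comment: for each residue, the code below compares predicted
-  # inter-atom distances against the original and the atom-swapped ground truth,
-  # restricted to pairs of an ambiguous atom in that residue with non-ambiguous
-  # atoms elsewhere, and picks whichever naming gives the smaller deviation.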
- assert len(atom14_gt_positions.shape) == 3
- assert len(atom14_alt_gt_positions.shape) == 3
- assert len(atom14_atom_is_ambiguous.shape) == 2
- assert len(atom14_gt_exists.shape) == 2
- assert len(atom14_pred_positions.shape) == 3
- assert len(atom14_atom_exists.shape) == 2
-
- # Create the pred distance matrix.
- # shape (N, N, 14, 14)
- pred_dists = jnp.sqrt(1e-10 + jnp.sum(
- squared_difference(
- atom14_pred_positions[:, None, :, None, :],
- atom14_pred_positions[None, :, None, :, :]),
- axis=-1))
-
- # Compute distances for ground truth with original and alternative names.
- # shape (N, N, 14, 14)
- gt_dists = jnp.sqrt(1e-10 + jnp.sum(
- squared_difference(
- atom14_gt_positions[:, None, :, None, :],
- atom14_gt_positions[None, :, None, :, :]),
- axis=-1))
- alt_gt_dists = jnp.sqrt(1e-10 + jnp.sum(
- squared_difference(
- atom14_alt_gt_positions[:, None, :, None, :],
- atom14_alt_gt_positions[None, :, None, :, :]),
- axis=-1))
-
- # Compute LDDT's.
- # shape (N, N, 14, 14)
- lddt = jnp.sqrt(1e-10 + squared_difference(pred_dists, gt_dists))
- alt_lddt = jnp.sqrt(1e-10 + squared_difference(pred_dists, alt_gt_dists))
-
- # Create a mask for ambiguous atoms in rows vs. non-ambiguous atoms
- # in cols.
- # shape (N ,N, 14, 14)
- mask = (atom14_gt_exists[:, None, :, None] * # rows
- atom14_atom_is_ambiguous[:, None, :, None] * # rows
- atom14_gt_exists[None, :, None, :] * # cols
- (1. - atom14_atom_is_ambiguous[None, :, None, :])) # cols
-
-  # Aggregate distances for each residue to the non-ambiguous atoms.
- # shape (N)
- per_res_lddt = jnp.sum(mask * lddt, axis=[1, 2, 3])
- alt_per_res_lddt = jnp.sum(mask * alt_lddt, axis=[1, 2, 3])
-
- # Decide for each residue, whether alternative naming is better.
- # shape (N)
- alt_naming_is_better = (alt_per_res_lddt < per_res_lddt).astype(jnp.float32)
-
- return alt_naming_is_better # shape (N)
-
-
-def frame_aligned_point_error(
- pred_frames: r3.Rigids, # shape (num_frames)
- target_frames: r3.Rigids, # shape (num_frames)
- frames_mask: jnp.ndarray, # shape (num_frames)
- pred_positions: r3.Vecs, # shape (num_positions)
- target_positions: r3.Vecs, # shape (num_positions)
- positions_mask: jnp.ndarray, # shape (num_positions)
- length_scale: float,
- l1_clamp_distance: Optional[float] = None,
- epsilon=1e-4) -> jnp.ndarray: # shape ()
- """Measure point error under different alignments.
-
- Jumper et al. (2021) Suppl. Alg. 28 "computeFAPE"
-
- Computes error between two structures with B points under A alignments derived
- from the given pairs of frames.
- Args:
- pred_frames: num_frames reference frames for 'pred_positions'.
- target_frames: num_frames reference frames for 'target_positions'.
- frames_mask: Mask for frame pairs to use.
- pred_positions: num_positions predicted positions of the structure.
- target_positions: num_positions target positions of the structure.
- positions_mask: Mask on which positions to score.
- length_scale: length scale to divide loss by.
- l1_clamp_distance: Distance cutoff on error beyond which gradients will
- be zero.
- epsilon: small value used to regularize denominator for masked average.
- Returns:
- Masked Frame Aligned Point Error.
- """
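-  # Added summary comment: every position is expressed in every frame, the
-  # pairwise position error is optionally clamped at l1_clamp_distance, divided
-  # by length_scale, and averaged over the masked (frame, position) pairs.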
- assert pred_frames.rot.xx.ndim == 1
- assert target_frames.rot.xx.ndim == 1
- assert frames_mask.ndim == 1, frames_mask.ndim
- assert pred_positions.x.ndim == 1
- assert target_positions.x.ndim == 1
- assert positions_mask.ndim == 1
-
- # Compute array of predicted positions in the predicted frames.
- # r3.Vecs (num_frames, num_positions)
- local_pred_pos = r3.rigids_mul_vecs(
- jax.tree_map(lambda r: r[:, None], r3.invert_rigids(pred_frames)),
- jax.tree_map(lambda x: x[None, :], pred_positions))
-
- # Compute array of target positions in the target frames.
- # r3.Vecs (num_frames, num_positions)
- local_target_pos = r3.rigids_mul_vecs(
- jax.tree_map(lambda r: r[:, None], r3.invert_rigids(target_frames)),
- jax.tree_map(lambda x: x[None, :], target_positions))
-
- # Compute errors between the structures.
- # jnp.ndarray (num_frames, num_positions)
- error_dist = jnp.sqrt(
- r3.vecs_squared_distance(local_pred_pos, local_target_pos)
- + epsilon)
-
- if l1_clamp_distance:
- error_dist = jnp.clip(error_dist, 0, l1_clamp_distance)
-
- normed_error = error_dist / length_scale
- normed_error *= jnp.expand_dims(frames_mask, axis=-1)
- normed_error *= jnp.expand_dims(positions_mask, axis=-2)
-
- normalization_factor = (
- jnp.sum(frames_mask, axis=-1) *
- jnp.sum(positions_mask, axis=-1))
- return (jnp.sum(normed_error, axis=(-2, -1)) /
- (epsilon + normalization_factor))
-
-
-def _make_renaming_matrices():
- """Matrices to map atoms to symmetry partners in ambiguous case."""
- # As the atom naming is ambiguous for 7 of the 20 amino acids, provide
- # alternative groundtruth coordinates where the naming is swapped
- restype_3 = [
- residue_constants.restype_1to3[res] for res in residue_constants.restypes
- ]
- restype_3 += ['UNK']
- # Matrices for renaming ambiguous atoms.
- all_matrices = {res: np.eye(14, dtype=np.float32) for res in restype_3}
- for resname, swap in residue_constants.residue_atom_renaming_swaps.items():
- correspondences = np.arange(14)
- for source_atom_swap, target_atom_swap in swap.items():
- source_index = residue_constants.restype_name_to_atom14_names[
- resname].index(source_atom_swap)
- target_index = residue_constants.restype_name_to_atom14_names[
- resname].index(target_atom_swap)
- correspondences[source_index] = target_index
- correspondences[target_index] = source_index
- renaming_matrix = np.zeros((14, 14), dtype=np.float32)
- for index, correspondence in enumerate(correspondences):
- renaming_matrix[index, correspondence] = 1.
- all_matrices[resname] = renaming_matrix.astype(np.float32)
- renaming_matrices = np.stack([all_matrices[restype] for restype in restype_3])
- return renaming_matrices
-
-
-RENAMING_MATRICES = _make_renaming_matrices()
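-# Added note: for residues without naming ambiguity the matrix above is the
-# 14x14 identity; for an ambiguous residue such as ASP the OD1/OD2 columns are
-# swapped, so right-multiplying atom14 coordinates by the matrix produces the
-# alternative ground-truth naming consumed by get_alt_atom14 below.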
-
-
-def get_alt_atom14(aatype, positions, mask):
- """Get alternative atom14 positions.
-
- Constructs renamed atom positions for ambiguous residues.
-
- Jumper et al. (2021) Suppl. Table 3 "Ambiguous atom names due to 180 degree-
- rotation-symmetry"
-
- Args:
- aatype: Amino acid at given position
- positions: Atom positions as r3.Vecs in atom14 representation, (N, 14)
- mask: Atom masks in atom14 representation, (N, 14)
- Returns:
- renamed atom positions, renamed atom mask
- """
- # pick the transformation matrices for the given residue sequence
- # shape (num_res, 14, 14)
- renaming_transform = utils.batched_gather(
- jnp.asarray(RENAMING_MATRICES), aatype)
-
- positions = jax.tree_map(lambda x: x[:, :, None], positions)
- alternative_positions = jax.tree_map(
- lambda x: jnp.sum(x, axis=1), positions * renaming_transform)
-
- # Create the mask for the alternative ground truth (differs from the
- # ground truth mask, if only one of the atoms in an ambiguous pair has a
- # ground truth position)
- alternative_mask = jnp.sum(mask[..., None] * renaming_transform, axis=1)
-
- return alternative_positions, alternative_mask
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/carafe/README.md b/spaces/Gradio-Blocks/uniformer_image_detection/configs/carafe/README.md
deleted file mode 100644
index d9ca6644f997f88723210e0caa23e1bd70759f09..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/carafe/README.md
+++ /dev/null
@@ -1,32 +0,0 @@
-# CARAFE: Content-Aware ReAssembly of FEatures
-
-## Introduction
-
-[ALGORITHM]
-
-We provide config files to reproduce the object detection & instance segmentation results in the ICCV 2019 Oral paper for [CARAFE: Content-Aware ReAssembly of FEatures](https://arxiv.org/abs/1905.02188).
-
-```
-@inproceedings{Wang_2019_ICCV,
- title = {CARAFE: Content-Aware ReAssembly of FEatures},
- author = {Wang, Jiaqi and Chen, Kai and Xu, Rui and Liu, Ziwei and Loy, Chen Change and Lin, Dahua},
- booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
- month = {October},
- year = {2019}
-}
-```
-
-## Results and Models
-
-The results on COCO 2017 val are shown in the table below.
-
-| Method | Backbone | Style | Lr schd | Test Proposal Num | Inf time (fps) | Box AP | Mask AP | Config | Download |
-|:--------------------:|:--------:|:-------:|:-------:|:-----------------:|:--------------:|:------:|:-------:|:------:|:--------:|
-| Faster R-CNN w/ CARAFE | R-50-FPN | pytorch | 1x | 1000 | 16.5 | 38.6 | 38.6 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/carafe/faster_rcnn_r50_fpn_carafe_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/carafe/faster_rcnn_r50_fpn_carafe_1x_coco/faster_rcnn_r50_fpn_carafe_1x_coco_bbox_mAP-0.386_20200504_175733-385a75b7.pth) \| [log](http://download.openmmlab.com/mmdetection/v2.0/carafe/faster_rcnn_r50_fpn_carafe_1x_coco/faster_rcnn_r50_fpn_carafe_1x_coco_20200504_175733.log.json) |
-| - | - | - | - | 2000 | | | | | |
-| Mask R-CNN w/ CARAFE | R-50-FPN | pytorch | 1x | 1000 | 14.0 | 39.3 | 35.8 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/carafe/mask_rcnn_r50_fpn_carafe_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/carafe/mask_rcnn_r50_fpn_carafe_1x_coco/mask_rcnn_r50_fpn_carafe_1x_coco_bbox_mAP-0.393__segm_mAP-0.358_20200503_135957-8687f195.pth) \| [log](http://download.openmmlab.com/mmdetection/v2.0/carafe/mask_rcnn_r50_fpn_carafe_1x_coco/mask_rcnn_r50_fpn_carafe_1x_coco_20200503_135957.log.json) |
-| - | - | - | - | 2000 | | | | | |
-
-## Implementation
-
-The CUDA implementation of CARAFE can be found at https://github.com/myownskyW7/CARAFE.
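-
-## Example usage
-
-The snippet below is an illustrative inference sketch, assuming an mmdetection v2.x installation and that the checkpoint linked in the table above has been downloaded locally; the file and image paths are placeholders.
-
-```python
-from mmdet.apis import inference_detector, init_detector
-
-# Illustrative paths; adjust to your local config checkout and downloads.
-config_file = 'configs/carafe/faster_rcnn_r50_fpn_carafe_1x_coco.py'
-checkpoint_file = 'faster_rcnn_r50_fpn_carafe_1x_coco_bbox_mAP-0.386_20200504_175733-385a75b7.pth'
-
-model = init_detector(config_file, checkpoint_file, device='cuda:0')
-result = inference_detector(model, 'demo/demo.jpg')
-model.show_result('demo/demo.jpg', result, out_file='result_carafe.jpg')
-```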
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/pisa/pisa_faster_rcnn_r50_fpn_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/pisa/pisa_faster_rcnn_r50_fpn_1x_coco.py
deleted file mode 100644
index 71e65b0b2bc72379f4db73e491f76fc767cb786b..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/pisa/pisa_faster_rcnn_r50_fpn_1x_coco.py
+++ /dev/null
@@ -1,30 +0,0 @@
-_base_ = '../faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'
-
-model = dict(
- roi_head=dict(
- type='PISARoIHead',
- bbox_head=dict(
- loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0))),
- train_cfg=dict(
- rpn_proposal=dict(
- nms_pre=2000,
- max_per_img=2000,
- nms=dict(type='nms', iou_threshold=0.7),
- min_bbox_size=0),
- rcnn=dict(
- sampler=dict(
- type='ScoreHLRSampler',
- num=512,
- pos_fraction=0.25,
- neg_pos_ub=-1,
- add_gt_as_proposals=True,
- k=0.5,
- bias=0.),
- isr=dict(k=2, bias=0),
- carl=dict(k=1, bias=0.2))),
- test_cfg=dict(
- rpn=dict(
- nms_pre=2000,
- max_per_img=2000,
- nms=dict(type='nms', iou_threshold=0.7),
- min_bbox_size=0)))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_d6_r101-d16_512x1024_40k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_d6_r101-d16_512x1024_40k_cityscapes.py
deleted file mode 100644
index aec4254c8f4ae835cdfbe785bb0c375173d1e232..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_d6_r101-d16_512x1024_40k_cityscapes.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './fcn_d6_r50-d16_512x1024_40k_cityscapes.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/HaloMaster/chinesesummary/fengshen/examples/summary/pretrain_bart_summary.sh b/spaces/HaloMaster/chinesesummary/fengshen/examples/summary/pretrain_bart_summary.sh
deleted file mode 100644
index f8a6af24f935cc563891922b8a50cd293231367b..0000000000000000000000000000000000000000
--- a/spaces/HaloMaster/chinesesummary/fengshen/examples/summary/pretrain_bart_summary.sh
+++ /dev/null
@@ -1,124 +0,0 @@
-#!/bin/bash
-#SBATCH --job-name=bart_summary
-#SBATCH --nodes=1
-#SBATCH --ntasks-per-node=4
-#SBATCH --gres=gpu:4 # number of gpus
-#SBATCH -o %x-%j.log
-
-set -x -e
-
-echo "START TIME: $(date)"
-MODEL_NAME=bart-base
-MICRO_BATCH_SIZE=16
-ROOT_DIR=/cognitive_comp/dongxiaoqun/finetune/${MODEL_NAME}
-
-ZERO_STAGE=1
-export TORCH_EXTENSIONS_DIR=/cognitive_comp/dongxiaoqun/torch_extendsions
-config_json="./ds_config.${MODEL_NAME}.json"
-
-# Deepspeed figures out GAS dynamically from dynamic GBS via set_train_batch_size()
-cat <<EOT > $config_json
-{
- "train_micro_batch_size_per_gpu": ${MICRO_BATCH_SIZE},
- "steps_per_print": 100,
- "gradient_clipping": 1.0,
- "zero_optimization": {
- "stage": $ZERO_STAGE,
- "contiguous_gradients": false,
- "overlap_comm": true,
- "reduce_scatter": true,
- "reduce_bucket_size": 50000000,
- "allgather_bucket_size": 500000000
- },
- "optimizer": {
- "type": "Adam",
- "params": {
- "lr": 1e-4,
- "betas": [
- 0.9,
- 0.95
- ],
- "eps": 1e-8,
- "weight_decay": 5e-2
- }
- },
- "scheduler": {
- "type": "WarmupLR",
- "params":{
- "warmup_min_lr": 5e-6,
- "warmup_max_lr": 1e-4
- }
- },
- "zero_allow_untested_optimizer": false,
- "fp16": {
- "enabled": true,
- "loss_scale": 0,
- "loss_scale_window": 1000,
- "hysteresis": 2,
- "min_loss_scale": 1
- },
- "activation_checkpointing": {
- "partition_activations": false,
- "contiguous_memory_optimization": false
- },
- "wall_clock_breakdown": false
-}
-EOT
-
-# export PL_DEEPSPEED_CONFIG_PATH=$config_json
-
-TRAINER_ARGS="
- --max_epochs 2 \
- --gpus 1 \
- --num_nodes 1 \
- --strategy deepspeed_stage_${ZERO_STAGE} \
- --default_root_dir $ROOT_DIR \
- --dirpath $ROOT_DIR/ckpt \
- --save_top_k 3 \
- --monitor val_loss \
- --mode min \
- --save_last \
- --every_n_train_steps 0 \
- --val_check_interval 0.1 \
-"
-
-prompt='"'
-DATA_ARGS="
- --datasets_name lcsts \
- --num_workers 8 \
- --train_batchsize $MICRO_BATCH_SIZE \
- --val_batchsize $MICRO_BATCH_SIZE \
- --test_batchsize $MICRO_BATCH_SIZE \
- --max_enc_length 128 \
- --max_dec_length 64 \
- --val_datasets_field val \
- --prompt $prompt \
-"
-
-MODEL_ARGS="
- --pretrained_model_path /cognitive_comp/gaoxinyu/pretrained_model/bart-base \
- --output_save_path $ROOT_DIR/${MODEL_NAME}_predict_lcsts.json \
- --learning_rate 1e-4 \
- --weight_decay 0.1 \
- --precision 16 \
-"
-
-SCRIPTS_PATH=seq2seq_summary.py
-
-export CMD=" \
- $SCRIPTS_PATH \
- $TRAINER_ARGS \
- $MODEL_ARGS \
- $DATA_ARGS \
- "
-
-echo $CMD
-
-#singularity exec --nv -B /cognitive_comp/ganruyi/Megatron/:/cognitive_comp/ganruyi/Megatron/,/cognitive_comp/gaoxinyu/:/cognitive_comp/gaoxinyu/ $SINGULARITY_PATH python $CMD
-
-# to debug - add echo (it exits and prints what it would have launched)
-#run_cmd="$PY_LAUNCHER $CMD"
-# srun --nodes=1 --gres=gpu:4 --ntasks-per-node=4 --cpus-per-gpu=20
-source activate
-conda activate torchnew
-srun --nodes=1 --ntasks-per-node=1 --gres=gpu:1 --cpus-per-task=30 -o ${MODEL_NAME}-%J.log --jobid=229623 bash -c 'python3 $CMD'
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/linformer/linformer_src/models/__init__.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/linformer/linformer_src/models/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/distributed_fairseq_model.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/distributed_fairseq_model.py
deleted file mode 100644
index 5eda2276404ca686be124901674ddfe36bd6dfd1..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/distributed_fairseq_model.py
+++ /dev/null
@@ -1,146 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-import os
-import signal
-import threading
-
-import torch
-import torch.nn as nn
-from torch.nn.parallel import DistributedDataParallel
-
-from fairseq.distributed import (
- DistributedTimeoutWrapper,
- LegacyDistributedDataParallel,
- ModuleProxyWrapper,
- TPUDistributedDataParallel,
-)
-
-
-logger = logging.getLogger(__name__)
-
-
-_GOSSIP_DISABLED = False
-try:
- import gossip
-except ImportError:
- _GOSSIP_DISABLED = True
-
-
-def DistributedFairseqModel(args, model, process_group, device):
- """
- Wrap a *model* to support distributed data parallel training.
-
- This is similar to the built-in DistributedDataParallel, but allows
- additional configuration of the DistributedDataParallel class to
- use, and also provides easier access to the wrapped model by
- forwarding requests for missing attributes to the wrapped model.
-
- Args:
- args (argparse.Namespace): fairseq args
- model (BaseFairseqModel): model to wrap
- process_group: the c10d process group to be used for distributed data
- parallel all-reduction.
- device: device to move model to
-        device: device to move model to
-    """
- assert isinstance(model, nn.Module)
- if args.tpu:
- wrapped_model = TPUDistributedDataParallel(
- module=model.to(device),
- process_group=process_group,
- )
- # forward missing getattr and state_dict/load_state_dict to orig model
- wrapped_model = ModuleProxyWrapper(wrapped_model)
- elif args.ddp_backend in {"c10d", "pytorch_ddp"}:
- wrapped_model = DistributedDataParallel(
- module=model.to(device),
- device_ids=[args.device_id],
- output_device=args.device_id,
- broadcast_buffers=args.broadcast_buffers,
- bucket_cap_mb=args.bucket_cap_mb,
- process_group=process_group,
- find_unused_parameters=args.find_unused_parameters,
- gradient_as_bucket_view=args.gradient_as_bucket_view,
- )
- if args.ddp_comm_hook == "fp16":
- logger.info("enable fp16 communication hook in DDP")
- try:
- from torch.distributed.algorithms.ddp_comm_hooks import (
- register_ddp_comm_hook,
- DDPCommHookType,
- )
-            except ImportError:
- logger.error(
- "Could not import from torch.distributed.algorithms.ddp_comm_hooks; you may need to update your pytorch version"
- )
- raise
-
- register_ddp_comm_hook(DDPCommHookType.FP16_COMPRESS, wrapped_model)
- # forward missing getattr and state_dict/load_state_dict to orig model
- wrapped_model = ModuleProxyWrapper(wrapped_model)
- elif args.ddp_backend in {"no_c10d", "legacy_ddp"}:
- wrapped_model = LegacyDistributedDataParallel(
- module=model.to(device),
- buffer_size=2 ** 28,
- process_group=process_group,
- )
- # forward missing getattr and state_dict/load_state_dict to orig model
- wrapped_model = ModuleProxyWrapper(wrapped_model)
- elif args.ddp_backend == "slow_mo":
- if _GOSSIP_DISABLED:
- raise ImportError(
- "Cannot find gossip library. Please install from: "
- "github.com/facebookresearch/stochastic_gradient_push"
- )
-
- # The values of slowmo_momentum below were obtained by tuning on the
- # En-De 16 dataset by training the transformer_wmt_en_de_large model
- if args.slowmo_momentum is None:
- if args.distributed_world_size <= 16:
- args.slowmo_momentum = 0.0
- elif args.distributed_world_size <= 32:
- args.slowmo_momentum = 0.2
- elif args.distributed_world_size <= 64:
- args.slowmo_momentum = 0.5
- else:
- args.slowmo_momentum = 0.6
-
- wrapped_model = gossip.GossipDataParallel(
- module=model.to(device),
- device_ids=[args.device_id],
- output_device=args.device_id,
- broadcast_buffers=args.broadcast_buffers,
- nprocs_per_node=args.nprocs_per_node,
- slowmo_momentum=args.slowmo_momentum,
- localsgd=(args.slowmo_algorithm == "LocalSGD"),
- localsgd_frequency=args.localsgd_frequency,
- )
- # forward missing getattr and state_dict/load_state_dict to orig model
- wrapped_model = ModuleProxyWrapper(wrapped_model)
- elif args.ddp_backend == "fully_sharded":
- try:
- from fairscale.nn.data_parallel import FullyShardedDataParallel as FSDP
- except ImportError:
- raise ImportError(
- "Cannot find FullyShardedDataParallel. "
- "Please install fairscale with: pip install fairscale"
- )
- assert isinstance(model, FSDP), "expected model to already be wrapped in FSDP"
- wrapped_model = model
- if args.memory_efficient_fp16:
- wrapped_model = wrapped_model.half()
- if not args.cpu_offload:
- wrapped_model = wrapped_model.to(device=device)
- else:
- raise ValueError("Unknown --ddp-backend: " + args.ddp_backend)
-
- # kill hung distributed jobs after a timeout
- if getattr(args, "heartbeat_timeout", -1) > 0:
- wrapped_model = DistributedTimeoutWrapper(
- wrapped_model, timeout=getattr(args, "heartbeat_timeout", -1)
- )
-
- return wrapped_model
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/masked_lm.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/masked_lm.py
deleted file mode 100644
index 5cb49dd77cc3514e6c1383c4286e90979f6edb34..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/masked_lm.py
+++ /dev/null
@@ -1,404 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from fairseq import utils
-from fairseq.models import (
- FairseqEncoder,
- FairseqEncoderModel,
- register_model,
- register_model_architecture,
-)
-from fairseq.modules import (
- LayerNorm,
- SinusoidalPositionalEmbedding,
- TransformerSentenceEncoder,
-)
-from fairseq.modules.transformer_sentence_encoder import init_bert_params
-from fairseq.utils import safe_hasattr
-
-
-logger = logging.getLogger(__name__)
-
-
-@register_model("masked_lm")
-class MaskedLMModel(FairseqEncoderModel):
- """
- Class for training a Masked Language Model. It also supports an
- additional sentence level prediction if the sent-loss argument is set.
- """
-
- def __init__(self, args, encoder):
- super().__init__(encoder)
- self.args = args
-
- # if specified then apply bert initialization on the model. We need
-        # to explicitly call this to make sure that the output embeddings
- # and projection layers are also correctly initialized
- if getattr(args, "apply_bert_init", False):
- self.apply(init_bert_params)
-
- @staticmethod
- def add_args(parser):
- """Add model-specific arguments to the parser."""
- # Arguments related to dropout
- parser.add_argument(
- "--dropout", type=float, metavar="D", help="dropout probability"
- )
- parser.add_argument(
- "--attention-dropout",
- type=float,
- metavar="D",
- help="dropout probability for" " attention weights",
- )
- parser.add_argument(
- "--act-dropout",
- type=float,
- metavar="D",
- help="dropout probability after" " activation in FFN",
- )
-
- # Arguments related to hidden states and self-attention
- parser.add_argument(
- "--encoder-ffn-embed-dim",
- type=int,
- metavar="N",
- help="encoder embedding dimension for FFN",
- )
- parser.add_argument(
- "--encoder-layers", type=int, metavar="N", help="num encoder layers"
- )
- parser.add_argument(
- "--encoder-attention-heads",
- type=int,
- metavar="N",
- help="num encoder attention heads",
- )
-
- # Arguments related to input and output embeddings
- parser.add_argument(
- "--encoder-embed-dim",
- type=int,
- metavar="N",
- help="encoder embedding dimension",
- )
- parser.add_argument(
- "--share-encoder-input-output-embed",
- action="store_true",
- help="share encoder input" " and output embeddings",
- )
- parser.add_argument(
- "--encoder-learned-pos",
- action="store_true",
- help="use learned positional embeddings in the encoder",
- )
- parser.add_argument(
- "--no-token-positional-embeddings",
- action="store_true",
- help="if set, disables positional embeddings" " (outside self attention)",
- )
- parser.add_argument(
- "--num-segment", type=int, metavar="N", help="num segment in the input"
- )
- parser.add_argument(
- "--max-positions", type=int, help="number of positional embeddings to learn"
- )
-
- # Arguments related to sentence level prediction
- parser.add_argument(
- "--sentence-class-num",
- type=int,
- metavar="N",
- help="number of classes for sentence task",
- )
- parser.add_argument(
- "--sent-loss",
- action="store_true",
- help="if set," " calculate sentence level predictions",
- )
-
- # Arguments related to parameter initialization
- parser.add_argument(
- "--apply-bert-init",
- action="store_true",
- help="use custom param initialization for BERT",
- )
-
- # misc params
- parser.add_argument(
- "--activation-fn",
- choices=utils.get_available_activation_fns(),
- help="activation function to use",
- )
- parser.add_argument(
- "--pooler-activation-fn",
- choices=utils.get_available_activation_fns(),
- help="Which activation function to use for pooler layer.",
- )
- parser.add_argument(
- "--encoder-normalize-before",
- action="store_true",
- help="apply layernorm before each encoder block",
- )
-
- def forward(self, src_tokens, segment_labels=None, **kwargs):
- return self.encoder(src_tokens, segment_labels=segment_labels, **kwargs)
-
- def max_positions(self):
- return self.encoder.max_positions
-
- @classmethod
- def build_model(cls, args, task):
- """Build a new model instance."""
- # make sure all arguments are present in older models
- base_architecture(args)
-
- if not safe_hasattr(args, "max_positions"):
- args.max_positions = args.tokens_per_sample
-
- logger.info(args)
-
- encoder = MaskedLMEncoder(args, task.dictionary)
- return cls(args, encoder)
-
-
-class MaskedLMEncoder(FairseqEncoder):
- """
- Encoder for Masked Language Modelling.
- """
-
- def __init__(self, args, dictionary):
- super().__init__(dictionary)
-
- self.padding_idx = dictionary.pad()
- self.vocab_size = dictionary.__len__()
- self.max_positions = args.max_positions
-
- self.sentence_encoder = TransformerSentenceEncoder(
- padding_idx=self.padding_idx,
- vocab_size=self.vocab_size,
- num_encoder_layers=args.encoder_layers,
- embedding_dim=args.encoder_embed_dim,
- ffn_embedding_dim=args.encoder_ffn_embed_dim,
- num_attention_heads=args.encoder_attention_heads,
- dropout=args.dropout,
- attention_dropout=args.attention_dropout,
- activation_dropout=args.act_dropout,
- max_seq_len=self.max_positions,
- num_segments=args.num_segment,
- use_position_embeddings=not args.no_token_positional_embeddings,
- encoder_normalize_before=args.encoder_normalize_before,
- apply_bert_init=args.apply_bert_init,
- activation_fn=args.activation_fn,
- learned_pos_embedding=args.encoder_learned_pos,
- )
-
- self.share_input_output_embed = args.share_encoder_input_output_embed
- self.embed_out = None
- self.sentence_projection_layer = None
- self.sentence_out_dim = args.sentence_class_num
- self.lm_output_learned_bias = None
-
-        # remove_head is set to True during fine-tuning
- self.load_softmax = not getattr(args, "remove_head", False)
-
- self.masked_lm_pooler = nn.Linear(
- args.encoder_embed_dim, args.encoder_embed_dim
- )
- self.pooler_activation = utils.get_activation_fn(args.pooler_activation_fn)
-
- self.lm_head_transform_weight = nn.Linear(
- args.encoder_embed_dim, args.encoder_embed_dim
- )
- self.activation_fn = utils.get_activation_fn(args.activation_fn)
- self.layer_norm = LayerNorm(args.encoder_embed_dim)
-
- self.lm_output_learned_bias = None
- if self.load_softmax:
- self.lm_output_learned_bias = nn.Parameter(torch.zeros(self.vocab_size))
-
- if not self.share_input_output_embed:
- self.embed_out = nn.Linear(
- args.encoder_embed_dim, self.vocab_size, bias=False
- )
-
- if args.sent_loss:
- self.sentence_projection_layer = nn.Linear(
- args.encoder_embed_dim, self.sentence_out_dim, bias=False
- )
-
- def forward(self, src_tokens, segment_labels=None, masked_tokens=None, **unused):
- """
- Forward pass for Masked LM encoder. This first computes the token
- embedding using the token embedding matrix, position embeddings (if
- specified) and segment embeddings (if specified).
-
- Here we assume that the sentence representation corresponds to the
- output of the classification_token (see bert_task or cross_lingual_lm
- task for more details).
- Args:
- - src_tokens: B x T matrix representing sentences
- - segment_labels: B x T matrix representing segment label for tokens
- Returns:
- - a tuple of the following:
- - logits for predictions in format B x T x C to be used in
- softmax afterwards
- - a dictionary of additional data, where 'pooled_output' contains
- the representation for classification_token and 'inner_states'
- is a list of internal model states used to compute the
-                    predictions (similar to ELMo). 'sentence_logits'
- is the prediction logit for NSP task and is only computed if
- this is specified in the input arguments.
- """
-
- inner_states, sentence_rep = self.sentence_encoder(
- src_tokens,
- segment_labels=segment_labels,
- )
-
- x = inner_states[-1].transpose(0, 1)
- # project masked tokens only
- if masked_tokens is not None:
- x = x[masked_tokens, :]
- x = self.layer_norm(self.activation_fn(self.lm_head_transform_weight(x)))
-
- pooled_output = self.pooler_activation(self.masked_lm_pooler(sentence_rep))
-
- # project back to size of vocabulary
- if self.share_input_output_embed and hasattr(
- self.sentence_encoder.embed_tokens, "weight"
- ):
- x = F.linear(x, self.sentence_encoder.embed_tokens.weight)
- elif self.embed_out is not None:
- x = self.embed_out(x)
- if self.lm_output_learned_bias is not None:
- x = x + self.lm_output_learned_bias
- sentence_logits = None
- if self.sentence_projection_layer:
- sentence_logits = self.sentence_projection_layer(pooled_output)
-
- return x, {
- "inner_states": inner_states,
- "pooled_output": pooled_output,
- "sentence_logits": sentence_logits,
- }
-
- def max_positions(self):
- """Maximum output length supported by the encoder."""
- return self.max_positions
-
- def upgrade_state_dict_named(self, state_dict, name):
- if isinstance(
- self.sentence_encoder.embed_positions, SinusoidalPositionalEmbedding
- ):
- state_dict[
- name + ".sentence_encoder.embed_positions._float_tensor"
- ] = torch.FloatTensor(1)
- if not self.load_softmax:
- for k in list(state_dict.keys()):
- if (
- "embed_out.weight" in k
- or "sentence_projection_layer.weight" in k
- or "lm_output_learned_bias" in k
- ):
- del state_dict[k]
- return state_dict
-
-
-@register_model_architecture("masked_lm", "masked_lm")
-def base_architecture(args):
- args.dropout = getattr(args, "dropout", 0.1)
- args.attention_dropout = getattr(args, "attention_dropout", 0.1)
- args.act_dropout = getattr(args, "act_dropout", 0.0)
-
- args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096)
- args.encoder_layers = getattr(args, "encoder_layers", 6)
- args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8)
-
- args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024)
- args.share_encoder_input_output_embed = getattr(
- args, "share_encoder_input_output_embed", False
- )
- args.encoder_learned_pos = getattr(args, "encoder_learned_pos", False)
- args.no_token_positional_embeddings = getattr(
- args, "no_token_positional_embeddings", False
- )
- args.num_segment = getattr(args, "num_segment", 2)
-
- args.sentence_class_num = getattr(args, "sentence_class_num", 2)
- args.sent_loss = getattr(args, "sent_loss", False)
-
- args.apply_bert_init = getattr(args, "apply_bert_init", False)
-
- args.activation_fn = getattr(args, "activation_fn", "relu")
- args.pooler_activation_fn = getattr(args, "pooler_activation_fn", "tanh")
- args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False)
-
-
-@register_model_architecture("masked_lm", "bert_base")
-def bert_base_architecture(args):
- args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 768)
- args.share_encoder_input_output_embed = getattr(
- args, "share_encoder_input_output_embed", True
- )
- args.no_token_positional_embeddings = getattr(
- args, "no_token_positional_embeddings", False
- )
- args.encoder_learned_pos = getattr(args, "encoder_learned_pos", True)
- args.num_segment = getattr(args, "num_segment", 2)
-
- args.encoder_layers = getattr(args, "encoder_layers", 12)
-
- args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 12)
- args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 3072)
-
- args.sentence_class_num = getattr(args, "sentence_class_num", 2)
- args.sent_loss = getattr(args, "sent_loss", True)
-
- args.apply_bert_init = getattr(args, "apply_bert_init", True)
-
- args.activation_fn = getattr(args, "activation_fn", "gelu")
- args.pooler_activation_fn = getattr(args, "pooler_activation_fn", "tanh")
- args.encoder_normalize_before = getattr(args, "encoder_normalize_before", True)
- base_architecture(args)
-
-
-@register_model_architecture("masked_lm", "bert_large")
-def bert_large_architecture(args):
- args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024)
- args.encoder_layers = getattr(args, "encoder_layers", 24)
- args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16)
- args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096)
- bert_base_architecture(args)
-
-
-@register_model_architecture("masked_lm", "xlm_base")
-def xlm_architecture(args):
- args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024)
- args.share_encoder_input_output_embed = getattr(
- args, "share_encoder_input_output_embed", True
- )
- args.no_token_positional_embeddings = getattr(
- args, "no_token_positional_embeddings", False
- )
- args.encoder_learned_pos = getattr(args, "encoder_learned_pos", True)
- args.num_segment = getattr(args, "num_segment", 1)
-
- args.encoder_layers = getattr(args, "encoder_layers", 6)
-
- args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8)
- args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096)
-
- args.sent_loss = getattr(args, "sent_loss", False)
-
- args.activation_fn = getattr(args, "activation_fn", "gelu")
- args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False)
- args.pooler_activation_fn = getattr(args, "pooler_activation_fn", "tanh")
- args.apply_bert_init = getattr(args, "apply_bert_init", True)
- base_architecture(args)
diff --git a/spaces/HuggingFaceH4/open_llm_leaderboard/README.md b/spaces/HuggingFaceH4/open_llm_leaderboard/README.md
deleted file mode 100644
index fa53f9f8ac6a4c0a7e1e80543537db644ea0e0b5..0000000000000000000000000000000000000000
--- a/spaces/HuggingFaceH4/open_llm_leaderboard/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Open LLM Leaderboard
-emoji: 🏆
-colorFrom: green
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.43.2
-app_file: app.py
-pinned: true
-license: apache-2.0
-duplicated_from: HuggingFaceH4/open_llm_leaderboard
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/model_parallel/modules/transformer_layer.py b/spaces/ICML2022/OFA/fairseq/fairseq/model_parallel/modules/transformer_layer.py
deleted file mode 100644
index 7ab53c6e5f12f15562717effb86ab8cb8d6b4fa3..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/model_parallel/modules/transformer_layer.py
+++ /dev/null
@@ -1,78 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from fairseq.model_parallel.modules import ModelParallelMultiheadAttention
-from fairseq.modules import TransformerDecoderLayer, TransformerEncoderLayer
-
-
-try:
- from fairseq.model_parallel.megatron.mpu import (
- ColumnParallelLinear,
- RowParallelLinear,
- )
-
- has_megatron_submodule = True
-except (ImportError, ModuleNotFoundError):
- has_megatron_submodule = False
-
-
-class ModelParallelTransformerEncoderLayer(TransformerEncoderLayer):
- """Encoder layer block over multiple gpus.
-
- See "Megatron-LM: https://arxiv.org/pdf/1909.08053.pdf" for more details.
- """
-
- def build_fc1(self, input_dim, output_dim, q_noise, qn_block_size):
- if q_noise > 0:
- raise NotImplementedError
- return ColumnParallelLinear(input_dim, output_dim, gather_output=False)
-
- def build_fc2(self, input_dim, output_dim, q_noise, qn_block_size):
- if q_noise > 0:
- raise NotImplementedError
- return RowParallelLinear(input_dim, output_dim, input_is_parallel=True)
-
- def build_self_attention(self, embed_dim, args, **unused_kwargs):
- return ModelParallelMultiheadAttention(
- embed_dim,
- args.encoder_attention_heads,
- dropout=args.attention_dropout,
- self_attention=True,
- )
-
-
-class ModelParallelTransformerDecoderLayer(TransformerDecoderLayer):
- """Decoder layer block.
-
- See "Megatron-LM: https://arxiv.org/pdf/1909.08053.pdf" for more details.
- """
-
- def build_fc1(self, input_dim, output_dim, q_noise, qn_block_size):
- if q_noise > 0:
- raise NotImplementedError
- return ColumnParallelLinear(input_dim, output_dim, gather_output=False)
-
- def build_fc2(self, input_dim, output_dim, q_noise, qn_block_size):
- if q_noise > 0:
- raise NotImplementedError
- return RowParallelLinear(input_dim, output_dim, input_is_parallel=True)
-
- def build_self_attention(self, embed_dim, args, **unused_kwargs):
- return ModelParallelMultiheadAttention(
- embed_dim=embed_dim,
- num_heads=args.decoder_attention_heads,
- dropout=args.attention_dropout,
- self_attention=not getattr(args, "cross_self_attention", False),
- )
-
- def build_encoder_attention(self, embed_dim, args, **unused_kwargs):
- return ModelParallelMultiheadAttention(
- embed_dim=embed_dim,
- num_heads=args.decoder_attention_heads,
- kdim=getattr(args, "encoder_embed_dim", None),
- vdim=getattr(args, "encoder_embed_dim", None),
- dropout=args.attention_dropout,
- encoder_decoder_attention=True,
- )
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/modules/dynamicconv_layer/setup.py b/spaces/ICML2022/OFA/fairseq/fairseq/modules/dynamicconv_layer/setup.py
deleted file mode 100644
index 6a21f7e2ee0840a3b251522275a0b32a856951d7..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/modules/dynamicconv_layer/setup.py
+++ /dev/null
@@ -1,23 +0,0 @@
-#!/usr/bin/env python3
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from setuptools import setup
-from torch.utils.cpp_extension import BuildExtension, CUDAExtension
-
-
-setup(
- name="dynamicconv_layer",
- ext_modules=[
- CUDAExtension(
- name="dynamicconv_cuda",
- sources=[
- "dynamicconv_cuda.cpp",
- "dynamicconv_cuda_kernel.cu",
- ],
- ),
- ],
- cmdclass={"build_ext": BuildExtension},
-)
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/modules/quantization/scalar/modules/qact.py b/spaces/ICML2022/OFA/fairseq/fairseq/modules/quantization/scalar/modules/qact.py
deleted file mode 100644
index c5dd1d63362423ab0cfc381dddabb547a3b44c72..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/modules/quantization/scalar/modules/qact.py
+++ /dev/null
@@ -1,88 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-
-from ..ops import emulate_int
-
-
-class ActivationQuantizer:
- """
- Fake scalar quantization of the activations using a forward hook.
-
- Args:
-        - module: a nn.Module for which we quantize the *post-activations*
- - p: proportion of activations to quantize, set by default to 1
- - update_step: to recompute quantization parameters
- - bits: number of bits for quantization
- - method: choose among {"tensor", "histogram", "channel"}
-        - clamp_threshold: to prevent gradient overflow
-
- Remarks:
- - Parameters scale and zero_point are recomputed every update_step
- forward pass to reduce the overhead
- - For the list of quantization methods and number of bits, see ops.py
- - To remove the hook from the module, simply call self.handle.remove()
- - At test time, the activations are fully quantized
- - We use the straight-through estimator so that the gradients
- back-propagate nicely in the network, this is implemented with
- the detach() trick
- - The activations are hard-clamped in [-clamp_threshold, clamp_threshold]
- to prevent overflow during the backward pass
- """
-
- def __init__(
- self,
- module,
- p=1,
- update_step=1000,
- bits=8,
- method="histogram",
- clamp_threshold=5,
- ):
- self.module = module
- self.p = p
- self.update_step = update_step
- self.counter = 0
- self.bits = bits
- self.method = method
- self.clamp_threshold = clamp_threshold
- self.handle = None
- self.register_hook()
-
- def register_hook(self):
- # forward hook
- def quantize_hook(module, x, y):
-
-            # update the quantization parameters every `update_step` forward passes
- if self.counter % self.update_step == 0:
- self.scale = None
- self.zero_point = None
- self.counter += 1
-
- # train with QuantNoise and evaluate the fully quantized network
- p = self.p if self.module.training else 1
-
- # quantize activations
- y_q, self.scale, self.zero_point = emulate_int(
- y.detach(),
- bits=self.bits,
- method=self.method,
- scale=self.scale,
- zero_point=self.zero_point,
- )
-
- # mask to apply noise
- mask = torch.zeros_like(y)
- mask.bernoulli_(1 - p)
- noise = (y_q - y).masked_fill(mask.bool(), 0)
-
- # using straight-through estimator (STE)
- clamp_low = -self.scale * self.zero_point
- clamp_high = self.scale * (2 ** self.bits - 1 - self.zero_point)
- return torch.clamp(y, clamp_low.item(), clamp_high.item()) + noise.detach()
-
- # register hook
- self.handle = self.module.register_forward_hook(quantize_hook)
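-
-
-# Added usage sketch (illustrative, not part of the original module):
-#
-#   import torch.nn as nn
-#   layer = nn.Linear(16, 16)
-#   quantizer = ActivationQuantizer(layer, p=0.5, bits=8, method="histogram")
-#   out = layer(torch.randn(4, 16))  # activations fake-quantized via the hook
-#   quantizer.handle.remove()        # detach the hook when done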
diff --git a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/segment/plots.py b/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/segment/plots.py
deleted file mode 100644
index 9b90900b3772fe23dbd57deb64221f98e563b069..0000000000000000000000000000000000000000
--- a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/segment/plots.py
+++ /dev/null
@@ -1,143 +0,0 @@
-import contextlib
-import math
-from pathlib import Path
-
-import cv2
-import matplotlib.pyplot as plt
-import numpy as np
-import pandas as pd
-import torch
-
-from .. import threaded
-from ..general import xywh2xyxy
-from ..plots import Annotator, colors
-
-
-@threaded
-def plot_images_and_masks(images, targets, masks, paths=None, fname='images.jpg', names=None):
- # Plot image grid with labels
- if isinstance(images, torch.Tensor):
- images = images.cpu().float().numpy()
- if isinstance(targets, torch.Tensor):
- targets = targets.cpu().numpy()
- if isinstance(masks, torch.Tensor):
- masks = masks.cpu().numpy().astype(int)
-
- max_size = 1920 # max image size
- max_subplots = 16 # max image subplots, i.e. 4x4
- bs, _, h, w = images.shape # batch size, _, height, width
- bs = min(bs, max_subplots) # limit plot images
- ns = np.ceil(bs ** 0.5) # number of subplots (square)
- if np.max(images[0]) <= 1:
- images *= 255 # de-normalise (optional)
-
- # Build Image
- mosaic = np.full((int(ns * h), int(ns * w), 3), 255, dtype=np.uint8) # init
- for i, im in enumerate(images):
- if i == max_subplots: # if last batch has fewer images than we expect
- break
- x, y = int(w * (i // ns)), int(h * (i % ns)) # block origin
- im = im.transpose(1, 2, 0)
- mosaic[y:y + h, x:x + w, :] = im
-
- # Resize (optional)
- scale = max_size / ns / max(h, w)
- if scale < 1:
- h = math.ceil(scale * h)
- w = math.ceil(scale * w)
- mosaic = cv2.resize(mosaic, tuple(int(x * ns) for x in (w, h)))
-
- # Annotate
- fs = int((h + w) * ns * 0.01) # font size
- annotator = Annotator(mosaic, line_width=round(fs / 10), font_size=fs, pil=True, example=names)
- for i in range(i + 1):
- x, y = int(w * (i // ns)), int(h * (i % ns)) # block origin
- annotator.rectangle([x, y, x + w, y + h], None, (255, 255, 255), width=2) # borders
- if paths:
- annotator.text((x + 5, y + 5 + h), text=Path(paths[i]).name[:40], txt_color=(220, 220, 220)) # filenames
- if len(targets) > 0:
- idx = targets[:, 0] == i
- ti = targets[idx] # image targets
-
- boxes = xywh2xyxy(ti[:, 2:6]).T
- classes = ti[:, 1].astype('int')
- labels = ti.shape[1] == 6 # labels if no conf column
- conf = None if labels else ti[:, 6] # check for confidence presence (label vs pred)
-
- if boxes.shape[1]:
- if boxes.max() <= 1.01: # if normalized with tolerance 0.01
- boxes[[0, 2]] *= w # scale to pixels
- boxes[[1, 3]] *= h
- elif scale < 1: # absolute coords need scale if image scales
- boxes *= scale
- boxes[[0, 2]] += x
- boxes[[1, 3]] += y
- for j, box in enumerate(boxes.T.tolist()):
- cls = classes[j]
- color = colors(cls)
- cls = names[cls] if names else cls
- if labels or conf[j] > 0.25: # 0.25 conf thresh
- label = f'{cls}' if labels else f'{cls} {conf[j]:.1f}'
- annotator.box_label(box, label, color=color)
-
- # Plot masks
- if len(masks):
-                if masks.max() > 1.0:  # means that masks overlap (pixel values store instance indices)
- image_masks = masks[[i]] # (1, 640, 640)
- nl = len(ti)
- index = np.arange(nl).reshape(nl, 1, 1) + 1
- image_masks = np.repeat(image_masks, nl, axis=0)
- image_masks = np.where(image_masks == index, 1.0, 0.0)
- else:
- image_masks = masks[idx]
-
- im = np.asarray(annotator.im).copy()
- for j, box in enumerate(boxes.T.tolist()):
- if labels or conf[j] > 0.25: # 0.25 conf thresh
- color = colors(classes[j])
- mh, mw = image_masks[j].shape
- if mh != h or mw != w:
- mask = image_masks[j].astype(np.uint8)
- mask = cv2.resize(mask, (w, h))
- mask = mask.astype(bool)
- else:
- mask = image_masks[j].astype(bool)
- with contextlib.suppress(Exception):
- im[y:y + h, x:x + w, :][mask] = im[y:y + h, x:x + w, :][mask] * 0.4 + np.array(color) * 0.6
- annotator.fromarray(im)
- annotator.im.save(fname) # save
-
-
-def plot_results_with_masks(file="path/to/results.csv", dir="", best=True):
- # Plot training results.csv. Usage: from utils.plots import *; plot_results('path/to/results.csv')
- save_dir = Path(file).parent if file else Path(dir)
- fig, ax = plt.subplots(2, 8, figsize=(18, 6), tight_layout=True)
- ax = ax.ravel()
- files = list(save_dir.glob("results*.csv"))
- assert len(files), f"No results.csv files found in {save_dir.resolve()}, nothing to plot."
- for f in files:
- try:
- data = pd.read_csv(f)
- index = np.argmax(0.9 * data.values[:, 8] + 0.1 * data.values[:, 7] + 0.9 * data.values[:, 12] +
- 0.1 * data.values[:, 11])
- s = [x.strip() for x in data.columns]
- x = data.values[:, 0]
- for i, j in enumerate([1, 2, 3, 4, 5, 6, 9, 10, 13, 14, 15, 16, 7, 8, 11, 12]):
- y = data.values[:, j]
- # y[y == 0] = np.nan # don't show zero values
- ax[i].plot(x, y, marker=".", label=f.stem, linewidth=2, markersize=2)
- if best:
- # best
- ax[i].scatter(index, y[index], color="r", label=f"best:{index}", marker="*", linewidth=3)
- ax[i].set_title(s[j] + f"\n{round(y[index], 5)}")
- else:
- # last
- ax[i].scatter(x[-1], y[-1], color="r", label="last", marker="*", linewidth=3)
- ax[i].set_title(s[j] + f"\n{round(y[-1], 5)}")
- # if j in [8, 9, 10]: # share train and val loss y axes
- # ax[i].get_shared_y_axes().join(ax[i], ax[i - 5])
- except Exception as e:
- print(f"Warning: Plotting error for {f}: {e}")
- ax[1].legend()
- fig.savefig(save_dir / "results.png", dpi=200)
- plt.close()
diff --git a/spaces/InpaintAI/Inpaint-Anything/utils/crop_for_replacing.py b/spaces/InpaintAI/Inpaint-Anything/utils/crop_for_replacing.py
deleted file mode 100644
index 635b8650282bb082b9e29a1b4dee3e7063eb63fb..0000000000000000000000000000000000000000
--- a/spaces/InpaintAI/Inpaint-Anything/utils/crop_for_replacing.py
+++ /dev/null
@@ -1,101 +0,0 @@
-import cv2
-import numpy as np
-from typing import Tuple
-
-def resize_and_pad(image: np.ndarray, mask: np.ndarray, target_size: int = 512) -> Tuple[np.ndarray, np.ndarray, Tuple[int, int, int, int]]:
- """
- Resizes an image and its corresponding mask to have the longer side equal to `target_size` and pads them to make them
- both have the same size. The resulting image and mask have dimensions (target_size, target_size).
-
- Args:
- image: A numpy array representing the image to resize and pad.
- mask: A numpy array representing the mask to resize and pad.
- target_size: An integer specifying the desired size of the longer side after resizing.
-
- Returns:
-        A tuple containing the resized and padded image, the resized and padded mask, and the padding factors (top_pad, bottom_pad, left_pad, right_pad).
- """
- height, width, _ = image.shape
- max_dim = max(height, width)
- scale = target_size / max_dim
- new_height = int(height * scale)
- new_width = int(width * scale)
- image_resized = cv2.resize(image, (new_width, new_height), interpolation=cv2.INTER_LINEAR)
- mask_resized = cv2.resize(mask, (new_width, new_height), interpolation=cv2.INTER_LINEAR)
- pad_height = target_size - new_height
- pad_width = target_size - new_width
- top_pad = pad_height // 2
- bottom_pad = pad_height - top_pad
- left_pad = pad_width // 2
- right_pad = pad_width - left_pad
- image_padded = np.pad(image_resized, ((top_pad, bottom_pad), (left_pad, right_pad), (0, 0)), mode='constant')
- mask_padded = np.pad(mask_resized, ((top_pad, bottom_pad), (left_pad, right_pad)), mode='constant')
- return image_padded, mask_padded, (top_pad, bottom_pad, left_pad, right_pad)
-
-def recover_size(image_padded: np.ndarray, mask_padded: np.ndarray, orig_size: Tuple[int, int],
- padding_factors: Tuple[int, int, int, int]) -> Tuple[np.ndarray, np.ndarray]:
- """
- Resizes a padded and resized image and mask to the original size.
-
- Args:
- image_padded: A numpy array representing the padded and resized image.
- mask_padded: A numpy array representing the padded and resized mask.
-        orig_size: A tuple containing two integers - the original height and width of the image before resizing and padding.
-        padding_factors: A tuple of four integers - the top, bottom, left and right padding that was added by resize_and_pad.
-
- Returns:
- A tuple containing two numpy arrays - the recovered image and the recovered mask with dimensions `orig_size`.
- """
- h,w,c = image_padded.shape
- top_pad, bottom_pad, left_pad, right_pad = padding_factors
- image = image_padded[top_pad:h-bottom_pad, left_pad:w-right_pad, :]
- mask = mask_padded[top_pad:h-bottom_pad, left_pad:w-right_pad]
- image_resized = cv2.resize(image, orig_size[::-1], interpolation=cv2.INTER_LINEAR)
- mask_resized = cv2.resize(mask, orig_size[::-1], interpolation=cv2.INTER_LINEAR)
- return image_resized, mask_resized
-
-
-
-
-if __name__ == '__main__':
-
- # image = cv2.imread('example/boat.jpg')
- # mask = cv2.imread('example/boat_mask_2.png', cv2.IMREAD_GRAYSCALE)
- # image = cv2.imread('example/groceries.jpg')
- # mask = cv2.imread('example/groceries_mask_2.png', cv2.IMREAD_GRAYSCALE)
- # image = cv2.imread('example/bridge.jpg')
- # mask = cv2.imread('example/bridge_mask_2.png', cv2.IMREAD_GRAYSCALE)
- # image = cv2.imread('example/person_umbrella.jpg')
- # mask = cv2.imread('example/person_umbrella_mask_2.png', cv2.IMREAD_GRAYSCALE)
- # image = cv2.imread('example/hippopotamus.jpg')
- # mask = cv2.imread('example/hippopotamus_mask_1.png', cv2.IMREAD_GRAYSCALE)
- image = cv2.imread('/data1/yutao/projects/IAM/Inpaint-Anything/example/fill-anything/sample5.jpeg')
- mask = cv2.imread('/data1/yutao/projects/IAM/Inpaint-Anything/example/fill-anything/sample5/mask.png', cv2.IMREAD_GRAYSCALE)
- print(image.shape)
- print(mask.shape)
- cv2.imwrite('original_image.jpg', image)
- cv2.imwrite('original_mask.jpg', mask)
- image_padded, mask_padded, padding_factors = resize_and_pad(image, mask)
- cv2.imwrite('padded_image.png', image_padded)
- cv2.imwrite('padded_mask.png', mask_padded)
- print(image_padded.shape, mask_padded.shape, padding_factors)
-
- # ^ ------------------------------------------------------------------------------------
- # ^ Please conduct inpainting or filling here on the cropped image with the cropped mask
- # ^ ------------------------------------------------------------------------------------
-
- # resize and pad the image and mask
-
- # perform some operation on the 512x512 image and mask
- # ...
-
- # recover the image and mask to the original size
- height, width, _ = image.shape
- image_resized, mask_resized = recover_size(image_padded, mask_padded, (height, width), padding_factors)
-
- # save the resized and recovered image and mask
- cv2.imwrite('resized_and_padded_image.png', image_padded)
- cv2.imwrite('resized_and_padded_mask.png', mask_padded)
- cv2.imwrite('recovered_image.png', image_resized)
- cv2.imwrite('recovered_mask.png', mask_resized)
-
-
\ No newline at end of file
diff --git a/spaces/Izal887/rvc-hutao/infer_pack/models_onnx.py b/spaces/Izal887/rvc-hutao/infer_pack/models_onnx.py
deleted file mode 100644
index 3cdae2f7f8591a1e43b1d8520baa37b7e9744d72..0000000000000000000000000000000000000000
--- a/spaces/Izal887/rvc-hutao/infer_pack/models_onnx.py
+++ /dev/null
@@ -1,849 +0,0 @@
-import math, pdb, os
-from time import time as ttime
-import torch
-from torch import nn
-from torch.nn import functional as F
-from infer_pack import modules
-from infer_pack import attentions
-from infer_pack import commons
-from infer_pack.commons import init_weights, get_padding
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from infer_pack.commons import init_weights
-import numpy as np
-from infer_pack import commons
-
-
-class TextEncoder256(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0 == True:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
- if pitch == None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class TextEncoder256Sim(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0 == True:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
-
- def forward(self, phone, pitch, lengths):
- if pitch == None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- x = self.proj(x) * x_mask
- return x, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0,
- ):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- mean_only=True,
- )
- )
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
- def remove_weight_norm(self):
- for i in range(self.n_flows):
- self.flows[i * 2].remove_weight_norm()
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class Generator(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=0,
- ):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class SineGen(torch.nn.Module):
- """Definition of sine generator
- SineGen(samp_rate, harmonic_num = 0,
- sine_amp = 0.1, noise_std = 0.003,
- voiced_threshold = 0,
- flag_for_pulse=False)
- samp_rate: sampling rate in Hz
- harmonic_num: number of harmonic overtones (default 0)
-    sine_amp: amplitude of sine-waveform (default 0.1)
-    noise_std: std of Gaussian noise (default 0.003)
-    voiced_threshold: F0 threshold for U/V classification (default 0)
-    flag_for_pulse: this SineGen is used inside PulseGen (default False)
- Note: when flag_for_pulse is True, the first time step of a voiced
- segment is always sin(np.pi) or cos(0)
- """
-
- def __init__(
- self,
- samp_rate,
- harmonic_num=0,
- sine_amp=0.1,
- noise_std=0.003,
- voiced_threshold=0,
- flag_for_pulse=False,
- ):
- super(SineGen, self).__init__()
- self.sine_amp = sine_amp
- self.noise_std = noise_std
- self.harmonic_num = harmonic_num
- self.dim = self.harmonic_num + 1
- self.sampling_rate = samp_rate
- self.voiced_threshold = voiced_threshold
-
- def _f02uv(self, f0):
- # generate uv signal
- uv = torch.ones_like(f0)
- uv = uv * (f0 > self.voiced_threshold)
- return uv
-
- def forward(self, f0, upp):
- """sine_tensor, uv = forward(f0)
- input F0: tensor(batchsize=1, length, dim=1)
- f0 for unvoiced steps should be 0
- output sine_tensor: tensor(batchsize=1, length, dim)
- output uv: tensor(batchsize=1, length, 1)
- """
- with torch.no_grad():
- f0 = f0[:, None].transpose(1, 2)
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
- # fundamental component
- f0_buf[:, :, 0] = f0[:, :, 0]
- for idx in np.arange(self.harmonic_num):
- f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
- idx + 2
- ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
-            rad_values = (f0_buf / self.sampling_rate) % 1  # the %1 means the n_har products cannot be optimized in post-processing
- rand_ini = torch.rand(
- f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
- )
- rand_ini[:, 0] = 0
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
-            tmp_over_one = torch.cumsum(rad_values, 1)  # % 1  # applying %1 here would prevent the later cumsum from being optimized
- tmp_over_one *= upp
- tmp_over_one = F.interpolate(
- tmp_over_one.transpose(2, 1),
- scale_factor=upp,
- mode="linear",
- align_corners=True,
- ).transpose(2, 1)
- rad_values = F.interpolate(
- rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(
- 2, 1
- ) #######
- tmp_over_one %= 1
- tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
- cumsum_shift = torch.zeros_like(rad_values)
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
- sine_waves = torch.sin(
- torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
- )
- sine_waves = sine_waves * self.sine_amp
- uv = self._f02uv(f0)
- uv = F.interpolate(
- uv.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(2, 1)
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
- noise = noise_amp * torch.randn_like(sine_waves)
- sine_waves = sine_waves * uv + noise
- return sine_waves, uv, noise
-
-
-class SourceModuleHnNSF(torch.nn.Module):
- """SourceModule for hn-nsf
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
- add_noise_std=0.003, voiced_threshod=0)
- sampling_rate: sampling_rate in Hz
- harmonic_num: number of harmonic above F0 (default: 0)
- sine_amp: amplitude of sine source signal (default: 0.1)
- add_noise_std: std of additive Gaussian noise (default: 0.003)
- note that amplitude of noise in unvoiced is decided
- by sine_amp
-    voiced_threshold: threshold to set U/V given F0 (default: 0)
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
- F0_sampled (batchsize, length, 1)
- Sine_source (batchsize, length, 1)
-    noise_source (batchsize, length, 1)
- uv (batchsize, length, 1)
- """
-
- def __init__(
- self,
- sampling_rate,
- harmonic_num=0,
- sine_amp=0.1,
- add_noise_std=0.003,
- voiced_threshod=0,
- is_half=True,
- ):
- super(SourceModuleHnNSF, self).__init__()
-
- self.sine_amp = sine_amp
- self.noise_std = add_noise_std
- self.is_half = is_half
- # to produce sine waveforms
- self.l_sin_gen = SineGen(
- sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
- )
-
- # to merge source harmonics into a single excitation
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
- self.l_tanh = torch.nn.Tanh()
-
- def forward(self, x, upp=None):
- sine_wavs, uv, _ = self.l_sin_gen(x, upp)
- if self.is_half:
- sine_wavs = sine_wavs.half()
- sine_merge = self.l_tanh(self.l_linear(sine_wavs))
- return sine_merge, None, None # noise, uv
-
-
-class GeneratorNSF(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels,
- sr,
- is_half=False,
- ):
- super(GeneratorNSF, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
-
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
- self.m_source = SourceModuleHnNSF(
- sampling_rate=sr, harmonic_num=0, is_half=is_half
- )
- self.noise_convs = nn.ModuleList()
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- c_cur = upsample_initial_channel // (2 ** (i + 1))
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
- if i + 1 < len(upsample_rates):
- stride_f0 = np.prod(upsample_rates[i + 1 :])
- self.noise_convs.append(
- Conv1d(
- 1,
- c_cur,
- kernel_size=stride_f0 * 2,
- stride=stride_f0,
- padding=stride_f0 // 2,
- )
- )
- else:
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- self.upp = np.prod(upsample_rates)
-
- def forward(self, x, f0, g=None):
- har_source, noi_source, uv = self.m_source(f0, self.upp)
- har_source = har_source.transpose(1, 2)
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- x_source = self.noise_convs[i](har_source)
- x = x + x_source
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-sr2sr = {
- "32k": 32000,
- "40k": 40000,
- "48k": 48000,
-}
-
-
-class SynthesizerTrnMs256NSFsid(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- **kwargs
- ):
- super().__init__()
- if type(sr) == type("strr"):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(self, phone, phone_lengths, pitch, nsff0, sid, rnd, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
- return o
-
-
-class SynthesizerTrnMs256NSFsid_sim(nn.Module):
- """
- Synthesizer for Training
- """
-
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- # hop_length,
- gin_channels=0,
- use_sdp=True,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256Sim(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- is_half=kwargs["is_half"],
- )
-
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, ds, max_len=None
-    ):  # y is the spec; it is no longer needed here
-        g = self.emb_g(ds.unsqueeze(0)).unsqueeze(-1)  # [b, 256, 1]; the 1 is t, broadcast along time
- x, x_mask = self.enc_p(phone, pitch, phone_lengths)
- x = self.flow(x, x_mask, g=g, reverse=True)
- o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g)
- return o
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11, 17]
- # periods = [3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList(
- [
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ]
- )
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList(
- [
- norm_f(
- Conv2d(
- 1,
- 32,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 32,
- 128,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 128,
- 512,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 512,
- 1024,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 1024,
- 1024,
- (kernel_size, 1),
- 1,
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- ]
- )
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
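To make the shape conventions in the `SineGen` / `SourceModuleHnNSF` docstrings above concrete, here is a small sketch that drives the source module with a constant pitch contour. It assumes the classes above are importable; the 40 kHz sampling rate and the upsampling factor of 400 (the product of `[10, 10, 2, 2]`) are illustrative values, not requirements.

```python
import torch

# Harmonic-plus-noise excitation source with only the fundamental (harmonic_num=0).
source = SourceModuleHnNSF(sampling_rate=40000, harmonic_num=0, is_half=False)

# Frame-level pitch contour: 100 frames at a constant 220 Hz (unvoiced frames would be 0).
f0 = torch.full((1, 100), 220.0)

# upp is the total upsampling factor of the decoder, i.e. np.prod(upsample_rates).
upp = 400

sine_merge, _, _ = source(f0, upp)
print(sine_merge.shape)  # torch.Size([1, 40000, 1]) -- sample-level excitation signal
```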
diff --git a/spaces/JUNGU/VToonify/vtoonify/model/stylegan/lpips/__init__.py b/spaces/JUNGU/VToonify/vtoonify/model/stylegan/lpips/__init__.py
deleted file mode 100644
index 8b3c9cdc35a03a4e4585bd6bbc9c793331eb1723..0000000000000000000000000000000000000000
--- a/spaces/JUNGU/VToonify/vtoonify/model/stylegan/lpips/__init__.py
+++ /dev/null
@@ -1,161 +0,0 @@
-
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-import numpy as np
-#from skimage.measure import compare_ssim
-from skimage.metrics import structural_similarity as compare_ssim
-import torch
-from torch.autograd import Variable
-
-from model.stylegan.lpips import dist_model
-
-class PerceptualLoss(torch.nn.Module):
- def __init__(self, model='net-lin', net='alex', colorspace='rgb', spatial=False, use_gpu=True, gpu_ids=[0]): # VGG using our perceptually-learned weights (LPIPS metric)
- # def __init__(self, model='net', net='vgg', use_gpu=True): # "default" way of using VGG as a perceptual loss
- super(PerceptualLoss, self).__init__()
- print('Setting up Perceptual loss...')
- self.use_gpu = use_gpu
- self.spatial = spatial
- self.gpu_ids = gpu_ids
- self.model = dist_model.DistModel()
- self.model.initialize(model=model, net=net, use_gpu=use_gpu, colorspace=colorspace, spatial=self.spatial, gpu_ids=gpu_ids)
- print('...[%s] initialized'%self.model.name())
- print('...Done')
-
- def forward(self, pred, target, normalize=False):
- """
- Pred and target are Variables.
- If normalize is True, assumes the images are between [0,1] and then scales them between [-1,+1]
- If normalize is False, assumes the images are already between [-1,+1]
-
- Inputs pred and target are Nx3xHxW
- Output pytorch Variable N long
- """
-
- if normalize:
- target = 2 * target - 1
- pred = 2 * pred - 1
-
- return self.model.forward(target, pred)
-
-def normalize_tensor(in_feat,eps=1e-10):
- norm_factor = torch.sqrt(torch.sum(in_feat**2,dim=1,keepdim=True))
- return in_feat/(norm_factor+eps)
-
-def l2(p0, p1, range=255.):
- return .5*np.mean((p0 / range - p1 / range)**2)
-
-def psnr(p0, p1, peak=255.):
- return 10*np.log10(peak**2/np.mean((1.*p0-1.*p1)**2))
-
-def dssim(p0, p1, range=255.):
- return (1 - compare_ssim(p0, p1, data_range=range, multichannel=True)) / 2.
-
-def rgb2lab(in_img,mean_cent=False):
- from skimage import color
- img_lab = color.rgb2lab(in_img)
- if(mean_cent):
- img_lab[:,:,0] = img_lab[:,:,0]-50
- return img_lab
-
-def tensor2np(tensor_obj):
- # change dimension of a tensor object into a numpy array
- return tensor_obj[0].cpu().float().numpy().transpose((1,2,0))
-
-def np2tensor(np_obj):
-    # change dimension of a numpy array into a tensor array
- return torch.Tensor(np_obj[:, :, :, np.newaxis].transpose((3, 2, 0, 1)))
-
-def tensor2tensorlab(image_tensor,to_norm=True,mc_only=False):
- # image tensor to lab tensor
- from skimage import color
-
- img = tensor2im(image_tensor)
- img_lab = color.rgb2lab(img)
- if(mc_only):
- img_lab[:,:,0] = img_lab[:,:,0]-50
- if(to_norm and not mc_only):
- img_lab[:,:,0] = img_lab[:,:,0]-50
- img_lab = img_lab/100.
-
- return np2tensor(img_lab)
-
-def tensorlab2tensor(lab_tensor,return_inbnd=False):
- from skimage import color
- import warnings
- warnings.filterwarnings("ignore")
-
- lab = tensor2np(lab_tensor)*100.
- lab[:,:,0] = lab[:,:,0]+50
-
- rgb_back = 255.*np.clip(color.lab2rgb(lab.astype('float')),0,1)
- if(return_inbnd):
- # convert back to lab, see if we match
- lab_back = color.rgb2lab(rgb_back.astype('uint8'))
- mask = 1.*np.isclose(lab_back,lab,atol=2.)
- mask = np2tensor(np.prod(mask,axis=2)[:,:,np.newaxis])
- return (im2tensor(rgb_back),mask)
- else:
- return im2tensor(rgb_back)
-
-def rgb2lab(input):
- from skimage import color
- return color.rgb2lab(input / 255.)
-
-def tensor2im(image_tensor, imtype=np.uint8, cent=1., factor=255./2.):
- image_numpy = image_tensor[0].cpu().float().numpy()
- image_numpy = (np.transpose(image_numpy, (1, 2, 0)) + cent) * factor
- return image_numpy.astype(imtype)
-
-def im2tensor(image, imtype=np.uint8, cent=1., factor=255./2.):
- return torch.Tensor((image / factor - cent)
- [:, :, :, np.newaxis].transpose((3, 2, 0, 1)))
-
-def tensor2vec(vector_tensor):
- return vector_tensor.data.cpu().numpy()[:, :, 0, 0]
-
-def voc_ap(rec, prec, use_07_metric=False):
- """ ap = voc_ap(rec, prec, [use_07_metric])
- Compute VOC AP given precision and recall.
- If use_07_metric is true, uses the
- VOC 07 11 point method (default:False).
- """
- if use_07_metric:
- # 11 point metric
- ap = 0.
- for t in np.arange(0., 1.1, 0.1):
- if np.sum(rec >= t) == 0:
- p = 0
- else:
- p = np.max(prec[rec >= t])
- ap = ap + p / 11.
- else:
- # correct AP calculation
- # first append sentinel values at the end
- mrec = np.concatenate(([0.], rec, [1.]))
- mpre = np.concatenate(([0.], prec, [0.]))
-
- # compute the precision envelope
- for i in range(mpre.size - 1, 0, -1):
- mpre[i - 1] = np.maximum(mpre[i - 1], mpre[i])
-
- # to calculate area under PR curve, look for points
- # where X axis (recall) changes value
- i = np.where(mrec[1:] != mrec[:-1])[0]
-
- # and sum (\Delta recall) * prec
- ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1])
- return ap
-
-def tensor2im(image_tensor, imtype=np.uint8, cent=1., factor=255./2.):
-# def tensor2im(image_tensor, imtype=np.uint8, cent=1., factor=1.):
- image_numpy = image_tensor[0].cpu().float().numpy()
- image_numpy = (np.transpose(image_numpy, (1, 2, 0)) + cent) * factor
- return image_numpy.astype(imtype)
-
-def im2tensor(image, imtype=np.uint8, cent=1., factor=255./2.):
-# def im2tensor(image, imtype=np.uint8, cent=1., factor=1.):
- return torch.Tensor((image / factor - cent)
- [:, :, :, np.newaxis].transpose((3, 2, 0, 1)))
diff --git a/spaces/Jackflack09/finetuned_diffusion2/utils.py b/spaces/Jackflack09/finetuned_diffusion2/utils.py
deleted file mode 100644
index ff1c065d186347ca51b47d010a697dbe1814695c..0000000000000000000000000000000000000000
--- a/spaces/Jackflack09/finetuned_diffusion2/utils.py
+++ /dev/null
@@ -1,6 +0,0 @@
-def is_google_colab():
- try:
- import google.colab
- return True
- except:
- return False
\ No newline at end of file
diff --git a/spaces/Jamkonams/AutoGPT/autogpt/speech/eleven_labs.py b/spaces/Jamkonams/AutoGPT/autogpt/speech/eleven_labs.py
deleted file mode 100644
index ea84efd8ca9489b40919ecd571813fe954b078e3..0000000000000000000000000000000000000000
--- a/spaces/Jamkonams/AutoGPT/autogpt/speech/eleven_labs.py
+++ /dev/null
@@ -1,86 +0,0 @@
-"""ElevenLabs speech module"""
-import os
-
-import requests
-from playsound import playsound
-
-from autogpt.config import Config
-from autogpt.speech.base import VoiceBase
-
-PLACEHOLDERS = {"your-voice-id"}
-
-
-class ElevenLabsSpeech(VoiceBase):
- """ElevenLabs speech class"""
-
- def _setup(self) -> None:
- """Set up the voices, API key, etc.
-
- Returns:
- None: None
- """
-
- cfg = Config()
- default_voices = ["ErXwobaYiN019PkySvjV", "EXAVITQu4vr4xnSDxMaL"]
- voice_options = {
- "Rachel": "21m00Tcm4TlvDq8ikWAM",
- "Domi": "AZnzlk1XvdvUeBnXmlld",
- "Bella": "EXAVITQu4vr4xnSDxMaL",
- "Antoni": "ErXwobaYiN019PkySvjV",
- "Elli": "MF3mGyEYCl7XYWbV9V6O",
- "Josh": "TxGEqnHWrfWFTfGW9XjX",
- "Arnold": "VR6AewLTigWG4xSOukaG",
- "Adam": "pNInz6obpgDQGcFmaJgB",
- "Sam": "yoZ06aMxZJJ28mfd3POQ",
- }
- self._headers = {
- "Content-Type": "application/json",
- "xi-api-key": cfg.elevenlabs_api_key,
- }
- self._voices = default_voices.copy()
- if cfg.elevenlabs_voice_1_id in voice_options:
- cfg.elevenlabs_voice_1_id = voice_options[cfg.elevenlabs_voice_1_id]
- if cfg.elevenlabs_voice_2_id in voice_options:
- cfg.elevenlabs_voice_2_id = voice_options[cfg.elevenlabs_voice_2_id]
- self._use_custom_voice(cfg.elevenlabs_voice_1_id, 0)
- self._use_custom_voice(cfg.elevenlabs_voice_2_id, 1)
-
- def _use_custom_voice(self, voice, voice_index) -> None:
- """Use a custom voice if provided and not a placeholder
-
- Args:
- voice (str): The voice ID
- voice_index (int): The voice index
-
- Returns:
- None: None
- """
- # Placeholder values that should be treated as empty
- if voice and voice not in PLACEHOLDERS:
- self._voices[voice_index] = voice
-
- def _speech(self, text: str, voice_index: int = 0) -> bool:
- """Speak text using elevenlabs.io's API
-
- Args:
- text (str): The text to speak
- voice_index (int, optional): The voice to use. Defaults to 0.
-
- Returns:
- bool: True if the request was successful, False otherwise
- """
- tts_url = (
- f"https://api.elevenlabs.io/v1/text-to-speech/{self._voices[voice_index]}"
- )
- response = requests.post(tts_url, headers=self._headers, json={"text": text})
-
- if response.status_code == 200:
- with open("speech.mpeg", "wb") as f:
- f.write(response.content)
- playsound("speech.mpeg", True)
- os.remove("speech.mpeg")
- return True
- else:
- print("Request failed with status code:", response.status_code)
- print("Response content:", response.content)
- return False
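The HTTP request issued by `_speech` above can also be reproduced on its own, without the `autogpt` Config and `playsound` plumbing. A minimal sketch; the API key is a placeholder you must supply, and the voice id is "Rachel" from the voice table above.

```python
import requests

API_KEY = "your-elevenlabs-api-key"   # placeholder, not a real key
VOICE_ID = "21m00Tcm4TlvDq8ikWAM"     # "Rachel" from the voice table above

tts_url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"
headers = {"Content-Type": "application/json", "xi-api-key": API_KEY}

response = requests.post(tts_url, headers=headers, json={"text": "Hello from ElevenLabs."})
if response.status_code == 200:
    # The endpoint returns MPEG audio; save it exactly as the module above does.
    with open("speech.mpeg", "wb") as f:
        f.write(response.content)
else:
    print("Request failed with status code:", response.status_code)
```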
diff --git a/spaces/Jamphus/G/README.md b/spaces/Jamphus/G/README.md
deleted file mode 100644
index 0ca42d2548bdb1266096a4bc8337c72a52d2665c..0000000000000000000000000000000000000000
--- a/spaces/Jamphus/G/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: G
-emoji: 👁
-colorFrom: purple
-colorTo: pink
-sdk: gradio
-sdk_version: 3.20.1
-app_file: app.py
-pinned: false
-license: gpl
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/JimmyTarbender/GPT2HistoryEvents/README.md b/spaces/JimmyTarbender/GPT2HistoryEvents/README.md
deleted file mode 100644
index 22afa8294e0c4bb00400f3af60f4fb25b67ea800..0000000000000000000000000000000000000000
--- a/spaces/JimmyTarbender/GPT2HistoryEvents/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: DanToniGPT2FormalInformal
-emoji: 💻
-colorFrom: indigo
-colorTo: red
-sdk: streamlit
-sdk_version: 1.10.0
-app_file: app.py
-pinned: false
-duplicated_from: ToniDan/DanToniGPT2FormalInformal
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/KarmKarma/genshinimpact-rvc-models-v2/app.py b/spaces/KarmKarma/genshinimpact-rvc-models-v2/app.py
deleted file mode 100644
index 0e824b8a46372455e2fcaef64495520c41ef0f24..0000000000000000000000000000000000000000
--- a/spaces/KarmKarma/genshinimpact-rvc-models-v2/app.py
+++ /dev/null
@@ -1,503 +0,0 @@
-import os
-import glob
-import json
-import traceback
-import logging
-import gradio as gr
-import numpy as np
-import librosa
-import torch
-import asyncio
-import edge_tts
-import yt_dlp
-import ffmpeg
-import subprocess
-import sys
-import io
-import wave
-from datetime import datetime
-from fairseq import checkpoint_utils
-from lib.infer_pack.models import (
- SynthesizerTrnMs256NSFsid,
- SynthesizerTrnMs256NSFsid_nono,
- SynthesizerTrnMs768NSFsid,
- SynthesizerTrnMs768NSFsid_nono,
-)
-from vc_infer_pipeline import VC
-from config import Config
-config = Config()
-logging.getLogger("numba").setLevel(logging.WARNING)
-limitation = os.getenv("SYSTEM") == "spaces"
-
-audio_mode = []
-f0method_mode = []
-f0method_info = ""
-if limitation is True:
- audio_mode = ["Upload audio", "TTS Audio"]
- f0method_mode = ["pm", "crepe", "harvest"]
- f0method_info = "PM is fast, Crepe or harvest is good but it was extremely slow (Default: PM)"
-else:
- audio_mode = ["Upload audio", "Youtube", "TTS Audio"]
- f0method_mode = ["pm", "crepe", "harvest"]
- f0method_info = "PM is fast, Crepe or harvest is good but it was extremely slow (Default: PM))"
-def create_vc_fn(model_title, tgt_sr, net_g, vc, if_f0, version, file_index):
- def vc_fn(
- vc_audio_mode,
- vc_input,
- vc_upload,
- tts_text,
- tts_voice,
- f0_up_key,
- f0_method,
- index_rate,
- filter_radius,
- resample_sr,
- rms_mix_rate,
- protect,
- ):
- try:
-            if vc_audio_mode in ("Input path", "Youtube") and vc_input != "":
- audio, sr = librosa.load(vc_input, sr=16000, mono=True)
- elif vc_audio_mode == "Upload audio":
- if vc_upload is None:
- return "You need to upload an audio", None
- sampling_rate, audio = vc_upload
- duration = audio.shape[0] / sampling_rate
- if duration > 360 and limitation:
-                        return "Please upload an audio file that is shorter than 6 minutes.", None
- audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32)
- if len(audio.shape) > 1:
- audio = librosa.to_mono(audio.transpose(1, 0))
- if sampling_rate != 16000:
- audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)
- elif vc_audio_mode == "TTS Audio":
- if len(tts_text) > 600 and limitation:
- return "Text is too long", None
- if tts_text is None or tts_voice is None:
- return "You need to enter text and select a voice", None
- asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3"))
- audio, sr = librosa.load("tts.mp3", sr=16000, mono=True)
- vc_input = "tts.mp3"
- times = [0, 0, 0]
- f0_up_key = int(f0_up_key)
- audio_opt = vc.pipeline(
- hubert_model,
- net_g,
- 0,
- audio,
- vc_input,
- times,
- f0_up_key,
- f0_method,
- file_index,
- # file_big_npy,
- index_rate,
- if_f0,
- filter_radius,
- tgt_sr,
- resample_sr,
- rms_mix_rate,
- version,
- protect,
- f0_file=None,
- )
- info = f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s"
- print(f"{model_title} | {info}")
- return info, (tgt_sr, audio_opt)
-        except Exception:
- info = traceback.format_exc()
- print(info)
- return info, (None, None)
- return vc_fn
-
-def load_model():
- categories = []
- with open("weights/folder_info.json", "r", encoding="utf-8") as f:
- folder_info = json.load(f)
- for category_name, category_info in folder_info.items():
- if not category_info['enable']:
- continue
- category_title = category_info['title']
- category_folder = category_info['folder_path']
- models = []
- with open(f"weights/{category_folder}/model_info.json", "r", encoding="utf-8") as f:
- models_info = json.load(f)
- for character_name, info in models_info.items():
- if not info['enable']:
- continue
- model_title = info['title']
- model_name = info['model_path']
- model_author = info.get("author", None)
- model_cover = f"weights/{category_folder}/{character_name}/{info['cover']}"
- model_index = f"weights/{category_folder}/{character_name}/{info['feature_retrieval_library']}"
- cpt = torch.load(f"weights/{category_folder}/{character_name}/{model_name}", map_location="cpu")
- tgt_sr = cpt["config"][-1]
- cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
- if_f0 = cpt.get("f0", 1)
- version = cpt.get("version", "v1")
- if version == "v1":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=config.is_half)
- else:
- net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- model_version = "V1"
- elif version == "v2":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=config.is_half)
- else:
- net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
- model_version = "V2"
- del net_g.enc_q
- print(net_g.load_state_dict(cpt["weight"], strict=False))
- net_g.eval().to(config.device)
- if config.is_half:
- net_g = net_g.half()
- else:
- net_g = net_g.float()
- vc = VC(tgt_sr, config)
- print(f"Model loaded: {character_name} / {info['feature_retrieval_library']} | ({model_version})")
- models.append((character_name, model_title, model_author, model_cover, model_version, create_vc_fn(model_title, tgt_sr, net_g, vc, if_f0, version, model_index)))
- categories.append([category_title, category_folder, models])
- return categories
-
-def cut_vocal_and_inst(url, audio_provider, split_model):
- if url != "":
- if not os.path.exists("dl_audio"):
- os.mkdir("dl_audio")
- if audio_provider == "Youtube":
- ydl_opts = {
- 'format': 'bestaudio/best',
- 'postprocessors': [{
- 'key': 'FFmpegExtractAudio',
- 'preferredcodec': 'wav',
- }],
- "outtmpl": 'dl_audio/youtube_audio',
- }
- with yt_dlp.YoutubeDL(ydl_opts) as ydl:
- ydl.download([url])
- audio_path = "dl_audio/youtube_audio.wav"
- else:
- # Spotify doesnt work.
- # Need to find other solution soon.
- '''
- command = f"spotdl download {url} --output dl_audio/.wav"
- result = subprocess.run(command.split(), stdout=subprocess.PIPE)
- print(result.stdout.decode())
- audio_path = "dl_audio/spotify_audio.wav"
- '''
- if split_model == "htdemucs":
- command = f"demucs --two-stems=vocals {audio_path} -o output"
- result = subprocess.run(command.split(), stdout=subprocess.PIPE)
- print(result.stdout.decode())
- return "output/htdemucs/youtube_audio/vocals.wav", "output/htdemucs/youtube_audio/no_vocals.wav", audio_path, "output/htdemucs/youtube_audio/vocals.wav"
- else:
- command = f"demucs --two-stems=vocals -n mdx_extra_q {audio_path} -o output"
- result = subprocess.run(command.split(), stdout=subprocess.PIPE)
- print(result.stdout.decode())
- return "output/mdx_extra_q/youtube_audio/vocals.wav", "output/mdx_extra_q/youtube_audio/no_vocals.wav", audio_path, "output/mdx_extra_q/youtube_audio/vocals.wav"
- else:
- raise gr.Error("URL Required!")
- return None, None, None, None
-
-def combine_vocal_and_inst(audio_data, audio_volume, split_model):
- if not os.path.exists("output/result"):
- os.mkdir("output/result")
- vocal_path = "output/result/output.wav"
- output_path = "output/result/combine.mp3"
- if split_model == "htdemucs":
- inst_path = "output/htdemucs/youtube_audio/no_vocals.wav"
- else:
- inst_path = "output/mdx_extra_q/youtube_audio/no_vocals.wav"
- with wave.open(vocal_path, "w") as wave_file:
- wave_file.setnchannels(1)
- wave_file.setsampwidth(2)
- wave_file.setframerate(audio_data[0])
- wave_file.writeframes(audio_data[1].tobytes())
- command = f'ffmpeg -y -i {inst_path} -i {vocal_path} -filter_complex [1:a]volume={audio_volume}dB[v];[0:a][v]amix=inputs=2:duration=longest -b:a 320k -c:a libmp3lame {output_path}'
- result = subprocess.run(command.split(), stdout=subprocess.PIPE)
- print(result.stdout.decode())
- return output_path
-
-def load_hubert():
- global hubert_model
- models, _, _ = checkpoint_utils.load_model_ensemble_and_task(
- ["hubert_base.pt"],
- suffix="",
- )
- hubert_model = models[0]
- hubert_model = hubert_model.to(config.device)
- if config.is_half:
- hubert_model = hubert_model.half()
- else:
- hubert_model = hubert_model.float()
- hubert_model.eval()
-
-def change_audio_mode(vc_audio_mode):
- if vc_audio_mode == "Input path":
- return (
- # Input & Upload
- gr.Textbox.update(visible=True),
- gr.Audio.update(visible=False),
- # Youtube
- gr.Dropdown.update(visible=False),
- gr.Textbox.update(visible=False),
- gr.Dropdown.update(visible=False),
- gr.Button.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Slider.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Button.update(visible=False),
- # TTS
- gr.Textbox.update(visible=False),
- gr.Dropdown.update(visible=False)
- )
- elif vc_audio_mode == "Upload audio":
- return (
- # Input & Upload
- gr.Textbox.update(visible=False),
- gr.Audio.update(visible=True),
- # Youtube
- gr.Dropdown.update(visible=False),
- gr.Textbox.update(visible=False),
- gr.Dropdown.update(visible=False),
- gr.Button.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Slider.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Button.update(visible=False),
- # TTS
- gr.Textbox.update(visible=False),
- gr.Dropdown.update(visible=False)
- )
- elif vc_audio_mode == "Youtube":
- return (
- # Input & Upload
- gr.Textbox.update(visible=False),
- gr.Audio.update(visible=False),
- # Youtube
- gr.Dropdown.update(visible=True),
- gr.Textbox.update(visible=True),
- gr.Dropdown.update(visible=True),
- gr.Button.update(visible=True),
- gr.Audio.update(visible=True),
- gr.Audio.update(visible=True),
- gr.Audio.update(visible=True),
- gr.Slider.update(visible=True),
- gr.Audio.update(visible=True),
- gr.Button.update(visible=True),
- # TTS
- gr.Textbox.update(visible=False),
- gr.Dropdown.update(visible=False)
- )
- elif vc_audio_mode == "TTS Audio":
- return (
- # Input & Upload
- gr.Textbox.update(visible=False),
- gr.Audio.update(visible=False),
- # Youtube
- gr.Dropdown.update(visible=False),
- gr.Textbox.update(visible=False),
- gr.Dropdown.update(visible=False),
- gr.Button.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Slider.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Button.update(visible=False),
- # TTS
- gr.Textbox.update(visible=True),
- gr.Dropdown.update(visible=True)
- )
- else:
- return (
- # Input & Upload
- gr.Textbox.update(visible=False),
- gr.Audio.update(visible=True),
- # Youtube
- gr.Dropdown.update(visible=False),
- gr.Textbox.update(visible=False),
- gr.Dropdown.update(visible=False),
- gr.Button.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Slider.update(visible=False),
- gr.Audio.update(visible=False),
- gr.Button.update(visible=False),
- # TTS
- gr.Textbox.update(visible=False),
- gr.Dropdown.update(visible=False)
- )
-
-if __name__ == '__main__':
- load_hubert()
- categories = load_model()
- tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices())
- voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list]
- with gr.Blocks(theme=gr.themes.Base()) as app:
-        gr.Markdown(
-            "# Genshin Impact RVC Models\n"
-            "### The input audio should be clean and pure voice without background music.\n"
-            "[Open in Colab](https://colab.research.google.com/drive/1v4sSLQKY4zVLFzSVX8SrD0yoL_S75Bde?usp=share_link)\n\n"
-            "[Support the author on Ko-fi](https://ko-fi.com/karmkarma)\n\n"
-        )
- for (folder_title, folder, models) in categories:
- with gr.TabItem(folder_title):
- with gr.Tabs():
- if not models:
-                        gr.Markdown("# No Model Loaded.")
-                        gr.Markdown("## Please add model or fix your model path.")
- continue
- for (name, title, author, cover, model_version, vc_fn) in models:
- with gr.TabItem(name):
- with gr.Row():
-                                gr.Markdown(
-                                    f'{title}\n'+
-                                    f'RVC {model_version} Model\n'+
-                                    (f'Model author: {author}' if author else "")+
-                                    (f'' if cover else "")
-                                )
- with gr.Row():
- with gr.Column():
- vc_audio_mode = gr.Dropdown(label="Input voice", choices=audio_mode, allow_custom_value=False, value="Upload audio")
- # Input and Upload
- vc_input = gr.Textbox(label="Input audio path", visible=False)
- vc_upload = gr.Audio(label="Upload audio file", visible=True, interactive=True)
- # Youtube
- vc_download_audio = gr.Dropdown(label="Provider", choices=["Youtube"], allow_custom_value=False, visible=False, value="Youtube", info="Select provider (Default: Youtube)")
- vc_link = gr.Textbox(label="Youtube URL", visible=False, info="Example: https://www.youtube.com/watch?v=Nc0sB1Bmf-A", placeholder="https://www.youtube.com/watch?v=...")
- vc_split_model = gr.Dropdown(label="Splitter Model", choices=["htdemucs", "mdx_extra_q"], allow_custom_value=False, visible=False, value="htdemucs", info="Select the splitter model (Default: htdemucs)")
- vc_split = gr.Button("Split Audio", variant="primary", visible=False)
- vc_vocal_preview = gr.Audio(label="Vocal Preview", visible=False)
- vc_inst_preview = gr.Audio(label="Instrumental Preview", visible=False)
- vc_audio_preview = gr.Audio(label="Audio Preview", visible=False)
- # TTS
- tts_text = gr.Textbox(visible=False, label="TTS text", info="Text to speech input")
- tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female")
- with gr.Column():
- vc_transform0 = gr.Number(label="Transpose", value=0, info='Type "12" to change from male to female voice. Type "-12" to change female to male voice')
- f0method0 = gr.Radio(
- label="Pitch extraction algorithm",
- info=f0method_info,
- choices=f0method_mode,
- value="pm",
- interactive=True
- )
- index_rate1 = gr.Slider(
- minimum=0,
- maximum=1,
- label="Retrieval feature ratio",
-                                        info="Controls accent strength; too high a value can sound robotic (Default: 0.4)",
- value=0.4,
- interactive=True,
- )
- filter_radius0 = gr.Slider(
- minimum=0,
- maximum=7,
- label="Apply Median Filtering",
- info="The value represents the filter radius and can reduce breathiness.",
- value=1,
- step=1,
- interactive=True,
- )
- resample_sr0 = gr.Slider(
- minimum=0,
- maximum=48000,
- label="Resample the output audio",
- info="Resample the output audio in post-processing to the final sample rate. Set to 0 for no resampling",
- value=0,
- step=1,
- interactive=True,
- )
- rms_mix_rate0 = gr.Slider(
- minimum=0,
- maximum=1,
- label="Volume Envelope",
- info="Use the volume envelope of the input to replace or mix with the volume envelope of the output. The closer the ratio is to 1, the more the output envelope is used",
- value=1,
- interactive=True,
- )
- protect0 = gr.Slider(
- minimum=0,
- maximum=0.5,
- label="Voice Protection",
- info="Protect voiceless consonants and breath sounds to prevent artifacts such as tearing in electronic music. Set to 0.5 to disable. Decrease the value to increase protection, but it may reduce indexing accuracy",
- value=0.23,
- step=0.01,
- interactive=True,
- )
- with gr.Column():
- vc_log = gr.Textbox(label="Output Information", interactive=False)
- vc_output = gr.Audio(label="Output Audio", interactive=False)
- vc_convert = gr.Button("Convert", variant="primary")
- vc_volume = gr.Slider(
- minimum=0,
- maximum=10,
- label="Vocal volume",
- value=4,
- interactive=True,
- step=1,
-                                        info="Adjust vocal volume (Default: 4)",
- visible=False
- )
- vc_combined_output = gr.Audio(label="Output Combined Audio", visible=False)
- vc_combine = gr.Button("Combine",variant="primary", visible=False)
- vc_convert.click(
- fn=vc_fn,
- inputs=[
- vc_audio_mode,
- vc_input,
- vc_upload,
- tts_text,
- tts_voice,
- vc_transform0,
- f0method0,
- index_rate1,
- filter_radius0,
- resample_sr0,
- rms_mix_rate0,
- protect0,
- ],
- outputs=[vc_log ,vc_output]
- )
- vc_split.click(
- fn=cut_vocal_and_inst,
- inputs=[vc_link, vc_download_audio, vc_split_model],
- outputs=[vc_vocal_preview, vc_inst_preview, vc_audio_preview, vc_input]
- )
- vc_combine.click(
- fn=combine_vocal_and_inst,
- inputs=[vc_output, vc_volume, vc_split_model],
- outputs=[vc_combined_output]
- )
- vc_audio_mode.change(
- fn=change_audio_mode,
- inputs=[vc_audio_mode],
- outputs=[
- vc_input,
- vc_upload,
- vc_download_audio,
- vc_link,
- vc_split_model,
- vc_split,
- vc_vocal_preview,
- vc_inst_preview,
- vc_audio_preview,
- vc_volume,
- vc_combined_output,
- vc_combine,
- tts_text,
- tts_voice
- ]
- )
-if limitation is True:
- app.queue(concurrency_count=1, max_size=20, api_open=config.api).launch(share=config.colab)
-else:
- app.queue(concurrency_count=1, max_size=20, api_open=config.api).launch(share=True)
\ No newline at end of file
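Outside of the Gradio plumbing, the audio handling in `cut_vocal_and_inst` and `combine_vocal_and_inst` above reduces to two shell commands. A condensed sketch of just that pipeline; the paths are illustrative, `demucs` and `ffmpeg` must be installed, and the 4 dB boost mirrors the app's default vocal volume.

```python
import os
import subprocess

audio_path = "dl_audio/youtube_audio.wav"   # illustrative input file

# 1. Two-stem separation with the htdemucs model: vocals vs. everything else.
subprocess.run(f"demucs --two-stems=vocals {audio_path} -o output".split(), check=True)

vocals = "output/htdemucs/youtube_audio/vocals.wav"
inst = "output/htdemucs/youtube_audio/no_vocals.wav"

# 2. Mix the (converted) vocal back over the instrumental with a 4 dB vocal boost.
os.makedirs("output/result", exist_ok=True)
cmd = (
    f"ffmpeg -y -i {inst} -i {vocals} -filter_complex "
    "[1:a]volume=4dB[v];[0:a][v]amix=inputs=2:duration=longest "
    "-b:a 320k -c:a libmp3lame output/result/combine.mp3"
)
subprocess.run(cmd.split(), check=True)
```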
diff --git a/spaces/Kayson/InstructDiffusion/utils/logger.py b/spaces/Kayson/InstructDiffusion/utils/logger.py
deleted file mode 100644
index a066e55badd6651d432eab8cab5fbf976ecc1c7e..0000000000000000000000000000000000000000
--- a/spaces/Kayson/InstructDiffusion/utils/logger.py
+++ /dev/null
@@ -1,41 +0,0 @@
-# --------------------------------------------------------
-# Swin Transformer
-# Copyright (c) 2021 Microsoft
-# Licensed under The MIT License [see LICENSE for details]
-# Written by Ze Liu
-# --------------------------------------------------------
-
-import os
-import sys
-import logging
-import functools
-from termcolor import colored
-
-
-@functools.lru_cache()
-def create_logger(output_dir, dist_rank=0, name=''):
- # create logger
- logger = logging.getLogger(name)
- logger.setLevel(logging.DEBUG)
- logger.propagate = False
-
- # create formatter
- fmt = '[%(asctime)s %(name)s] (%(filename)s %(lineno)d): %(levelname)s %(message)s'
- color_fmt = colored('[%(asctime)s %(name)s]', 'green') + \
- colored('(%(filename)s %(lineno)d)', 'yellow') + ': %(levelname)s %(message)s'
-
- # create console handlers for master process
- if dist_rank == 0:
- console_handler = logging.StreamHandler(sys.stdout)
- console_handler.setLevel(logging.DEBUG)
- console_handler.setFormatter(
- logging.Formatter(fmt=color_fmt, datefmt='%Y-%m-%d %H:%M:%S'))
- logger.addHandler(console_handler)
-
- # create file handlers
- file_handler = logging.FileHandler(os.path.join(output_dir, f'log_rank{dist_rank}.txt'), mode='a')
- file_handler.setLevel(logging.DEBUG)
- file_handler.setFormatter(logging.Formatter(fmt=fmt, datefmt='%Y-%m-%d %H:%M:%S'))
- logger.addHandler(file_handler)
-
- return logger
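A short usage sketch for `create_logger` above: only rank 0 echoes to stdout, and every rank appends to its own `log_rank{N}.txt` in the output directory. The directory and logger name below are illustrative.

```python
import os

output_dir = "./logs"   # illustrative path; create_logger expects it to exist
os.makedirs(output_dir, exist_ok=True)

logger = create_logger(output_dir, dist_rank=0, name="instructdiffusion")
logger.info("training started")   # printed to stdout and appended to ./logs/log_rank0.txt
```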
diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/layers/transformer/dino_layers.py b/spaces/KyanChen/RSPrompter/mmdet/models/layers/transformer/dino_layers.py
deleted file mode 100644
index f462f86b1447c6973ba3c8460629ba58cc9d7a25..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/models/layers/transformer/dino_layers.py
+++ /dev/null
@@ -1,552 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import warnings
-from typing import Tuple, Union
-
-import torch
-from mmengine.model import BaseModule
-from torch import Tensor, nn
-
-from mmdet.structures import SampleList
-from mmdet.structures.bbox import bbox_xyxy_to_cxcywh
-from mmdet.utils import OptConfigType
-from .deformable_detr_layers import DeformableDetrTransformerDecoder
-from .utils import MLP, coordinate_to_encoding, inverse_sigmoid
-
-
-class DinoTransformerDecoder(DeformableDetrTransformerDecoder):
- """Transformer encoder of DINO."""
-
- def _init_layers(self) -> None:
- """Initialize decoder layers."""
- super()._init_layers()
- self.ref_point_head = MLP(self.embed_dims * 2, self.embed_dims,
- self.embed_dims, 2)
- self.norm = nn.LayerNorm(self.embed_dims)
-
- def forward(self, query: Tensor, value: Tensor, key_padding_mask: Tensor,
- self_attn_mask: Tensor, reference_points: Tensor,
- spatial_shapes: Tensor, level_start_index: Tensor,
- valid_ratios: Tensor, reg_branches: nn.ModuleList,
- **kwargs) -> Tensor:
- """Forward function of Transformer encoder.
-
- Args:
- query (Tensor): The input query, has shape (num_queries, bs, dim).
- value (Tensor): The input values, has shape (num_value, bs, dim).
- key_padding_mask (Tensor): The `key_padding_mask` of `self_attn`
- input. ByteTensor, has shape (num_queries, bs).
- self_attn_mask (Tensor): The attention mask to prevent information
- leakage from different denoising groups and matching parts, has
- shape (num_queries_total, num_queries_total). It is `None` when
- `self.training` is `False`.
- reference_points (Tensor): The initial reference, has shape
- (bs, num_queries, 4) with the last dimension arranged as
- (cx, cy, w, h).
- spatial_shapes (Tensor): Spatial shapes of features in all levels,
- has shape (num_levels, 2), last dimension represents (h, w).
- level_start_index (Tensor): The start index of each level.
- A tensor has shape (num_levels, ) and can be represented
- as [0, h_0*w_0, h_0*w_0+h_1*w_1, ...].
- valid_ratios (Tensor): The ratios of the valid width and the valid
- height relative to the width and the height of features in all
- levels, has shape (bs, num_levels, 2).
- reg_branches: (obj:`nn.ModuleList`): Used for refining the
- regression results.
-
- Returns:
-            Tensor: Output queries of the Transformer decoder, has shape
-                (num_queries, bs, dim). If `return_intermediate` is `True`,
-                the stacked intermediate queries and reference points of all
-                decoder layers are returned instead.
- """
- intermediate = []
- intermediate_reference_points = [reference_points]
- for lid, layer in enumerate(self.layers):
- if reference_points.shape[-1] == 4:
- reference_points_input = \
- reference_points[:, :, None] * torch.cat(
- [valid_ratios, valid_ratios], -1)[:, None]
- else:
- assert reference_points.shape[-1] == 2
- reference_points_input = \
- reference_points[:, :, None] * valid_ratios[:, None]
-
- query_sine_embed = coordinate_to_encoding(
- reference_points_input[:, :, 0, :])
- query_pos = self.ref_point_head(query_sine_embed)
-
- query = layer(
- query,
- query_pos=query_pos,
- value=value,
- key_padding_mask=key_padding_mask,
- self_attn_mask=self_attn_mask,
- spatial_shapes=spatial_shapes,
- level_start_index=level_start_index,
- valid_ratios=valid_ratios,
- reference_points=reference_points_input,
- **kwargs)
-
- if reg_branches is not None:
- tmp = reg_branches[lid](query)
- assert reference_points.shape[-1] == 4
- new_reference_points = tmp + inverse_sigmoid(
- reference_points, eps=1e-3)
- new_reference_points = new_reference_points.sigmoid()
- reference_points = new_reference_points.detach()
-
- if self.return_intermediate:
- intermediate.append(self.norm(query))
- intermediate_reference_points.append(new_reference_points)
- # NOTE this is for the "Look Forward Twice" module,
- # in the DeformDETR, reference_points was appended.
-
- if self.return_intermediate:
- return torch.stack(intermediate), torch.stack(
- intermediate_reference_points)
-
- return query, reference_points
-
-
-class CdnQueryGenerator(BaseModule):
- """Implement query generator of the Contrastive denoising (CDN) proposed in
- `DINO: DETR with Improved DeNoising Anchor Boxes for End-to-End Object
- Detection `_
-
- Code is modified from the `official github repo
- `_.
-
- Args:
- num_classes (int): Number of object classes.
- embed_dims (int): The embedding dimensions of the generated queries.
- num_matching_queries (int): The queries number of the matching part.
- Used for generating dn_mask.
- label_noise_scale (float): The scale of label noise, defaults to 0.5.
- box_noise_scale (float): The scale of box noise, defaults to 1.0.
- group_cfg (:obj:`ConfigDict` or dict, optional): The config of the
- denoising queries grouping, includes `dynamic`, `num_dn_queries`,
- and `num_groups`. Two grouping strategies, 'static dn groups' and
- 'dynamic dn groups', are supported. When `dynamic` is `False`,
- the `num_groups` should be set, and the number of denoising query
- groups will always be `num_groups`. When `dynamic` is `True`, the
- `num_dn_queries` should be set, and the group number will be
- dynamic to ensure that the denoising queries number will not exceed
- `num_dn_queries` to prevent large fluctuations of memory. Defaults
- to `None`.
- """
-
- def __init__(self,
- num_classes: int,
- embed_dims: int,
- num_matching_queries: int,
- label_noise_scale: float = 0.5,
- box_noise_scale: float = 1.0,
- group_cfg: OptConfigType = None) -> None:
- super().__init__()
- self.num_classes = num_classes
- self.embed_dims = embed_dims
- self.num_matching_queries = num_matching_queries
- self.label_noise_scale = label_noise_scale
- self.box_noise_scale = box_noise_scale
-
- # prepare grouping strategy
- group_cfg = {} if group_cfg is None else group_cfg
- self.dynamic_dn_groups = group_cfg.get('dynamic', True)
- if self.dynamic_dn_groups:
- if 'num_dn_queries' not in group_cfg:
- warnings.warn("'num_dn_queries' should be set when using "
- 'dynamic dn groups, use 100 as default.')
- self.num_dn_queries = group_cfg.get('num_dn_queries', 100)
- assert isinstance(self.num_dn_queries, int), \
- f'Expected the num_dn_queries to have type int, but got ' \
- f'{self.num_dn_queries}({type(self.num_dn_queries)}). '
- else:
- assert 'num_groups' in group_cfg, \
- 'num_groups should be set when using static dn groups'
- self.num_groups = group_cfg['num_groups']
- assert isinstance(self.num_groups, int), \
- f'Expected the num_groups to have type int, but got ' \
- f'{self.num_groups}({type(self.num_groups)}). '
-
-        # NOTE The original DINO repo sets num_embeddings to 92 for COCO:
-        # 91 of them (0~90) represent the target classes and the 92nd
-        # (index 91) indicates the `Unknown` class. However, the embedding of
-        # the `Unknown` class is not used in the original DINO.
- # TODO: num_classes + 1 or num_classes ?
- self.label_embedding = nn.Embedding(self.num_classes, self.embed_dims)
-
- def __call__(self, batch_data_samples: SampleList) -> tuple:
- """Generate contrastive denoising (cdn) queries with ground truth.
-
- Descriptions of the Number Values in code and comments:
- - num_target_total: the total target number of the input batch
- samples.
- - max_num_target: the max target number of the input batch samples.
- - num_noisy_targets: the total targets number after adding noise,
- i.e., num_target_total * num_groups * 2.
- - num_denoising_queries: the length of the output batched queries,
- i.e., max_num_target * num_groups * 2.
-
- NOTE The format of input bboxes in batch_data_samples is unnormalized
- (x, y, x, y), and the output bbox queries are embedded by normalized
- (cx, cy, w, h) format bboxes going through inverse_sigmoid.
-
- Args:
- batch_data_samples (list[:obj:`DetDataSample`]): List of the batch
- data samples, each includes `gt_instance` which has attributes
- `bboxes` and `labels`. The `bboxes` has unnormalized coordinate
- format (x, y, x, y).
-
- Returns:
- tuple: The outputs of the dn query generator.
-
- - dn_label_query (Tensor): The output content queries for denoising
- part, has shape (bs, num_denoising_queries, dim), where
- `num_denoising_queries = max_num_target * num_groups * 2`.
- - dn_bbox_query (Tensor): The output reference bboxes as positions
- of queries for denoising part, which are embedded by normalized
- (cx, cy, w, h) format bboxes going through inverse_sigmoid, has
- shape (bs, num_denoising_queries, 4) with the last dimension
- arranged as (cx, cy, w, h).
- - attn_mask (Tensor): The attention mask to prevent information
- leakage from different denoising groups and matching parts,
- will be used as `self_attn_mask` of the `decoder`, has shape
- (num_queries_total, num_queries_total), where `num_queries_total`
- is the sum of `num_denoising_queries` and `num_matching_queries`.
- - dn_meta (Dict[str, int]): The dictionary saves information about
- group collation, including 'num_denoising_queries' and
- 'num_denoising_groups'. It will be used for split outputs of
- denoising and matching parts and loss calculation.
- """
- # normalize bbox and collate ground truth (gt)
- gt_labels_list = []
- gt_bboxes_list = []
- for sample in batch_data_samples:
- img_h, img_w = sample.img_shape
- bboxes = sample.gt_instances.bboxes
- factor = bboxes.new_tensor([img_w, img_h, img_w,
- img_h]).unsqueeze(0)
- bboxes_normalized = bboxes / factor
- gt_bboxes_list.append(bboxes_normalized)
- gt_labels_list.append(sample.gt_instances.labels)
-        gt_labels = torch.cat(gt_labels_list)  # (num_target_total, )
- gt_bboxes = torch.cat(gt_bboxes_list)
-
- num_target_list = [len(bboxes) for bboxes in gt_bboxes_list]
- max_num_target = max(num_target_list)
- num_groups = self.get_num_groups(max_num_target)
-
- dn_label_query = self.generate_dn_label_query(gt_labels, num_groups)
- dn_bbox_query = self.generate_dn_bbox_query(gt_bboxes, num_groups)
-
- # The `batch_idx` saves the batch index of the corresponding sample
- # for each target, has shape (num_target_total).
- batch_idx = torch.cat([
- torch.full_like(t.long(), i) for i, t in enumerate(gt_labels_list)
- ])
- dn_label_query, dn_bbox_query = self.collate_dn_queries(
- dn_label_query, dn_bbox_query, batch_idx, len(batch_data_samples),
- num_groups)
-
- attn_mask = self.generate_dn_mask(
- max_num_target, num_groups, device=dn_label_query.device)
-
- dn_meta = dict(
- num_denoising_queries=int(max_num_target * 2 * num_groups),
- num_denoising_groups=num_groups)
-
- return dn_label_query, dn_bbox_query, attn_mask, dn_meta
-
- def get_num_groups(self, max_num_target: int = None) -> int:
- """Calculate denoising query groups number.
-
- Two grouping strategies, 'static dn groups' and 'dynamic dn groups',
- are supported. When `self.dynamic_dn_groups` is `False`, the number
- of denoising query groups will always be `self.num_groups`. When
- `self.dynamic_dn_groups` is `True`, the group number will be dynamic,
- ensuring the denoising queries number will not exceed
- `self.num_dn_queries` to prevent large fluctuations of memory.
-
-        NOTE The `num_groups` is shared across the samples in a batch. When
-        the target numbers in the samples vary, the denoising queries of the
-        samples containing fewer targets are padded to the max length.
-
- Args:
- max_num_target (int, optional): The max target number of the batch
- samples. It will only be used when `self.dynamic_dn_groups` is
- `True`. Defaults to `None`.
-
- Returns:
- int: The denoising group number of the current batch.
- """
- if self.dynamic_dn_groups:
- assert max_num_target is not None, \
- 'group_queries should be provided when using ' \
- 'dynamic dn groups'
- if max_num_target == 0:
- num_groups = 1
- else:
- num_groups = self.num_dn_queries // max_num_target
- else:
- num_groups = self.num_groups
- if num_groups < 1:
- num_groups = 1
- return int(num_groups)
-
- def generate_dn_label_query(self, gt_labels: Tensor,
- num_groups: int) -> Tensor:
- """Generate noisy labels and their query embeddings.
-
- The strategy for generating noisy labels is: Randomly choose labels of
- `self.label_noise_scale * 0.5` proportion and override each of them
- with a random object category label.
-
-        NOTE Noise is not added to all labels. Besides, the `self.label_noise_scale
-        * 0.5` arg is the ratio of chosen positions, which is higher than the
-        actual proportion of noisy labels, because a randomly drawn label may
-        equal the original one. The gap becomes larger as the number of target
-        categories decreases, so users should adjust the scale arg or the
-        corresponding logic for their specific dataset.
-
- Args:
- gt_labels (Tensor): The concatenated gt labels of all samples
- in the batch, has shape (num_target_total, ) where
- `num_target_total = sum(num_target_list)`.
- num_groups (int): The number of denoising query groups.
-
- Returns:
- Tensor: The query embeddings of noisy labels, has shape
- (num_noisy_targets, embed_dims), where `num_noisy_targets =
- num_target_total * num_groups * 2`.
- """
- assert self.label_noise_scale > 0
- gt_labels_expand = gt_labels.repeat(2 * num_groups,
- 1).view(-1) # Note `* 2` # noqa
- p = torch.rand_like(gt_labels_expand.float())
- chosen_indice = torch.nonzero(p < (self.label_noise_scale * 0.5)).view(
- -1) # Note `* 0.5`
- new_labels = torch.randint_like(chosen_indice, 0, self.num_classes)
- noisy_labels_expand = gt_labels_expand.scatter(0, chosen_indice,
- new_labels)
- dn_label_query = self.label_embedding(noisy_labels_expand)
- return dn_label_query
-
- def generate_dn_bbox_query(self, gt_bboxes: Tensor,
- num_groups: int) -> Tensor:
- """Generate noisy bboxes and their query embeddings.
-
- The strategy for generating noisy bboxes is as follow:
-
- .. code:: text
-
- +--------------------+
- | negative |
- | +----------+ |
- | | positive | |
- | | +-----|----+------------+
- | | | | | |
- | +----+-----+ | |
- | | | |
- +---------+----------+ |
- | |
- | gt bbox |
- | |
- | +---------+----------+
- | | | |
- | | +----+-----+ |
- | | | | | |
- +-------------|--- +----+ | |
- | | positive | |
- | +----------+ |
- | negative |
- +--------------------+
-
- The random noise is added to the top-left and down-right point
- positions, hence, normalized (x, y, x, y) format of bboxes are
- required. The noisy bboxes of positive queries have the points
- both within the inner square, while those of negative queries
- have the points both between the inner and outer squares.
-
-        Besides, the side length of the outer square is twice that of the
-        inner square, i.e., self.box_noise_scale * w_or_h / 2.
-        NOTE The noise is added to every bbox. Moreover, there is still an
-        unconsidered case where one point lies within the inner (positive)
-        square while the other lies between the inner and outer squares.
-
- Args:
- gt_bboxes (Tensor): The concatenated gt bboxes of all samples
- in the batch, has shape (num_target_total, 4) with the last
- dimension arranged as (cx, cy, w, h) where
- `num_target_total = sum(num_target_list)`.
- num_groups (int): The number of denoising query groups.
-
- Returns:
- Tensor: The output noisy bboxes, which are embedded by normalized
- (cx, cy, w, h) format bboxes going through inverse_sigmoid, has
- shape (num_noisy_targets, 4) with the last dimension arranged as
- (cx, cy, w, h), where
- `num_noisy_targets = num_target_total * num_groups * 2`.
- """
- assert self.box_noise_scale > 0
- device = gt_bboxes.device
-
- # expand gt_bboxes as groups
- gt_bboxes_expand = gt_bboxes.repeat(2 * num_groups, 1) # xyxy
-
- # obtain index of negative queries in gt_bboxes_expand
- positive_idx = torch.arange(
- len(gt_bboxes), dtype=torch.long, device=device)
- positive_idx = positive_idx.unsqueeze(0).repeat(num_groups, 1)
- positive_idx += 2 * len(gt_bboxes) * torch.arange(
- num_groups, dtype=torch.long, device=device)[:, None]
- positive_idx = positive_idx.flatten()
- negative_idx = positive_idx + len(gt_bboxes)
-
- # determine the sign of each element in the random part of the added
- # noise to be positive or negative randomly.
- rand_sign = torch.randint_like(
- gt_bboxes_expand, low=0, high=2,
- dtype=torch.float32) * 2.0 - 1.0 # [low, high), 1 or -1, randomly
-
- # calculate the random part of the added noise
- rand_part = torch.rand_like(gt_bboxes_expand) # [0, 1)
- rand_part[negative_idx] += 1.0 # pos: [0, 1); neg: [1, 2)
- rand_part *= rand_sign # pos: (-1, 1); neg: (-2, -1] U [1, 2)
-
- # add noise to the bboxes
- bboxes_whwh = bbox_xyxy_to_cxcywh(gt_bboxes_expand)[:, 2:].repeat(1, 2)
- noisy_bboxes_expand = gt_bboxes_expand + torch.mul(
- rand_part, bboxes_whwh) * self.box_noise_scale / 2 # xyxy
- noisy_bboxes_expand = noisy_bboxes_expand.clamp(min=0.0, max=1.0)
- noisy_bboxes_expand = bbox_xyxy_to_cxcywh(noisy_bboxes_expand)
-
- dn_bbox_query = inverse_sigmoid(noisy_bboxes_expand, eps=1e-3)
- return dn_bbox_query
-
- def collate_dn_queries(self, input_label_query: Tensor,
- input_bbox_query: Tensor, batch_idx: Tensor,
- batch_size: int, num_groups: int) -> Tuple[Tensor]:
- """Collate generated queries to obtain batched dn queries.
-
- The strategy for query collation is as follow:
-
- .. code:: text
-
- input_queries (num_target_total, query_dim)
- P_A1 P_B1 P_B2 N_A1 N_B1 N_B2 P'A1 P'B1 P'B2 N'A1 N'B1 N'B2
- |________ group1 ________| |________ group2 ________|
- |
- V
- P_A1 Pad0 N_A1 Pad0 P'A1 Pad0 N'A1 Pad0
- P_B1 P_B2 N_B1 N_B2 P'B1 P'B2 N'B1 N'B2
- |____ group1 ____| |____ group2 ____|
- batched_queries (batch_size, max_num_target, query_dim)
-
- where query_dim is 4 for bbox and self.embed_dims for label.
- Notation: _-group 1; '-group 2;
- A-Sample1(has 1 target); B-sample2(has 2 targets)
-
- Args:
- input_label_query (Tensor): The generated label queries of all
- targets, has shape (num_target_total, embed_dims) where
- `num_target_total = sum(num_target_list)`.
- input_bbox_query (Tensor): The generated bbox queries of all
- targets, has shape (num_target_total, 4) with the last
- dimension arranged as (cx, cy, w, h).
- batch_idx (Tensor): The batch index of the corresponding sample
- for each target, has shape (num_target_total).
- batch_size (int): The size of the input batch.
- num_groups (int): The number of denoising query groups.
-
- Returns:
- tuple[Tensor]: Output batched label and bbox queries.
- - batched_label_query (Tensor): The output batched label queries,
- has shape (batch_size, max_num_target, embed_dims).
- - batched_bbox_query (Tensor): The output batched bbox queries,
- has shape (batch_size, max_num_target, 4) with the last dimension
- arranged as (cx, cy, w, h).
- """
- device = input_label_query.device
- num_target_list = [
- torch.sum(batch_idx == idx) for idx in range(batch_size)
- ]
- max_num_target = max(num_target_list)
- num_denoising_queries = int(max_num_target * 2 * num_groups)
-
- map_query_index = torch.cat([
- torch.arange(num_target, device=device)
- for num_target in num_target_list
- ])
- map_query_index = torch.cat([
- map_query_index + max_num_target * i for i in range(2 * num_groups)
- ]).long()
- batch_idx_expand = batch_idx.repeat(2 * num_groups, 1).view(-1)
- mapper = (batch_idx_expand, map_query_index)
-
- batched_label_query = torch.zeros(
- batch_size, num_denoising_queries, self.embed_dims, device=device)
- batched_bbox_query = torch.zeros(
- batch_size, num_denoising_queries, 4, device=device)
-
- batched_label_query[mapper] = input_label_query
- batched_bbox_query[mapper] = input_bbox_query
- return batched_label_query, batched_bbox_query
-
- def generate_dn_mask(self, max_num_target: int, num_groups: int,
- device: Union[torch.device, str]) -> Tensor:
- """Generate attention mask to prevent information leakage from
- different denoising groups and matching parts.
-
- .. code:: text
-
- 0 0 0 0 1 1 1 1 0 0 0 0 0
- 0 0 0 0 1 1 1 1 0 0 0 0 0
- 0 0 0 0 1 1 1 1 0 0 0 0 0
- 0 0 0 0 1 1 1 1 0 0 0 0 0
- 1 1 1 1 0 0 0 0 0 0 0 0 0
- 1 1 1 1 0 0 0 0 0 0 0 0 0
- 1 1 1 1 0 0 0 0 0 0 0 0 0
- 1 1 1 1 0 0 0 0 0 0 0 0 0
- 1 1 1 1 1 1 1 1 0 0 0 0 0
- 1 1 1 1 1 1 1 1 0 0 0 0 0
- 1 1 1 1 1 1 1 1 0 0 0 0 0
- 1 1 1 1 1 1 1 1 0 0 0 0 0
- 1 1 1 1 1 1 1 1 0 0 0 0 0
- max_num_target |_| |_________| num_matching_queries
- |_____________| num_denoising_queries
-
- 1 -> True (Masked), means 'can not see'.
- 0 -> False (UnMasked), means 'can see'.
-
- Args:
- max_num_target (int): The max target number of the input batch
- samples.
- num_groups (int): The number of denoising query groups.
- device (obj:`device` or str): The device of generated mask.
-
- Returns:
- Tensor: The attention mask to prevent information leakage from
- different denoising groups and matching parts, will be used as
- `self_attn_mask` of the `decoder`, has shape (num_queries_total,
- num_queries_total), where `num_queries_total` is the sum of
- `num_denoising_queries` and `num_matching_queries`.
- """
- num_denoising_queries = int(max_num_target * 2 * num_groups)
- num_queries_total = num_denoising_queries + self.num_matching_queries
- attn_mask = torch.zeros(
- num_queries_total,
- num_queries_total,
- device=device,
- dtype=torch.bool)
- # Make the matching part cannot see the denoising groups
- attn_mask[num_denoising_queries:, :num_denoising_queries] = True
- # Make the denoising groups cannot see each other
- for i in range(num_groups):
- # Mask rows of one group per step.
- row_scope = slice(max_num_target * 2 * i,
- max_num_target * 2 * (i + 1))
- left_scope = slice(max_num_target * 2 * i)
- right_scope = slice(max_num_target * 2 * (i + 1),
- num_denoising_queries)
- attn_mask[row_scope, right_scope] = True
- attn_mask[row_scope, left_scope] = True
- return attn_mask
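-
-
-# --- Illustrative sketch (not part of the original file) ---------------------
-# A small, self-contained reproduction of the masking pattern documented in
-# `generate_dn_mask` above, using plain torch. The sizes (max_num_target=2,
-# num_groups=2, num_matching_queries=3) are assumptions chosen only to keep
-# the printed mask readable.
-if __name__ == '__main__':
-    import torch
-
-    max_num_target, num_groups, num_matching_queries = 2, 2, 3
-    num_denoising_queries = max_num_target * 2 * num_groups
-    num_queries_total = num_denoising_queries + num_matching_queries
-    mask = torch.zeros(
-        num_queries_total, num_queries_total, dtype=torch.bool)
-    # the matching part cannot see the denoising part
-    mask[num_denoising_queries:, :num_denoising_queries] = True
-    # the denoising groups cannot see each other
-    for i in range(num_groups):
-        rows = slice(max_num_target * 2 * i, max_num_target * 2 * (i + 1))
-        mask[rows, :max_num_target * 2 * i] = True
-        mask[rows, max_num_target * 2 * (i + 1):num_denoising_queries] = True
-    print(mask.int())  # 1 = masked ('can not see'), as in the diagram above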
diff --git a/spaces/LanguageBind/LanguageBind/a_cls/dataloader.py b/spaces/LanguageBind/LanguageBind/a_cls/dataloader.py
deleted file mode 100644
index 235e90759d6ddd813358c95680b603c3f3df8cf2..0000000000000000000000000000000000000000
--- a/spaces/LanguageBind/LanguageBind/a_cls/dataloader.py
+++ /dev/null
@@ -1,90 +0,0 @@
-# -*- coding: utf-8 -*-
-# @Time : 6/19/21 12:23 AM
-# @Author : Yuan Gong
-# @Affiliation : Massachusetts Institute of Technology
-# @Email : yuangong@mit.edu
-# @File : dataloader.py
-
-# modified from:
-# Author: David Harwath
-# with some functions borrowed from https://github.com/SeanNaren/deepspeech.pytorch
-
-import csv
-import json
-import logging
-
-import torchaudio
-import numpy as np
-import torch
-import torch.nn.functional
-from torch.utils.data import Dataset
-import random
-
-def make_index_dict(label_csv):
- index_lookup = {}
- with open(label_csv, 'r') as f:
- csv_reader = csv.DictReader(f)
- line_count = 0
- for row in csv_reader:
- index_lookup[row['mid']] = row['index']
- line_count += 1
- return index_lookup
-
-def make_name_dict(label_csv):
- name_lookup = {}
- with open(label_csv, 'r') as f:
- csv_reader = csv.DictReader(f)
- line_count = 0
- for row in csv_reader:
- name_lookup[row['index']] = row['display_name']
- line_count += 1
- return name_lookup
-
-def lookup_list(index_list, label_csv):
- label_list = []
- table = make_name_dict(label_csv)
- for item in index_list:
- label_list.append(table[item])
- return label_list
-
-def preemphasis(signal,coeff=0.97):
- """perform preemphasis on the input signal.
-
- :param signal: The signal to filter.
- :param coeff: The preemphasis coefficient. 0 is none, default 0.97.
- :returns: the filtered signal.
- """
- return np.append(signal[0],signal[1:]-coeff*signal[:-1])
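-
-# Example (illustrative, not part of the original file):
-#   >>> preemphasis(np.array([1.0, 2.0, 3.0]), coeff=0.97)
-#   array([1.  , 1.03, 1.06])
-# The first sample is kept unchanged; each later sample has a scaled copy of
-# its predecessor subtracted, which flattens the spectral tilt of the audio.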
-
-class AudiosetDataset(Dataset):
- def __init__(self, dataset_json_file, audio_conf, label_csv=None):
- """
- Dataset that manages audio recordings
- :param audio_conf: Dictionary containing the audio loading and preprocessing settings
- :param dataset_json_file
- """
- self.datapath = dataset_json_file
- with open(dataset_json_file, 'r') as fp:
- data_json = json.load(fp)
- self.data = data_json['data']
- self.index_dict = make_index_dict(label_csv)
- self.label_num = len(self.index_dict)
-
- def __getitem__(self, index):
- datum = self.data[index]
- label_indices = np.zeros(self.label_num)
- try:
- fbank, mix_lambda = self._wav2fbank(datum['wav'])
- except Exception as e:
- logging.warning(f"Error at {datum['wav']} with \"{e}\"")
- return self.__getitem__(random.randint(0, self.__len__()-1))
- for label_str in datum['labels'].split(','):
- label_indices[int(self.index_dict[label_str])] = 1.0
-
- label_indices = torch.FloatTensor(label_indices)
-
-
- return fbank, label_indices
-
- def __len__(self):
- return len(self.data)
\ No newline at end of file
diff --git a/spaces/LanguageBind/LanguageBind/data/new_loadvat.py b/spaces/LanguageBind/LanguageBind/data/new_loadvat.py
deleted file mode 100644
index 5dc879f6b33b12760efdf685b46315d6825f8b86..0000000000000000000000000000000000000000
--- a/spaces/LanguageBind/LanguageBind/data/new_loadvat.py
+++ /dev/null
@@ -1,498 +0,0 @@
-import ast
-import io
-import json
-import logging
-import math
-import os
-import random
-import sys
-import braceexpand
-from dataclasses import dataclass
-from multiprocessing import Value
-
-import numpy.lib.format
-import numpy as np
-import pandas as pd
-import torch
-import torchvision.datasets as datasets
-import webdataset as wds
-from PIL import Image
-from torch.utils.data import Dataset, DataLoader, SubsetRandomSampler, IterableDataset, get_worker_info
-from torch.utils.data.distributed import DistributedSampler
-from torchvision.transforms import ToTensor
-from tqdm import tqdm
-from webdataset.filters import _shuffle
-from webdataset.tariterators import base_plus_ext, url_opener, tar_file_expander, valid_sample
-
-from open_clip import get_tokenizer
-from open_clip.factory import HF_HUB_PREFIX
-from training.params import parse_args
-from data.process_text import load_and_transform_text
-from data.process_video import get_video_transform
-from data.process_audio import get_audio_transform
-from data.process_depth import get_depth_transform
-from data.process_thermal import get_thermal_transform
-import pdb
-try:
- import horovod.torch as hvd
-except ImportError:
- hvd = None
-
-
-
-class SharedEpoch:
- def __init__(self, epoch: int = 0):
- self.shared_epoch = Value('i', epoch)
-
- def set_value(self, epoch):
- self.shared_epoch.value = epoch
-
- def get_value(self):
- return self.shared_epoch.value
-
-
-@dataclass
-class DataInfo:
- dataloader: DataLoader
- sampler: DistributedSampler = None
- shared_epoch: SharedEpoch = None
-
- def set_epoch(self, epoch):
- if self.shared_epoch is not None:
- self.shared_epoch.set_value(epoch)
- if self.sampler is not None and isinstance(self.sampler, DistributedSampler):
- self.sampler.set_epoch(epoch)
-
-
-def expand_urls(urls, weights=None):
- if weights is None:
- expanded_urls = wds.shardlists.expand_urls(urls)
- return expanded_urls, None
- if isinstance(urls, str):
- urllist = urls.split("::")
- weights = weights.split('::')
- assert len(weights) == len(urllist), \
- f"Expected the number of data components ({len(urllist)}) and weights({len(weights)}) to match."
- weights = [float(weight) for weight in weights]
- all_urls, all_weights = [], []
- for url, weight in zip(urllist, weights):
- expanded_url = list(braceexpand.braceexpand(url))
- expanded_weights = [weight for _ in expanded_url]
- all_urls.extend(expanded_url)
- all_weights.extend(expanded_weights)
- return all_urls, all_weights
- else:
- all_urls = list(urls)
- return all_urls, weights
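-
-# Example (illustrative, not part of the original file): a weighted shard spec
-# uses '::' to separate data components, with brace notation per component:
-#   >>> expand_urls('a/{00..01}.tar::b/{00..01}.tar', weights='1.0::2.0')
-#   (['a/00.tar', 'a/01.tar', 'b/00.tar', 'b/01.tar'], [1.0, 1.0, 2.0, 2.0])
-# Every shard expanded from a component inherits that component's weight.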
-
-
-def get_dataset_size(shards):
- shards_list, _ = expand_urls(shards)
- dir_path = os.path.dirname(shards_list[0])
- sizes_filename = os.path.join(dir_path, 'sizes.json')
- len_filename = os.path.join(dir_path, '__len__')
- if os.path.exists(sizes_filename):
- sizes = json.load(open(sizes_filename, 'r'))
- total_size = sum([int(sizes[os.path.basename(shard)]) for shard in shards_list])
- elif os.path.exists(len_filename):
- # FIXME this used to be eval(open(...)) but that seemed rather unsafe
- total_size = ast.literal_eval(open(len_filename, 'r').read())
- else:
- total_size = None # num samples undefined
- # some common dataset sizes (at time of authors last download)
- # CC3M (train): 2905954
- # CC12M: 10968539
- # LAION-400M: 407332084
- # LAION-2B (english): 2170337258
- num_shards = len(shards_list)
- return total_size, num_shards
-
-
-
-def count_samples(dataloader):
- os.environ["WDS_EPOCH"] = "0"
- n_elements, n_batches = 0, 0
- for images, texts in dataloader:
- n_batches += 1
- n_elements += len(images)
- assert len(images) == len(texts)
- return n_elements, n_batches
-
-
-def filter_no_caption_or_no_image(sample):
- has_caption = ('raw.txt' in sample and 'mplug.txt' in sample and 'polish_mplug.txt' in sample and 'ofa3.txt' in sample)
- has_image = ('frm7.jpg' in sample and 'tml0.jpg' in sample and 'dep0.npy' in sample)
- return has_caption and has_image
-
-
-def log_and_continue(exn):
- """Call in an exception handler to ignore any exception, issue a warning, and continue."""
- logging.warning(f'Handling webdataset error ({repr(exn)}). Ignoring.')
- return True
-
-
-def group_by_keys_nothrow(data, keys=base_plus_ext, lcase=True, suffixes=None, handler=None):
- """Return function over iterator that groups key, value pairs into samples.
-
- :param keys: function that splits the key into key and extension (base_plus_ext)
- :param lcase: convert suffixes to lower case (Default value = True)
- """
- current_sample = None
- for filesample in data:
- assert isinstance(filesample, dict)
- fname, value = filesample["fname"], filesample["data"]
- prefix, suffix = keys(fname)
- if prefix is None:
- continue
- if lcase:
- suffix = suffix.lower()
- # FIXME webdataset version throws if suffix in current_sample, but we have a potential for
- # this happening in the current LAION400m dataset if a tar ends with same prefix as the next
- # begins, rare, but can happen since prefix aren't unique across tar files in that dataset
- if current_sample is None or prefix != current_sample["__key__"] or suffix in current_sample:
- if valid_sample(current_sample):
- yield current_sample
- current_sample = dict(__key__=prefix, __url__=filesample["__url__"])
- if suffixes is None or suffix in suffixes:
- current_sample[suffix] = value
- if valid_sample(current_sample):
- yield current_sample
-
-
-def tarfile_to_samples_nothrow(src, handler=log_and_continue):
- # NOTE this is a re-impl of the webdataset impl with group_by_keys that doesn't throw
- streams = url_opener(src, handler=handler)
- files = tar_file_expander(streams, handler=handler)
- samples = group_by_keys_nothrow(files, handler=handler)
- return samples
-
-
-def pytorch_worker_seed(increment=0):
- """get dataloader worker seed from pytorch"""
- worker_info = get_worker_info()
- if worker_info is not None:
- # favour using the seed already created for pytorch dataloader workers if it exists
- seed = worker_info.seed
- if increment:
- # space out seed increments so they can't overlap across workers in different iterations
- seed += increment * max(1, worker_info.num_workers)
- return seed
- # fallback to wds rank based seed
- return wds.utils.pytorch_worker_seed()
-
-
-_SHARD_SHUFFLE_SIZE = 200
-_SHARD_SHUFFLE_INITIAL = 50
-_SAMPLE_SHUFFLE_SIZE = 500
-_SAMPLE_SHUFFLE_INITIAL = 100
-
-
-class detshuffle2(wds.PipelineStage):
- def __init__(
- self,
- bufsize=1000,
- initial=100,
- seed=0,
- epoch=-1,
- ):
- self.bufsize = bufsize
- self.initial = initial
- self.seed = seed
- self.epoch = epoch
-
- def run(self, src):
- if isinstance(self.epoch, SharedEpoch):
- epoch = self.epoch.get_value()
- else:
-            # NOTE: this epoch tracking is problematic in a multiprocess (dataloader workers or train)
-            # situation, as different workers may wrap at different times (or not at all).
- self.epoch += 1
- epoch = self.epoch
- rng = random.Random()
- if self.seed < 0:
- # If seed is negative, we use the worker's seed, this will be different across all nodes/workers
- seed = pytorch_worker_seed(epoch)
- else:
-            # This seed is deterministic AND the same across all nodes/workers in each epoch
- seed = self.seed + epoch
- rng.seed(seed)
- return _shuffle(src, self.bufsize, self.initial, rng)
-
-
-class ResampledShards2(IterableDataset):
- """An iterable dataset yielding a list of urls."""
-
- def __init__(
- self,
- urls,
- weights=None,
- nshards=sys.maxsize,
- worker_seed=None,
- deterministic=False,
- epoch=-1,
- ):
- """Sample shards from the shard list with replacement.
-
- :param urls: a list of URLs as a Python list or brace notation string
- """
- super().__init__()
- urls, weights = expand_urls(urls, weights)
- self.urls = urls
- self.weights = weights
- if self.weights is not None:
- assert len(self.urls) == len(self.weights), \
- f"Number of urls {len(self.urls)} and weights {len(self.weights)} should match."
- assert isinstance(self.urls[0], str)
- self.nshards = nshards
- self.rng = random.Random()
- self.worker_seed = worker_seed
- self.deterministic = deterministic
- self.epoch = epoch
-
- def __iter__(self):
- """Return an iterator over the shards."""
- if isinstance(self.epoch, SharedEpoch):
- epoch = self.epoch.get_value()
- else:
-            # NOTE: this epoch tracking is problematic in a multiprocess (dataloader workers or train)
-            # situation, as different workers may wrap at different times (or not at all).
- self.epoch += 1
- epoch = self.epoch
- if self.deterministic:
- # reset seed w/ epoch if deterministic
- if self.worker_seed is None:
- # pytorch worker seed should be deterministic due to being init by arg.seed + rank + worker id
- seed = pytorch_worker_seed(epoch)
- else:
- seed = self.worker_seed() + epoch
- self.rng.seed(seed)
- for _ in range(self.nshards):
- if self.weights is None:
- yield dict(url=self.rng.choice(self.urls))
- else:
- yield dict(url=self.rng.choices(self.urls, weights=self.weights, k=1)[0])
-
-
-class Decode:
- def __init__(self, args=None):
- self.num_frames = args.num_frames
- self.text_type = args.text_type
- self.chatgpt = self.text_type == 'polish_mplug'
- self.title = self.text_type == 'raw'
- self.clip_type = args.clip_type
- self.tokenizer = get_tokenizer(HF_HUB_PREFIX + args.model, cache_dir=args.cache_dir)
- self.video_transform = get_video_transform(args)
- self.audio_transform = get_audio_transform(args)
- self.depth_transform = get_depth_transform(args)
- self.thermal_transform = get_thermal_transform(args)
-
-
- def __call__(self, sample):
- input_ids, attention_mask = self.get_text(sample[f"{self.text_type}.txt"], chatgpt=self.chatgpt, title=self.title)
- if self.clip_type == 'vl':
- matched_modality = self.get_video([sample[f"frm{i}.jpg"] for i in range(self.num_frames)])
- elif self.clip_type == 'al':
- matched_modality = self.get_audio()
- elif self.clip_type == 'dl':
- matched_modality = self.get_depth(sample[f"dep0.npy"])
- elif self.clip_type == 'tl':
- matched_modality = self.get_thermal(sample[f"tml0.jpg"])
- # matched_modality = self.get_thermal(sample[f"tml{random.randint(0, 7)}.jpg"])
- else:
- raise ValueError
- return matched_modality, input_ids, attention_mask
-
-
- def get_video(self, frames):
- video_data = []
- for frame in frames:
- with io.BytesIO(frame) as stream:
- img = Image.open(stream)
- img.load()
- assert min(img.size) == 256
- result = ToTensor()(img)
- video_data.append(result)
- video_data = torch.stack(video_data, dim=1)
- # video_data torch.Size([3, 8, 455, 256])
- # video_outputs torch.Size([3, 8, 224, 224])
- video_outputs = self.video_transform(video_data)
- return video_outputs
-
-
- def get_text(self, text, chatgpt=True, title=False):
- text = text.decode("utf-8")
- if chatgpt:
- assert text.startswith('In the video, ')
- text = text[14:]
- tokens = load_and_transform_text(text, self.tokenizer, title=title)
- return tokens['input_ids'], tokens['attention_mask']
-
- def get_audio(self):
- raise NotImplementedError
-
- def get_depth(self, depth):
- stream = io.BytesIO(depth)
- img = numpy.lib.format.read_array(stream)
- depth = self.depth_transform(img)
- return depth
-
- def get_thermal(self, thermal):
- with io.BytesIO(thermal) as stream:
- img = Image.open(stream)
- img.load()
- thermal = self.thermal_transform(img)
- return thermal
-
-def get_wds_dataset(args, is_train, epoch=0, floor=False):
- input_shards = args.train_data if is_train else args.val_data
- assert input_shards is not None
- resampled = getattr(args, 'dataset_resampled', False) and is_train
-
- num_shards = None
- if is_train:
- if args.train_num_samples is not None:
- num_samples = args.train_num_samples
- else:
- num_samples, num_shards = get_dataset_size(input_shards)
- if not num_samples:
- raise RuntimeError(
- 'Currently, the number of dataset samples must be specified for the training dataset. '
- 'Please specify it via `--train-num-samples` if no dataset length info is present.')
- else:
- # Eval will just exhaust the iterator if the size is not specified.
- num_samples = args.val_num_samples or 0
-
- shared_epoch = SharedEpoch(epoch=epoch) # create a shared epoch store to sync epoch to dataloader worker proc
-
- if resampled:
- pipeline = [ResampledShards2(
- input_shards,
- weights=args.train_data_upsampling_factors,
- deterministic=True,
- epoch=shared_epoch,
- )]
- else:
- assert args.train_data_upsampling_factors is None, \
- "--train_data_upsampling_factors is only supported when sampling with replacement (with --dataset-resampled)."
- pipeline = [wds.SimpleShardList(input_shards)]
-
- # at this point we have an iterator over all the shards
- if is_train:
- if not resampled:
- pipeline.extend([
- detshuffle2(
- bufsize=_SHARD_SHUFFLE_SIZE,
- initial=_SHARD_SHUFFLE_INITIAL,
- seed=args.seed,
- epoch=shared_epoch,
- ),
- wds.split_by_node,
- wds.split_by_worker,
- ])
- pipeline.extend([
- # at this point, we have an iterator over the shards assigned to each worker at each node
- tarfile_to_samples_nothrow, # wds.tarfile_to_samples(handler=log_and_continue),
- wds.shuffle(
- bufsize=_SAMPLE_SHUFFLE_SIZE,
- initial=_SAMPLE_SHUFFLE_INITIAL,
- ),
- ])
- else:
- pipeline.extend([
- wds.split_by_worker,
- # at this point, we have an iterator over the shards assigned to each worker
- wds.tarfile_to_samples(handler=log_and_continue),
- ])
- pipeline.extend([
- wds.select(filter_no_caption_or_no_image),
- # wds.decode("pilrgb", handler=log_and_continue),
- # wds.rename(image="jpg;png;jpeg;webp", text="txt"),
- # wds.map_dict(image=preprocess_img, text=lambda text: tokenizer(text)[0]),
- # wds.to_tuple("image", "text"),
- wds.map(Decode(args), handler=log_and_continue),
- wds.batched(args.batch_size, partial=not is_train)
- ])
-
- dataset = wds.DataPipeline(*pipeline)
-
- if is_train:
- if not resampled:
- num_shards = num_shards or len(expand_urls(input_shards)[0])
- assert num_shards >= args.workers * args.world_size, 'number of shards must be >= total workers'
- # roll over and repeat a few samples to get same number of full batches on each node
- round_fn = math.floor if floor else math.ceil
- global_batch_size = args.batch_size * args.world_size
- num_batches = round_fn(num_samples / global_batch_size)
- num_workers = max(1, args.workers)
- num_worker_batches = round_fn(num_batches / num_workers) # per dataloader worker
- num_batches = num_worker_batches * num_workers
- num_samples = num_batches * global_batch_size
- dataset = dataset.with_epoch(num_worker_batches) # each worker is iterating over this
- else:
- # last batches are partial, eval is done on single (master) node
- num_batches = math.ceil(num_samples / args.batch_size)
-
- dataloader = wds.WebLoader(
- dataset,
- batch_size=None,
- shuffle=False,
- num_workers=args.workers,
- persistent_workers=args.workers > 0,
- )
-
- # FIXME not clear which approach is better, with_epoch before vs after dataloader?
- # hoping to resolve via https://github.com/webdataset/webdataset/issues/169
- # if is_train:
- # # roll over and repeat a few samples to get same number of full batches on each node
- # global_batch_size = args.batch_size * args.world_size
- # num_batches = math.ceil(num_samples / global_batch_size)
- # num_workers = max(1, args.workers)
- # num_batches = math.ceil(num_batches / num_workers) * num_workers
- # num_samples = num_batches * global_batch_size
- # dataloader = dataloader.with_epoch(num_batches)
- # else:
- # # last batches are partial, eval is done on single (master) node
- # num_batches = math.ceil(num_samples / args.batch_size)
-
- # add meta-data to dataloader instance for convenience
- dataloader.num_batches = num_batches
- dataloader.num_samples = num_samples
-
- return DataInfo(dataloader=dataloader, shared_epoch=shared_epoch)
-
-
-
-def get_data(args, epoch=0):
- data = {}
-
- data["train"] = get_wds_dataset(args, is_train=True, epoch=epoch)
-
- return data
-
-
-if __name__ == '__main__':
- args = parse_args(sys.argv[1:])
- args.workers = 10
- args.batch_size = 16
- args.world_size = 1
- args.num_frames = 8
- args.clip_type = 'vl'
- args.model = "laion/CLIP-ViT-L-14-DataComp.XL-s13B-b90K"
- args.train_data = '/apdcephfs_cq3/share_1311970/lb/vat2webdata/check_8frm_title_ofa_polishmplug_1tml_1dep/{00000..03020}.tar'
- args.train_num_samples = 10_000
- args.dataset_type = 'webdataset'
-
-
-
- data = get_data(args, epoch=0)
-
- data['train'].set_epoch(0) # set epoch in process safe manner via sampler or shared_epoch
- dataloader = data['train'].dataloader
- num_batches_per_epoch = dataloader.num_batches // args.accum_freq
- print(num_batches_per_epoch)
-
-
- for i, batch in enumerate(tqdm(dataloader)):
- images, input_ids, attention_mask = batch
- # print(images.shape, input_ids.shape, attention_mask.shape)
- # break
\ No newline at end of file
diff --git a/spaces/Lianjd/stock_dashboard/backtrader/feeds/blaze.py b/spaces/Lianjd/stock_dashboard/backtrader/feeds/blaze.py
deleted file mode 100644
index 5f6f148f8c2160e02c0e0f78e25a582897ba5dc4..0000000000000000000000000000000000000000
--- a/spaces/Lianjd/stock_dashboard/backtrader/feeds/blaze.py
+++ /dev/null
@@ -1,94 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8; py-indent-offset:4 -*-
-###############################################################################
-#
-# Copyright (C) 2015-2020 Daniel Rodriguez
-#
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program. If not, see <http://www.gnu.org/licenses/>.
-#
-###############################################################################
-from __future__ import (absolute_import, division, print_function,
- unicode_literals)
-
-from backtrader import date2num
-import backtrader.feed as feed
-
-
-class BlazeData(feed.DataBase):
- '''
-    Support for `Blaze <https://blaze.pydata.org>`_ ``Data`` objects.
-
- Only numeric indices to columns are supported.
-
- Note:
-
- - The ``dataname`` parameter is a blaze ``Data`` object
-
- - A negative value in any of the parameters for the Data lines
-        indicates it is not present in the blaze ``Data`` object
- '''
-
- params = (
- # datetime must be present
- ('datetime', 0),
- # pass -1 for any of the following to indicate absence
- ('open', 1),
- ('high', 2),
- ('low', 3),
- ('close', 4),
- ('volume', 5),
- ('openinterest', 6),
- )
-
- datafields = [
- 'datetime', 'open', 'high', 'low', 'close', 'volume', 'openinterest'
- ]
-
- def start(self):
- super(BlazeData, self).start()
-
- # reset the iterator on each start
- self._rows = iter(self.p.dataname)
-
- def _load(self):
- try:
- row = next(self._rows)
- except StopIteration:
- return False
-
- # Set the standard datafields - except for datetime
- for datafield in self.datafields[1:]:
- # get the column index
- colidx = getattr(self.params, datafield)
-
- if colidx < 0:
- # column not present -- skip
- continue
-
- # get the line to be set
- line = getattr(self.lines, datafield)
- line[0] = row[colidx]
-
- # datetime - assumed blaze always serves a native datetime.datetime
- colidx = getattr(self.params, self.datafields[0])
- dt = row[colidx]
- dtnum = date2num(dt)
-
- # get the line to be set
- line = getattr(self.lines, self.datafields[0])
- line[0] = dtnum
-
- # Done ... return
- return True
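-
-
-# Illustrative usage sketch (not part of the original file). It assumes the
-# `blaze` package is installed and that `some_table` is a blaze ``Data``
-# object whose columns follow the numeric order declared in ``params`` above;
-# `some_table` and `MyStrategy` are placeholder names, not real objects.
-#
-#   import blaze
-#   import backtrader as bt
-#
-#   some_table = blaze.data('prices.csv')   # any blaze-supported source
-#   feed = BlazeData(dataname=some_table)   # column indices come from params
-#   cerebro = bt.Cerebro()
-#   cerebro.adddata(feed)
-#   cerebro.addstrategy(MyStrategy)
-#   cerebro.run()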
diff --git a/spaces/MCkernick/Image_Restoration_Colorization/Face_Enhancement/models/networks/Synchronized-BatchNorm-PyTorch/sync_batchnorm/unittest.py b/spaces/MCkernick/Image_Restoration_Colorization/Face_Enhancement/models/networks/Synchronized-BatchNorm-PyTorch/sync_batchnorm/unittest.py
deleted file mode 100644
index 998223a0e0242dc4a5b2fcd74af79dc7232794da..0000000000000000000000000000000000000000
--- a/spaces/MCkernick/Image_Restoration_Colorization/Face_Enhancement/models/networks/Synchronized-BatchNorm-PyTorch/sync_batchnorm/unittest.py
+++ /dev/null
@@ -1,29 +0,0 @@
-# -*- coding: utf-8 -*-
-# File : unittest.py
-# Author : Jiayuan Mao
-# Email : maojiayuan@gmail.com
-# Date : 27/01/2018
-#
-# This file is part of Synchronized-BatchNorm-PyTorch.
-# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
-# Distributed under MIT License.
-
-import unittest
-import torch
-
-
-class TorchTestCase(unittest.TestCase):
- def assertTensorClose(self, x, y):
- adiff = float((x - y).abs().max())
- if (y == 0).all():
- rdiff = 'NaN'
- else:
- rdiff = float((adiff / y).abs().max())
-
- message = (
- 'Tensor close check failed\n'
- 'adiff={}\n'
- 'rdiff={}\n'
- ).format(adiff, rdiff)
- self.assertTrue(torch.allclose(x, y, atol=1e-5, rtol=1e-3), message)
-
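-
-# Illustrative usage (not part of the original file): subclass TorchTestCase
-# and call assertTensorClose from ordinary unittest test methods.
-if __name__ == '__main__':
-    class _DemoTest(TorchTestCase):
-        def test_close(self):
-            x = torch.ones(3)
-            self.assertTensorClose(x, x + 1e-6)  # within atol=1e-5 / rtol=1e-3
-
-    unittest.main()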
diff --git a/spaces/MCkernick/Image_Restoration_Colorization/Global/test.py b/spaces/MCkernick/Image_Restoration_Colorization/Global/test.py
deleted file mode 100644
index 01264ed2069de188313c5cef0bbfb9fd14a638cf..0000000000000000000000000000000000000000
--- a/spaces/MCkernick/Image_Restoration_Colorization/Global/test.py
+++ /dev/null
@@ -1,192 +0,0 @@
-# Copyright (c) Microsoft Corporation.
-# Licensed under the MIT License.
-
-import os
-from collections import OrderedDict
-from torch.autograd import Variable
-from options.test_options import TestOptions
-from models.models import create_model
-from models.mapping_model import Pix2PixHDModel_Mapping
-import util.util as util
-from PIL import Image
-import torch
-import torchvision.utils as vutils
-import torchvision.transforms as transforms
-import numpy as np
-import cv2
-
-def data_transforms(img, method=Image.BILINEAR, scale=False):
-
- ow, oh = img.size
- pw, ph = ow, oh
-    if scale:
- if ow < oh:
- ow = 256
- oh = ph / pw * 256
- else:
- oh = 256
- ow = pw / ph * 256
-
- h = int(round(oh / 4) * 4)
- w = int(round(ow / 4) * 4)
-
- if (h == ph) and (w == pw):
- return img
-
- return img.resize((w, h), method)
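-
-# Example (illustrative, not part of the original file): with scale=False this
-# helper only snaps both sides to a multiple of 4, e.g. a 511x383 PIL image is
-# resized to 512x384; with scale=True the shorter side is first brought to 256
-# before the same rounding is applied.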
-
-
-def data_transforms_rgb_old(img):
- w, h = img.size
- A = img
- if w < 256 or h < 256:
-        A = transforms.Resize(256, Image.BILINEAR)(img)
- return transforms.CenterCrop(256)(A)
-
-
-def irregular_hole_synthesize(img, mask):
-
- img_np = np.array(img).astype("uint8")
- mask_np = np.array(mask).astype("uint8")
- mask_np = mask_np / 255
- img_new = img_np * (1 - mask_np) + mask_np * 255
-
- hole_img = Image.fromarray(img_new.astype("uint8")).convert("RGB")
-
- return hole_img
-
-
-def parameter_set(opt):
- ## Default parameters
- opt.serial_batches = True # no shuffle
- opt.no_flip = True # no flip
- opt.label_nc = 0
- opt.n_downsample_global = 3
- opt.mc = 64
- opt.k_size = 4
- opt.start_r = 1
- opt.mapping_n_block = 6
- opt.map_mc = 512
- opt.no_instance = True
- opt.checkpoints_dir = "./checkpoints/restoration"
- ##
-
- if opt.Quality_restore:
- opt.name = "mapping_quality"
- opt.load_pretrainA = os.path.join(opt.checkpoints_dir, "VAE_A_quality")
- opt.load_pretrainB = os.path.join(opt.checkpoints_dir, "VAE_B_quality")
- if opt.Scratch_and_Quality_restore:
- opt.NL_res = True
- opt.use_SN = True
- opt.correlation_renormalize = True
- opt.NL_use_mask = True
- opt.NL_fusion_method = "combine"
- opt.non_local = "Setting_42"
- opt.name = "mapping_scratch"
- opt.load_pretrainA = os.path.join(opt.checkpoints_dir, "VAE_A_quality")
- opt.load_pretrainB = os.path.join(opt.checkpoints_dir, "VAE_B_scratch")
- if opt.HR:
- opt.mapping_exp = 1
- opt.inference_optimize = True
- opt.mask_dilation = 3
- opt.name = "mapping_Patch_Attention"
-
-
-if __name__ == "__main__":
-
- opt = TestOptions().parse(save=False)
- parameter_set(opt)
-
- model = Pix2PixHDModel_Mapping()
-
- model.initialize(opt)
- model.eval()
-
- if not os.path.exists(opt.outputs_dir + "/" + "input_image"):
- os.makedirs(opt.outputs_dir + "/" + "input_image")
- if not os.path.exists(opt.outputs_dir + "/" + "restored_image"):
- os.makedirs(opt.outputs_dir + "/" + "restored_image")
- if not os.path.exists(opt.outputs_dir + "/" + "origin"):
- os.makedirs(opt.outputs_dir + "/" + "origin")
-
- dataset_size = 0
-
- input_loader = os.listdir(opt.test_input)
- dataset_size = len(input_loader)
- input_loader.sort()
-
- if opt.test_mask != "":
- mask_loader = os.listdir(opt.test_mask)
- dataset_size = len(os.listdir(opt.test_mask))
- mask_loader.sort()
-
- img_transform = transforms.Compose(
- [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]
- )
- mask_transform = transforms.ToTensor()
-
- for i in range(dataset_size):
-
- input_name = input_loader[i]
- input_file = os.path.join(opt.test_input, input_name)
- if not os.path.isfile(input_file):
- print("Skipping non-file %s" % input_name)
- continue
- input = Image.open(input_file).convert("RGB")
-
- print("Now you are processing %s" % (input_name))
-
- if opt.NL_use_mask:
- mask_name = mask_loader[i]
- mask = Image.open(os.path.join(opt.test_mask, mask_name)).convert("RGB")
- if opt.mask_dilation != 0:
- kernel = np.ones((3,3),np.uint8)
- mask = np.array(mask)
- mask = cv2.dilate(mask,kernel,iterations = opt.mask_dilation)
- mask = Image.fromarray(mask.astype('uint8'))
- origin = input
- input = irregular_hole_synthesize(input, mask)
- mask = mask_transform(mask)
- mask = mask[:1, :, :] ## Convert to single channel
- mask = mask.unsqueeze(0)
- input = img_transform(input)
- input = input.unsqueeze(0)
- else:
- if opt.test_mode == "Scale":
- input = data_transforms(input, scale=True)
- if opt.test_mode == "Full":
- input = data_transforms(input, scale=False)
- if opt.test_mode == "Crop":
- input = data_transforms_rgb_old(input)
- origin = input
- input = img_transform(input)
- input = input.unsqueeze(0)
- mask = torch.zeros_like(input)
- ### Necessary input
-
- try:
- with torch.no_grad():
- generated = model.inference(input, mask)
- except Exception as ex:
- print("Skip %s due to an error:\n%s" % (input_name, str(ex)))
- continue
-
- if input_name.endswith(".jpg"):
- input_name = input_name[:-4] + ".png"
-
- image_grid = vutils.save_image(
- (input + 1.0) / 2.0,
- opt.outputs_dir + "/input_image/" + input_name,
- nrow=1,
- padding=0,
- normalize=True,
- )
- image_grid = vutils.save_image(
- (generated.data.cpu() + 1.0) / 2.0,
- opt.outputs_dir + "/restored_image/" + input_name,
- nrow=1,
- padding=0,
- normalize=True,
- )
-
- origin.save(opt.outputs_dir + "/origin/" + input_name)
\ No newline at end of file
diff --git a/spaces/Mahiruoshi/lovelive-ShojoKageki-vits/losses.py b/spaces/Mahiruoshi/lovelive-ShojoKageki-vits/losses.py
deleted file mode 100644
index f835539a16b49e1065fef4e4a1efb259b88dcf64..0000000000000000000000000000000000000000
--- a/spaces/Mahiruoshi/lovelive-ShojoKageki-vits/losses.py
+++ /dev/null
@@ -1,58 +0,0 @@
-import torch
-
-
-def feature_loss(fmap_r, fmap_g):
- loss = 0
- for dr, dg in zip(fmap_r, fmap_g):
- for rl, gl in zip(dr, dg):
- rl = rl.float().detach()
- gl = gl.float()
- loss += torch.mean(torch.abs(rl - gl))
-
- return loss * 2
-
-
-def discriminator_loss(disc_real_outputs, disc_generated_outputs):
- loss = 0
- r_losses = []
- g_losses = []
- for dr, dg in zip(disc_real_outputs, disc_generated_outputs):
- dr = dr.float()
- dg = dg.float()
- r_loss = torch.mean((1 - dr)**2)
- g_loss = torch.mean(dg**2)
- loss += (r_loss + g_loss)
- r_losses.append(r_loss.item())
- g_losses.append(g_loss.item())
-
- return loss, r_losses, g_losses
-
-
-def generator_loss(disc_outputs):
- loss = 0
- gen_losses = []
- for dg in disc_outputs:
- dg = dg.float()
- l = torch.mean((1 - dg)**2)
- gen_losses.append(l)
- loss += l
-
- return loss, gen_losses
-
-
-def kl_loss(z_p, logs_q, m_p, logs_p, z_mask):
- """
- z_p, logs_q: [b, h, t_t]
- m_p, logs_p: [b, h, t_t]
- """
- z_p = z_p.float()
- logs_q = logs_q.float()
- m_p = m_p.float()
- logs_p = logs_p.float()
- z_mask = z_mask.float()
-
- kl = logs_p - logs_q - 0.5
- kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. * logs_p)
- kl = torch.sum(kl * z_mask)
- l = kl / torch.sum(z_mask)
- return l
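-
-
-# Illustrative usage sketch (not part of the original file). The shapes below
-# are assumptions picked only to show how the helpers are called; in training
-# they come from the discriminator/generator outputs.
-if __name__ == '__main__':
-    d_real = [torch.full((4, 1), 0.9), torch.full((4, 1), 0.8)]
-    d_fake = [torch.full((4, 1), 0.1), torch.full((4, 1), 0.2)]
-
-    disc_loss, r_losses, g_losses = discriminator_loss(d_real, d_fake)
-    gen_loss, gen_losses = generator_loss(d_fake)
-    # the discriminator pushes real scores towards 1 and fake scores towards 0,
-    # while the generator pushes the fake scores towards 1
-    print(disc_loss.item(), gen_loss.item())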
diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/utils/vis.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/utils/vis.py
deleted file mode 100644
index 4c1a291306453c15bdfe5117302beb62e0fe7248..0000000000000000000000000000000000000000
--- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/utils/vis.py
+++ /dev/null
@@ -1,129 +0,0 @@
-from functools import lru_cache
-
-import cv2
-import numpy as np
-
-
-def visualize_instances(imask, bg_color=255,
- boundaries_color=None, boundaries_width=1, boundaries_alpha=0.8):
- num_objects = imask.max() + 1
- palette = get_palette(num_objects)
- if bg_color is not None:
- palette[0] = bg_color
-
- result = palette[imask].astype(np.uint8)
- if boundaries_color is not None:
- boundaries_mask = get_boundaries(imask, boundaries_width=boundaries_width)
- tresult = result.astype(np.float32)
- tresult[boundaries_mask] = boundaries_color
- tresult = tresult * boundaries_alpha + (1 - boundaries_alpha) * result
- result = tresult.astype(np.uint8)
-
- return result
-
-
-@lru_cache(maxsize=16)
-def get_palette(num_cls):
- palette = np.zeros(3 * num_cls, dtype=np.int32)
-
- for j in range(0, num_cls):
- lab = j
- i = 0
-
- while lab > 0:
- palette[j*3 + 0] |= (((lab >> 0) & 1) << (7-i))
- palette[j*3 + 1] |= (((lab >> 1) & 1) << (7-i))
- palette[j*3 + 2] |= (((lab >> 2) & 1) << (7-i))
- i = i + 1
- lab >>= 3
-
- return palette.reshape((-1, 3))
-
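-# Example (illustrative, not part of the original file): the generated palette
-# follows the standard PASCAL VOC colormap, e.g.
-#   >>> get_palette(4)
-#   array([[  0,   0,   0],
-#          [128,   0,   0],
-#          [  0, 128,   0],
-#          [128, 128,   0]], dtype=int32)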
-
-def visualize_mask(mask, num_cls):
- palette = get_palette(num_cls)
- mask[mask == -1] = 0
-
- return palette[mask].astype(np.uint8)
-
-
-def visualize_proposals(proposals_info, point_color=(255, 0, 0), point_radius=1):
- proposal_map, colors, candidates = proposals_info
-
- proposal_map = draw_probmap(proposal_map)
- for x, y in candidates:
- proposal_map = cv2.circle(proposal_map, (y, x), point_radius, point_color, -1)
-
- return proposal_map
-
-
-def draw_probmap(x):
- return cv2.applyColorMap((x * 255).astype(np.uint8), cv2.COLORMAP_HOT)
-
-
-def draw_points(image, points, color, radius=3):
- image = image.copy()
- for p in points:
- image = cv2.circle(image, (int(p[1]), int(p[0])), radius, color, -1)
-
- return image
-
-
-def draw_instance_map(x, palette=None):
- num_colors = x.max() + 1
- if palette is None:
- palette = get_palette(num_colors)
-
- return palette[x].astype(np.uint8)
-
-
-def blend_mask(image, mask, alpha=0.6):
- if mask.min() == -1:
- mask = mask.copy() + 1
-
- imap = draw_instance_map(mask)
- result = (image * (1 - alpha) + alpha * imap).astype(np.uint8)
- return result
-
-
-def get_boundaries(instances_masks, boundaries_width=1):
-    boundaries = np.zeros((instances_masks.shape[0], instances_masks.shape[1]), dtype=bool)
-
- for obj_id in np.unique(instances_masks.flatten()):
- if obj_id == 0:
- continue
-
- obj_mask = instances_masks == obj_id
- kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
-        inner_mask = cv2.erode(obj_mask.astype(np.uint8), kernel, iterations=boundaries_width).astype(bool)
-
- obj_boundary = np.logical_xor(obj_mask, np.logical_and(inner_mask, obj_mask))
- boundaries = np.logical_or(boundaries, obj_boundary)
- return boundaries
-
-
-def draw_with_blend_and_clicks(img, mask=None, alpha=0.6, clicks_list=None, pos_color=(0, 255, 0),
- neg_color=(255, 0, 0), radius=4):
- result = img.copy()
-
- if mask is not None:
- palette = get_palette(np.max(mask) + 1)
- rgb_mask = palette[mask.astype(np.uint8)]
-
- mask_region = (mask > 0).astype(np.uint8)
- result = result * (1 - mask_region[:, :, np.newaxis]) + \
- (1 - alpha) * mask_region[:, :, np.newaxis] * result + \
- alpha * rgb_mask
- result = result.astype(np.uint8)
-
- # result = (result * (1 - alpha) + alpha * rgb_mask).astype(np.uint8)
-
- if clicks_list is not None and len(clicks_list) > 0:
- pos_points = [click.coords for click in clicks_list if click.is_positive]
- neg_points = [click.coords for click in clicks_list if not click.is_positive]
-
- result = draw_points(result, pos_points, pos_color, radius=radius)
- result = draw_points(result, neg_points, neg_color, radius=radius)
-
- return result
-
diff --git a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/data/datasets/imagenet.py b/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/data/datasets/imagenet.py
deleted file mode 100644
index 9b6d78e51f1b0c7d6e1fba2869a72a6f383e81b2..0000000000000000000000000000000000000000
--- a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/data/datasets/imagenet.py
+++ /dev/null
@@ -1,41 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import logging
-import os
-
-from detectron2.data import DatasetCatalog, MetadataCatalog
-from detectron2.data.datasets.lvis import get_lvis_instances_meta
-from .lvis_v1 import custom_load_lvis_json, get_lvis_22k_meta
-def custom_register_imagenet_instances(name, metadata, json_file, image_root):
-    """Register an ImageNet-LVIS style split in Detectron2's DatasetCatalog
-    and MetadataCatalog, loading annotations with ``custom_load_lvis_json``.
-    """
- DatasetCatalog.register(name, lambda: custom_load_lvis_json(
- json_file, image_root, name))
- MetadataCatalog.get(name).set(
- json_file=json_file, image_root=image_root,
- evaluator_type="imagenet", **metadata
- )
-
-_CUSTOM_SPLITS_IMAGENET = {
- "imagenet_lvis_v1": ("imagenet/ImageNet-LVIS/", "imagenet/annotations/imagenet_lvis_image_info.json"),
-}
-
-for key, (image_root, json_file) in _CUSTOM_SPLITS_IMAGENET.items():
- custom_register_imagenet_instances(
- key,
- get_lvis_instances_meta('lvis_v1'),
- os.path.join("datasets", json_file) if "://" not in json_file else json_file,
- os.path.join("datasets", image_root),
- )
-
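-# Illustrative check (not part of the original file): once this module has been
-# imported, the split registered above can be inspected via the detectron2
-# catalogs; materialising the dicts additionally requires the json/image files
-# to exist under `datasets/`.
-#
-#   from detectron2.data import DatasetCatalog, MetadataCatalog
-#   meta = MetadataCatalog.get('imagenet_lvis_v1')
-#   print(meta.evaluator_type)                      # -> 'imagenet'
-#   dicts = DatasetCatalog.get('imagenet_lvis_v1')  # calls custom_load_lvis_json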
-
-_CUSTOM_SPLITS_IMAGENET_22K = {
- "imagenet_lvis-22k": ("imagenet/ImageNet-LVIS/", "imagenet/annotations/imagenet-22k_image_info_lvis-22k.json"),
-}
-
-for key, (image_root, json_file) in _CUSTOM_SPLITS_IMAGENET_22K.items():
- custom_register_imagenet_instances(
- key,
- get_lvis_22k_meta(),
- os.path.join("datasets", json_file) if "://" not in json_file else json_file,
- os.path.join("datasets", image_root),
- )
\ No newline at end of file
diff --git a/spaces/MedicalAILabo/Xp-age/lib/component/criterion.py b/spaces/MedicalAILabo/Xp-age/lib/component/criterion.py
deleted file mode 100644
index f04e621b79e0e31a02042cd23da30a177c3fd755..0000000000000000000000000000000000000000
--- a/spaces/MedicalAILabo/Xp-age/lib/component/criterion.py
+++ /dev/null
@@ -1,332 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-
-import torch
-import torch.nn as nn
-from typing import Dict, Union
-
-# Alias of typing
-# eg. {'labels': {'label_A: torch.Tensor([0, 1, ...]), ...}}
-LabelDict = Dict[str, Dict[str, Union[torch.IntTensor, torch.FloatTensor]]]
-
-
-class RMSELoss(nn.Module):
- """
- Class to calculate RMSE.
- """
- def __init__(self, eps: float = 1e-7) -> None:
- """
- Args:
- eps (float, optional): value to avoid 0. Defaults to 1e-7.
- """
- super().__init__()
- self.mse = nn.MSELoss()
- self.eps = eps
-
-    def forward(self, yhat: torch.Tensor, y: torch.Tensor) -> torch.FloatTensor:
-        """
-        Calculate RMSE.
-
-        Args:
-            yhat (torch.Tensor): prediction values
-            y (torch.Tensor): ground truth values
-
-        Returns:
-            torch.FloatTensor: RMSE
- """
- _loss = self.mse(yhat, y) + self.eps
- return torch.sqrt(_loss)
-
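-# Example (illustrative, not part of the original file):
-#   >>> criterion = RMSELoss()
-#   >>> criterion(torch.tensor([2.0, 4.0]), torch.tensor([1.0, 3.0]))
-#   tensor(1.0000)  # sqrt(mean([1.0, 1.0]) + eps)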
-
-class Regularization:
- """
- Class to calculate regularization loss.
-
- Args:
- object (object): object
- """
- def __init__(self, order: int, weight_decay: float) -> None:
- """
- The initialization of Regularization class.
-
- Args:
- order: (int) norm order number
- weight_decay: (float) weight decay rate
- """
- super().__init__()
- self.order = order
- self.weight_decay = weight_decay
-
- def __call__(self, network: nn.Module) -> torch.FloatTensor:
-        """
-        Calculate the regularization (order = self.order) loss for the network.
-
-        Args:
-            network: (torch.nn.Module object)
-
-        Returns:
-            torch.FloatTensor: the regularization (order = self.order) loss
- """
- reg_loss = 0
- for name, w in network.named_parameters():
- if 'weight' in name:
- reg_loss = reg_loss + torch.norm(w, p=self.order)
- reg_loss = self.weight_decay * reg_loss
- return reg_loss
-
-
-class NegativeLogLikelihood(nn.Module):
- """
- Class to calculate RMSE.
-    Class to calculate the negative log likelihood loss.
- def __init__(self, device: torch.device) -> None:
- """
- Args:
- device (torch.device): device
- """
- super().__init__()
- self.L2_reg = 0.05
- self.reg = Regularization(order=2, weight_decay=self.L2_reg)
- self.device = device
-
- def forward(
- self,
- output: torch.FloatTensor,
- label: torch.IntTensor,
- periods: torch.FloatTensor,
- network: nn.Module
- ) -> torch.FloatTensor:
- """
- Calculates Negative Log Likelihood.
-
- Args:
- output (torch.FloatTensor): prediction value, ie risk prediction
- label (torch.IntTensor): occurrence of event
- periods (torch.FloatTensor): period
- network (nn.Network): network
-
- Returns:
- torch.FloatTensor: Negative Log Likelihood
- """
- mask = torch.ones(periods.shape[0], periods.shape[0]).to(self.device) # output and mask should be on the same device.
- mask[(periods.T - periods) > 0] = 0
-
- _loss = torch.exp(output) * mask
-        # Note: torch.sum(_loss, dim=0) can return nan, in particular with an MLP.
- _loss = torch.sum(_loss, dim=0) / torch.sum(mask, dim=0)
- _loss = torch.log(_loss).reshape(-1, 1)
- num_occurs = torch.sum(label)
-
- if num_occurs.item() == 0.0:
- loss = torch.tensor([1e-7], requires_grad=True).to(self.device) # To avoid zero division, set small value as loss
- return loss
- else:
- neg_log_loss = -torch.sum((output - _loss) * label) / num_occurs
- l2_loss = self.reg(network)
- loss = neg_log_loss + l2_loss
- return loss
-
-
-class ClsCriterion:
- """
- Class of criterion for classification.
- """
- def __init__(self, device: torch.device = None) -> None:
- """
- Set CrossEntropyLoss.
-
- Args:
- device (torch.device): device
- """
- self.device = device
- self.criterion = nn.CrossEntropyLoss()
-
- def __call__(
- self,
- outputs: Dict[str, torch.FloatTensor],
- labels: Dict[str, LabelDict]
- ) -> Dict[str, torch.FloatTensor]:
- """
- Calculate loss.
-
- Args:
- outputs (Dict[str, torch.FloatTensor], optional): output
- labels (Dict[str, LabelDict]): labels
-
- Returns:
- Dict[str, torch.FloatTensor]: loss for each label and their total loss
-
- # No reshape and no cast:
- output: [64, 2]: torch.float32
- label: [64] : torch.int64
- label.dtype should be torch.int64, otherwise nn.CrossEntropyLoss() causes error.
-
- eg.
-        outputs = {'label_A': [[0.8, 0.2], ...], 'label_B': [[0.7, 0.3], ...], ...}
-        labels = {'labels': {'label_A': [1, 1, 0, ...], 'label_B': [0, 0, 1, ...], ...}}
-
- -> losses = {total: loss_total, label_A: loss_A, label_B: loss_B, ... }
- """
- _labels = labels['labels']
-
- # loss for each label and total of their losses
- losses = dict()
- losses['total'] = torch.tensor([0.0], requires_grad=True).to(self.device)
- for label_name in labels['labels'].keys():
- _output = outputs[label_name]
- _label = _labels[label_name]
- _label_loss = self.criterion(_output, _label)
- losses[label_name] = _label_loss
- losses['total'] = torch.add(losses['total'], _label_loss)
- return losses
-
-
-class RegCriterion:
- """
- Class of criterion for regression.
- """
- def __init__(self, criterion_name: str = None, device: torch.device = None) -> None:
- """
- Set MSE, RMSE or MAE.
-
- Args:
- criterion_name (str): 'MSE', 'RMSE', or 'MAE'
- device (torch.device): device
- """
- self.device = device
-
- if criterion_name == 'MSE':
- self.criterion = nn.MSELoss()
- elif criterion_name == 'RMSE':
- self.criterion = RMSELoss()
- elif criterion_name == 'MAE':
- self.criterion = nn.L1Loss()
- else:
- raise ValueError(f"Invalid criterion for regression: {criterion_name}.")
-
- def __call__(
- self,
- outputs: Dict[str, torch.FloatTensor],
- labels: Dict[str, LabelDict]
- ) -> Dict[str, torch.FloatTensor]:
- """
- Calculate loss.
-
-        Args:
-            outputs (Dict[str, torch.FloatTensor]): model output for each label
- labels (Dict[str, LabelDict]): labels
-
- Returns:
- Dict[str, torch.FloatTensor]: loss for each label and their total loss
-
- # Reshape and cast
- output: [64, 1] -> [64]: torch.float32
- label: [64]: torch.float64 -> torch.float32
- # label.dtype should be torch.float32, otherwise cannot backward.
-
- eg.
-        outputs = {'label_A': [[10.8], ...], 'label_B': [[15.7], ...], ...}
-        labels = {'labels': {'label_A': [10, 9, ...], 'label_B': [12, 17, ...], ...}}
- -> losses = {total: loss_total, label_A: loss_A, label_B: loss_B, ... }
- """
- _outputs = {label_name: _output.squeeze() for label_name, _output in outputs.items()}
- _labels = {label_name: _label.to(torch.float32) for label_name, _label in labels['labels'].items()}
-
- # loss for each label and total of their losses
- losses = dict()
- losses['total'] = torch.tensor([0.0], requires_grad=True).to(self.device)
- for label_name in labels['labels'].keys():
- _output = _outputs[label_name]
- _label = _labels[label_name]
- _label_loss = self.criterion(_output, _label)
- losses[label_name] = _label_loss
- losses['total'] = torch.add(losses['total'], _label_loss)
- return losses
-
-
-class DeepSurvCriterion:
- """
- Class of criterion for deepsurv.
- """
- def __init__(self, device: torch.device = None) -> None:
- """
- Set NegativeLogLikelihood.
-
- Args:
- device (torch.device, optional): device
- """
- self.device = device
- self.criterion = NegativeLogLikelihood(self.device).to(self.device)
-
- def __call__(
- self,
- outputs: Dict[str, torch.FloatTensor],
- labels: Dict[str, Union[LabelDict, torch.IntTensor, nn.Module]]
- ) -> Dict[str, torch.FloatTensor]:
- """
- Calculate loss.
-
- Args:
-            outputs (Dict[str, torch.FloatTensor]): model output for each label
- labels (Dict[str, Union[LabelDict, torch.IntTensor, nn.Module]]): labels, periods, and network
-
- Returns:
- Dict[str, torch.FloatTensor]: loss for each label and their total loss
-
- # Reshape and no cast
- output: [64, 1]: torch.float32
- label: [64] -> [64, 1]: torch.int64
- period: [64] -> [64, 1]: torch.float32
-
- eg.
-        outputs = {'label_A': [[10.8], ...], 'label_B': [[15.7], ...], ...}
-        labels = {
-                'labels': {'label_A': [1, 0, 1, ...]},
- 'periods': [5, 10, 7, ...],
- 'network': network
- }
- -> losses = {total: loss_total, label_A: loss_A, label_B: loss_B, ... }
- """
- _labels = {label_name: _label.reshape(-1, 1) for label_name, _label in labels['labels'].items()}
- _periods = labels['periods'].reshape(-1, 1)
- _network = labels['network']
-
- # loss for each label and total of their losses
- losses = dict()
- losses['total'] = torch.tensor([0.0], requires_grad=True).to(self.device)
- for label_name in labels['labels'].keys():
- _output = outputs[label_name]
- _label = _labels[label_name]
- _label_loss = self.criterion(_output, _label, _periods, _network)
- losses[label_name] = _label_loss
- losses['total'] = torch.add(losses['total'], _label_loss)
- return losses
-
-
-def set_criterion(
- criterion_name: str,
- device: torch.device
- ) -> Union[ClsCriterion, RegCriterion, DeepSurvCriterion]:
- """
- Return criterion class
-
- Args:
- criterion_name (str): criterion name
- device (torch.device): device
-
- Returns:
- Union[ClsCriterion, RegCriterion, DeepSurvCriterion]: criterion class
- """
-
- if criterion_name == 'CEL':
- return ClsCriterion(device=device)
-
- elif criterion_name in ['MSE', 'RMSE', 'MAE']:
- return RegCriterion(criterion_name=criterion_name, device=device)
-
- elif criterion_name == 'NLL':
- return DeepSurvCriterion(device=device)
-
- else:
- raise ValueError(f"Invalid criterion: {criterion_name}.")
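A brief usage sketch of the factory above, with dummy two-class outputs for a single label head (the tensor shapes, label names, and device choice here are all made up for illustration):

```python
import torch

device = torch.device('cpu')
criterion = set_criterion('CEL', device)           # classification -> ClsCriterion

outputs = {'label_A': torch.randn(4, 2)}           # logits, shape [batch, num_classes]
labels = {'labels': {'label_A': torch.randint(0, 2, (4,))}}

losses = criterion(outputs, labels)
print(losses['label_A'], losses['total'])          # per-label loss and their sum
```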
diff --git a/spaces/Mountchicken/MAERec-Gradio/configs/kie/_base_/default_runtime.py b/spaces/Mountchicken/MAERec-Gradio/configs/kie/_base_/default_runtime.py
deleted file mode 100644
index bcc5b3fa02a0f3259f701cddecbc307988424a6b..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/configs/kie/_base_/default_runtime.py
+++ /dev/null
@@ -1,33 +0,0 @@
-default_scope = 'mmocr'
-env_cfg = dict(
- cudnn_benchmark=False,
- mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
- dist_cfg=dict(backend='nccl'),
-)
-randomness = dict(seed=None)
-
-default_hooks = dict(
- timer=dict(type='IterTimerHook'),
- logger=dict(type='LoggerHook', interval=100),
- param_scheduler=dict(type='ParamSchedulerHook'),
- checkpoint=dict(type='CheckpointHook', interval=1),
- sampler_seed=dict(type='DistSamplerSeedHook'),
- sync_buffer=dict(type='SyncBuffersHook'),
- visualization=dict(
- type='VisualizationHook',
- interval=1,
- enable=False,
- show=False,
- draw_gt=False,
- draw_pred=False),
-)
-
-# Logging
-log_level = 'INFO'
-log_processor = dict(type='LogProcessor', window_size=10, by_epoch=True)
-
-load_from = None
-resume = False
-
-visualizer = dict(
- type='KIELocalVisualizer', name='visualizer', is_openset=False)
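This `_base_` runtime file is meant to be inherited by task configs; a hypothetical downstream config (the path and the specific overrides are invented for illustration) would pull it in and override individual entries in the usual MMEngine way:

```python
# hypothetical_kie_config.py -- illustrative only; path and overrides are assumptions.
_base_ = ['../_base_/default_runtime.py']

# Override just the pieces that differ from the shared runtime.
default_hooks = dict(
    logger=dict(type='LoggerHook', interval=50),
    checkpoint=dict(type='CheckpointHook', interval=5),
)
randomness = dict(seed=42)
```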
diff --git a/spaces/Mountchicken/MAERec-Gradio/setup.py b/spaces/Mountchicken/MAERec-Gradio/setup.py
deleted file mode 100644
index edc22512aacd3095bc0d7fb6c79e9c596b687320..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/setup.py
+++ /dev/null
@@ -1,201 +0,0 @@
-import os
-import os.path as osp
-import shutil
-import sys
-import warnings
-from setuptools import find_packages, setup
-
-
-def readme():
- with open('README.md', encoding='utf-8') as f:
- content = f.read()
- return content
-
-
-version_file = 'mmocr/version.py'
-is_windows = sys.platform == 'win32'
-
-
-def add_mim_extension():
- """Add extra files that are required to support MIM into the package.
-
- These files will be added by creating a symlink to the originals if the
- package is installed in `editable` mode (e.g. pip install -e .), or by
- copying from the originals otherwise.
- """
-
- # parse installment mode
- if 'develop' in sys.argv:
- # installed by `pip install -e .`
- mode = 'symlink'
- elif 'sdist' in sys.argv or 'bdist_wheel' in sys.argv:
- # installed by `pip install .`
- # or create source distribution by `python setup.py sdist`
- mode = 'copy'
- else:
- return
-
- filenames = ['tools', 'configs', 'model-index.yml', 'dicts']
- repo_path = osp.dirname(__file__)
- mim_path = osp.join(repo_path, 'mmocr', '.mim')
- os.makedirs(mim_path, exist_ok=True)
-
- for filename in filenames:
- if osp.exists(filename):
- src_path = osp.join(repo_path, filename)
- tar_path = osp.join(mim_path, filename)
-
- if osp.isfile(tar_path) or osp.islink(tar_path):
- os.remove(tar_path)
- elif osp.isdir(tar_path):
- shutil.rmtree(tar_path)
-
- if mode == 'symlink':
- src_relpath = osp.relpath(src_path, osp.dirname(tar_path))
- try:
- os.symlink(src_relpath, tar_path)
- except OSError:
- # Creating a symbolic link on windows may raise an
- # `OSError: [WinError 1314]` due to privilege. If
- # the error happens, the src file will be copied
- mode = 'copy'
- warnings.warn(
- f'Failed to create a symbolic link for {src_relpath}, '
- f'and it will be copied to {tar_path}')
- else:
- continue
-
- if mode == 'copy':
- if osp.isfile(src_path):
- shutil.copyfile(src_path, tar_path)
- elif osp.isdir(src_path):
- shutil.copytree(src_path, tar_path)
- else:
- warnings.warn(f'Cannot copy file {src_path}.')
- else:
- raise ValueError(f'Invalid mode {mode}')
-
-
-def get_version():
- with open(version_file) as f:
- exec(compile(f.read(), version_file, 'exec'))
- import sys
-
- # return short version for sdist
- if 'sdist' in sys.argv or 'bdist_wheel' in sys.argv:
- return locals()['short_version']
- else:
- return locals()['__version__']
-
-
-def parse_requirements(fname='requirements.txt', with_version=True):
- """Parse the package dependencies listed in a requirements file but strip
- specific version information.
-
- Args:
- fname (str): Path to requirements file.
-        with_version (bool, default=True): If True, include version specs.
- Returns:
- info (list[str]): List of requirements items.
- CommandLine:
- python -c "import setup; print(setup.parse_requirements())"
- """
- import re
- import sys
- from os.path import exists
- require_fpath = fname
-
- def parse_line(line):
- """Parse information from a line in a requirements text file."""
- if line.startswith('-r '):
- # Allow specifying requirements in other files
- target = line.split(' ')[1]
- for info in parse_require_file(target):
- yield info
- else:
- info = {'line': line}
- if line.startswith('-e '):
- info['package'] = line.split('#egg=')[1]
- else:
- # Remove versioning from the package
- pat = '(' + '|'.join(['>=', '==', '>']) + ')'
- parts = re.split(pat, line, maxsplit=1)
- parts = [p.strip() for p in parts]
-
- info['package'] = parts[0]
- if len(parts) > 1:
- op, rest = parts[1:]
- if ';' in rest:
- # Handle platform specific dependencies
- # http://setuptools.readthedocs.io/en/latest/setuptools.html#declaring-platform-specific-dependencies
- version, platform_deps = map(str.strip,
- rest.split(';'))
- info['platform_deps'] = platform_deps
- else:
- version = rest # NOQA
- info['version'] = (op, version)
- yield info
-
- def parse_require_file(fpath):
- with open(fpath) as f:
- for line in f.readlines():
- line = line.strip()
- if line and not line.startswith('#'):
- yield from parse_line(line)
-
- def gen_packages_items():
- if exists(require_fpath):
- for info in parse_require_file(require_fpath):
- parts = [info['package']]
- if with_version and 'version' in info:
- parts.extend(info['version'])
- if not sys.version.startswith('3.4'):
-                    # apparently platform_deps are broken in 3.4
- platform_deps = info.get('platform_deps')
- if platform_deps is not None:
- parts.append(';' + platform_deps)
- item = ''.join(parts)
- yield item
-
- packages = list(gen_packages_items())
- return packages
-
-
-if __name__ == '__main__':
- add_mim_extension()
- library_dirs = [
- lp for lp in os.environ.get('LD_LIBRARY_PATH', '').split(':')
- if len(lp) > 1
- ]
- setup(
- name='mmocr',
- version=get_version(),
- description='OpenMMLab Text Detection, OCR, and NLP Toolbox',
- long_description=readme(),
- long_description_content_type='text/markdown',
- maintainer='MMOCR Authors',
- maintainer_email='openmmlab@gmail.com',
- keywords='Text Detection, OCR, KIE, NLP',
- packages=find_packages(exclude=('configs', 'tools', 'demo')),
- include_package_data=True,
- url='https://github.com/open-mmlab/mmocr',
- classifiers=[
- 'Development Status :: 4 - Beta',
- 'License :: OSI Approved :: Apache Software License',
- 'Operating System :: OS Independent',
- 'Programming Language :: Python :: 3',
- 'Programming Language :: Python :: 3.6',
- 'Programming Language :: Python :: 3.7',
- 'Programming Language :: Python :: 3.8',
- 'Programming Language :: Python :: 3.9',
- ],
- license='Apache License 2.0',
- install_requires=parse_requirements('requirements/runtime.txt'),
- extras_require={
- 'all': parse_requirements('requirements.txt'),
- 'tests': parse_requirements('requirements/tests.txt'),
- 'build': parse_requirements('requirements/build.txt'),
- 'optional': parse_requirements('requirements/optional.txt'),
- 'mim': parse_requirements('requirements/mminstall.txt'),
- },
- zip_safe=False)
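The core of `parse_requirements` is a regex split on the first version operator; a small standalone sketch (with an assumed example requirement line) shows how one line is decomposed:

```python
import re

# Assumed example line; mirrors the splitting logic inside parse_line() above.
line = "torch==1.13; platform_system=='Linux'"
pat = '(' + '|'.join(['>=', '==', '>']) + ')'
package, op, rest = [p.strip() for p in re.split(pat, line, maxsplit=1)]
version, platform_deps = map(str.strip, rest.split(';'))
print(package, (op, version), platform_deps)
# torch ('==', '1.13') platform_system=='Linux'
```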
diff --git a/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/textrecog/bid_converter.py b/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/textrecog/bid_converter.py
deleted file mode 100644
index ec61b64bb42effc6194e1661a819224fa02b2c13..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/tools/dataset_converters/textrecog/bid_converter.py
+++ /dev/null
@@ -1,247 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import argparse
-import os
-import os.path as osp
-
-import mmcv
-import mmengine
-
-from mmocr.utils import crop_img, dump_ocr_data
-
-
-def collect_files(img_dir, gt_dir):
- """Collect all images and their corresponding groundtruth files.
-
- Args:
- img_dir (str): The image directory
- gt_dir (str): The groundtruth directory
-
- Returns:
- files (list): The list of tuples (img_file, groundtruth_file)
- """
- assert isinstance(img_dir, str)
- assert img_dir
- assert isinstance(gt_dir, str)
- assert gt_dir
-
- ann_list, imgs_list = [], []
- for img_file in os.listdir(img_dir):
- ann_file = img_file.split('_')[0] + '_gt_ocr.txt'
- ann_list.append(osp.join(gt_dir, ann_file))
- imgs_list.append(osp.join(img_dir, img_file))
-
- files = list(zip(imgs_list, ann_list))
- assert len(files), f'No images found in {img_dir}'
- print(f'Loaded {len(files)} images from {img_dir}')
-
- return files
-
-
-def collect_annotations(files, nproc=1):
- """Collect the annotation information.
-
- Args:
- files (list): The list of tuples (image_file, groundtruth_file)
- nproc (int): The number of process to collect annotations
-
- Returns:
- images (list): The list of image information dicts
- """
- assert isinstance(files, list)
- assert isinstance(nproc, int)
-
- if nproc > 1:
- images = mmengine.track_parallel_progress(
- load_img_info, files, nproc=nproc)
- else:
- images = mmengine.track_progress(load_img_info, files)
-
- return images
-
-
-def load_img_info(files):
- """Load the information of one image.
-
- Args:
- files (tuple): The tuple of (img_file, groundtruth_file)
-
- Returns:
- img_info (dict): The dict of the img and annotation information
- """
- assert isinstance(files, tuple)
-
- img_file, gt_file = files
-    assert osp.basename(img_file).split('_')[0] == osp.basename(gt_file).split(
-        '_')[0]
- # read imgs while ignoring orientations
- img = mmcv.imread(img_file, 'unchanged')
-
- img_info = dict(
- file_name=osp.basename(img_file),
- height=img.shape[0],
- width=img.shape[1],
- segm_file=osp.basename(gt_file))
-
- if osp.splitext(gt_file)[1] == '.txt':
- img_info = load_txt_info(gt_file, img_info)
- else:
- raise NotImplementedError
-
- return img_info
-
-
-def load_txt_info(gt_file, img_info):
- """Collect the annotation information.
-
- The annotation format is as the following:
- x, y, w, h, text
- 977, 152, 16, 49, NOME
- 962, 143, 12, 323, APPINHANESI BLAZEK PASSOTTO
- 906, 446, 12, 94, 206940361
- 905, 641, 12, 44, SPTC
-
- Args:
- gt_file (str): The path to ground-truth
- img_info (dict): The dict of the img and annotation information
-
- Returns:
- img_info (dict): The dict of the img and annotation information
- """
- with open(gt_file, encoding='latin1') as f:
- anno_info = []
- for line in f:
- line = line.strip('\n')
- # Ignore hard samples
-            if not line or line[0] == '[' or line[0] == 'x':
- continue
- ann = line.split(',')
- bbox = ann[0:4]
- bbox = [int(coord) for coord in bbox]
- x, y, w, h = bbox
- # in case ',' exists in label
- word = ','.join(ann[4:]) if len(ann[4:]) > 1 else ann[4]
- # remove the initial space
- word = word.strip()
- bbox = [x, y, x + w, y, x + w, y + h, x, y + h]
-
- anno = dict(bbox=bbox, word=word)
- anno_info.append(anno)
-
- img_info.update(anno_info=anno_info)
-
- return img_info
-
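The conversion from the `x, y, w, h, text` annotation lines to the 8-point polygon used downstream is compact enough to show on the first sample line from the docstring above:

```python
# Standalone sketch using the first sample line from the docstring above.
line = "977, 152, 16, 49, NOME"
ann = line.split(',')
x, y, w, h = [int(coord) for coord in ann[0:4]]
word = ','.join(ann[4:]).strip() if len(ann[4:]) > 1 else ann[4].strip()
# Four corners, clockwise from top-left: (x, y), (x+w, y), (x+w, y+h), (x, y+h)
bbox = [x, y, x + w, y, x + w, y + h, x, y + h]
print(word, bbox)   # NOME [977, 152, 993, 152, 993, 201, 977, 201]
```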
-
-def split_train_val_list(full_list, val_ratio):
- """Split list by val_ratio.
-
- Args:
-        full_list (list): List to be split
- val_ratio (float): Split ratio for val set
-
- return:
- list(list, list): Train_list and val_list
- """
- n_total = len(full_list)
- offset = int(n_total * val_ratio)
-    if n_total == 0 or offset < 1:
-        return [full_list, []]
- val_list = full_list[:offset]
- train_list = full_list[offset:]
- return [train_list, val_list]
-
-
-def generate_ann(root_path, image_infos, preserve_vertical, val_ratio, format):
- """Generate cropped annotations and label txt file.
-
- Args:
- root_path (str): The root path of the dataset
- image_infos (list[dict]): A list of dicts of the img and
- annotation information
- preserve_vertical (bool): Whether to preserve vertical texts
- val_ratio (float): Split ratio for val set
- format (str): Using jsonl(dict) or str to format annotations
- """
-
- assert val_ratio <= 1.
-
- if val_ratio:
- image_infos = split_train_val_list(image_infos, val_ratio)
- splits = ['training', 'val']
-
- else:
- image_infos = [image_infos]
- splits = ['training']
-
- for i, split in enumerate(splits):
- dst_image_root = osp.join(root_path, 'crops', split)
- ignore_image_root = osp.join(root_path, 'ignores', split)
- os.makedirs(dst_image_root, exist_ok=True)
-
- img_info = []
- for image_info in image_infos[i]:
- index = 1
- src_img_path = osp.join(root_path, 'imgs', image_info['file_name'])
- image = mmcv.imread(src_img_path)
- src_img_root = image_info['file_name'].split('.')[0]
-
- for anno in image_info['anno_info']:
- word = anno['word']
- dst_img = crop_img(image, anno['bbox'], 0, 0)
- h, w, _ = dst_img.shape
-
- dst_img_name = f'{src_img_root}_{index}.png'
- index += 1
- # Skip invalid annotations
- if min(dst_img.shape) == 0:
- continue
- # Skip vertical texts
- if not preserve_vertical and h / w > 2 and split == 'training':
- dst_img_path = osp.join(ignore_image_root, dst_img_name)
- mmcv.imwrite(dst_img, dst_img_path)
- continue
-
- dst_img_path = osp.join(dst_image_root, dst_img_name)
- mmcv.imwrite(dst_img, dst_img_path)
-
- img_info.append({
- 'file_name': dst_img_name,
- 'anno_info': [{
- 'text': word
- }]
- })
-
- dump_ocr_data(img_info,
- osp.join(root_path, f'{split.lower()}_label.json'),
- 'textrecog')
-
-
-def parse_args():
- parser = argparse.ArgumentParser(
- description='Generate training and val set of BID ')
- parser.add_argument('root_path', help='Root dir path of BID')
- parser.add_argument(
- '--preserve-vertical',
- help='Preserve samples containing vertical texts',
- action='store_true')
- parser.add_argument(
- '--val-ratio', help='Split ratio for val set', default=0., type=float)
-    parser.add_argument(
-        '--nproc', default=1, type=int, help='Number of processes')
-    parser.add_argument(
-        '--format',
-        default='jsonl',
-        choices=['jsonl', 'txt'],
-        help='Use jsonl(dict) or txt(str) to format annotations')
- args = parser.parse_args()
- return args
-
-
-def main():
- args = parse_args()
- root_path = args.root_path
- with mmengine.Timer(print_tmpl='It takes {}s to convert BID annotation'):
- files = collect_files(
- osp.join(root_path, 'imgs'), osp.join(root_path, 'annotations'))
- image_infos = collect_annotations(files, nproc=args.nproc)
- generate_ann(root_path, image_infos, args.preserve_vertical,
- args.val_ratio, args.format)
-
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/Mountchicken/MAERec-Gradio/tools/slurm_test.sh b/spaces/Mountchicken/MAERec-Gradio/tools/slurm_test.sh
deleted file mode 100644
index 865f45599ad883d216f0df0248a3815700615c17..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/tools/slurm_test.sh
+++ /dev/null
@@ -1,22 +0,0 @@
-#!/usr/bin/env bash
-
-set -x
-export PYTHONPATH=`pwd`:$PYTHONPATH
-
-PARTITION=$1
-JOB_NAME=$2
-CONFIG=$3
-CHECKPOINT=$4
-GPUS=${GPUS:-8}
-GPUS_PER_NODE=${GPUS_PER_NODE:-8}
-PY_ARGS=${@:5}
-SRUN_ARGS=${SRUN_ARGS:-""}
-
-srun -p ${PARTITION} \
- --job-name=${JOB_NAME} \
- --gres=gpu:${GPUS_PER_NODE} \
- --ntasks=${GPUS} \
- --ntasks-per-node=${GPUS_PER_NODE} \
- --kill-on-bad-exit=1 \
- ${SRUN_ARGS} \
- python -u tools/test.py ${CONFIG} ${CHECKPOINT} --launcher="slurm" ${PY_ARGS}
diff --git a/spaces/MyGenAiUser/MyGenAiVoiceChatBoat/app.py b/spaces/MyGenAiUser/MyGenAiVoiceChatBoat/app.py
deleted file mode 100644
index ca8b6d40b4ab898c70da92f4a4298de2baf703dc..0000000000000000000000000000000000000000
--- a/spaces/MyGenAiUser/MyGenAiVoiceChatBoat/app.py
+++ /dev/null
@@ -1,164 +0,0 @@
-import os
-import re
-import requests
-import json
-import gradio as gr
-from langchain.chat_models import ChatOpenAI
-from langchain import LLMChain, PromptTemplate
-from langchain.memory import ConversationBufferMemory
-
-OPENAI_API_KEY=os.getenv('OPENAI_API_KEY')
-PLAY_HT_API_KEY=os.getenv('PLAY_HT_API_KEY')
-PLAY_HT_USER_ID=os.getenv('PLAY_HT_USER_ID')
-
-PLAY_HT_VOICE_ID=os.getenv('PLAY_HT_VOICE_ID')
-play_ht_api_get_audio_url = "https://play.ht/api/v2/tts"
-
-
-template = """You are a helpful assistant to answer user queries.
-{chat_history}
-User: {user_message}
-Chatbot:"""
-
-prompt = PromptTemplate(
- input_variables=["chat_history", "user_message"], template=template
-)
-
-memory = ConversationBufferMemory(memory_key="chat_history")
-
-llm_chain = LLMChain(
-    llm=ChatOpenAI(temperature=0.5, model_name="gpt-3.5-turbo"),
- prompt=prompt,
- verbose=True,
- memory=memory,
-)
-
-headers = {
- "accept": "text/event-stream",
- "content-type": "application/json",
- "AUTHORIZATION": "Bearer "+ PLAY_HT_API_KEY,
- "X-USER-ID": PLAY_HT_USER_ID
-}
-
-
-def get_payload(text):
- return {
- "text": text,
- "voice": PLAY_HT_VOICE_ID,
- "quality": "medium",
- "output_format": "mp3",
- "speed": 1,
- "sample_rate": 24000,
- "seed": None,
- "temperature": None
- }
-
-def get_generated_audio(text):
-    payload = get_payload(text)
-    generated_response = {}
-    response = None
-    try:
-        response = requests.post(play_ht_api_get_audio_url, json=payload, headers=headers)
-        response.raise_for_status()
-        generated_response["type"] = 'SUCCESS'
-        generated_response["response"] = response.text
-    except requests.exceptions.RequestException as e:
-        generated_response["type"] = 'ERROR'
-        try:
-            response_text = json.loads(response.text)
-            if response_text.get('error_message'):
-                generated_response["response"] = response_text['error_message']
-            else:
-                generated_response["response"] = response.text
-        except Exception:
-            generated_response["response"] = response.text if response is not None else str(e)
-    except Exception as e:
-        generated_response["type"] = 'ERROR'
-        generated_response["response"] = response.text if response is not None else str(e)
-    return generated_response
-
-def extract_urls(text):
- # Define the regex pattern for URLs
- url_pattern = r'https?://(?:[-\w.]|(?:%[\da-fA-F]{2}))+[/\w\.-]*'
-
- # Find all occurrences of URLs in the text
- urls = re.findall(url_pattern, text)
-
- return urls
-
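`extract_urls` is what pulls the generated audio link out of the raw event-stream text; a quick check with a made-up payload (the real Play.ht event format may differ):

```python
import re

url_pattern = r'https?://(?:[-\w.]|(?:%[\da-fA-F]{2}))+[/\w\.-]*'
# Hypothetical event-stream snippet -- only the URL shape matters here.
sample = 'event: completed\ndata: {"url": "https://peregrine-results.example.com/abc123.mp3"}'
print(re.findall(url_pattern, sample))
# ['https://peregrine-results.example.com/abc123.mp3']
```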
-def get_audio_reply_for_question(text):
- generated_audio_event = get_generated_audio(text)
- #From get_generated_audio, you will get events in a string format, from that we need to extract the url
- final_response = {
- "audio_url": '',
- "message": ''
- }
- if generated_audio_event["type"] == 'SUCCESS':
- audio_urls = extract_urls(generated_audio_event["response"])
- if len(audio_urls) == 0:
- final_response['message'] = "No audio file link found in generated event"
- else:
- final_response['audio_url'] = audio_urls[-1]
- else:
- final_response['message'] = generated_audio_event['response']
- return final_response
-
-def download_url(url):
- try:
- # Send a GET request to the URL to fetch the content
- final_response = {
- 'content':'',
- 'error':''
- }
- response = requests.get(url)
- # Check if the request was successful (status code 200)
- if response.status_code == 200:
- final_response['content'] = response.content
- else:
- final_response['error'] = f"Failed to download the URL. Status code: {response.status_code}"
- except Exception as e:
- final_response['error'] = f"Failed to download the URL. Error: {e}"
- return final_response
-
-def get_filename_from_url(url):
- # Use os.path.basename() to extract the file name from the URL
- file_name = os.path.basename(url)
- return file_name
-
-def get_text_response(user_message):
- response = llm_chain.predict(user_message = user_message)
- return response
-
-def get_text_response_and_audio_response(user_message):
- response = get_text_response(user_message) # Getting the reply from Open AI
- audio_reply_for_question_response = get_audio_reply_for_question(response)
- final_response = {
- 'output_file_path': '',
- 'message':''
- }
- audio_url = audio_reply_for_question_response['audio_url']
- if audio_url:
- output_file_path=get_filename_from_url(audio_url)
- download_url_response = download_url(audio_url)
- audio_content = download_url_response['content']
- if audio_content:
- with open(output_file_path, "wb") as audio_file:
- audio_file.write(audio_content)
- final_response['output_file_path'] = output_file_path
- else:
- final_response['message'] = download_url_response['error']
- else:
- final_response['message'] = audio_reply_for_question_response['message']
- return final_response
-
-def chat_bot_response(message, history):
- text_and_audio_response = get_text_response_and_audio_response(message)
- output_file_path = text_and_audio_response['output_file_path']
- if output_file_path:
- return (text_and_audio_response['output_file_path'],)
- else:
- return text_and_audio_response['message']
-
-demo = gr.ChatInterface(chat_bot_response,examples=["How are you doing?","What are your interests?","Which places do you like to visit?"])
-
-if __name__ == "__main__":
- demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`.
diff --git a/spaces/NATSpeech/DiffSpeech/utils/audio/io.py b/spaces/NATSpeech/DiffSpeech/utils/audio/io.py
deleted file mode 100644
index 34d5d20ae13e9aa481b1bc85117ad6539af8a624..0000000000000000000000000000000000000000
--- a/spaces/NATSpeech/DiffSpeech/utils/audio/io.py
+++ /dev/null
@@ -1,22 +0,0 @@
-import subprocess
-
-import numpy as np
-from scipy.io import wavfile
-
-
-def save_wav(wav, path, sr, norm=False):
- if norm:
- wav = wav / np.abs(wav).max()
- wav = wav * 32767
- wavfile.write(path[:-4] + '.wav', sr, wav.astype(np.int16))
- if path[-4:] == '.mp3':
- to_mp3(path[:-4])
-
-
-def to_mp3(out_path):
- if out_path[-4:] == '.wav':
- out_path = out_path[:-4]
- subprocess.check_call(
- f'ffmpeg -threads 1 -loglevel error -i "{out_path}.wav" -vn -b:a 192k -y -hide_banner -async 1 "{out_path}.mp3"',
- shell=True, stdin=subprocess.PIPE)
- subprocess.check_call(f'rm -f "{out_path}.wav"', shell=True)
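A minimal usage sketch of the helpers above, writing one second of a 440 Hz tone and transcoding it (this assumes `ffmpeg` is on the PATH, since `to_mp3` shells out to it):

```python
import numpy as np

sr = 22050
t = np.linspace(0, 1, sr, endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * 440 * t)

# Writes tone.wav, converts it to tone.mp3 via ffmpeg, then deletes the intermediate .wav.
save_wav(tone, 'tone.mp3', sr, norm=True)
```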
diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/bert/export_tfhub.py b/spaces/NCTCMumbai/NCTC/models/official/nlp/bert/export_tfhub.py
deleted file mode 100644
index 5923309d1fa36a16d4cccda11650d9c3d0fcc616..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/official/nlp/bert/export_tfhub.py
+++ /dev/null
@@ -1,95 +0,0 @@
-# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""A script to export the BERT core model as a TF-Hub SavedModel."""
-from __future__ import absolute_import
-from __future__ import division
-# from __future__ import google_type_annotations
-from __future__ import print_function
-
-from absl import app
-from absl import flags
-from absl import logging
-import tensorflow as tf
-from typing import Text
-from official.nlp.bert import bert_models
-from official.nlp.bert import configs
-
-FLAGS = flags.FLAGS
-
-flags.DEFINE_string("bert_config_file", None,
- "Bert configuration file to define core bert layers.")
-flags.DEFINE_string("model_checkpoint_path", None,
- "File path to TF model checkpoint.")
-flags.DEFINE_string("export_path", None, "TF-Hub SavedModel destination path.")
-flags.DEFINE_string("vocab_file", None,
- "The vocabulary file that the BERT model was trained on.")
-flags.DEFINE_bool("do_lower_case", None, "Whether to lowercase. If None, "
- "do_lower_case will be enabled if 'uncased' appears in the "
- "name of --vocab_file")
-
-
-def create_bert_model(bert_config: configs.BertConfig) -> tf.keras.Model:
- """Creates a BERT keras core model from BERT configuration.
-
- Args:
- bert_config: A `BertConfig` to create the core model.
-
- Returns:
- A keras model.
- """
- # Adds input layers just as placeholders.
- input_word_ids = tf.keras.layers.Input(
- shape=(None,), dtype=tf.int32, name="input_word_ids")
- input_mask = tf.keras.layers.Input(
- shape=(None,), dtype=tf.int32, name="input_mask")
- input_type_ids = tf.keras.layers.Input(
- shape=(None,), dtype=tf.int32, name="input_type_ids")
- transformer_encoder = bert_models.get_transformer_encoder(
- bert_config, sequence_length=None)
- sequence_output, pooled_output = transformer_encoder(
- [input_word_ids, input_mask, input_type_ids])
- # To keep consistent with legacy hub modules, the outputs are
- # "pooled_output" and "sequence_output".
- return tf.keras.Model(
- inputs=[input_word_ids, input_mask, input_type_ids],
- outputs=[pooled_output, sequence_output]), transformer_encoder
-
-
-def export_bert_tfhub(bert_config: configs.BertConfig,
- model_checkpoint_path: Text, hub_destination: Text,
- vocab_file: Text, do_lower_case: bool = None):
- """Restores a tf.keras.Model and saves for TF-Hub."""
- # If do_lower_case is not explicit, default to checking whether "uncased" is
- # in the vocab file name
- if do_lower_case is None:
- do_lower_case = "uncased" in vocab_file
- logging.info("Using do_lower_case=%s based on name of vocab_file=%s",
- do_lower_case, vocab_file)
- core_model, encoder = create_bert_model(bert_config)
- checkpoint = tf.train.Checkpoint(model=encoder)
- checkpoint.restore(model_checkpoint_path).assert_consumed()
- core_model.vocab_file = tf.saved_model.Asset(vocab_file)
- core_model.do_lower_case = tf.Variable(do_lower_case, trainable=False)
- core_model.save(hub_destination, include_optimizer=False, save_format="tf")
-
-
-def main(_):
- bert_config = configs.BertConfig.from_json_file(FLAGS.bert_config_file)
- export_bert_tfhub(bert_config, FLAGS.model_checkpoint_path, FLAGS.export_path,
- FLAGS.vocab_file, FLAGS.do_lower_case)
-
-
-if __name__ == "__main__":
- app.run(main)
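After export, the SavedModel can be consumed like other TF2 BERT hub modules. A hypothetical consumer sketch follows; the export path is a placeholder, and the asset/variable names simply mirror the attributes attached above:

```python
import tensorflow as tf
import tensorflow_hub as hub

# Placeholder path -- wherever --export_path pointed during export.
bert_layer = hub.KerasLayer("/tmp/bert_tfhub_model", trainable=True)

input_word_ids = tf.keras.layers.Input(shape=(128,), dtype=tf.int32, name="input_word_ids")
input_mask = tf.keras.layers.Input(shape=(128,), dtype=tf.int32, name="input_mask")
input_type_ids = tf.keras.layers.Input(shape=(128,), dtype=tf.int32, name="input_type_ids")
pooled_output, sequence_output = bert_layer([input_word_ids, input_mask, input_type_ids])

# The vocab file and casing flag exported above travel with the SavedModel.
vocab_file = bert_layer.resolved_object.vocab_file.asset_path.numpy()
do_lower_case = bert_layer.resolved_object.do_lower_case.numpy()
```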
diff --git a/spaces/Naszirs397/rvc-models/vc_infer_pipeline.py b/spaces/Naszirs397/rvc-models/vc_infer_pipeline.py
deleted file mode 100644
index c26d45068f9b6bf2b194b13c3c89f8a06347c124..0000000000000000000000000000000000000000
--- a/spaces/Naszirs397/rvc-models/vc_infer_pipeline.py
+++ /dev/null
@@ -1,306 +0,0 @@
-import os
-import traceback
-from time import time as ttime
-
-import faiss
-import numpy as np
-import parselmouth
-import pyworld
-import torch
-import torch.nn.functional as F
-from scipy import signal
-
-from config import x_pad, x_query, x_center, x_max
-
-bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000)
-
-
-class VC(object):
- def __init__(self, tgt_sr, device, is_half):
-        self.sr = 16000  # hubert input sampling rate
-        self.window = 160  # samples per frame
-        self.t_pad = self.sr * x_pad  # padding duration before/after each segment
-        self.t_pad_tgt = tgt_sr * x_pad
-        self.t_pad2 = self.t_pad * 2
-        self.t_query = self.sr * x_query  # search range around each candidate cut point
-        self.t_center = self.sr * x_center  # spacing between candidate cut points
-        self.t_max = self.sr * x_max  # duration threshold below which no cut-point search is done
- self.device = device
- self.is_half = is_half
-
- def get_f0(self, x, p_len, f0_up_key, f0_method, inp_f0=None):
- time_step = self.window / self.sr * 1000
- f0_min = 50
- f0_max = 1100
- f0_mel_min = 1127 * np.log(1 + f0_min / 700)
- f0_mel_max = 1127 * np.log(1 + f0_max / 700)
- if f0_method == "pm":
- f0 = (
- parselmouth.Sound(x, self.sr)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=f0_min,
- pitch_ceiling=f0_max,
- )
- .selected_array["frequency"]
- )
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(
- f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant"
- )
- elif f0_method == "harvest":
- f0, t = pyworld.harvest(
- x.astype(np.double),
- fs=self.sr,
- f0_ceil=f0_max,
- f0_floor=f0_min,
- frame_period=10,
- )
- f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr)
- f0 = signal.medfilt(f0, 3)
- f0 *= pow(2, f0_up_key / 12)
- # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
-        tf0 = self.sr // self.window  # number of f0 frames per second
- if inp_f0 is not None:
- delta_t = np.round(
- (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1
- ).astype("int16")
- replace_f0 = np.interp(
- list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1]
- )
- shape = f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)].shape[0]
- f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)] = replace_f0[:shape]
- # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
- f0bak = f0.copy()
- f0_mel = 1127 * np.log(1 + f0 / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
- f0_mel_max - f0_mel_min
- ) + 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > 255] = 255
-        f0_coarse = np.rint(f0_mel).astype(np.int64)  # np.int is removed in recent NumPy versions
- return f0_coarse, f0bak # 1-0
-
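The coarse-f0 computation at the end of `get_f0` compresses Hz values onto a 1-255 mel-scale grid; a self-contained sketch with a few made-up pitch values:

```python
import numpy as np

# Mirrors the quantization at the end of get_f0 above, on made-up values.
f0_min, f0_max = 50, 1100
f0_mel_min = 1127 * np.log(1 + f0_min / 700)
f0_mel_max = 1127 * np.log(1 + f0_max / 700)

f0 = np.array([0.0, 110.0, 440.0])        # unvoiced frame, A2, A4
f0_mel = 1127 * np.log(1 + f0 / 700)
f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (f0_mel_max - f0_mel_min) + 1
f0_mel = np.clip(f0_mel, 1, 255)
f0_coarse = np.rint(f0_mel).astype(np.int64)
print(f0_coarse)                          # unvoiced frames collapse to bin 1
```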
- def vc(
- self,
- model,
- net_g,
- sid,
- audio0,
- pitch,
- pitchf,
- times,
- index,
- big_npy,
- index_rate,
- ): # ,file_index,file_big_npy
- feats = torch.from_numpy(audio0)
- if self.is_half:
- feats = feats.half()
- else:
- feats = feats.float()
- if feats.dim() == 2: # double channels
- feats = feats.mean(-1)
- assert feats.dim() == 1, feats.dim()
- feats = feats.view(1, -1)
- padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False)
-
- inputs = {
- "source": feats.to(self.device),
- "padding_mask": padding_mask,
- "output_layer": 9, # layer 9
- }
- t0 = ttime()
- with torch.no_grad():
- logits = model.extract_features(**inputs)
- feats = model.final_proj(logits[0])
-
- if (
-            index is not None
-            and big_npy is not None
-            and index_rate != 0
- ):
- npy = feats[0].cpu().numpy()
- if self.is_half:
- npy = npy.astype("float32")
- _, I = index.search(npy, 1)
- npy = big_npy[I.squeeze()]
- if self.is_half:
- npy = npy.astype("float16")
- feats = (
- torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate
- + (1 - index_rate) * feats
- )
-
- feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1)
- t1 = ttime()
- p_len = audio0.shape[0] // self.window
- if feats.shape[1] < p_len:
- p_len = feats.shape[1]
-        if pitch is not None and pitchf is not None:
- pitch = pitch[:, :p_len]
- pitchf = pitchf[:, :p_len]
- p_len = torch.tensor([p_len], device=self.device).long()
- with torch.no_grad():
-            if pitch is not None and pitchf is not None:
- audio1 = (
- (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0] * 32768)
- .data.cpu()
- .float()
- .numpy()
- .astype(np.int16)
- )
- else:
- audio1 = (
- (net_g.infer(feats, p_len, sid)[0][0, 0] * 32768)
- .data.cpu()
- .float()
- .numpy()
- .astype(np.int16)
- )
- del feats, p_len, padding_mask
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- t2 = ttime()
- times[0] += t1 - t0
- times[2] += t2 - t1
- return audio1
-
- def pipeline(
- self,
- model,
- net_g,
- sid,
- audio,
- times,
- f0_up_key,
- f0_method,
- file_index,
- file_big_npy,
- index_rate,
- if_f0,
- f0_file=None,
- ):
- if (
- file_big_npy != ""
- and file_index != ""
-            and os.path.exists(file_big_npy)
-            and os.path.exists(file_index)
- and index_rate != 0
- ):
- try:
- index = faiss.read_index(file_index)
- big_npy = np.load(file_big_npy)
- except:
- traceback.print_exc()
- index = big_npy = None
- else:
- index = big_npy = None
- print("Feature retrieval library doesn't exist or ratio is 0")
- audio = signal.filtfilt(bh, ah, audio)
- audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect")
- opt_ts = []
- if audio_pad.shape[0] > self.t_max:
- audio_sum = np.zeros_like(audio)
- for i in range(self.window):
- audio_sum += audio_pad[i : i - self.window]
- for t in range(self.t_center, audio.shape[0], self.t_center):
- opt_ts.append(
- t
- - self.t_query
- + np.where(
- np.abs(audio_sum[t - self.t_query : t + self.t_query])
- == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min()
- )[0][0]
- )
- s = 0
- audio_opt = []
- t = None
- t1 = ttime()
- audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect")
- p_len = audio_pad.shape[0] // self.window
- inp_f0 = None
-        if hasattr(f0_file, "name"):
- try:
- with open(f0_file.name, "r") as f:
- lines = f.read().strip("\n").split("\n")
- inp_f0 = []
- for line in lines:
- inp_f0.append([float(i) for i in line.split(",")])
- inp_f0 = np.array(inp_f0, dtype="float32")
- except:
- traceback.print_exc()
- sid = torch.tensor(sid, device=self.device).unsqueeze(0).long()
- pitch, pitchf = None, None
- if if_f0 == 1:
- pitch, pitchf = self.get_f0(audio_pad, p_len, f0_up_key, f0_method, inp_f0)
- pitch = pitch[:p_len]
- pitchf = pitchf[:p_len]
- pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long()
- pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float()
- t2 = ttime()
- times[1] += t2 - t1
- for t in opt_ts:
- t = t // self.window * self.window
- if if_f0 == 1:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[s : t + self.t_pad2 + self.window],
- pitch[:, s // self.window : (t + self.t_pad2) // self.window],
- pitchf[:, s // self.window : (t + self.t_pad2) // self.window],
- times,
- index,
- big_npy,
- index_rate,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- else:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[s : t + self.t_pad2 + self.window],
- None,
- None,
- times,
- index,
- big_npy,
- index_rate,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- s = t
- if if_f0 == 1:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[t:],
- pitch[:, t // self.window :] if t is not None else pitch,
- pitchf[:, t // self.window :] if t is not None else pitchf,
- times,
- index,
- big_npy,
- index_rate,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- else:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[t:],
- None,
- None,
- times,
- index,
- big_npy,
- index_rate,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- audio_opt = np.concatenate(audio_opt)
- del pitch, pitchf, sid
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- return audio_opt
diff --git a/spaces/Nee001/bing0/src/lib/bots/bing/tts.ts b/spaces/Nee001/bing0/src/lib/bots/bing/tts.ts
deleted file mode 100644
index cd10b7d1d7581bf9cf46ff6755fcca550c558c9b..0000000000000000000000000000000000000000
--- a/spaces/Nee001/bing0/src/lib/bots/bing/tts.ts
+++ /dev/null
@@ -1,82 +0,0 @@
-import { sleep } from './utils'
-
-const synth = window.speechSynthesis
-
-export class TTS {
- currentText = ''
- speakText = ''
- private controller = new AbortController()
- speaking = false
- get isSpeaking() {
- return this.speaking
- }
- finished = false
- constructor() {}
- abort = () => {
- this.controller.abort()
- }
-
- reset = () => {
- this.speaking = false
- this.finished = true
- this.currentText = ''
- this.speakText = ''
- this.abort()
- }
-
- speak = (text: string) => {
- if (!synth || text?.trim()?.length < 2) {
- return
- }
- this.currentText = text.replace(/[^\u4e00-\u9fa5_a-zA-Z0-9,。?,:;\.,:]+/g, '')
- this.finished = false
- this.loop()
- }
-
-  private async doSpeak() {
- return new Promise((resolve) => {
- const endIndex = this.finished ? this.currentText.length :
- Math.max(
- this.currentText.lastIndexOf('。'),
- this.currentText.lastIndexOf(';'),
- this.currentText.lastIndexOf('、'),
- this.currentText.lastIndexOf('?'),
- this.currentText.lastIndexOf('\n')
- )
- const startIndex = this.speakText.length ? Math.max(0, this.currentText.lastIndexOf(this.speakText) + this.speakText.length) : 0
-
- if (startIndex >= endIndex) {
- return resolve(true)
- }
- const text = this.currentText.slice(startIndex, endIndex)
- this.speakText = text
- const utterThis = new SpeechSynthesisUtterance(text)
- this.controller.signal.onabort = () => {
- synth.cancel()
- this.finished = true
- resolve(false)
- }
-
- utterThis.onend = function (event) {
- resolve(true)
- }
-
- utterThis.onerror = function (event) {
- resolve(false)
- }
-
- const voice = synth.getVoices().find(v => v.name.includes('Microsoft Yunxi Online')) ?? null
- utterThis.voice = voice
- synth.speak(utterThis)
- })
- }
-
- private async loop() {
- if (this.speaking) return
- this.speaking = true
- while(!this.finished) {
-      await Promise.all([sleep(1000), this.doSpeak()])
- }
- this.speaking = false
- }
-}
diff --git a/spaces/NicolasGaudemet/LongDocumentSummarizer/summarizer_app.py b/spaces/NicolasGaudemet/LongDocumentSummarizer/summarizer_app.py
deleted file mode 100644
index 580c3de6f9f30b77d1eb5f0971d9d83404a52934..0000000000000000000000000000000000000000
--- a/spaces/NicolasGaudemet/LongDocumentSummarizer/summarizer_app.py
+++ /dev/null
@@ -1,75 +0,0 @@
-import os
-import json
-import openai
-from langchain.document_loaders import PDFMinerLoader, UnstructuredURLLoader
-from langchain.chat_models import ChatOpenAI
-from langchain import PromptTemplate, LLMChain
-from langchain.text_splitter import CharacterTextSplitter
-from langchain.prompts import PromptTemplate
-from langchain.chains.summarize import load_summarize_chain
-import gradio as gr
-
-# load the parameters
-with open("parametres.json", "r") as p:
- params = json.load(p)
- max_pages = params["max_pages"]
-
-def summarize(taille_resume, Document, url):
-
- # loads a PDF document
- if not Document and not url:
- return "Merci de fournir un document PDF ou lien vers un site web"
- elif not Document:
- loader = UnstructuredURLLoader(urls = [url])
- elif not Document.name.endswith('.pdf'):
- return ("Merci de fournir un document PDF")
- else:
-        loader = PDFMinerLoader(Document.name)  # PyPDFLoader would create pages that are too small (e.g. 1 word per page if the file comes from a PowerPoint)
-
- docs = loader.load()
-
-    # prepare the text
- text_splitter = CharacterTextSplitter(separator = "\n", chunk_size=5000)
- docs = text_splitter.split_documents(docs)
- print(str(len(docs)) + " pages pour un maximum de " + str(max_pages))
-
- chunked_docs = docs[:int(max_pages/3)]
-
-    # define the LLM
-    llm = ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=int(taille_resume * 2.2), temperature=0, openai_api_key=os.environ['OpenaiKey'])
-
-    # summarization
- prompt_template = f"""Écris un résumé structuré et détaillé du document délimité par des triples accents graves.
- ASSURE-TOI que la longueur de ce résumé soit supérieure à {int(taille_resume/1.5)} mots et inférieure à {int(taille_resume*1.5)} mots.
- ASSURE-TOI AUSSI, C'EST LE PLUS IMPORTANT que la dernière phrase de ton résumé soit complète et se termine par un point.
- AJOUTE ENFIN le signe " |" après ce point final.
- """ + """DOCUMENT : ```{text}```"""
- summary_langage_prompt = PromptTemplate(template=prompt_template, input_variables=['text'])
- chain = load_summarize_chain(llm, chain_type="map_reduce", return_intermediate_steps=True, map_prompt=summary_langage_prompt, combine_prompt = summary_langage_prompt)
- steps = chain({"input_documents": chunked_docs}, return_only_outputs=True)
-
- summary = steps['output_text']
- summary = summary + " " + str(len(summary.split())) + " mots"
-
- return summary
-
-# Create the Gradio interface
-
-iface = gr.Interface(
- fn=summarize,
- inputs=[gr.Slider(
- minimum=100,
- maximum=500,
- label="Taille indicative en mots",
- value=100,
- step=50),
- "file",
- gr.Textbox(label="Ou copier le lien")
- ],
- outputs=[gr.Textbox(label="Résumé")],
- title="Document Summarizer",
- description="par Nicolas \nRésume un PDF ou un site web",
- allow_flagging = "never")
-
-# Launch the interface
-iface.launch()
\ No newline at end of file
diff --git a/spaces/Notalib/GPT-Whisper-Wolfram-Google-Test/azure_utils.py b/spaces/Notalib/GPT-Whisper-Wolfram-Google-Test/azure_utils.py
deleted file mode 100644
index 4173eaa689abe9b7b6b66ed3fcf1ede591655a53..0000000000000000000000000000000000000000
--- a/spaces/Notalib/GPT-Whisper-Wolfram-Google-Test/azure_utils.py
+++ /dev/null
@@ -1,155 +0,0 @@
-# This class stores Azure voice data. Specifically, the class stores several records containing
-# language, lang_code, gender, voice_id and engine. The class also has a method to return the
-# voice_id, lang_code and engine given a language and gender.
-
-NEURAL_ENGINE = "neural"
-STANDARD_ENGINE = "standard"
-
-
-class AzureVoiceData:
- def get_voice(self, language, gender):
- for voice in self.voice_data:
- if voice['language'] == language and voice['gender'] == gender:
- return voice['azure_voice']
- return None
-
- def __init__(self):
- self.voice_data = [
- {'language': 'Arabic',
- 'azure_voice': 'ar-EG-ShakirNeural',
- 'gender': 'Male'},
- {'language': 'Arabic (Gulf)',
- 'azure_voice': 'ar-KW-FahedNeural',
- 'gender': 'Male'},
- {'language': 'Catalan',
- 'azure_voice': 'ca-ES-EnricNeural',
- 'gender': 'Male'},
- {'language': 'Chinese (Cantonese)',
- 'azure_voice': 'yue-CN-YunSongNeural',
- 'gender': 'Male'},
- {'language': 'Chinese (Mandarin)',
- 'azure_voice': 'zh-CN-YunxiNeural',
- 'gender': 'Male'},
- {'language': 'Danish',
- 'azure_voice': 'da-DK-JeppeNeural',
- 'gender': 'Male'},
- {'language': 'Dutch',
- 'azure_voice': 'nl-NL-MaartenNeural',
- 'gender': 'Male'},
- {'language': 'English (Australian)',
- 'azure_voice': 'en-AU-KenNeural',
- 'gender': 'Male'},
- {'language': 'English (British)',
- 'azure_voice': 'en-GB-RyanNeural',
- 'gender': 'Male'},
- {'language': 'English (Indian)',
- 'azure_voice': 'en-IN-PrabhatNeural',
- 'gender': 'Male'},
- {'language': 'English (New Zealand)',
- 'azure_voice': 'en-NZ-MitchellNeural',
- 'gender': 'Male'},
- {'language': 'English (South African)',
- 'azure_voice': 'en-ZA-LukeNeural',
- 'gender': 'Male'},
- {'language': 'English (US)',
- 'azure_voice': 'en-US-ChristopherNeural',
- 'gender': 'Male'},
- {'language': 'English (Welsh)',
- 'azure_voice': 'cy-GB-AledNeural',
- 'gender': 'Male'},
- {'language': 'Finnish',
- 'azure_voice': 'fi-FI-HarriNeural',
- 'gender': 'Male'},
- {'language': 'French',
- 'azure_voice': 'fr-FR-HenriNeural',
- 'gender': 'Male'},
- {'language': 'French (Canadian)',
- 'azure_voice': 'fr-CA-AntoineNeural',
- 'gender': 'Male'},
- {'language': 'German',
- 'azure_voice': 'de-DE-KlausNeural',
- 'gender': 'Male'},
- {'language': 'German (Austrian)',
- 'azure_voice': 'de-AT-JonasNeural',
- 'gender': 'Male'},
- {'language': 'Hindi',
- 'azure_voice': 'hi-IN-MadhurNeural',
- 'gender': 'Male'},
- {'language': 'Icelandic',
- 'azure_voice': 'is-IS-GunnarNeural',
- 'gender': 'Male'},
- {'language': 'Italian',
- 'azure_voice': 'it-IT-GianniNeural',
- 'gender': 'Male'},
- {'language': 'Japanese',
- 'azure_voice': 'ja-JP-KeitaNeural',
- 'gender': 'Male'},
- {'language': 'Korean',
- 'azure_voice': 'ko-KR-GookMinNeural',
- 'gender': 'Male'},
- {'language': 'Norwegian',
- 'azure_voice': 'nb-NO-FinnNeural',
- 'gender': 'Male'},
- {'language': 'Polish',
- 'azure_voice': 'pl-PL-MarekNeural',
- 'gender': 'Male'},
- {'language': 'Portuguese (Brazilian)',
- 'azure_voice': 'pt-BR-NicolauNeural',
- 'gender': 'Male'},
- {'language': 'Portuguese (European)',
- 'azure_voice': 'pt-PT-DuarteNeural',
- 'gender': 'Male'},
- {'language': 'Romanian',
- 'azure_voice': 'ro-RO-EmilNeural',
- 'gender': 'Male'},
- {'language': 'Russian',
- 'azure_voice': 'ru-RU-DmitryNeural',
- 'gender': 'Male'},
- {'language': 'Spanish (European)',
- 'azure_voice': 'es-ES-TeoNeural',
- 'gender': 'Male'},
- {'language': 'Spanish (Mexican)',
- 'azure_voice': 'es-MX-LibertoNeural',
- 'gender': 'Male'},
- {'language': 'Spanish (US)',
-         'azure_voice': 'es-US-AlonsoNeural',
- 'gender': 'Male'},
- {'language': 'Swedish',
- 'azure_voice': 'sv-SE-MattiasNeural',
- 'gender': 'Male'},
- {'language': 'Turkish',
- 'azure_voice': 'tr-TR-AhmetNeural',
- 'gender': 'Male'},
- {'language': 'Welsh',
- 'azure_voice': 'cy-GB-AledNeural',
- 'gender': 'Male'},
- ]
-
-
-# Run from the command-line
-if __name__ == '__main__':
- azure_voice_data = AzureVoiceData()
-
- azure_voice = azure_voice_data.get_voice('English (US)', 'Male')
- print('English (US)', 'Male', azure_voice)
-
- azure_voice = azure_voice_data.get_voice('English (US)', 'Female')
- print('English (US)', 'Female', azure_voice)
-
- azure_voice = azure_voice_data.get_voice('French', 'Female')
- print('French', 'Female', azure_voice)
-
- azure_voice = azure_voice_data.get_voice('French', 'Male')
- print('French', 'Male', azure_voice)
-
- azure_voice = azure_voice_data.get_voice('Japanese', 'Female')
- print('Japanese', 'Female', azure_voice)
-
- azure_voice = azure_voice_data.get_voice('Japanese', 'Male')
- print('Japanese', 'Male', azure_voice)
-
- azure_voice = azure_voice_data.get_voice('Hindi', 'Female')
- print('Hindi', 'Female', azure_voice)
-
- azure_voice = azure_voice_data.get_voice('Hindi', 'Male')
- print('Hindi', 'Male', azure_voice)
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_to_text/seg_mustc_data.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_to_text/seg_mustc_data.py
deleted file mode 100644
index 1ee665d6399729afe17d790d872eff34de124900..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_to_text/seg_mustc_data.py
+++ /dev/null
@@ -1,54 +0,0 @@
-#!/usr/bin/env python3
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import logging
-from pathlib import Path
-import soundfile as sf
-from examples.speech_to_text.prep_mustc_data import (
- MUSTC
-)
-
-from tqdm import tqdm
-
-log = logging.getLogger(__name__)
-
-
-def main(args):
- root = Path(args.data_root).absolute()
- lang = args.lang
- split = args.split
-
- cur_root = root / f"en-{lang}"
- assert cur_root.is_dir(), (
- f"{cur_root.as_posix()} does not exist. Skipped."
- )
-
- dataset = MUSTC(root.as_posix(), lang, split)
- output = Path(args.output).absolute()
- output.mkdir(exist_ok=True)
- f_text = open(output / f"{split}.{lang}", "w")
- f_wav_list = open(output / f"{split}.wav_list", "w")
- for waveform, sample_rate, _, text, _, utt_id in tqdm(dataset):
- sf.write(
- output / f"{utt_id}.wav",
- waveform.squeeze(0).numpy(),
- samplerate=int(sample_rate)
- )
- f_text.write(text + "\n")
- f_wav_list.write(str(output / f"{utt_id}.wav") + "\n")
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--data-root", "-d", required=True, type=str)
- parser.add_argument("--task", required=True, type=str, choices=["asr", "st"])
- parser.add_argument("--lang", required=True, type=str)
- parser.add_argument("--output", required=True, type=str)
- parser.add_argument("--split", required=True, choices=MUSTC.SPLITS)
- args = parser.parse_args()
-
- main(args)
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/wav2vec_featurize.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/wav2vec_featurize.py
deleted file mode 100644
index 588268b7080cbd3400ac144604b2d75cef2876dd..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/wav2vec_featurize.py
+++ /dev/null
@@ -1,249 +0,0 @@
-#!/usr/bin/env python3
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Helper script to pre-compute embeddings for a flashlight (previously called wav2letter++) dataset
-"""
-
-import argparse
-import glob
-import os
-from shutil import copy
-
-import h5py
-import numpy as np
-import soundfile as sf
-import torch
-import tqdm
-import fairseq
-from torch import nn
-
-
-def read_audio(fname):
- """ Load an audio file and return PCM along with the sample rate """
-
- wav, sr = sf.read(fname)
- assert sr == 16e3
-
- return wav, 16e3
-
-
-class PretrainedWav2VecModel(nn.Module):
- def __init__(self, fname):
- super().__init__()
-
- model, cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task([fname])
- model = model[0]
- model.eval()
-
- self.model = model
-
- def forward(self, x):
- with torch.no_grad():
- z = self.model.feature_extractor(x)
- if isinstance(z, tuple):
- z = z[0]
- c = self.model.feature_aggregator(z)
- return z, c
-
-
-class EmbeddingWriterConfig(argparse.ArgumentParser):
- def __init__(self):
- super().__init__("Pre-compute embeddings for flashlight datasets")
-
- kwargs = {"action": "store", "type": str, "required": True}
-
- self.add_argument("--input", "-i", help="Input Directory", **kwargs)
- self.add_argument("--output", "-o", help="Output Directory", **kwargs)
- self.add_argument("--model", help="Path to model checkpoint", **kwargs)
- self.add_argument("--split", help="Dataset Splits", nargs="+", **kwargs)
- self.add_argument(
- "--ext", default="wav", required=False, help="Audio file extension"
- )
-
- self.add_argument(
- "--no-copy-labels",
- action="store_true",
- help="Do not copy label files. Useful for large datasets, use --targetdir in flashlight then.",
- )
- self.add_argument(
- "--use-feat",
- action="store_true",
- help="Use the feature vector ('z') instead of context vector ('c') for features",
- )
- self.add_argument("--gpu", help="GPU to use", default=0, type=int)
-
-
-class Prediction:
- """ Lightweight wrapper around a fairspeech embedding model """
-
- def __init__(self, fname, gpu=0):
- self.gpu = gpu
- self.model = PretrainedWav2VecModel(fname).cuda(gpu)
-
- def __call__(self, x):
- x = torch.from_numpy(x).float().cuda(self.gpu)
- with torch.no_grad():
- z, c = self.model(x.unsqueeze(0))
-
- return z.squeeze(0).cpu().numpy(), c.squeeze(0).cpu().numpy()
-
-
-class H5Writer:
- """ Write features as hdf5 file in flashlight compatible format """
-
- def __init__(self, fname):
- self.fname = fname
- os.makedirs(os.path.dirname(self.fname), exist_ok=True)
-
- def write(self, data):
- channel, T = data.shape
-
- with h5py.File(self.fname, "w") as out_ds:
- data = data.T.flatten()
- out_ds["features"] = data
- out_ds["info"] = np.array([16e3 // 160, T, channel])
-
-
-class EmbeddingDatasetWriter(object):
- """Given a model and a flashlight dataset, pre-compute and store embeddings
-
- Args:
- input_root, str :
- Path to the flashlight dataset
- output_root, str :
- Desired output directory. Will be created if non-existent
- split, str :
- Dataset split
- """
-
- def __init__(
- self,
- input_root,
- output_root,
- split,
- model_fname,
- extension="wav",
- gpu=0,
- verbose=False,
- use_feat=False,
- ):
-
- assert os.path.exists(model_fname)
-
- self.model_fname = model_fname
- self.model = Prediction(self.model_fname, gpu)
-
- self.input_root = input_root
- self.output_root = output_root
- self.split = split
- self.verbose = verbose
- self.extension = extension
- self.use_feat = use_feat
-
- assert os.path.exists(self.input_path), "Input path '{}' does not exist".format(
- self.input_path
- )
-
- def _progress(self, iterable, **kwargs):
- if self.verbose:
- return tqdm.tqdm(iterable, **kwargs)
- return iterable
-
- def require_output_path(self, fname=None):
- path = self.get_output_path(fname)
- os.makedirs(path, exist_ok=True)
-
- @property
- def input_path(self):
- return self.get_input_path()
-
- @property
- def output_path(self):
- return self.get_output_path()
-
- def get_input_path(self, fname=None):
- if fname is None:
- return os.path.join(self.input_root, self.split)
- return os.path.join(self.get_input_path(), fname)
-
- def get_output_path(self, fname=None):
- if fname is None:
- return os.path.join(self.output_root, self.split)
- return os.path.join(self.get_output_path(), fname)
-
- def copy_labels(self):
- self.require_output_path()
-
- labels = list(
- filter(
- lambda x: self.extension not in x, glob.glob(self.get_input_path("*"))
- )
- )
- for fname in tqdm.tqdm(labels):
- copy(fname, self.output_path)
-
- @property
- def input_fnames(self):
- return sorted(glob.glob(self.get_input_path("*.{}".format(self.extension))))
-
- def __len__(self):
- return len(self.input_fnames)
-
- def write_features(self):
-
- paths = self.input_fnames
-
- fnames_context = map(
- lambda x: os.path.join(
- self.output_path, x.replace("." + self.extension, ".h5context")
- ),
- map(os.path.basename, paths),
- )
-
- for name, target_fname in self._progress(
- zip(paths, fnames_context), total=len(self)
- ):
- wav, sr = read_audio(name)
- z, c = self.model(wav)
- feat = z if self.use_feat else c
- writer = H5Writer(target_fname)
- writer.write(feat)
-
- def __repr__(self):
-        return "EmbeddingDatasetWriter ({n_files} files)\n\tinput:\t{input_root}\n\toutput:\t{output_root}\n\tsplit:\t{split}".format(
-            n_files=len(self), **self.__dict__
-        )
-
-
-if __name__ == "__main__":
-
- args = EmbeddingWriterConfig().parse_args()
-
- for split in args.split:
-
- writer = EmbeddingDatasetWriter(
- input_root=args.input,
- output_root=args.output,
- split=split,
- model_fname=args.model,
- gpu=args.gpu,
- extension=args.ext,
- use_feat=args.use_feat,
- )
-
- print(writer)
- writer.require_output_path()
-
- print("Writing Features...")
- writer.write_features()
- print("Done.")
-
- if not args.no_copy_labels:
- print("Copying label data...")
- writer.copy_labels()
- print("Done.")
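
The `H5Writer` above stores each utterance time-major and flattened, alongside an `info` vector of `[frame_rate, T, channels]`. A minimal sketch of reading such a file back (not part of the original script; the filename is a placeholder and `h5py`/`numpy` are assumed to be installed):

```python
import h5py
import numpy as np

# read back one .h5context file produced by H5Writer (placeholder filename)
with h5py.File("sample.h5context", "r") as ds:
    frame_rate, T, channels = ds["info"][...]                           # info = [16e3 // 160, T, channel]
    feats = np.asarray(ds["features"]).reshape(int(T), int(channels))   # undoes data.T.flatten()

print(feats.shape)  # (T, channels)
```
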
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/encoders/hf_byte_bpe.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/encoders/hf_byte_bpe.py
deleted file mode 100644
index c508578d41bf6b7ce0a847e0797d71b19beb393d..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/encoders/hf_byte_bpe.py
+++ /dev/null
@@ -1,50 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from dataclasses import dataclass, field
-
-from fairseq.data.encoders import register_bpe
-from fairseq.dataclass import FairseqDataclass
-from fairseq import file_utils
-
-
-@dataclass
-class HuggingFaceByteLevelBPEConfig(FairseqDataclass):
- bpe_merges: str = field(default="???", metadata={"help": "path to merges.txt"})
- bpe_vocab: str = field(default="???", metadata={"help": "path to vocab.json"})
- bpe_add_prefix_space: bool = field(
- default=False, metadata={"help": "add prefix space before encoding"}
- )
-
-
-@register_bpe("hf_byte_bpe", dataclass=HuggingFaceByteLevelBPEConfig)
-class HuggingFaceByteLevelBPE(object):
- def __init__(self, cfg):
- try:
- from tokenizers import ByteLevelBPETokenizer
- except ImportError:
- raise ImportError(
- "Please install huggingface/tokenizers with: " "pip install tokenizers"
- )
-
- bpe_vocab = file_utils.cached_path(cfg.bpe_vocab)
- bpe_merges = file_utils.cached_path(cfg.bpe_merges)
-
- self.bpe = ByteLevelBPETokenizer(
- bpe_vocab,
- bpe_merges,
- add_prefix_space=cfg.bpe_add_prefix_space,
- )
-
- def encode(self, x: str) -> str:
- return " ".join(map(str, self.bpe.encode(x).ids))
-
- def decode(self, x: str) -> str:
- return self.bpe.decode(
- [int(tok) if tok not in {"", ""} else tok for tok in x.split()]
- )
-
- def is_beginning_of_word(self, x: str) -> bool:
- return self.decode(x).startswith(" ")
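
A rough usage sketch for the wrapper above (not taken from the fairseq docs; the vocab/merges paths are placeholders and `pip install tokenizers` is assumed):

```python
# placeholders: point these at a real byte-level BPE vocab.json / merges.txt pair
cfg = HuggingFaceByteLevelBPEConfig(bpe_vocab="vocab.json", bpe_merges="merges.txt")
bpe = HuggingFaceByteLevelBPE(cfg)

ids = bpe.encode("Hello world")   # space-separated token ids, e.g. "15496 995" (vocab-dependent)
text = bpe.decode(ids)            # round-trips back to "Hello world"
```
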
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/scripts/remove_silence.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/scripts/remove_silence.py
deleted file mode 100644
index fac88b989703262a84b242b2761df621bf02c739..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/scripts/remove_silence.py
+++ /dev/null
@@ -1,63 +0,0 @@
-#!/usr/bin/env python3 -u
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Reads intervals from a .vads file, removes the silent segments from each audio file listed in the
-.tsv manifest, and saves the trimmed audio under the --out folder, e.g.:
-paths=shards/train.tsv
-vads=shards/train.vads
-python remove_silence.py --tsv $paths --vads $vads --out /path/to/output
-"""
-
-import os
-import argparse
-import torch
-import torchaudio
-import tqdm
-
-
-parser = argparse.ArgumentParser()
-parser.add_argument("--tsv", default="", type=str)
-parser.add_argument("--vads", default="", type=str)
-parser.add_argument("--out", type=str)
-params = parser.parse_args()
-
-# load paths
-paths = []
-with open(params.tsv) as f:
- root = next(f).rstrip()
- for line in f:
- paths.append(os.path.join(root, line.rstrip().split("\t")[0]))
-
-# load vads
-list_intervals = []
-with open(params.vads) as f:
- for line in f:
- interval = [
- [int(w.split(":")[0]), int(w.split(":")[1])] for w in line.rstrip().split()
- ]
- list_intervals.append(interval)
-
-
-# load audio and keep only intervals (i.e. remove silences)
-for i in tqdm.trange(len(paths)):
- data, _ = torchaudio.load(paths[i])
- if len(list_intervals[i]) > 0:
- data_filtered = torch.cat(
- [data[0][int(it[0]) : int(it[1])] for it in list_intervals[i]]
- ).unsqueeze(0)
- else:
- data_filtered = data
-
- # YOU MAY NEED TO MODIFY THIS TO GET THE RIGHT SUBPATH
- # outpath = params.out + '/'.join(paths[i].split('/')[-1])
- outpath = params.out + "/" + "/".join(paths[i].split("/")[-2:])
-
- if not os.path.isdir("/".join(outpath.split("/")[:-1])):
- os.makedirs("/".join(outpath.split("/")[:-1]))
- if not os.path.exists(outpath):
- torchaudio.save(outpath, data_filtered, sample_rate=16000)
- else:
- print(outpath, "exists!")
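
For reference, the `.vads` file consumed above carries one line per manifest entry, each line being a space-separated list of `start:end` sample indices. A small worked example of the parsing (the values are made up):

```python
# one .vads line, aligned with one row of the .tsv manifest (sample indices at 16 kHz)
line = "2480:47360 52800:98880"
intervals = [[int(w.split(":")[0]), int(w.split(":")[1])] for w in line.split()]
# -> [[2480, 47360], [52800, 98880]]; these spans are concatenated, everything in between is dropped
```
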
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/fairseq_decoder.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/fairseq_decoder.py
deleted file mode 100644
index 4f1e8b52a2e0a50199050f11cc613ab02ca9febe..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/fairseq_decoder.py
+++ /dev/null
@@ -1,105 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from typing import Dict, List, Optional, Tuple
-
-import torch.nn as nn
-from fairseq import utils
-from torch import Tensor
-
-
-class FairseqDecoder(nn.Module):
- """Base class for decoders."""
-
- def __init__(self, dictionary):
- super().__init__()
- self.dictionary = dictionary
- self.onnx_trace = False
- self.adaptive_softmax = None
-
- def forward(self, prev_output_tokens, encoder_out=None, **kwargs):
- """
- Args:
- prev_output_tokens (LongTensor): shifted output tokens of shape
- `(batch, tgt_len)`, for teacher forcing
- encoder_out (dict, optional): output from the encoder, used for
- encoder-side attention
-
- Returns:
- tuple:
- - the decoder's output of shape `(batch, tgt_len, vocab)`
- - a dictionary with any model-specific outputs
- """
- x, extra = self.extract_features(
- prev_output_tokens, encoder_out=encoder_out, **kwargs
- )
- x = self.output_layer(x)
- return x, extra
-
- def extract_features(self, prev_output_tokens, encoder_out=None, **kwargs):
- """
- Returns:
- tuple:
- - the decoder's features of shape `(batch, tgt_len, embed_dim)`
- - a dictionary with any model-specific outputs
- """
- raise NotImplementedError
-
- def output_layer(self, features, **kwargs):
- """
- Project features to the default output size, e.g., vocabulary size.
-
- Args:
- features (Tensor): features returned by *extract_features*.
- """
- raise NotImplementedError
-
- def get_normalized_probs(
- self,
- net_output: Tuple[Tensor, Optional[Dict[str, List[Optional[Tensor]]]]],
- log_probs: bool,
- sample: Optional[Dict[str, Tensor]] = None,
- ):
- """Get normalized probabilities (or log probs) from a net's output."""
- return self.get_normalized_probs_scriptable(net_output, log_probs, sample)
-
- # TorchScript doesn't support super() method so that the scriptable Subclass
- # can't access the base class model in Torchscript.
- # Current workaround is to add a helper function with different name and
- # call the helper function from scriptable Subclass.
- def get_normalized_probs_scriptable(
- self,
- net_output: Tuple[Tensor, Optional[Dict[str, List[Optional[Tensor]]]]],
- log_probs: bool,
- sample: Optional[Dict[str, Tensor]] = None,
- ):
- """Get normalized probabilities (or log probs) from a net's output."""
-
- if hasattr(self, "adaptive_softmax") and self.adaptive_softmax is not None:
- if sample is not None:
- assert "target" in sample
- target = sample["target"]
- else:
- target = None
- out = self.adaptive_softmax.get_log_prob(net_output[0], target=target)
- return out.exp_() if not log_probs else out
-
- logits = net_output[0]
- if log_probs:
- return utils.log_softmax(logits, dim=-1, onnx_trace=self.onnx_trace)
- else:
- return utils.softmax(logits, dim=-1, onnx_trace=self.onnx_trace)
-
- def max_positions(self):
- """Maximum input length supported by the decoder."""
- return 1e6 # an arbitrary large number
-
- def upgrade_state_dict_named(self, state_dict, name):
- """Upgrade old state dicts to work with newer code."""
- return state_dict
-
- def prepare_for_onnx_export_(self):
- self.onnx_trace = True
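
To make the decoder contract concrete, here is a toy subclass (illustrative only, not part of fairseq) showing how `extract_features` and `output_layer` plug into the `forward` defined above:

```python
import torch.nn as nn

class ToyDecoder(FairseqDecoder):
    """Embeds previous output tokens and projects straight back to the vocabulary."""

    def __init__(self, dictionary, embed_dim=128):
        super().__init__(dictionary)
        self.embed = nn.Embedding(len(dictionary), embed_dim)
        self.proj = nn.Linear(embed_dim, len(dictionary))

    def extract_features(self, prev_output_tokens, encoder_out=None, **kwargs):
        # (batch, tgt_len) -> (batch, tgt_len, embed_dim); a real decoder would attend to encoder_out
        return self.embed(prev_output_tokens), {}

    def output_layer(self, features, **kwargs):
        # (batch, tgt_len, embed_dim) -> (batch, tgt_len, vocab)
        return self.proj(features)
```
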
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/audio/feature_transforms/__init__.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/audio/feature_transforms/__init__.py
deleted file mode 100644
index 359fa069716cba0dd615ce0959368b20828c31f7..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/audio/feature_transforms/__init__.py
+++ /dev/null
@@ -1,82 +0,0 @@
-import importlib
-import os
-from abc import ABC, abstractmethod
-from typing import Dict, Optional
-
-
-class AudioFeatureTransform(ABC):
- @classmethod
- @abstractmethod
- def from_config_dict(cls, config: Optional[Dict] = None):
- pass
-
-
-AUDIO_FEATURE_TRANSFORM_REGISTRY = {}
-AUDIO_FEATURE_TRANSFORM_CLASS_NAMES = set()
-
-
-def register_audio_feature_transform(name):
- def register_audio_feature_transform_cls(cls):
- if name in AUDIO_FEATURE_TRANSFORM_REGISTRY:
- raise ValueError(f"Cannot register duplicate transform ({name})")
- if not issubclass(cls, AudioFeatureTransform):
- raise ValueError(
- f"Transform ({name}: {cls.__name__}) must extend "
- "AudioFeatureTransform"
- )
- if cls.__name__ in AUDIO_FEATURE_TRANSFORM_CLASS_NAMES:
- raise ValueError(
- f"Cannot register audio feature transform with duplicate "
- f"class name ({cls.__name__})"
- )
- AUDIO_FEATURE_TRANSFORM_REGISTRY[name] = cls
- AUDIO_FEATURE_TRANSFORM_CLASS_NAMES.add(cls.__name__)
- return cls
-
- return register_audio_feature_transform_cls
-
-
-def get_audio_feature_transform(name):
- return AUDIO_FEATURE_TRANSFORM_REGISTRY[name]
-
-
-transforms_dir = os.path.dirname(__file__)
-for file in os.listdir(transforms_dir):
- path = os.path.join(transforms_dir, file)
- if (
- not file.startswith("_")
- and not file.startswith(".")
- and (file.endswith(".py") or os.path.isdir(path))
- ):
- name = file[: file.find(".py")] if file.endswith(".py") else file
- importlib.import_module("fairseq.data.audio.feature_transforms." + name)
-
-
-class CompositeAudioFeatureTransform(AudioFeatureTransform):
- @classmethod
- def from_config_dict(cls, config=None):
- _config = {} if config is None else config
- _transforms = _config.get("transforms")
- if _transforms is None:
- return None
- transforms = [
- get_audio_feature_transform(_t).from_config_dict(_config.get(_t))
- for _t in _transforms
- ]
- return CompositeAudioFeatureTransform(transforms)
-
- def __init__(self, transforms):
- self.transforms = [t for t in transforms if t is not None]
-
- def __call__(self, x):
- for t in self.transforms:
- x = t(x)
- return x
-
- def __repr__(self):
- format_string = (
- [self.__class__.__name__ + "("]
- + [f" {t.__repr__()}" for t in self.transforms]
- + [")"]
- )
- return "\n".join(format_string)
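
As an illustration of the registry above, a hypothetical transform (not one shipped with fairseq) is declared and then composed from a config dict:

```python
import numpy as np

@register_audio_feature_transform("gain")
class Gain(AudioFeatureTransform):
    """Toy transform: scales feature values by a constant factor."""

    @classmethod
    def from_config_dict(cls, config=None):
        _config = {} if config is None else config
        return cls(factor=_config.get("factor", 1.0))

    def __init__(self, factor):
        self.factor = factor

    def __call__(self, x):
        return x * self.factor


# composed the same way a dataset config would request it
composite = CompositeAudioFeatureTransform.from_config_dict(
    {"transforms": ["gain"], "gain": {"factor": 0.5}}
)
features = composite(np.ones((10, 80)))  # every value becomes 0.5
```
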
diff --git a/spaces/OpenMotionLab/MotionGPT/app.py b/spaces/OpenMotionLab/MotionGPT/app.py
deleted file mode 100644
index 1a66ce41f865ff3c8b54a8a5937ff5cf4f7f2d78..0000000000000000000000000000000000000000
--- a/spaces/OpenMotionLab/MotionGPT/app.py
+++ /dev/null
@@ -1,604 +0,0 @@
-import os
-os.environ['DISPLAY'] = ':0.0'
-os.environ['PYOPENGL_PLATFORM'] = 'egl'
-os.environ["MESA_GL_VERSION_OVERRIDE"] = "4.1"
-os.system('pip install /home/user/app/pyrender')
-os.system('pip install triangle==20220202')
-
-import gradio as gr
-import random
-import torch
-import time
-import cv2
-import numpy as np
-import OpenGL.GL as gl
-import imageio
-import pytorch_lightning as pl
-import moviepy.editor as mp
-from pathlib import Path
-from mGPT.data.build_data import build_data
-from mGPT.models.build_model import build_model
-from mGPT.config import parse_args
-from scipy.spatial.transform import Rotation as RRR
-import mGPT.render.matplot.plot_3d_global as plot_3d
-from mGPT.render.pyrender.hybrik_loc2rot import HybrIKJointsToRotmat
-from mGPT.render.pyrender.smpl_render import SMPLRender
-from transformers import WhisperProcessor, WhisperForConditionalGeneration
-import librosa
-from huggingface_hub import snapshot_download
-import eventlet
-
-# Load model
-cfg = parse_args(phase="webui") # parse config file
-cfg.FOLDER = 'cache'
-output_dir = Path(cfg.FOLDER)
-output_dir.mkdir(parents=True, exist_ok=True)
-pl.seed_everything(cfg.SEED_VALUE)
-if torch.cuda.is_available():
- device = torch.device("cuda")
-else:
- device = torch.device("cpu")
-
-model_path = snapshot_download(repo_id="bill-jiang/MotionGPT-base")
-
-datamodule = build_data(cfg, phase="test")
-model = build_model(cfg, datamodule)
-state_dict = torch.load(f'{model_path}/motiongpt_s3_h3d.tar',
- map_location="cpu")["state_dict"]
-model.load_state_dict(state_dict)
-model.to(device)
-
-audio_processor = WhisperProcessor.from_pretrained(cfg.model.whisper_path)
-audio_model = WhisperForConditionalGeneration.from_pretrained(
- cfg.model.whisper_path).to(device)
-forced_decoder_ids_zh = audio_processor.get_decoder_prompt_ids(
- language="zh", task="translate")
-forced_decoder_ids_en = audio_processor.get_decoder_prompt_ids(
- language="en", task="translate")
-
-# HTML Style
-Video_Components = """
-
"""
-
- history[-1][1] = ""
- for character in response:
- history[-1][1] += character
- time.sleep(0.02)
- yield history, motion_uploaded, data_stored
-
-
-def bot_example(history, responses):
- history = history + responses
- return history
-
-
-with open("assets/css/custom.css", "r", encoding="utf-8") as f:
- customCSS = f.read()
-
-with gr.Blocks(css=customCSS) as demo:
-
- # Examples
- chat_instruct = gr.State([
- (None,
- "👋 Hi, I'm MotionGPT! I can generate realistic human motion from text, or generate text from motion."
- ),
- (None,
- "💡 You can chat with me in pure text like generating human motion following your descriptions."
- ),
- (None,
-         "💡 After generation, you can click the button in the top right of the generated human motion result to download the motion video or the feature stored in .npy format."
- ),
- (None,
-         "💡 With a human motion feature file downloaded here or taken from the dataset, you can ask me to translate it!"
- ),
- (None,
-         "💡 Of course, you can also purely chat with me and let me give you human motion in text, here are some examples!"
- ),
- (None,
-         "💡 We provide two motion visualization methods. The default fast method is skeleton line plotting, as in the examples below:"
- ),
- (None,
- Video_Components_example.format(
- video_path="assets/videos/example0_fast.mp4",
- video_fname="example0_fast.mp4")),
- (None,
- "💡 And the slow method is SMPL model rendering which is more realistic but slower."
- ),
- (None,
- Video_Components_example.format(
- video_path="assets/videos/example0.mp4",
- video_fname="example0.mp4")),
- (None,
-         "💡 If you want videos like those in our paper and website, shown below, you can refer to the script in our [github repo](https://github.com/OpenMotionLab/MotionGPT#-visualization)."
- ),
- (None,
- Video_Components_example.format(
- video_path="assets/videos/example0_blender.mp4",
- video_fname="example0_blender.mp4")),
- (None, "👉 Follow the examples and try yourself!"),
- ])
- chat_instruct_sum = gr.State([(None, '''
- 👋 Hi, I'm MotionGPT! I can generate realistic human motion from text, or generate text from motion.
-
- 1. You can chat with me in pure text like generating human motion following your descriptions.
-    2. After generation, you can click the button in the top right of the generated human motion result to download the motion video or the feature stored in .npy format.
-    3. With a human motion feature file downloaded here or taken from the dataset, you can ask me to translate it!
- 4. Of course, you can also purely chat with me and let me give you human motion in text, here are some examples!
- ''')] + chat_instruct.value[-7:])
-
- t2m_examples = gr.State([
- (None,
- "💡 You can chat with me in pure text, following are some examples of text-to-motion generation!"
- ),
- ("A person is walking forwards, but stumbles and steps back, then carries on forward.",
- Video_Components_example.format(
- video_path="assets/videos/example0.mp4",
- video_fname="example0.mp4")),
- ("Generate a man aggressively kicks an object to the left using his right foot.",
- Video_Components_example.format(
- video_path="assets/videos/example1.mp4",
- video_fname="example1.mp4")),
- ("Generate a person lowers their arms, gets onto all fours, and crawls.",
- Video_Components_example.format(
- video_path="assets/videos/example2.mp4",
- video_fname="example2.mp4")),
- ("Show me the video of a person bends over and picks things up with both hands individually, then walks forward.",
- Video_Components_example.format(
- video_path="assets/videos/example3.mp4",
- video_fname="example3.mp4")),
-        ("Imagine a person is practicing balancing on one leg.",
- Video_Components_example.format(
- video_path="assets/videos/example5.mp4",
- video_fname="example5.mp4")),
- ("Show me a person walks forward, stops, turns directly to their right, then walks forward again.",
- Video_Components_example.format(
- video_path="assets/videos/example6.mp4",
- video_fname="example6.mp4")),
- ("I saw a person sits on the ledge of something then gets off and walks away.",
- Video_Components_example.format(
- video_path="assets/videos/example7.mp4",
- video_fname="example7.mp4")),
- ("Show me a person is crouched down and walking around sneakily.",
- Video_Components_example.format(
- video_path="assets/videos/example8.mp4",
- video_fname="example8.mp4")),
- ])
-
- m2t_examples = gr.State([
- (None,
-         "💡 With a human motion feature file downloaded here or taken from the dataset, you can ask me to translate it; here are some examples!"
- ),
- ("Please explain the movement shown in using natural language.",
- None),
- (Video_Components_example.format(
- video_path="assets/videos/example0.mp4",
- video_fname="example0.mp4"),
- "The person was pushed but didn't fall down"),
- ("What kind of action is being represented in ? Explain it in text.",
- None),
- (Video_Components_example.format(
- video_path="assets/videos/example4.mp4",
- video_fname="example4.mp4"),
- "The figure has its hands curled at jaw level, steps onto its left foot and raises right leg with bent knee to kick forward and return to starting stance."
- ),
- ("Provide a summary of the motion demonstrated in using words.",
- None),
- (Video_Components_example.format(
- video_path="assets/videos/example2.mp4",
- video_fname="example2.mp4"),
- "A person who is standing with his arms up and away from his sides bends over, gets down on his hands and then his knees and crawls forward."
- ),
- ("Generate text for :", None),
- (Video_Components_example.format(
- video_path="assets/videos/example5.mp4",
- video_fname="example5.mp4"),
- "The man tries to stand in a yoga tree pose and looses his balance."),
- ("Provide a summary of the motion depicted in using language.",
- None),
- (Video_Components_example.format(
- video_path="assets/videos/example6.mp4",
- video_fname="example6.mp4"),
- "Person walks up some steps then leeps to the other side and goes up a few more steps and jumps dow"
- ),
- ("Describe the motion represented by in plain English.",
- None),
- (Video_Components_example.format(
- video_path="assets/videos/example7.mp4",
- video_fname="example7.mp4"),
- "Person sits down, then stands up and walks forward. then the turns around 180 degrees and walks the opposite direction"
- ),
- ("Provide a description of the action in using words.",
- None),
- (Video_Components_example.format(
- video_path="assets/videos/example8.mp4",
- video_fname="example8.mp4"),
- "This man is bent forward and walks slowly around."),
- ])
-
- t2t_examples = gr.State([
- (None,
- "💡 Of course, you can also purely chat with me and let me give you human motion in text, here are some examples!"
- ),
- ('Depict a motion as like you have seen it.',
- "A person slowly walked forward in rigth direction while making the circle"
- ),
- ('Random say something about describing a human motion.',
- "A man throws punches using his right hand."),
- ('Describe the motion of someone as you will.',
- "Person is moving left to right in a dancing stance swaying hips, moving feet left to right with arms held out"
- ),
- ('Come up with a human motion caption.',
- "A person is walking in a counter counterclockwise motion."),
- ('Write a sentence about how someone might dance.',
- "A person with his hands down by his sides reaches down for something with his right hand, uses the object to make a stirring motion, then places the item back down."
- ),
- ('Depict a motion as like you have seen it.',
- "A person is walking forward a few feet, then turns around, walks back, and continues walking."
- )
- ])
-
-    Init_chatbot = (chat_instruct.value[:1] + t2m_examples.value[:3] +
-                    m2t_examples.value[:3] + t2t_examples.value[:2] +
-                    chat_instruct.value[-7:])
-
- # Variables
- motion_uploaded = gr.State({
- "feats": None,
- "joints": None,
- "motion_video": None,
- "motion_lengths": 0,
- "motion_token": None,
- "motion_token_string": '',
- "motion_token_length": 0,
- })
- data_stored = gr.State([])
-
- gr.Markdown('''
- # MotionGPT: Human Motion as a Foreign Language
-
-
- ''')
-
- chatbot = gr.Chatbot(Init_chatbot,
- elem_id="mGPT",
- height=600,
- label="MotionGPT",
- avatar_images=(None,
- ("assets/images/avatar_bot.jpg")),
- bubble_full_width=False)
-
- with gr.Row():
- with gr.Column(scale=0.85):
- with gr.Row():
- txt = gr.Textbox(
- label="Text",
- show_label=False,
- elem_id="textbox",
- placeholder=
- "Enter text and press ENTER or speak to input. You can also upload motion.",
- container=False)
-
- with gr.Row():
- aud = gr.Audio(source="microphone",
- label="Speak input",
- type='filepath')
- btn = gr.UploadButton("📁 Upload motion",
- elem_id="upload",
- file_types=["file"])
- # regen = gr.Button("🔄 Regenerate", elem_id="regen")
- clear = gr.ClearButton([txt, chatbot, aud], value='🗑️ Clear')
-
- with gr.Row():
- gr.Markdown('''
- ### You can get more examples (pre-generated for faster response) by clicking the buttons below:
- ''')
-
- with gr.Row():
- instruct_eg = gr.Button("Instructions", elem_id="instruct")
- t2m_eg = gr.Button("Text-to-Motion", elem_id="t2m")
- m2t_eg = gr.Button("Motion-to-Text", elem_id="m2t")
- t2t_eg = gr.Button("Random description", elem_id="t2t")
-
- with gr.Column(scale=0.15, min_width=150):
- method = gr.Dropdown(["slow", "fast"],
-                                 label="Visualization method",
- interactive=True,
- elem_id="method",
- value="slow")
-
- language = gr.Dropdown(["English", "中文"],
- label="Speech language",
- interactive=True,
- elem_id="language",
- value="English")
-
- txt_msg = txt.submit(
- add_text, [chatbot, txt, motion_uploaded, data_stored, method],
- [chatbot, txt, motion_uploaded, data_stored],
- queue=False).then(bot, [chatbot, motion_uploaded, data_stored, method],
- [chatbot, motion_uploaded, data_stored])
-
- txt_msg.then(lambda: gr.update(interactive=True), None, [txt], queue=False)
-
- file_msg = btn.upload(add_file, [chatbot, btn, txt, motion_uploaded],
- [chatbot, txt, motion_uploaded],
- queue=False)
- aud_msg = aud.stop_recording(
- add_audio, [chatbot, aud, data_stored, language],
- [chatbot, data_stored],
- queue=False).then(bot, [chatbot, motion_uploaded, data_stored, method],
- [chatbot, motion_uploaded, data_stored])
- # regen_msg = regen.click(bot,
- # [chatbot, motion_uploaded, data_stored, method],
- # [chatbot, motion_uploaded, data_stored],
- # queue=False)
-
- instruct_msg = instruct_eg.click(bot_example, [chatbot, chat_instruct_sum],
- [chatbot],
- queue=False)
- t2m_eg_msg = t2m_eg.click(bot_example, [chatbot, t2m_examples], [chatbot],
- queue=False)
- m2t_eg_msg = m2t_eg.click(bot_example, [chatbot, m2t_examples], [chatbot],
- queue=False)
- t2t_eg_msg = t2t_eg.click(bot_example, [chatbot, t2t_examples], [chatbot],
- queue=False)
-
- chatbot.change(scroll_to_output=True)
-
-if __name__ == "__main__":
- demo.launch(debug=True)
diff --git a/spaces/PAIR/Text2Video-Zero/annotator/midas/midas/midas_net.py b/spaces/PAIR/Text2Video-Zero/annotator/midas/midas/midas_net.py
deleted file mode 100644
index 8a954977800b0a0f48807e80fa63041910e33c1f..0000000000000000000000000000000000000000
--- a/spaces/PAIR/Text2Video-Zero/annotator/midas/midas/midas_net.py
+++ /dev/null
@@ -1,76 +0,0 @@
-"""MidashNet: Network for monocular depth estimation trained by mixing several datasets.
-This file contains code that is adapted from
-https://github.com/thomasjpfan/pytorch_refinenet/blob/master/pytorch_refinenet/refinenet/refinenet_4cascade.py
-"""
-import torch
-import torch.nn as nn
-
-from .base_model import BaseModel
-from .blocks import FeatureFusionBlock, Interpolate, _make_encoder
-
-
-class MidasNet(BaseModel):
- """Network for monocular depth estimation.
- """
-
- def __init__(self, path=None, features=256, non_negative=True):
- """Init.
-
- Args:
- path (str, optional): Path to saved model. Defaults to None.
- features (int, optional): Number of features. Defaults to 256.
-            Note: the encoder backbone is fixed to "resnext101_wsl" (not configurable here).
- """
- print("Loading weights: ", path)
-
- super(MidasNet, self).__init__()
-
- use_pretrained = False if path is None else True
-
- self.pretrained, self.scratch = _make_encoder(backbone="resnext101_wsl", features=features, use_pretrained=use_pretrained)
-
- self.scratch.refinenet4 = FeatureFusionBlock(features)
- self.scratch.refinenet3 = FeatureFusionBlock(features)
- self.scratch.refinenet2 = FeatureFusionBlock(features)
- self.scratch.refinenet1 = FeatureFusionBlock(features)
-
- self.scratch.output_conv = nn.Sequential(
- nn.Conv2d(features, 128, kernel_size=3, stride=1, padding=1),
- Interpolate(scale_factor=2, mode="bilinear"),
- nn.Conv2d(128, 32, kernel_size=3, stride=1, padding=1),
- nn.ReLU(True),
- nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0),
- nn.ReLU(True) if non_negative else nn.Identity(),
- )
-
- if path:
- self.load(path)
-
- def forward(self, x):
- """Forward pass.
-
- Args:
- x (tensor): input data (image)
-
- Returns:
- tensor: depth
- """
-
- layer_1 = self.pretrained.layer1(x)
- layer_2 = self.pretrained.layer2(layer_1)
- layer_3 = self.pretrained.layer3(layer_2)
- layer_4 = self.pretrained.layer4(layer_3)
-
- layer_1_rn = self.scratch.layer1_rn(layer_1)
- layer_2_rn = self.scratch.layer2_rn(layer_2)
- layer_3_rn = self.scratch.layer3_rn(layer_3)
- layer_4_rn = self.scratch.layer4_rn(layer_4)
-
- path_4 = self.scratch.refinenet4(layer_4_rn)
- path_3 = self.scratch.refinenet3(path_4, layer_3_rn)
- path_2 = self.scratch.refinenet2(path_3, layer_2_rn)
- path_1 = self.scratch.refinenet1(path_2, layer_1_rn)
-
- out = self.scratch.output_conv(path_1)
-
- return torch.squeeze(out, dim=1)
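
A quick smoke test of the module above (a sketch, not from the original repo; it assumes the torch.hub ResNeXt-101 WSL backbone used by `_make_encoder` can be fetched, and it runs with random weights since `path=None`):

```python
import torch

net = MidasNet(path=None, features=256, non_negative=True)  # no checkpoint: untrained weights
net.eval()

with torch.no_grad():
    depth = net(torch.randn(1, 3, 384, 384))  # (batch, 3, H, W) image tensor

print(depth.shape)  # (1, 384, 384): one depth map per image, channel dim squeezed out
```
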
diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/arraymisc/__init__.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/arraymisc/__init__.py
deleted file mode 100644
index 4b4700d6139ae3d604ff6e542468cce4200c020c..0000000000000000000000000000000000000000
--- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/arraymisc/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .quantization import dequantize, quantize
-
-__all__ = ['quantize', 'dequantize']
diff --git a/spaces/PrabhuKiranKonda/Streamlit-PDF-Assistant-Docker/components/body/prompt.py b/spaces/PrabhuKiranKonda/Streamlit-PDF-Assistant-Docker/components/body/prompt.py
deleted file mode 100644
index 3d829c532dbe9806f72002459e01b865be5541dd..0000000000000000000000000000000000000000
--- a/spaces/PrabhuKiranKonda/Streamlit-PDF-Assistant-Docker/components/body/prompt.py
+++ /dev/null
@@ -1,16 +0,0 @@
-import streamlit as st
-
-
-def prompt_box():
- prompt = st.text_area("Enter your question here", height=100)
- if st.session_state.get('generate_answer_button', None):
- if prompt == "" or prompt is None:
- st.caption(":red[Please enter a prompt]")
-
-    if prompt is not None:  # store the prompt in session state (it may be an empty string)
-        st.session_state['prompt'] = prompt
-
-
-if __name__ == "__main__":
- prompt_box()
\ No newline at end of file
diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/metrics/fad.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/metrics/fad.py
deleted file mode 100644
index de66138dbb14fd4246bbfe590bddfd5beaf1ed8c..0000000000000000000000000000000000000000
--- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/metrics/fad.py
+++ /dev/null
@@ -1,329 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-from pathlib import Path
-import os
-import subprocess
-import tempfile
-import typing as tp
-
-from audiocraft.data.audio import audio_write
-from audiocraft.data.audio_utils import convert_audio
-import flashy
-import torch
-import torchmetrics
-
-from ..environment import AudioCraftEnvironment
-
-
-logger = logging.getLogger(__name__)
-
-VGGISH_SAMPLE_RATE = 16_000
-VGGISH_CHANNELS = 1
-
-
-class FrechetAudioDistanceMetric(torchmetrics.Metric):
- """Fréchet Audio Distance computation based on official TensorFlow implementation from Google Research.
-
- From: D.C. Dowson & B.V. Landau The Fréchet distance between
- multivariate normal distributions
- https://doi.org/10.1016/0047-259X(82)90077-X
- The Fréchet distance between two multivariate gaussians,
- `X ~ N(mu_x, sigma_x)` and `Y ~ N(mu_y, sigma_y)`, is `d^2`.
- d^2 = (mu_x - mu_y)^2 + Tr(sigma_x + sigma_y - 2 * sqrt(sigma_x*sigma_y))
- = (mu_x - mu_y)^2 + Tr(sigma_x) + Tr(sigma_y)
- - 2 * Tr(sqrt(sigma_x*sigma_y)))
-
- To use this FAD computation metric, you need to have the proper Frechet Audio Distance tool setup
- from: https://github.com/google-research/google-research/tree/master/frechet_audio_distance
- We provide the below instructions as reference but we do not guarantee for further support
- in frechet_audio_distance installation. This was tested with python 3.10, cuda 11.8, tensorflow 2.12.0.
-
- We recommend installing the frechet_audio_distance library in a dedicated env (e.g. conda).
-
- 1. Get the code and models following the repository instructions. We used the steps below:
- git clone git@github.com:google-research/google-research.git
- git clone git@github.com:tensorflow/models.git
- mkdir google-research/tensorflow_models
- touch google-research/tensorflow_models/__init__.py
- cp -r models/research/audioset google-research/tensorflow_models/
- touch google-research/tensorflow_models/audioset/__init__.py
- echo "from .vggish import mel_features, vggish_params, vggish_slim" > \
- google-research/tensorflow_models/audioset/__init__.py
- # we can now remove the tensorflow models repository
- # rm -r models
- cd google-research
- Follow the instructions to download the vggish checkpoint. AudioCraft base configuration
- assumes it is placed in the AudioCraft reference dir.
-
- Note that we operate the following changes for the code to work with TensorFlow 2.X and python 3:
- - Update xrange for range in:
- https://github.com/google-research/google-research/blob/master/frechet_audio_distance/audioset_model.py
- - Update `tf_record = tf.python_io.tf_record_iterator(filename).next()` to
- `tf_record = tf.python_io.tf_record_iterator(filename).__next__()` in
- https://github.com/google-research/google-research/blob/master/frechet_audio_distance/fad_utils.py
- - Update `import vggish_params as params` to `from . import vggish_params as params` in:
- https://github.com/tensorflow/models/blob/master/research/audioset/vggish/vggish_slim.py
- - Add flag to provide a given batch size for running the AudioSet model in:
- https://github.com/google-research/google-research/blob/master/frechet_audio_distance/create_embeddings_main.py
- ```
- flags.DEFINE_integer('batch_size', 64,
- 'Number of samples in the batch for AudioSet model.')
- ```
- Ensure you pass the flag to the create_embeddings_beam.create_pipeline function, adding:
- `batch_size=FLAGS.batch_size` to the provided parameters.
-
- 2. Follow instructions for the library installation and a valid TensorFlow installation
- ```
- # e.g. instructions from: https://www.tensorflow.org/install/pip
- conda install -c conda-forge cudatoolkit=11.8.0
- python3 -m pip install nvidia-cudnn-cu11==8.6.0.163 tensorflow==2.12.*
- mkdir -p $CONDA_PREFIX/etc/conda/activate.d
- echo 'CUDNN_PATH=$(dirname $(python -c "import nvidia.cudnn;print(nvidia.cudnn.__file__)"))' \
- >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
- echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/:$CUDNN_PATH/lib' \
- >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
- source $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
- # Verify install: on a machine with GPU device
- python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
- ```
-
- Now install frechet_audio_distance required dependencies:
- ```
- # We assume we already have TensorFlow installed from the above steps
- pip install apache-beam numpy scipy tf_slim
- ```
-
- Finally, follow remaining library instructions to ensure you have a working frechet_audio_distance setup
- (you may want to specify --model_ckpt flag pointing to the model's path).
-
- 3. AudioCraft's FrechetAudioDistanceMetric requires 2 environment variables pointing to the python executable
- and Tensorflow library path from the above installation steps:
- export TF_PYTHON_EXE=""
- export TF_LIBRARY_PATH=""
-
- e.g. assuming we have installed everything in a dedicated conda env
- with python 3.10 that is currently active:
- export TF_PYTHON_EXE="$CONDA_PREFIX/bin/python"
- export TF_LIBRARY_PATH="$CONDA_PREFIX/lib/python3.10/site-packages/nvidia/cudnn/lib"
-
- Finally you may want to export the following variable:
- export TF_FORCE_GPU_ALLOW_GROWTH=true
- See: https://www.tensorflow.org/guide/gpu#limiting_gpu_memory_growth
-
- You can save those environment variables in your training conda env, when currently active:
- `$CONDA_PREFIX/etc/conda/activate.d/env_vars.sh`
- e.g. assuming the env with TensorFlow and frechet_audio_distance install is named ac_eval,
- and the training conda env is named audiocraft:
- ```
- # activate training env
- conda activate audiocraft
- # get path to all envs
- CONDA_ENV_DIR=$(dirname $CONDA_PREFIX)
- # export pointers to evaluation env for using TensorFlow in FrechetAudioDistanceMetric
- touch $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
- echo 'export TF_PYTHON_EXE="$CONDA_ENV_DIR/ac_eval/bin/python"' >> \
- $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
- echo 'export TF_LIBRARY_PATH="$CONDA_ENV_DIR/ac_eval/lib/python3.10/site-packages/nvidia/cudnn/lib"' >> \
- $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
- # optionally:
- echo 'export TF_FORCE_GPU_ALLOW_GROWTH=true' >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
- # you may need to reactivate the audiocraft env for this to take effect
- ```
-
- Args:
- bin (Path or str): Path to installed frechet audio distance code.
- model_path (Path or str): Path to Tensorflow checkpoint for the model
- used to compute statistics over the embedding beams.
- format (str): Audio format used to save files.
- log_folder (Path or str, optional): Path where to write process logs.
- """
- def __init__(self, bin: tp.Union[Path, str], model_path: tp.Union[Path, str],
- format: str = "wav", batch_size: tp.Optional[int] = None,
- log_folder: tp.Optional[tp.Union[Path, str]] = None):
- super().__init__()
- self.model_sample_rate = VGGISH_SAMPLE_RATE
- self.model_channels = VGGISH_CHANNELS
- self.model_path = AudioCraftEnvironment.resolve_reference_path(model_path)
- assert Path(self.model_path).exists(), f"Could not find provided model checkpoint path at: {self.model_path}"
- self.format = format
- self.batch_size = batch_size
- self.bin = bin
- self.tf_env = {"PYTHONPATH": str(self.bin)}
- self.python_path = os.environ.get('TF_PYTHON_EXE') or 'python'
- logger.info("Python exe for TF is %s", self.python_path)
- if 'TF_LIBRARY_PATH' in os.environ:
- self.tf_env['LD_LIBRARY_PATH'] = os.environ['TF_LIBRARY_PATH']
- if 'TF_FORCE_GPU_ALLOW_GROWTH' in os.environ:
- self.tf_env['TF_FORCE_GPU_ALLOW_GROWTH'] = os.environ['TF_FORCE_GPU_ALLOW_GROWTH']
- logger.info("Env for TF is %r", self.tf_env)
- self.reset(log_folder)
- self.add_state("total_files", default=torch.tensor(0.), dist_reduce_fx="sum")
-
- def reset(self, log_folder: tp.Optional[tp.Union[Path, str]] = None):
- """Reset torchmetrics.Metrics state."""
- log_folder = Path(log_folder or tempfile.mkdtemp())
- self.tmp_dir = log_folder / 'fad'
- self.tmp_dir.mkdir(exist_ok=True)
- self.samples_tests_dir = self.tmp_dir / 'tests'
- self.samples_tests_dir.mkdir(exist_ok=True)
- self.samples_background_dir = self.tmp_dir / 'background'
- self.samples_background_dir.mkdir(exist_ok=True)
- self.manifest_tests = self.tmp_dir / 'files_tests.cvs'
- self.manifest_background = self.tmp_dir / 'files_background.cvs'
- self.stats_tests_dir = self.tmp_dir / 'stats_tests'
- self.stats_background_dir = self.tmp_dir / 'stats_background'
- self.counter = 0
-
- def update(self, preds: torch.Tensor, targets: torch.Tensor,
- sizes: torch.Tensor, sample_rates: torch.Tensor,
- stems: tp.Optional[tp.List[str]] = None):
- """Update torchmetrics.Metrics by saving the audio and updating the manifest file."""
- assert preds.shape == targets.shape, f"preds={preds.shape} != targets={targets.shape}"
- num_samples = preds.shape[0]
- assert num_samples == sizes.size(0) and num_samples == sample_rates.size(0)
- assert stems is None or num_samples == len(set(stems))
- for i in range(num_samples):
- self.total_files += 1 # type: ignore
- self.counter += 1
- wav_len = int(sizes[i].item())
- sample_rate = int(sample_rates[i].item())
- pred_wav = preds[i]
- target_wav = targets[i]
- pred_wav = pred_wav[..., :wav_len]
- target_wav = target_wav[..., :wav_len]
- stem_name = stems[i] if stems is not None else f'sample_{self.counter}_{flashy.distrib.rank()}'
- # dump audio files
- try:
- pred_wav = convert_audio(
- pred_wav.unsqueeze(0), from_rate=sample_rate,
- to_rate=self.model_sample_rate, to_channels=1).squeeze(0)
- audio_write(
- self.samples_tests_dir / stem_name, pred_wav, sample_rate=self.model_sample_rate,
- format=self.format, strategy="peak")
- except Exception as e:
-                logger.error(f"Exception occurred when saving test files for FAD computation: {repr(e)} - {e}")
- try:
- # for the ground truth audio, we enforce the 'peak' strategy to avoid modifying
- # the original audio when writing it
- target_wav = convert_audio(
- target_wav.unsqueeze(0), from_rate=sample_rate,
- to_rate=self.model_sample_rate, to_channels=1).squeeze(0)
- audio_write(
- self.samples_background_dir / stem_name, target_wav, sample_rate=self.model_sample_rate,
- format=self.format, strategy="peak")
- except Exception as e:
-                logger.error(f"Exception occurred when saving background files for FAD computation: {repr(e)} - {e}")
-
- def _get_samples_name(self, is_background: bool):
- return 'background' if is_background else 'tests'
-
- def _create_embedding_beams(self, is_background: bool, gpu_index: tp.Optional[int] = None):
- if is_background:
- input_samples_dir = self.samples_background_dir
- input_filename = self.manifest_background
- stats_name = self.stats_background_dir
- else:
- input_samples_dir = self.samples_tests_dir
- input_filename = self.manifest_tests
- stats_name = self.stats_tests_dir
- beams_name = self._get_samples_name(is_background)
- log_file = self.tmp_dir / f'fad_logs_create_beams_{beams_name}.log'
-
- logger.info(f"Scanning samples folder to fetch list of files: {input_samples_dir}")
- with open(input_filename, "w") as fout:
- for path in Path(input_samples_dir).glob(f"*.{self.format}"):
- fout.write(f"{str(path)}\n")
-
- cmd = [
- self.python_path, "-m",
- "frechet_audio_distance.create_embeddings_main",
- "--model_ckpt", f"{self.model_path}",
- "--input_files", f"{str(input_filename)}",
- "--stats", f"{str(stats_name)}",
- ]
- if self.batch_size is not None:
- cmd += ["--batch_size", str(self.batch_size)]
- logger.info(f"Launching frechet_audio_distance embeddings main method: {' '.join(cmd)} on {beams_name}")
- env = os.environ
- if gpu_index is not None:
- env["CUDA_VISIBLE_DEVICES"] = str(gpu_index)
- process = subprocess.Popen(
- cmd, stdout=open(log_file, "w"), env={**env, **self.tf_env}, stderr=subprocess.STDOUT)
- return process, log_file
-
- def _compute_fad_score(self, gpu_index: tp.Optional[int] = None):
- cmd = [
- self.python_path, "-m", "frechet_audio_distance.compute_fad",
- "--test_stats", f"{str(self.stats_tests_dir)}",
- "--background_stats", f"{str(self.stats_background_dir)}",
- ]
- logger.info(f"Launching frechet_audio_distance compute fad method: {' '.join(cmd)}")
- env = os.environ
- if gpu_index is not None:
- env["CUDA_VISIBLE_DEVICES"] = str(gpu_index)
- result = subprocess.run(cmd, env={**env, **self.tf_env}, capture_output=True)
- if result.returncode:
- logger.error(
- "Error with FAD computation from stats: \n %s \n %s",
- result.stdout.decode(), result.stderr.decode()
- )
- raise RuntimeError("Error while executing FAD computation from stats")
- try:
- # result is "FAD: (d+).(d+)" hence we remove the prefix with (d+) being one digit or more
- fad_score = float(result.stdout[4:])
- return fad_score
- except Exception as e:
- raise RuntimeError(f"Error parsing FAD score from command stdout: {e}")
-
- def _log_process_result(self, returncode: int, log_file: tp.Union[Path, str], is_background: bool) -> None:
- beams_name = self._get_samples_name(is_background)
- if returncode:
- with open(log_file, "r") as f:
- error_log = f.read()
- logger.error(error_log)
- os._exit(1)
- else:
- logger.info(f"Successfully computed embedding beams on {beams_name} samples.")
-
- def _parallel_create_embedding_beams(self, num_of_gpus: int):
- assert num_of_gpus > 0
- logger.info("Creating embeddings beams in a parallel manner on different GPUs")
- tests_beams_process, tests_beams_log_file = self._create_embedding_beams(is_background=False, gpu_index=0)
- bg_beams_process, bg_beams_log_file = self._create_embedding_beams(is_background=True, gpu_index=1)
- tests_beams_code = tests_beams_process.wait()
- bg_beams_code = bg_beams_process.wait()
- self._log_process_result(tests_beams_code, tests_beams_log_file, is_background=False)
- self._log_process_result(bg_beams_code, bg_beams_log_file, is_background=True)
-
- def _sequential_create_embedding_beams(self):
- logger.info("Creating embeddings beams in a sequential manner")
- tests_beams_process, tests_beams_log_file = self._create_embedding_beams(is_background=False)
- tests_beams_code = tests_beams_process.wait()
- self._log_process_result(tests_beams_code, tests_beams_log_file, is_background=False)
- bg_beams_process, bg_beams_log_file = self._create_embedding_beams(is_background=True)
- bg_beams_code = bg_beams_process.wait()
- self._log_process_result(bg_beams_code, bg_beams_log_file, is_background=True)
-
- @flashy.distrib.rank_zero_only
- def _local_compute_frechet_audio_distance(self):
- """Compute Frechet Audio Distance score calling TensorFlow API."""
- num_of_gpus = torch.cuda.device_count() if torch.cuda.is_available() else 0
- if num_of_gpus > 1:
- self._parallel_create_embedding_beams(num_of_gpus)
- else:
- self._sequential_create_embedding_beams()
- fad_score = self._compute_fad_score(gpu_index=0)
- return fad_score
-
- def compute(self) -> float:
- """Compute metrics."""
- assert self.total_files.item() > 0, "No files dumped for FAD computation!" # type: ignore
- fad_score = self._local_compute_frechet_audio_distance()
- logger.warning(f"FAD score = {fad_score}")
- fad_score = flashy.distrib.broadcast_object(fad_score, src=0)
- return fad_score
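
The closed-form distance quoted in the class docstring can be sanity-checked outside the TensorFlow pipeline; the following numpy/scipy sketch (not part of AudioCraft) computes it directly from two sets of embedding statistics:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(mu_x, sigma_x, mu_y, sigma_y):
    """d^2 = |mu_x - mu_y|^2 + Tr(sigma_x + sigma_y - 2 * sqrt(sigma_x @ sigma_y))"""
    diff = mu_x - mu_y
    covmean = sqrtm(sigma_x @ sigma_y)
    if np.iscomplexobj(covmean):  # numerical noise can introduce tiny imaginary parts
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma_x + sigma_y - 2.0 * covmean))

# identical Gaussians -> distance ~0
mu, sigma = np.zeros(4), np.eye(4)
print(frechet_distance(mu, sigma, mu, sigma))
```
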
diff --git a/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/scripts/sample_fast.py b/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/scripts/sample_fast.py
deleted file mode 100644
index ff546c7dcbe459807ac3b70f834ccc1082fe8b4e..0000000000000000000000000000000000000000
--- a/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/scripts/sample_fast.py
+++ /dev/null
@@ -1,260 +0,0 @@
-import argparse, os, sys, glob
-import torch
-import time
-import numpy as np
-from omegaconf import OmegaConf
-from PIL import Image
-from tqdm import tqdm, trange
-from einops import repeat
-
-from main import instantiate_from_config
-from taming.modules.transformer.mingpt import sample_with_past
-
-
-rescale = lambda x: (x + 1.) / 2.
-
-
-def chw_to_pillow(x):
- return Image.fromarray((255*rescale(x.detach().cpu().numpy().transpose(1,2,0))).clip(0,255).astype(np.uint8))
-
-
-@torch.no_grad()
-def sample_classconditional(model, batch_size, class_label, steps=256, temperature=None, top_k=None, callback=None,
- dim_z=256, h=16, w=16, verbose_time=False, top_p=None):
- log = dict()
- assert type(class_label) == int, f'expecting type int but type is {type(class_label)}'
- qzshape = [batch_size, dim_z, h, w]
- assert not model.be_unconditional, 'Expecting a class-conditional Net2NetTransformer.'
- c_indices = repeat(torch.tensor([class_label]), '1 -> b 1', b=batch_size).to(model.device) # class token
- t1 = time.time()
- index_sample = sample_with_past(c_indices, model.transformer, steps=steps,
- sample_logits=True, top_k=top_k, callback=callback,
- temperature=temperature, top_p=top_p)
- if verbose_time:
- sampling_time = time.time() - t1
- print(f"Full sampling takes about {sampling_time:.2f} seconds.")
- x_sample = model.decode_to_img(index_sample, qzshape)
- log["samples"] = x_sample
- log["class_label"] = c_indices
- return log
-
-
-@torch.no_grad()
-def sample_unconditional(model, batch_size, steps=256, temperature=None, top_k=None, top_p=None, callback=None,
- dim_z=256, h=16, w=16, verbose_time=False):
- log = dict()
- qzshape = [batch_size, dim_z, h, w]
- assert model.be_unconditional, 'Expecting an unconditional model.'
- c_indices = repeat(torch.tensor([model.sos_token]), '1 -> b 1', b=batch_size).to(model.device) # sos token
- t1 = time.time()
- index_sample = sample_with_past(c_indices, model.transformer, steps=steps,
- sample_logits=True, top_k=top_k, callback=callback,
- temperature=temperature, top_p=top_p)
- if verbose_time:
- sampling_time = time.time() - t1
- print(f"Full sampling takes about {sampling_time:.2f} seconds.")
- x_sample = model.decode_to_img(index_sample, qzshape)
- log["samples"] = x_sample
- return log
-
-
-@torch.no_grad()
-def run(logdir, model, batch_size, temperature, top_k, unconditional=True, num_samples=50000,
- given_classes=None, top_p=None):
- batches = [batch_size for _ in range(num_samples//batch_size)] + [num_samples % batch_size]
- if not unconditional:
- assert given_classes is not None
- print("Running in pure class-conditional sampling mode. I will produce "
- f"{num_samples} samples for each of the {len(given_classes)} classes, "
- f"i.e. {num_samples*len(given_classes)} in total.")
- for class_label in tqdm(given_classes, desc="Classes"):
- for n, bs in tqdm(enumerate(batches), desc="Sampling Class"):
- if bs == 0: break
- logs = sample_classconditional(model, batch_size=bs, class_label=class_label,
- temperature=temperature, top_k=top_k, top_p=top_p)
- save_from_logs(logs, logdir, base_count=n * batch_size, cond_key=logs["class_label"])
- else:
- print(f"Running in unconditional sampling mode, producing {num_samples} samples.")
- for n, bs in tqdm(enumerate(batches), desc="Sampling"):
- if bs == 0: break
- logs = sample_unconditional(model, batch_size=bs, temperature=temperature, top_k=top_k, top_p=top_p)
- save_from_logs(logs, logdir, base_count=n * batch_size)
-
-
-def save_from_logs(logs, logdir, base_count, key="samples", cond_key=None):
- xx = logs[key]
- for i, x in enumerate(xx):
- x = chw_to_pillow(x)
- count = base_count + i
- if cond_key is None:
- x.save(os.path.join(logdir, f"{count:06}.png"))
- else:
- condlabel = cond_key[i]
- if type(condlabel) == torch.Tensor: condlabel = condlabel.item()
- os.makedirs(os.path.join(logdir, str(condlabel)), exist_ok=True)
- x.save(os.path.join(logdir, str(condlabel), f"{count:06}.png"))
-
-
-def get_parser():
- def str2bool(v):
- if isinstance(v, bool):
- return v
- if v.lower() in ("yes", "true", "t", "y", "1"):
- return True
- elif v.lower() in ("no", "false", "f", "n", "0"):
- return False
- else:
- raise argparse.ArgumentTypeError("Boolean value expected.")
-
- parser = argparse.ArgumentParser()
- parser.add_argument(
- "-r",
- "--resume",
- type=str,
- nargs="?",
- help="load from logdir or checkpoint in logdir",
- )
- parser.add_argument(
- "-o",
- "--outdir",
- type=str,
- nargs="?",
- help="path where the samples will be logged to.",
- default=""
- )
- parser.add_argument(
- "-b",
- "--base",
- nargs="*",
- metavar="base_config.yaml",
- help="paths to base configs. Loaded from left-to-right. "
- "Parameters can be overwritten or added with command-line options of the form `--key value`.",
- default=list(),
- )
- parser.add_argument(
- "-n",
- "--num_samples",
- type=int,
- nargs="?",
- help="num_samples to draw",
- default=50000
- )
- parser.add_argument(
- "--batch_size",
- type=int,
- nargs="?",
- help="the batch size",
- default=25
- )
- parser.add_argument(
- "-k",
- "--top_k",
- type=int,
- nargs="?",
- help="top-k value to sample with",
- default=250,
- )
- parser.add_argument(
- "-t",
- "--temperature",
- type=float,
- nargs="?",
- help="temperature value to sample with",
- default=1.0
- )
- parser.add_argument(
- "-p",
- "--top_p",
- type=float,
- nargs="?",
- help="top-p value to sample with",
- default=1.0
- )
- parser.add_argument(
- "--classes",
- type=str,
- nargs="?",
-        help="specify comma-separated classes to sample from. Uses all 1000 ImageNet classes by default.",
- default="imagenet"
- )
- return parser
-
-
-def load_model_from_config(config, sd, gpu=True, eval_mode=True):
- model = instantiate_from_config(config)
- if sd is not None:
- model.load_state_dict(sd)
- if gpu:
- model.cuda()
- if eval_mode:
- model.eval()
- return {"model": model}
-
-
-def load_model(config, ckpt, gpu, eval_mode):
- # load the specified checkpoint
- if ckpt:
- pl_sd = torch.load(ckpt, map_location="cpu")
- global_step = pl_sd["global_step"]
- print(f"loaded model from global step {global_step}.")
- else:
- pl_sd = {"state_dict": None}
- global_step = None
- model = load_model_from_config(config.model, pl_sd["state_dict"], gpu=gpu, eval_mode=eval_mode)["model"]
- return model, global_step
-
-
-if __name__ == "__main__":
- sys.path.append(os.getcwd())
- parser = get_parser()
-
- opt, unknown = parser.parse_known_args()
- assert opt.resume
-
- ckpt = None
-
- if not os.path.exists(opt.resume):
- raise ValueError("Cannot find {}".format(opt.resume))
- if os.path.isfile(opt.resume):
- paths = opt.resume.split("/")
- try:
- idx = len(paths)-paths[::-1].index("logs")+1
- except ValueError:
- idx = -2 # take a guess: path/to/logdir/checkpoints/model.ckpt
- logdir = "/".join(paths[:idx])
- ckpt = opt.resume
- else:
- assert os.path.isdir(opt.resume), opt.resume
- logdir = opt.resume.rstrip("/")
- ckpt = os.path.join(logdir, "checkpoints", "last.ckpt")
-
- base_configs = sorted(glob.glob(os.path.join(logdir, "configs/*-project.yaml")))
- opt.base = base_configs+opt.base
-
- configs = [OmegaConf.load(cfg) for cfg in opt.base]
- cli = OmegaConf.from_dotlist(unknown)
- config = OmegaConf.merge(*configs, cli)
-
- model, global_step = load_model(config, ckpt, gpu=True, eval_mode=True)
-
- if opt.outdir:
- print(f"Switching logdir from '{logdir}' to '{opt.outdir}'")
- logdir = opt.outdir
-
- if opt.classes == "imagenet":
- given_classes = [i for i in range(1000)]
- else:
- cls_str = opt.classes
- assert not cls_str.endswith(","), 'class string should not end with a ","'
- given_classes = [int(c) for c in cls_str.split(",")]
-
- logdir = os.path.join(logdir, "samples", f"top_k_{opt.top_k}_temp_{opt.temperature:.2f}_top_p_{opt.top_p}",
- f"{global_step}")
-
- print(f"Logging to {logdir}")
- os.makedirs(logdir, exist_ok=True)
-
- run(logdir, model, opt.batch_size, opt.temperature, opt.top_k, unconditional=model.be_unconditional,
- given_classes=given_classes, num_samples=opt.num_samples, top_p=opt.top_p)
-
- print("done.")
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/distlib/util.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/distlib/util.py
deleted file mode 100644
index dd01849d997e5ae9dc9809295e29ceb871b14216..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/distlib/util.py
+++ /dev/null
@@ -1,1932 +0,0 @@
-#
-# Copyright (C) 2012-2021 The Python Software Foundation.
-# See LICENSE.txt and CONTRIBUTORS.txt.
-#
-import codecs
-from collections import deque
-import contextlib
-import csv
-from glob import iglob as std_iglob
-import io
-import json
-import logging
-import os
-import py_compile
-import re
-import socket
-try:
- import ssl
-except ImportError: # pragma: no cover
- ssl = None
-import subprocess
-import sys
-import tarfile
-import tempfile
-import textwrap
-
-try:
- import threading
-except ImportError: # pragma: no cover
- import dummy_threading as threading
-import time
-
-from . import DistlibException
-from .compat import (string_types, text_type, shutil, raw_input, StringIO,
- cache_from_source, urlopen, urljoin, httplib, xmlrpclib,
- splittype, HTTPHandler, BaseConfigurator, valid_ident,
- Container, configparser, URLError, ZipFile, fsdecode,
- unquote, urlparse)
-
-logger = logging.getLogger(__name__)
-
-#
-# Requirement parsing code as per PEP 508
-#
-
-IDENTIFIER = re.compile(r'^([\w\.-]+)\s*')
-VERSION_IDENTIFIER = re.compile(r'^([\w\.*+-]+)\s*')
-COMPARE_OP = re.compile(r'^(<=?|>=?|={2,3}|[~!]=)\s*')
-MARKER_OP = re.compile(r'^((<=?)|(>=?)|={2,3}|[~!]=|in|not\s+in)\s*')
-OR = re.compile(r'^or\b\s*')
-AND = re.compile(r'^and\b\s*')
-NON_SPACE = re.compile(r'(\S+)\s*')
-STRING_CHUNK = re.compile(r'([\s\w\.{}()*+#:;,/?!~`@$%^&=|<>\[\]-]+)')
-
-
-def parse_marker(marker_string):
- """
- Parse a marker string and return a dictionary containing a marker expression.
-
- The dictionary will contain keys "op", "lhs" and "rhs" for non-terminals in
- the expression grammar, or strings. A string contained in quotes is to be
- interpreted as a literal string, and a string not contained in quotes is a
- variable (such as os_name).
- """
- def marker_var(remaining):
- # either identifier, or literal string
- m = IDENTIFIER.match(remaining)
- if m:
- result = m.groups()[0]
- remaining = remaining[m.end():]
- elif not remaining:
- raise SyntaxError('unexpected end of input')
- else:
- q = remaining[0]
- if q not in '\'"':
- raise SyntaxError('invalid expression: %s' % remaining)
- oq = '\'"'.replace(q, '')
- remaining = remaining[1:]
- parts = [q]
- while remaining:
- # either a string chunk, or oq, or q to terminate
- if remaining[0] == q:
- break
- elif remaining[0] == oq:
- parts.append(oq)
- remaining = remaining[1:]
- else:
- m = STRING_CHUNK.match(remaining)
- if not m:
- raise SyntaxError('error in string literal: %s' % remaining)
- parts.append(m.groups()[0])
- remaining = remaining[m.end():]
- else:
- s = ''.join(parts)
- raise SyntaxError('unterminated string: %s' % s)
- parts.append(q)
- result = ''.join(parts)
- remaining = remaining[1:].lstrip() # skip past closing quote
- return result, remaining
-
- def marker_expr(remaining):
- if remaining and remaining[0] == '(':
- result, remaining = marker(remaining[1:].lstrip())
- if remaining[0] != ')':
- raise SyntaxError('unterminated parenthesis: %s' % remaining)
- remaining = remaining[1:].lstrip()
- else:
- lhs, remaining = marker_var(remaining)
- while remaining:
- m = MARKER_OP.match(remaining)
- if not m:
- break
- op = m.groups()[0]
- remaining = remaining[m.end():]
- rhs, remaining = marker_var(remaining)
- lhs = {'op': op, 'lhs': lhs, 'rhs': rhs}
- result = lhs
- return result, remaining
-
- def marker_and(remaining):
- lhs, remaining = marker_expr(remaining)
- while remaining:
- m = AND.match(remaining)
- if not m:
- break
- remaining = remaining[m.end():]
- rhs, remaining = marker_expr(remaining)
- lhs = {'op': 'and', 'lhs': lhs, 'rhs': rhs}
- return lhs, remaining
-
- def marker(remaining):
- lhs, remaining = marker_and(remaining)
- while remaining:
- m = OR.match(remaining)
- if not m:
- break
- remaining = remaining[m.end():]
- rhs, remaining = marker_and(remaining)
- lhs = {'op': 'or', 'lhs': lhs, 'rhs': rhs}
- return lhs, remaining
-
- return marker(marker_string)
-
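-# Illustrative sketch (not part of the original module): parse_marker() above
-# returns a pair of (expression, unparsed remainder). For example, it would
-# turn this sample marker string into nested dictionaries:
-#
-#     expr, rest = parse_marker("python_version >= '3.6' and os_name == 'posix'")
-#     # expr == {'op': 'and',
-#     #          'lhs': {'op': '>=', 'lhs': 'python_version', 'rhs': "'3.6'"},
-#     #          'rhs': {'op': '==', 'lhs': 'os_name', 'rhs': "'posix'"}}
-#     # rest == ''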
-
-def parse_requirement(req):
- """
- Parse a requirement passed in as a string. Return a Container
- whose attributes contain the various parts of the requirement.
- """
- remaining = req.strip()
- if not remaining or remaining.startswith('#'):
- return None
- m = IDENTIFIER.match(remaining)
- if not m:
- raise SyntaxError('name expected: %s' % remaining)
- distname = m.groups()[0]
- remaining = remaining[m.end():]
- extras = mark_expr = versions = uri = None
- if remaining and remaining[0] == '[':
- i = remaining.find(']', 1)
- if i < 0:
- raise SyntaxError('unterminated extra: %s' % remaining)
- s = remaining[1:i]
- remaining = remaining[i + 1:].lstrip()
- extras = []
- while s:
- m = IDENTIFIER.match(s)
- if not m:
- raise SyntaxError('malformed extra: %s' % s)
- extras.append(m.groups()[0])
- s = s[m.end():]
- if not s:
- break
- if s[0] != ',':
- raise SyntaxError('comma expected in extras: %s' % s)
- s = s[1:].lstrip()
- if not extras:
- extras = None
- if remaining:
- if remaining[0] == '@':
- # it's a URI
- remaining = remaining[1:].lstrip()
- m = NON_SPACE.match(remaining)
- if not m:
- raise SyntaxError('invalid URI: %s' % remaining)
- uri = m.groups()[0]
- t = urlparse(uri)
- # there are issues with Python and URL parsing, so this test
- # is a bit crude. See bpo-20271, bpo-23505. Python doesn't
- # always parse invalid URLs correctly - it should raise
- # exceptions for malformed URLs
- if not (t.scheme and t.netloc):
- raise SyntaxError('Invalid URL: %s' % uri)
- remaining = remaining[m.end():].lstrip()
- else:
-
- def get_versions(ver_remaining):
- """
- Return a list of operator, version tuples if any are
- specified, else None.
- """
- m = COMPARE_OP.match(ver_remaining)
- versions = None
- if m:
- versions = []
- while True:
- op = m.groups()[0]
- ver_remaining = ver_remaining[m.end():]
- m = VERSION_IDENTIFIER.match(ver_remaining)
- if not m:
- raise SyntaxError('invalid version: %s' % ver_remaining)
- v = m.groups()[0]
- versions.append((op, v))
- ver_remaining = ver_remaining[m.end():]
- if not ver_remaining or ver_remaining[0] != ',':
- break
- ver_remaining = ver_remaining[1:].lstrip()
- # Some packages have a trailing comma which would break things
- # See issue #148
- if not ver_remaining:
- break
- m = COMPARE_OP.match(ver_remaining)
- if not m:
- raise SyntaxError('invalid constraint: %s' % ver_remaining)
- if not versions:
- versions = None
- return versions, ver_remaining
-
- if remaining[0] != '(':
- versions, remaining = get_versions(remaining)
- else:
- i = remaining.find(')', 1)
- if i < 0:
- raise SyntaxError('unterminated parenthesis: %s' % remaining)
- s = remaining[1:i]
- remaining = remaining[i + 1:].lstrip()
- # As a special diversion from PEP 508, allow a version number
- # a.b.c in parentheses as a synonym for ~= a.b.c (because this
- # is allowed in earlier PEPs)
- if COMPARE_OP.match(s):
- versions, _ = get_versions(s)
- else:
- m = VERSION_IDENTIFIER.match(s)
- if not m:
- raise SyntaxError('invalid constraint: %s' % s)
- v = m.groups()[0]
- s = s[m.end():].lstrip()
- if s:
- raise SyntaxError('invalid constraint: %s' % s)
- versions = [('~=', v)]
-
- if remaining:
- if remaining[0] != ';':
- raise SyntaxError('invalid requirement: %s' % remaining)
- remaining = remaining[1:].lstrip()
-
- mark_expr, remaining = parse_marker(remaining)
-
- if remaining and remaining[0] != '#':
- raise SyntaxError('unexpected trailing data: %s' % remaining)
-
- if not versions:
- rs = distname
- else:
- rs = '%s %s' % (distname, ', '.join(['%s %s' % con for con in versions]))
- return Container(name=distname, extras=extras, constraints=versions,
- marker=mark_expr, url=uri, requirement=rs)
-
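-# Illustrative sketch (not part of the original module): for a sample requirement
-# string, parse_requirement() above would return a Container along these lines:
-#
-#     r = parse_requirement("requests[security] (>= 2.8.1); python_version < '3'")
-#     # r.name == 'requests', r.extras == ['security'],
-#     # r.constraints == [('>=', '2.8.1')], r.url is None,
-#     # r.requirement == 'requests >= 2.8.1', and r.marker holds the parsed
-#     # python_version expression.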
-
-def get_resources_dests(resources_root, rules):
- """Find destinations for resources files"""
-
- def get_rel_path(root, path):
- # normalizes and returns a lstripped-/-separated path
- root = root.replace(os.path.sep, '/')
- path = path.replace(os.path.sep, '/')
- assert path.startswith(root)
- return path[len(root):].lstrip('/')
-
- destinations = {}
- for base, suffix, dest in rules:
- prefix = os.path.join(resources_root, base)
- for abs_base in iglob(prefix):
- abs_glob = os.path.join(abs_base, suffix)
- for abs_path in iglob(abs_glob):
- resource_file = get_rel_path(resources_root, abs_path)
- if dest is None: # remove the entry if it was here
- destinations.pop(resource_file, None)
- else:
- rel_path = get_rel_path(abs_base, abs_path)
- rel_dest = dest.replace(os.path.sep, '/').rstrip('/')
- destinations[resource_file] = rel_dest + '/' + rel_path
- return destinations
-
-
-def in_venv():
- if hasattr(sys, 'real_prefix'):
- # virtualenv venvs
- result = True
- else:
- # PEP 405 venvs
- result = sys.prefix != getattr(sys, 'base_prefix', sys.prefix)
- return result
-
-
-def get_executable():
-# The __PYVENV_LAUNCHER__ dance is apparently no longer needed, as
-# changes to the stub launcher mean that sys.executable always points
-# to the stub on OS X
-# if sys.platform == 'darwin' and ('__PYVENV_LAUNCHER__'
-# in os.environ):
-# result = os.environ['__PYVENV_LAUNCHER__']
-# else:
-# result = sys.executable
-# return result
- # Avoid normcasing: see issue #143
- # result = os.path.normcase(sys.executable)
- result = sys.executable
- if not isinstance(result, text_type):
- result = fsdecode(result)
- return result
-
-
-def proceed(prompt, allowed_chars, error_prompt=None, default=None):
- p = prompt
- while True:
- s = raw_input(p)
- p = prompt
- if not s and default:
- s = default
- if s:
- c = s[0].lower()
- if c in allowed_chars:
- break
- if error_prompt:
- p = '%c: %s\n%s' % (c, error_prompt, prompt)
- return c
-
-
-def extract_by_key(d, keys):
- if isinstance(keys, string_types):
- keys = keys.split()
- result = {}
- for key in keys:
- if key in d:
- result[key] = d[key]
- return result
-
-def read_exports(stream):
- if sys.version_info[0] >= 3:
- # needs to be a text stream
- stream = codecs.getreader('utf-8')(stream)
- # Try to load as JSON, falling back on legacy format
- data = stream.read()
- stream = StringIO(data)
- try:
- jdata = json.load(stream)
- result = jdata['extensions']['python.exports']['exports']
- for group, entries in result.items():
- for k, v in entries.items():
- s = '%s = %s' % (k, v)
- entry = get_export_entry(s)
- assert entry is not None
- entries[k] = entry
- return result
- except Exception:
- stream.seek(0, 0)
-
- def read_stream(cp, stream):
- if hasattr(cp, 'read_file'):
- cp.read_file(stream)
- else:
- cp.readfp(stream)
-
- cp = configparser.ConfigParser()
- try:
- read_stream(cp, stream)
- except configparser.MissingSectionHeaderError:
- stream.close()
- data = textwrap.dedent(data)
- stream = StringIO(data)
- read_stream(cp, stream)
-
- result = {}
- for key in cp.sections():
- result[key] = entries = {}
- for name, value in cp.items(key):
- s = '%s = %s' % (name, value)
- entry = get_export_entry(s)
- assert entry is not None
- #entry.dist = self
- entries[name] = entry
- return result
-
-
-def write_exports(exports, stream):
- if sys.version_info[0] >= 3:
- # needs to be a text stream
- stream = codecs.getwriter('utf-8')(stream)
- cp = configparser.ConfigParser()
- for k, v in exports.items():
- # TODO check k, v for valid values
- cp.add_section(k)
- for entry in v.values():
- if entry.suffix is None:
- s = entry.prefix
- else:
- s = '%s:%s' % (entry.prefix, entry.suffix)
- if entry.flags:
- s = '%s [%s]' % (s, ', '.join(entry.flags))
- cp.set(k, entry.name, s)
- cp.write(stream)
-
-
-@contextlib.contextmanager
-def tempdir():
- td = tempfile.mkdtemp()
- try:
- yield td
- finally:
- shutil.rmtree(td)
-
-@contextlib.contextmanager
-def chdir(d):
- cwd = os.getcwd()
- try:
- os.chdir(d)
- yield
- finally:
- os.chdir(cwd)
-
-
-@contextlib.contextmanager
-def socket_timeout(seconds=15):
- cto = socket.getdefaulttimeout()
- try:
- socket.setdefaulttimeout(seconds)
- yield
- finally:
- socket.setdefaulttimeout(cto)
-
-
-class cached_property(object):
- def __init__(self, func):
- self.func = func
- #for attr in ('__name__', '__module__', '__doc__'):
- # setattr(self, attr, getattr(func, attr, None))
-
- def __get__(self, obj, cls=None):
- if obj is None:
- return self
- value = self.func(obj)
- object.__setattr__(obj, self.func.__name__, value)
- #obj.__dict__[self.func.__name__] = value = self.func(obj)
- return value
-
-def convert_path(pathname):
- """Return 'pathname' as a name that will work on the native filesystem.
-
- The path is split on '/' and put back together again using the current
- directory separator. Needed because filenames in the setup script are
- always supplied in Unix style, and have to be converted to the local
- convention before we can actually use them in the filesystem. Raises
- ValueError on non-Unix-ish systems if 'pathname' either starts or
- ends with a slash.
- """
- if os.sep == '/':
- return pathname
- if not pathname:
- return pathname
- if pathname[0] == '/':
- raise ValueError("path '%s' cannot be absolute" % pathname)
- if pathname[-1] == '/':
- raise ValueError("path '%s' cannot end with '/'" % pathname)
-
- paths = pathname.split('/')
- while os.curdir in paths:
- paths.remove(os.curdir)
- if not paths:
- return os.curdir
- return os.path.join(*paths)
-
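-# Illustrative sketch (not part of the original module): on a Windows host,
-# convert_path('pkg/data/file.txt') would return r'pkg\data\file.txt'; on POSIX
-# systems the path is returned unchanged.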
-
-class FileOperator(object):
- def __init__(self, dry_run=False):
- self.dry_run = dry_run
- self.ensured = set()
- self._init_record()
-
- def _init_record(self):
- self.record = False
- self.files_written = set()
- self.dirs_created = set()
-
- def record_as_written(self, path):
- if self.record:
- self.files_written.add(path)
-
- def newer(self, source, target):
- """Tell if the target is newer than the source.
-
- Returns true if 'source' exists and is more recently modified than
- 'target', or if 'source' exists and 'target' doesn't.
-
- Returns false if both exist and 'target' is the same age or younger
- than 'source'. Raise DistlibException if 'source' does not exist.
-
- Note that this test is not very accurate: files created in the same
- second will have the same "age".
- """
- if not os.path.exists(source):
- raise DistlibException("file '%r' does not exist" %
- os.path.abspath(source))
- if not os.path.exists(target):
- return True
-
- return os.stat(source).st_mtime > os.stat(target).st_mtime
-
- def copy_file(self, infile, outfile, check=True):
- """Copy a file respecting dry-run and force flags.
- """
- self.ensure_dir(os.path.dirname(outfile))
- logger.info('Copying %s to %s', infile, outfile)
- if not self.dry_run:
- msg = None
- if check:
- if os.path.islink(outfile):
- msg = '%s is a symlink' % outfile
- elif os.path.exists(outfile) and not os.path.isfile(outfile):
- msg = '%s is a non-regular file' % outfile
- if msg:
- raise ValueError(msg + ' which would be overwritten')
- shutil.copyfile(infile, outfile)
- self.record_as_written(outfile)
-
- def copy_stream(self, instream, outfile, encoding=None):
- assert not os.path.isdir(outfile)
- self.ensure_dir(os.path.dirname(outfile))
- logger.info('Copying stream %s to %s', instream, outfile)
- if not self.dry_run:
- if encoding is None:
- outstream = open(outfile, 'wb')
- else:
- outstream = codecs.open(outfile, 'w', encoding=encoding)
- try:
- shutil.copyfileobj(instream, outstream)
- finally:
- outstream.close()
- self.record_as_written(outfile)
-
- def write_binary_file(self, path, data):
- self.ensure_dir(os.path.dirname(path))
- if not self.dry_run:
- if os.path.exists(path):
- os.remove(path)
- with open(path, 'wb') as f:
- f.write(data)
- self.record_as_written(path)
-
- def write_text_file(self, path, data, encoding):
- self.write_binary_file(path, data.encode(encoding))
-
- def set_mode(self, bits, mask, files):
- if os.name == 'posix' or (os.name == 'java' and os._name == 'posix'):
- # Set the executable bits (owner, group, and world) on
- # all the files specified.
- for f in files:
- if self.dry_run:
- logger.info("changing mode of %s", f)
- else:
- mode = (os.stat(f).st_mode | bits) & mask
- logger.info("changing mode of %s to %o", f, mode)
- os.chmod(f, mode)
-
- set_executable_mode = lambda s, f: s.set_mode(0o555, 0o7777, f)
-
- def ensure_dir(self, path):
- path = os.path.abspath(path)
- if path not in self.ensured and not os.path.exists(path):
- self.ensured.add(path)
- d, f = os.path.split(path)
- self.ensure_dir(d)
- logger.info('Creating %s' % path)
- if not self.dry_run:
- os.mkdir(path)
- if self.record:
- self.dirs_created.add(path)
-
- def byte_compile(self, path, optimize=False, force=False, prefix=None, hashed_invalidation=False):
- dpath = cache_from_source(path, not optimize)
- logger.info('Byte-compiling %s to %s', path, dpath)
- if not self.dry_run:
- if force or self.newer(path, dpath):
- if not prefix:
- diagpath = None
- else:
- assert path.startswith(prefix)
- diagpath = path[len(prefix):]
- compile_kwargs = {}
- if hashed_invalidation and hasattr(py_compile, 'PycInvalidationMode'):
- compile_kwargs['invalidation_mode'] = py_compile.PycInvalidationMode.CHECKED_HASH
- py_compile.compile(path, dpath, diagpath, True, **compile_kwargs) # raise error
- self.record_as_written(dpath)
- return dpath
-
- def ensure_removed(self, path):
- if os.path.exists(path):
- if os.path.isdir(path) and not os.path.islink(path):
- logger.debug('Removing directory tree at %s', path)
- if not self.dry_run:
- shutil.rmtree(path)
- if self.record:
- if path in self.dirs_created:
- self.dirs_created.remove(path)
- else:
- if os.path.islink(path):
- s = 'link'
- else:
- s = 'file'
- logger.debug('Removing %s %s', s, path)
- if not self.dry_run:
- os.remove(path)
- if self.record:
- if path in self.files_written:
- self.files_written.remove(path)
-
- def is_writable(self, path):
- result = False
- while not result:
- if os.path.exists(path):
- result = os.access(path, os.W_OK)
- break
- parent = os.path.dirname(path)
- if parent == path:
- break
- path = parent
- return result
-
- def commit(self):
- """
- Commit recorded changes, turn off recording, return
- changes.
- """
- assert self.record
- result = self.files_written, self.dirs_created
- self._init_record()
- return result
-
- def rollback(self):
- if not self.dry_run:
- for f in list(self.files_written):
- if os.path.exists(f):
- os.remove(f)
- # dirs should all be empty now, except perhaps for
- # __pycache__ subdirs
- # reverse so that subdirs appear before their parents
- dirs = sorted(self.dirs_created, reverse=True)
- for d in dirs:
- flist = os.listdir(d)
- if flist:
- assert flist == ['__pycache__']
- sd = os.path.join(d, flist[0])
- os.rmdir(sd)
- os.rmdir(d) # should fail if non-empty
- self._init_record()
-
-def resolve(module_name, dotted_path):
- if module_name in sys.modules:
- mod = sys.modules[module_name]
- else:
- mod = __import__(module_name)
- if dotted_path is None:
- result = mod
- else:
- parts = dotted_path.split('.')
- result = getattr(mod, parts.pop(0))
- for p in parts:
- result = getattr(result, p)
- return result
-
-
-class ExportEntry(object):
- def __init__(self, name, prefix, suffix, flags):
- self.name = name
- self.prefix = prefix
- self.suffix = suffix
- self.flags = flags
-
- @cached_property
- def value(self):
- return resolve(self.prefix, self.suffix)
-
- def __repr__(self): # pragma: no cover
- return '<ExportEntry %s = %s:%s %s>' % (self.name, self.prefix,
- self.suffix, self.flags)
-
- def __eq__(self, other):
- if not isinstance(other, ExportEntry):
- result = False
- else:
- result = (self.name == other.name and
- self.prefix == other.prefix and
- self.suffix == other.suffix and
- self.flags == other.flags)
- return result
-
- __hash__ = object.__hash__
-
-
-ENTRY_RE = re.compile(r'''(?P<name>(\w|[-.+])+)
- \s*=\s*(?P<callable>(\w+)([:\.]\w+)*)
- \s*(\[\s*(?P<flags>[\w-]+(=\w+)?(,\s*\w+(=\w+)?)*)\s*\])?
- ''', re.VERBOSE)
-
-def get_export_entry(specification):
- m = ENTRY_RE.search(specification)
- if not m:
- result = None
- if '[' in specification or ']' in specification:
- raise DistlibException("Invalid specification "
- "'%s'" % specification)
- else:
- d = m.groupdict()
- name = d['name']
- path = d['callable']
- colons = path.count(':')
- if colons == 0:
- prefix, suffix = path, None
- else:
- if colons != 1:
- raise DistlibException("Invalid specification "
- "'%s'" % specification)
- prefix, suffix = path.split(':')
- flags = d['flags']
- if flags is None:
- if '[' in specification or ']' in specification:
- raise DistlibException("Invalid specification "
- "'%s'" % specification)
- flags = []
- else:
- flags = [f.strip() for f in flags.split(',')]
- result = ExportEntry(name, prefix, suffix, flags)
- return result
-
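-# Illustrative sketch (not part of the original module): the names below are
-# made up, but show the shape of the result produced by get_export_entry():
-#
-#     e = get_export_entry('console = mypkg.cli:main [extra1, extra2]')
-#     # e.name == 'console', e.prefix == 'mypkg.cli', e.suffix == 'main',
-#     # e.flags == ['extra1', 'extra2']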
-
-def get_cache_base(suffix=None):
- """
- Return the default base location for distlib caches. If the directory does
- not exist, it is created. Use the suffix provided for the base directory,
- and default to '.distlib' if it isn't provided.
-
- On Windows, if LOCALAPPDATA is defined in the environment, then it is
- assumed to be a directory, and will be the parent directory of the result.
- On POSIX, and on Windows if LOCALAPPDATA is not defined, the user's home
- directory - using os.path.expanduser('~') - will be the parent directory of
- the result.
-
- The result is just the directory '.distlib' in the parent directory as
- determined above, or with the name specified with ``suffix``.
- """
- if suffix is None:
- suffix = '.distlib'
- if os.name == 'nt' and 'LOCALAPPDATA' in os.environ:
- result = os.path.expandvars('$localappdata')
- else:
- # Assume posix, or old Windows
- result = os.path.expanduser('~')
- # we use 'isdir' instead of 'exists', because we want to
- # fail if there's a file with that name
- if os.path.isdir(result):
- usable = os.access(result, os.W_OK)
- if not usable:
- logger.warning('Directory exists but is not writable: %s', result)
- else:
- try:
- os.makedirs(result)
- usable = True
- except OSError:
- logger.warning('Unable to create %s', result, exc_info=True)
- usable = False
- if not usable:
- result = tempfile.mkdtemp()
- logger.warning('Default location unusable, using %s', result)
- return os.path.join(result, suffix)
-
-
-def path_to_cache_dir(path):
- """
- Convert an absolute path to a directory name for use in a cache.
-
- The algorithm used is:
-
- #. On Windows, any ``':'`` in the drive is replaced with ``'---'``.
- #. Any occurrence of ``os.sep`` is replaced with ``'--'``.
- #. ``'.cache'`` is appended.
- """
- d, p = os.path.splitdrive(os.path.abspath(path))
- if d:
- d = d.replace(':', '---')
- p = p.replace(os.sep, '--')
- return d + p + '.cache'
-
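-# Illustrative sketch (not part of the original module): on POSIX,
-# path_to_cache_dir('/home/user/.distlib/dylib-cache') would return
-# '--home--user--.distlib--dylib-cache.cache'.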
-
-def ensure_slash(s):
- if not s.endswith('/'):
- return s + '/'
- return s
-
-
-def parse_credentials(netloc):
- username = password = None
- if '@' in netloc:
- prefix, netloc = netloc.rsplit('@', 1)
- if ':' not in prefix:
- username = prefix
- else:
- username, password = prefix.split(':', 1)
- if username:
- username = unquote(username)
- if password:
- password = unquote(password)
- return username, password, netloc
-
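-# Illustrative sketch (not part of the original module), with made-up credentials:
-#
-#     parse_credentials('user:s3cret@pypi.example.com')
-#     # -> ('user', 's3cret', 'pypi.example.com')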
-
-def get_process_umask():
- result = os.umask(0o22)
- os.umask(result)
- return result
-
-def is_string_sequence(seq):
- result = True
- i = None
- for i, s in enumerate(seq):
- if not isinstance(s, string_types):
- result = False
- break
- assert i is not None
- return result
-
-PROJECT_NAME_AND_VERSION = re.compile('([a-z0-9_]+([.-][a-z_][a-z0-9_]*)*)-'
- '([a-z0-9_.+-]+)', re.I)
-PYTHON_VERSION = re.compile(r'-py(\d\.?\d?)')
-
-
-def split_filename(filename, project_name=None):
- """
- Extract name, version, python version from a filename (no extension)
-
- Return name, version, pyver or None
- """
- result = None
- pyver = None
- filename = unquote(filename).replace(' ', '-')
- m = PYTHON_VERSION.search(filename)
- if m:
- pyver = m.group(1)
- filename = filename[:m.start()]
- if project_name and len(filename) > len(project_name) + 1:
- m = re.match(re.escape(project_name) + r'\b', filename)
- if m:
- n = m.end()
- result = filename[:n], filename[n + 1:], pyver
- if result is None:
- m = PROJECT_NAME_AND_VERSION.match(filename)
- if m:
- result = m.group(1), m.group(3), pyver
- return result
-
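-# Illustrative sketch (not part of the original module):
-#
-#     split_filename('python-dateutil-2.8.2', 'python-dateutil')
-#     # -> ('python-dateutil', '2.8.2', None)
-#     split_filename('foo-1.0-py2.7')
-#     # -> ('foo', '1.0', '2.7')
-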
-# Allow spaces in name because of legacy dists like "Twisted Core"
-NAME_VERSION_RE = re.compile(r'(?P<name>[\w .-]+)\s*'
- r'\(\s*(?P<ver>[^\s)]+)\)$')
-
-def parse_name_and_version(p):
- """
- A utility method used to get name and version from a string.
-
- From e.g. a Provides-Dist value.
-
- :param p: A value in a form 'foo (1.0)'
- :return: The name and version as a tuple.
- """
- m = NAME_VERSION_RE.match(p)
- if not m:
- raise DistlibException('Ill-formed name/version string: \'%s\'' % p)
- d = m.groupdict()
- return d['name'].strip().lower(), d['ver']
-
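-# Illustrative sketch (not part of the original module):
-#
-#     parse_name_and_version('Twisted Core (12.0.0)')
-#     # -> ('twisted core', '12.0.0')
-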
-def get_extras(requested, available):
- result = set()
- requested = set(requested or [])
- available = set(available or [])
- if '*' in requested:
- requested.remove('*')
- result |= available
- for r in requested:
- if r == '-':
- result.add(r)
- elif r.startswith('-'):
- unwanted = r[1:]
- if unwanted not in available:
- logger.warning('undeclared extra: %s' % unwanted)
- if unwanted in result:
- result.remove(unwanted)
- else:
- if r not in available:
- logger.warning('undeclared extra: %s' % r)
- result.add(r)
- return result
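-
-# Illustrative sketch (not part of the original module):
-#
-#     get_extras(['*', '-tests'], ['docs', 'tests'])
-#     # -> {'docs'}
-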
-#
-# Extended metadata functionality
-#
-
-def _get_external_data(url):
- result = {}
- try:
- # urlopen might fail if it runs into redirections,
- # because of Python issue #13696. Fixed in locators
- # using a custom redirect handler.
- resp = urlopen(url)
- headers = resp.info()
- ct = headers.get('Content-Type')
- if not ct.startswith('application/json'):
- logger.debug('Unexpected response for JSON request: %s', ct)
- else:
- reader = codecs.getreader('utf-8')(resp)
- #data = reader.read().decode('utf-8')
- #result = json.loads(data)
- result = json.load(reader)
- except Exception as e:
- logger.exception('Failed to get external data for %s: %s', url, e)
- return result
-
-_external_data_base_url = 'https://www.red-dove.com/pypi/projects/'
-
-def get_project_data(name):
- url = '%s/%s/project.json' % (name[0].upper(), name)
- url = urljoin(_external_data_base_url, url)
- result = _get_external_data(url)
- return result
-
-def get_package_data(name, version):
- url = '%s/%s/package-%s.json' % (name[0].upper(), name, version)
- url = urljoin(_external_data_base_url, url)
- return _get_external_data(url)
-
-
-class Cache(object):
- """
- A class implementing a cache for resources that need to live in the file system
- e.g. shared libraries. This class was moved from resources to here because it
- could be used by other modules, e.g. the wheel module.
- """
-
- def __init__(self, base):
- """
- Initialise an instance.
-
- :param base: The base directory where the cache should be located.
- """
- # we use 'isdir' instead of 'exists', because we want to
- # fail if there's a file with that name
- if not os.path.isdir(base): # pragma: no cover
- os.makedirs(base)
- if (os.stat(base).st_mode & 0o77) != 0:
- logger.warning('Directory \'%s\' is not private', base)
- self.base = os.path.abspath(os.path.normpath(base))
-
- def prefix_to_dir(self, prefix):
- """
- Converts a resource prefix to a directory name in the cache.
- """
- return path_to_cache_dir(prefix)
-
- def clear(self):
- """
- Clear the cache.
- """
- not_removed = []
- for fn in os.listdir(self.base):
- fn = os.path.join(self.base, fn)
- try:
- if os.path.islink(fn) or os.path.isfile(fn):
- os.remove(fn)
- elif os.path.isdir(fn):
- shutil.rmtree(fn)
- except Exception:
- not_removed.append(fn)
- return not_removed
-
-
-class EventMixin(object):
- """
- A very simple publish/subscribe system.
- """
- def __init__(self):
- self._subscribers = {}
-
- def add(self, event, subscriber, append=True):
- """
- Add a subscriber for an event.
-
- :param event: The name of an event.
- :param subscriber: The subscriber to be added (and called when the
- event is published).
- :param append: Whether to append or prepend the subscriber to an
- existing subscriber list for the event.
- """
- subs = self._subscribers
- if event not in subs:
- subs[event] = deque([subscriber])
- else:
- sq = subs[event]
- if append:
- sq.append(subscriber)
- else:
- sq.appendleft(subscriber)
-
- def remove(self, event, subscriber):
- """
- Remove a subscriber for an event.
-
- :param event: The name of an event.
- :param subscriber: The subscriber to be removed.
- """
- subs = self._subscribers
- if event not in subs:
- raise ValueError('No subscribers: %r' % event)
- subs[event].remove(subscriber)
-
- def get_subscribers(self, event):
- """
- Return an iterator for the subscribers for an event.
- :param event: The event to return subscribers for.
- """
- return iter(self._subscribers.get(event, ()))
-
- def publish(self, event, *args, **kwargs):
- """
- Publish an event and return a list of values returned by its
- subscribers.
-
- :param event: The event to publish.
- :param args: The positional arguments to pass to the event's
- subscribers.
- :param kwargs: The keyword arguments to pass to the event's
- subscribers.
- """
- result = []
- for subscriber in self.get_subscribers(event):
- try:
- value = subscriber(event, *args, **kwargs)
- except Exception:
- logger.exception('Exception during event publication')
- value = None
- result.append(value)
- logger.debug('publish %s: args = %s, kwargs = %s, result = %s',
- event, args, kwargs, result)
- return result
-
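-# Illustrative sketch (not part of the original module): a hypothetical subclass
-# wiring up a subscriber and publishing an event.
-#
-#     class Notifier(EventMixin):
-#         pass
-#
-#     n = Notifier()
-#     n.add('built', lambda event, **kwargs: kwargs.get('path'))
-#     n.publish('built', path='dist/foo.whl')   # -> ['dist/foo.whl']
-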
-#
-# Simple sequencing
-#
-class Sequencer(object):
- def __init__(self):
- self._preds = {}
- self._succs = {}
- self._nodes = set() # nodes with no preds/succs
-
- def add_node(self, node):
- self._nodes.add(node)
-
- def remove_node(self, node, edges=False):
- if node in self._nodes:
- self._nodes.remove(node)
- if edges:
- for p in set(self._preds.get(node, ())):
- self.remove(p, node)
- for s in set(self._succs.get(node, ())):
- self.remove(node, s)
- # Remove empties
- for k, v in list(self._preds.items()):
- if not v:
- del self._preds[k]
- for k, v in list(self._succs.items()):
- if not v:
- del self._succs[k]
-
- def add(self, pred, succ):
- assert pred != succ
- self._preds.setdefault(succ, set()).add(pred)
- self._succs.setdefault(pred, set()).add(succ)
-
- def remove(self, pred, succ):
- assert pred != succ
- try:
- preds = self._preds[succ]
- succs = self._succs[pred]
- except KeyError: # pragma: no cover
- raise ValueError('%r not a successor of anything' % succ)
- try:
- preds.remove(pred)
- succs.remove(succ)
- except KeyError: # pragma: no cover
- raise ValueError('%r not a successor of %r' % (succ, pred))
-
- def is_step(self, step):
- return (step in self._preds or step in self._succs or
- step in self._nodes)
-
- def get_steps(self, final):
- if not self.is_step(final):
- raise ValueError('Unknown: %r' % final)
- result = []
- todo = []
- seen = set()
- todo.append(final)
- while todo:
- step = todo.pop(0)
- if step in seen:
- # if a step was already seen,
- # move it to the end (so it will appear earlier
- # when reversed on return) ... but not for the
- # final step, as that would be confusing for
- # users
- if step != final:
- result.remove(step)
- result.append(step)
- else:
- seen.add(step)
- result.append(step)
- preds = self._preds.get(step, ())
- todo.extend(preds)
- return reversed(result)
-
- @property
- def strong_connections(self):
- #http://en.wikipedia.org/wiki/Tarjan%27s_strongly_connected_components_algorithm
- index_counter = [0]
- stack = []
- lowlinks = {}
- index = {}
- result = []
-
- graph = self._succs
-
- def strongconnect(node):
- # set the depth index for this node to the smallest unused index
- index[node] = index_counter[0]
- lowlinks[node] = index_counter[0]
- index_counter[0] += 1
- stack.append(node)
-
- # Consider successors
- try:
- successors = graph[node]
- except Exception:
- successors = []
- for successor in successors:
- if successor not in lowlinks:
- # Successor has not yet been visited
- strongconnect(successor)
- lowlinks[node] = min(lowlinks[node],lowlinks[successor])
- elif successor in stack:
- # the successor is in the stack and hence in the current
- # strongly connected component (SCC)
- lowlinks[node] = min(lowlinks[node],index[successor])
-
- # If `node` is a root node, pop the stack and generate an SCC
- if lowlinks[node] == index[node]:
- connected_component = []
-
- while True:
- successor = stack.pop()
- connected_component.append(successor)
- if successor == node: break
- component = tuple(connected_component)
- # storing the result
- result.append(component)
-
- for node in graph:
- if node not in lowlinks:
- strongconnect(node)
-
- return result
-
- @property
- def dot(self):
- result = ['digraph G {']
- for succ in self._preds:
- preds = self._preds[succ]
- for pred in preds:
- result.append(' %s -> %s;' % (pred, succ))
- for node in self._nodes:
- result.append(' %s;' % node)
- result.append('}')
- return '\n'.join(result)
-
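-# Illustrative sketch (not part of the original module): step names are made up.
-#
-#     seq = Sequencer()
-#     seq.add('build', 'test')
-#     seq.add('test', 'release')
-#     list(seq.get_steps('release'))   # -> ['build', 'test', 'release']
-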
-#
-# Unarchiving functionality for zip, tar, tgz, tbz, whl
-#
-
-ARCHIVE_EXTENSIONS = ('.tar.gz', '.tar.bz2', '.tar', '.zip',
- '.tgz', '.tbz', '.whl')
-
-def unarchive(archive_filename, dest_dir, format=None, check=True):
-
- def check_path(path):
- if not isinstance(path, text_type):
- path = path.decode('utf-8')
- p = os.path.abspath(os.path.join(dest_dir, path))
- if not p.startswith(dest_dir) or p[plen] != os.sep:
- raise ValueError('path outside destination: %r' % p)
-
- dest_dir = os.path.abspath(dest_dir)
- plen = len(dest_dir)
- archive = None
- if format is None:
- if archive_filename.endswith(('.zip', '.whl')):
- format = 'zip'
- elif archive_filename.endswith(('.tar.gz', '.tgz')):
- format = 'tgz'
- mode = 'r:gz'
- elif archive_filename.endswith(('.tar.bz2', '.tbz')):
- format = 'tbz'
- mode = 'r:bz2'
- elif archive_filename.endswith('.tar'):
- format = 'tar'
- mode = 'r'
- else: # pragma: no cover
- raise ValueError('Unknown format for %r' % archive_filename)
- try:
- if format == 'zip':
- archive = ZipFile(archive_filename, 'r')
- if check:
- names = archive.namelist()
- for name in names:
- check_path(name)
- else:
- archive = tarfile.open(archive_filename, mode)
- if check:
- names = archive.getnames()
- for name in names:
- check_path(name)
- if format != 'zip' and sys.version_info[0] < 3:
- # See Python issue 17153. If the dest path contains Unicode,
- # tarfile extraction fails on Python 2.x if a member path name
- # contains non-ASCII characters - it leads to an implicit
- # bytes -> unicode conversion using ASCII to decode.
- for tarinfo in archive.getmembers():
- if not isinstance(tarinfo.name, text_type):
- tarinfo.name = tarinfo.name.decode('utf-8')
- archive.extractall(dest_dir)
-
- finally:
- if archive:
- archive.close()
-
-
-def zip_dir(directory):
- """zip a directory tree into a BytesIO object"""
- result = io.BytesIO()
- dlen = len(directory)
- with ZipFile(result, "w") as zf:
- for root, dirs, files in os.walk(directory):
- for name in files:
- full = os.path.join(root, name)
- rel = root[dlen:]
- dest = os.path.join(rel, name)
- zf.write(full, dest)
- return result
-
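-# Illustrative sketch (not part of the original module): 'some_dir' is a
-# placeholder for any directory on disk.
-#
-#     buf = zip_dir('some_dir')    # io.BytesIO holding a zip of the tree
-#     payload = buf.getvalue()     # raw bytes of the archive
-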
-#
-# Simple progress bar
-#
-
-UNITS = ('', 'K', 'M', 'G','T','P')
-
-
-class Progress(object):
- unknown = 'UNKNOWN'
-
- def __init__(self, minval=0, maxval=100):
- assert maxval is None or maxval >= minval
- self.min = self.cur = minval
- self.max = maxval
- self.started = None
- self.elapsed = 0
- self.done = False
-
- def update(self, curval):
- assert self.min <= curval
- assert self.max is None or curval <= self.max
- self.cur = curval
- now = time.time()
- if self.started is None:
- self.started = now
- else:
- self.elapsed = now - self.started
-
- def increment(self, incr):
- assert incr >= 0
- self.update(self.cur + incr)
-
- def start(self):
- self.update(self.min)
- return self
-
- def stop(self):
- if self.max is not None:
- self.update(self.max)
- self.done = True
-
- @property
- def maximum(self):
- return self.unknown if self.max is None else self.max
-
- @property
- def percentage(self):
- if self.done:
- result = '100 %'
- elif self.max is None:
- result = ' ?? %'
- else:
- v = 100.0 * (self.cur - self.min) / (self.max - self.min)
- result = '%3d %%' % v
- return result
-
- def format_duration(self, duration):
- if (duration <= 0) and self.max is None or self.cur == self.min:
- result = '??:??:??'
- #elif duration < 1:
- # result = '--:--:--'
- else:
- result = time.strftime('%H:%M:%S', time.gmtime(duration))
- return result
-
- @property
- def ETA(self):
- if self.done:
- prefix = 'Done'
- t = self.elapsed
- #import pdb; pdb.set_trace()
- else:
- prefix = 'ETA '
- if self.max is None:
- t = -1
- elif self.elapsed == 0 or (self.cur == self.min):
- t = 0
- else:
- #import pdb; pdb.set_trace()
- t = float(self.max - self.min)
- t /= self.cur - self.min
- t = (t - 1) * self.elapsed
- return '%s: %s' % (prefix, self.format_duration(t))
-
- @property
- def speed(self):
- if self.elapsed == 0:
- result = 0.0
- else:
- result = (self.cur - self.min) / self.elapsed
- for unit in UNITS:
- if result < 1000:
- break
- result /= 1000.0
- return '%d %sB/s' % (result, unit)
-
-#
-# Glob functionality
-#
-
-RICH_GLOB = re.compile(r'\{([^}]*)\}')
-_CHECK_RECURSIVE_GLOB = re.compile(r'[^/\\,{]\*\*|\*\*[^/\\,}]')
-_CHECK_MISMATCH_SET = re.compile(r'^[^{]*\}|\{[^}]*$')
-
-
-def iglob(path_glob):
- """Extended globbing function that supports ** and {opt1,opt2,opt3}."""
- if _CHECK_RECURSIVE_GLOB.search(path_glob):
- msg = """invalid glob %r: recursive glob "**" must be used alone"""
- raise ValueError(msg % path_glob)
- if _CHECK_MISMATCH_SET.search(path_glob):
- msg = """invalid glob %r: mismatching set marker '{' or '}'"""
- raise ValueError(msg % path_glob)
- return _iglob(path_glob)
-
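-# Illustrative sketch (not part of the original module); the patterns are made up:
-#
-#     list(iglob('src/**/*.py'))             # recursive match under src/
-#     list(iglob('docs/{api,guide}/*.rst'))  # set expansion via {...}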
-
-def _iglob(path_glob):
- rich_path_glob = RICH_GLOB.split(path_glob, 1)
- if len(rich_path_glob) > 1:
- assert len(rich_path_glob) == 3, rich_path_glob
- prefix, set, suffix = rich_path_glob
- for item in set.split(','):
- for path in _iglob(''.join((prefix, item, suffix))):
- yield path
- else:
- if '**' not in path_glob:
- for item in std_iglob(path_glob):
- yield item
- else:
- prefix, radical = path_glob.split('**', 1)
- if prefix == '':
- prefix = '.'
- if radical == '':
- radical = '*'
- else:
- # we support both '/' and '\\' as separators after '**'
- radical = radical.lstrip('/')
- radical = radical.lstrip('\\')
- for path, dir, files in os.walk(prefix):
- path = os.path.normpath(path)
- for fn in _iglob(os.path.join(path, radical)):
- yield fn
-
-if ssl:
- from .compat import (HTTPSHandler as BaseHTTPSHandler, match_hostname,
- CertificateError)
-
-
-#
-# HTTPSConnection which verifies certificates/matches domains
-#
-
- class HTTPSConnection(httplib.HTTPSConnection):
- ca_certs = None # set this to the path to the certs file (.pem)
- check_domain = True # only used if ca_certs is not None
-
- # noinspection PyPropertyAccess
- def connect(self):
- sock = socket.create_connection((self.host, self.port), self.timeout)
- if getattr(self, '_tunnel_host', False):
- self.sock = sock
- self._tunnel()
-
- context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
- if hasattr(ssl, 'OP_NO_SSLv2'):
- context.options |= ssl.OP_NO_SSLv2
- if self.cert_file:
- context.load_cert_chain(self.cert_file, self.key_file)
- kwargs = {}
- if self.ca_certs:
- context.verify_mode = ssl.CERT_REQUIRED
- context.load_verify_locations(cafile=self.ca_certs)
- if getattr(ssl, 'HAS_SNI', False):
- kwargs['server_hostname'] = self.host
-
- self.sock = context.wrap_socket(sock, **kwargs)
- if self.ca_certs and self.check_domain:
- try:
- match_hostname(self.sock.getpeercert(), self.host)
- logger.debug('Host verified: %s', self.host)
- except CertificateError: # pragma: no cover
- self.sock.shutdown(socket.SHUT_RDWR)
- self.sock.close()
- raise
-
- class HTTPSHandler(BaseHTTPSHandler):
- def __init__(self, ca_certs, check_domain=True):
- BaseHTTPSHandler.__init__(self)
- self.ca_certs = ca_certs
- self.check_domain = check_domain
-
- def _conn_maker(self, *args, **kwargs):
- """
- This is called to create a connection instance. Normally you'd
- pass a connection class to do_open, but it doesn't actually check for
- a class, and just expects a callable. As long as we behave just as a
- constructor would have, we should be OK. If it ever changes so that
- we *must* pass a class, we'll create an UnsafeHTTPSConnection class
- which just sets check_domain to False in the class definition, and
- choose which one to pass to do_open.
- """
- result = HTTPSConnection(*args, **kwargs)
- if self.ca_certs:
- result.ca_certs = self.ca_certs
- result.check_domain = self.check_domain
- return result
-
- def https_open(self, req):
- try:
- return self.do_open(self._conn_maker, req)
- except URLError as e:
- if 'certificate verify failed' in str(e.reason):
- raise CertificateError('Unable to verify server certificate '
- 'for %s' % req.host)
- else:
- raise
-
- #
- # To guard against mixing HTTP traffic with HTTPS (examples: a Man-In-The-
- # Middle proxy using HTTP listens on port 443, or an index mistakenly serves
- # HTML containing a http://xyz link when it should be https://xyz),
- # you can use the following handler class, which does not allow HTTP traffic.
- #
- # It works by inheriting from HTTPHandler - so build_opener won't add a
- # handler for HTTP itself.
- #
- class HTTPSOnlyHandler(HTTPSHandler, HTTPHandler):
- def http_open(self, req):
- raise URLError('Unexpected HTTP request on what should be a secure '
- 'connection: %s' % req)
-
-#
-# XML-RPC with timeouts
-#
-class Transport(xmlrpclib.Transport):
- def __init__(self, timeout, use_datetime=0):
- self.timeout = timeout
- xmlrpclib.Transport.__init__(self, use_datetime)
-
- def make_connection(self, host):
- h, eh, x509 = self.get_host_info(host)
- if not self._connection or host != self._connection[0]:
- self._extra_headers = eh
- self._connection = host, httplib.HTTPConnection(h)
- return self._connection[1]
-
-if ssl:
- class SafeTransport(xmlrpclib.SafeTransport):
- def __init__(self, timeout, use_datetime=0):
- self.timeout = timeout
- xmlrpclib.SafeTransport.__init__(self, use_datetime)
-
- def make_connection(self, host):
- h, eh, kwargs = self.get_host_info(host)
- if not kwargs:
- kwargs = {}
- kwargs['timeout'] = self.timeout
- if not self._connection or host != self._connection[0]:
- self._extra_headers = eh
- self._connection = host, httplib.HTTPSConnection(h, None,
- **kwargs)
- return self._connection[1]
-
-
-class ServerProxy(xmlrpclib.ServerProxy):
- def __init__(self, uri, **kwargs):
- self.timeout = timeout = kwargs.pop('timeout', None)
- # The above classes only come into play if a timeout
- # is specified
- if timeout is not None:
- # scheme = splittype(uri) # deprecated as of Python 3.8
- scheme = urlparse(uri)[0]
- use_datetime = kwargs.get('use_datetime', 0)
- if scheme == 'https':
- tcls = SafeTransport
- else:
- tcls = Transport
- kwargs['transport'] = t = tcls(timeout, use_datetime=use_datetime)
- self.transport = t
- xmlrpclib.ServerProxy.__init__(self, uri, **kwargs)
-
-#
-# CSV functionality. This is provided because on 2.x, the csv module can't
-# handle Unicode. However, we need to deal with Unicode in e.g. RECORD files.
-#
-
-def _csv_open(fn, mode, **kwargs):
- if sys.version_info[0] < 3:
- mode += 'b'
- else:
- kwargs['newline'] = ''
- # Python 3 determines encoding from locale. Force 'utf-8'
- # file encoding to match other forced utf-8 encoding
- kwargs['encoding'] = 'utf-8'
- return open(fn, mode, **kwargs)
-
-
-class CSVBase(object):
- defaults = {
- 'delimiter': str(','), # The strs are used because we need native
- 'quotechar': str('"'), # str in the csv API (2.x won't take
- 'lineterminator': str('\n') # Unicode)
- }
-
- def __enter__(self):
- return self
-
- def __exit__(self, *exc_info):
- self.stream.close()
-
-
-class CSVReader(CSVBase):
- def __init__(self, **kwargs):
- if 'stream' in kwargs:
- stream = kwargs['stream']
- if sys.version_info[0] >= 3:
- # needs to be a text stream
- stream = codecs.getreader('utf-8')(stream)
- self.stream = stream
- else:
- self.stream = _csv_open(kwargs['path'], 'r')
- self.reader = csv.reader(self.stream, **self.defaults)
-
- def __iter__(self):
- return self
-
- def next(self):
- result = next(self.reader)
- if sys.version_info[0] < 3:
- for i, item in enumerate(result):
- if not isinstance(item, text_type):
- result[i] = item.decode('utf-8')
- return result
-
- __next__ = next
-
-class CSVWriter(CSVBase):
- def __init__(self, fn, **kwargs):
- self.stream = _csv_open(fn, 'w')
- self.writer = csv.writer(self.stream, **self.defaults)
-
- def writerow(self, row):
- if sys.version_info[0] < 3:
- r = []
- for item in row:
- if isinstance(item, text_type):
- item = item.encode('utf-8')
- r.append(item)
- row = r
- self.writer.writerow(row)
-
-#
-# Configurator functionality
-#
-
-class Configurator(BaseConfigurator):
-
- value_converters = dict(BaseConfigurator.value_converters)
- value_converters['inc'] = 'inc_convert'
-
- def __init__(self, config, base=None):
- super(Configurator, self).__init__(config)
- self.base = base or os.getcwd()
-
- def configure_custom(self, config):
- def convert(o):
- if isinstance(o, (list, tuple)):
- result = type(o)([convert(i) for i in o])
- elif isinstance(o, dict):
- if '()' in o:
- result = self.configure_custom(o)
- else:
- result = {}
- for k in o:
- result[k] = convert(o[k])
- else:
- result = self.convert(o)
- return result
-
- c = config.pop('()')
- if not callable(c):
- c = self.resolve(c)
- props = config.pop('.', None)
- # Check for valid identifiers
- args = config.pop('[]', ())
- if args:
- args = tuple([convert(o) for o in args])
- items = [(k, convert(config[k])) for k in config if valid_ident(k)]
- kwargs = dict(items)
- result = c(*args, **kwargs)
- if props:
- for n, v in props.items():
- setattr(result, n, convert(v))
- return result
-
- def __getitem__(self, key):
- result = self.config[key]
- if isinstance(result, dict) and '()' in result:
- self.config[key] = result = self.configure_custom(result)
- return result
-
- def inc_convert(self, value):
- """Default converter for the inc:// protocol."""
- if not os.path.isabs(value):
- value = os.path.join(self.base, value)
- with codecs.open(value, 'r', encoding='utf-8') as f:
- result = json.load(f)
- return result
-
-
-class SubprocessMixin(object):
- """
- Mixin for running subprocesses and capturing their output
- """
- def __init__(self, verbose=False, progress=None):
- self.verbose = verbose
- self.progress = progress
-
- def reader(self, stream, context):
- """
- Read lines from a subprocess' output stream and either pass to a progress
- callable (if specified) or write progress information to sys.stderr.
- """
- progress = self.progress
- verbose = self.verbose
- while True:
- s = stream.readline()
- if not s:
- break
- if progress is not None:
- progress(s, context)
- else:
- if not verbose:
- sys.stderr.write('.')
- else:
- sys.stderr.write(s.decode('utf-8'))
- sys.stderr.flush()
- stream.close()
-
- def run_command(self, cmd, **kwargs):
- p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
- stderr=subprocess.PIPE, **kwargs)
- t1 = threading.Thread(target=self.reader, args=(p.stdout, 'stdout'))
- t1.start()
- t2 = threading.Thread(target=self.reader, args=(p.stderr, 'stderr'))
- t2.start()
- p.wait()
- t1.join()
- t2.join()
- if self.progress is not None:
- self.progress('done.', 'main')
- elif self.verbose:
- sys.stderr.write('done.\n')
- return p
-
-
-def normalize_name(name):
- """Normalize a python package name a la PEP 503"""
- # https://www.python.org/dev/peps/pep-0503/#normalized-names
- return re.sub('[-_.]+', '-', name).lower()
-
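-# Illustrative sketch (not part of the original module):
-#
-#     normalize_name('Friendly.Bard_Package')   # -> 'friendly-bard-package'
-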
-# def _get_pypirc_command():
- # """
- # Get the distutils command for interacting with PyPI configurations.
- # :return: the command.
- # """
- # from distutils.core import Distribution
- # from distutils.config import PyPIRCCommand
- # d = Distribution()
- # return PyPIRCCommand(d)
-
-class PyPIRCFile(object):
-
- DEFAULT_REPOSITORY = 'https://upload.pypi.org/legacy/'
- DEFAULT_REALM = 'pypi'
-
- def __init__(self, fn=None, url=None):
- if fn is None:
- fn = os.path.join(os.path.expanduser('~'), '.pypirc')
- self.filename = fn
- self.url = url
-
- def read(self):
- result = {}
-
- if os.path.exists(self.filename):
- repository = self.url or self.DEFAULT_REPOSITORY
-
- config = configparser.RawConfigParser()
- config.read(self.filename)
- sections = config.sections()
- if 'distutils' in sections:
- # let's get the list of servers
- index_servers = config.get('distutils', 'index-servers')
- _servers = [server.strip() for server in
- index_servers.split('\n')
- if server.strip() != '']
- if _servers == []:
- # nothing set, let's try to get the default pypi
- if 'pypi' in sections:
- _servers = ['pypi']
- else:
- for server in _servers:
- result = {'server': server}
- result['username'] = config.get(server, 'username')
-
- # optional params
- for key, default in (('repository', self.DEFAULT_REPOSITORY),
- ('realm', self.DEFAULT_REALM),
- ('password', None)):
- if config.has_option(server, key):
- result[key] = config.get(server, key)
- else:
- result[key] = default
-
- # work around people having "repository" for the "pypi"
- # section of their config set to the HTTP (rather than
- # HTTPS) URL
- if (server == 'pypi' and
- repository in (self.DEFAULT_REPOSITORY, 'pypi')):
- result['repository'] = self.DEFAULT_REPOSITORY
- elif (result['server'] != repository and
- result['repository'] != repository):
- result = {}
- elif 'server-login' in sections:
- # old format
- server = 'server-login'
- if config.has_option(server, 'repository'):
- repository = config.get(server, 'repository')
- else:
- repository = self.DEFAULT_REPOSITORY
- result = {
- 'username': config.get(server, 'username'),
- 'password': config.get(server, 'password'),
- 'repository': repository,
- 'server': server,
- 'realm': self.DEFAULT_REALM
- }
- return result
-
- def update(self, username, password):
- # import pdb; pdb.set_trace()
- config = configparser.RawConfigParser()
- fn = self.filename
- config.read(fn)
- if not config.has_section('pypi'):
- config.add_section('pypi')
- config.set('pypi', 'username', username)
- config.set('pypi', 'password', password)
- with open(fn, 'w') as f:
- config.write(f)
-
-def _load_pypirc(index):
- """
- Read the PyPI access configuration as supported by distutils.
- """
- return PyPIRCFile(url=index.url).read()
-
-def _store_pypirc(index):
- PyPIRCFile().update(index.username, index.password)
-
-#
-# get_platform()/get_host_platform() copied from Python 3.10.a0 source, with some minor
-# tweaks
-#
-
-def get_host_platform():
- """Return a string that identifies the current platform. This is used mainly to
- distinguish platform-specific build directories and platform-specific built
- distributions. Typically includes the OS name and version and the
- architecture (as supplied by 'os.uname()'), although the exact information
- included depends on the OS; e.g. on Linux, the kernel version isn't
- particularly important.
-
- Examples of returned values:
- linux-i586
- linux-alpha (?)
- solaris-2.6-sun4u
-
- Windows will return one of:
- win-amd64 (64bit Windows on AMD64 (aka x86_64, Intel64, EM64T, etc))
- win32 (all others - specifically, sys.platform is returned)
-
- For other non-POSIX platforms, currently just returns 'sys.platform'.
-
- """
- if os.name == 'nt':
- if 'amd64' in sys.version.lower():
- return 'win-amd64'
- if '(arm)' in sys.version.lower():
- return 'win-arm32'
- if '(arm64)' in sys.version.lower():
- return 'win-arm64'
- return sys.platform
-
- # Set for cross builds explicitly
- if "_PYTHON_HOST_PLATFORM" in os.environ:
- return os.environ["_PYTHON_HOST_PLATFORM"]
-
- if os.name != 'posix' or not hasattr(os, 'uname'):
- # XXX what about the architecture? NT is Intel or Alpha,
- # Mac OS is M68k or PPC, etc.
- return sys.platform
-
- # Try to distinguish various flavours of Unix
-
- (osname, host, release, version, machine) = os.uname()
-
- # Convert the OS name to lowercase, remove '/' characters, and translate
- # spaces (for "Power Macintosh")
- osname = osname.lower().replace('/', '')
- machine = machine.replace(' ', '_').replace('/', '-')
-
- if osname[:5] == 'linux':
- # At least on Linux/Intel, 'machine' is the processor --
- # i386, etc.
- # XXX what about Alpha, SPARC, etc?
- return "%s-%s" % (osname, machine)
-
- elif osname[:5] == 'sunos':
- if release[0] >= '5': # SunOS 5 == Solaris 2
- osname = 'solaris'
- release = '%d.%s' % (int(release[0]) - 3, release[2:])
- # We can't use 'platform.architecture()[0]' because of a
- # bootstrap problem. We use a dict to get an error
- # if something suspicious happens.
- bitness = {2147483647:'32bit', 9223372036854775807:'64bit'}
- machine += '.%s' % bitness[sys.maxsize]
- # fall through to standard osname-release-machine representation
- elif osname[:3] == 'aix':
- from _aix_support import aix_platform
- return aix_platform()
- elif osname[:6] == 'cygwin':
- osname = 'cygwin'
- rel_re = re.compile (r'[\d.]+', re.ASCII)
- m = rel_re.match(release)
- if m:
- release = m.group()
- elif osname[:6] == 'darwin':
- import _osx_support, distutils.sysconfig
- osname, release, machine = _osx_support.get_platform_osx(
- distutils.sysconfig.get_config_vars(),
- osname, release, machine)
-
- return '%s-%s-%s' % (osname, release, machine)
-
-
-_TARGET_TO_PLAT = {
- 'x86' : 'win32',
- 'x64' : 'win-amd64',
- 'arm' : 'win-arm32',
-}
-
-
-def get_platform():
- if os.name != 'nt':
- return get_host_platform()
- cross_compilation_target = os.environ.get('VSCMD_ARG_TGT_ARCH')
- if cross_compilation_target not in _TARGET_TO_PLAT:
- return get_host_platform()
- return _TARGET_TO_PLAT[cross_compilation_target]
diff --git a/spaces/Realcat/image-matching-webui/third_party/SOLD2/sold2/model/nets/heatmap_decoder.py b/spaces/Realcat/image-matching-webui/third_party/SOLD2/sold2/model/nets/heatmap_decoder.py
deleted file mode 100644
index 11828426a2852fb3e9ee3e6a3310ca89cbcd4d78..0000000000000000000000000000000000000000
--- a/spaces/Realcat/image-matching-webui/third_party/SOLD2/sold2/model/nets/heatmap_decoder.py
+++ /dev/null
@@ -1,71 +0,0 @@
-import torch.nn as nn
-
-
-class PixelShuffleDecoder(nn.Module):
- """Pixel shuffle decoder."""
-
- def __init__(self, input_feat_dim=128, num_upsample=2, output_channel=2):
- super(PixelShuffleDecoder, self).__init__()
- # Get channel parameters
- self.channel_conf = self.get_channel_conf(num_upsample)
-
- # Define the pixel shuffle
- self.pixshuffle = nn.PixelShuffle(2)
-
- # Process the feature
- self.conv_block_lst = []
- # The input block
- self.conv_block_lst.append(
- nn.Sequential(
- nn.Conv2d(
- input_feat_dim,
- self.channel_conf[0],
- kernel_size=3,
- stride=1,
- padding=1,
- ),
- nn.BatchNorm2d(self.channel_conf[0]),
- nn.ReLU(inplace=True),
- )
- )
-
- # Intermediate block
- for channel in self.channel_conf[1:-1]:
- self.conv_block_lst.append(
- nn.Sequential(
- nn.Conv2d(channel, channel, kernel_size=3, stride=1, padding=1),
- nn.BatchNorm2d(channel),
- nn.ReLU(inplace=True),
- )
- )
-
- # Output block
- self.conv_block_lst.append(
- nn.Conv2d(
- self.channel_conf[-1],
- output_channel,
- kernel_size=1,
- stride=1,
- padding=0,
- )
- )
- self.conv_block_lst = nn.ModuleList(self.conv_block_lst)
-
- # Get num of channels based on number of upsampling.
- def get_channel_conf(self, num_upsample):
- if num_upsample == 2:
- return [256, 64, 16]
- elif num_upsample == 3:
- return [256, 64, 16, 4]
-
- def forward(self, input_features):
- # Iterate til output block
- out = input_features
- for block in self.conv_block_lst[:-1]:
- out = block(out)
- out = self.pixshuffle(out)
-
- # Output layer
- out = self.conv_block_lst[-1](out)
-
- return out
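-
-# Illustrative sketch (not part of the original module), assuming torch is
-# available: with the defaults (input_feat_dim=128, num_upsample=2,
-# output_channel=2) the decoder maps an (N, 128, H, W) feature map to an
-# (N, 2, 4*H, 4*W) heatmap, e.g.:
-#
-#     decoder = PixelShuffleDecoder()
-#     out = decoder(torch.randn(1, 128, 64, 64))   # -> shape (1, 2, 256, 256)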
diff --git a/spaces/RedBaron5/PatentSolver/LSTM/inputHandler.py b/spaces/RedBaron5/PatentSolver/LSTM/inputHandler.py
deleted file mode 100644
index 85e9737b51a211e6101959c5fe3add5177995478..0000000000000000000000000000000000000000
--- a/spaces/RedBaron5/PatentSolver/LSTM/inputHandler.py
+++ /dev/null
@@ -1,147 +0,0 @@
-from keras.preprocessing.sequence import pad_sequences
-from keras.preprocessing.text import Tokenizer
-from gensim.models import Word2Vec
-import numpy as np
-import gc
-
-
-def train_word2vec(documents, embedding_dim):
- """
- train word2vector over training documents
- Args:
- documents (list): list of document
- embedding_dim (int): output wordvector size
- Returns:
- word_vectors(dict): dict containing words and their respective vectors
- """
- model = Word2Vec(documents, min_count=1, size=embedding_dim)
- word_vectors = model.wv
- del model
- return word_vectors
-
-
-def create_embedding_matrix(tokenizer, word_vectors, embedding_dim):
- """
- Create embedding matrix containing word indexes and respective vectors from word vectors
- Args:
- tokenizer (keras.preprocessing.text.Tokenizer): keras tokenizer object containing word indexes
- word_vectors (dict): dict containing word and their respective vectors
- embedding_dim (int): dimension of word vector
-
- Returns:
- embedding_matrix (np.array): matrix of shape (nb_words, embedding_dim) mapping word indexes to their word vectors
- """
- nb_words = len(tokenizer.word_index) + 1
- word_index = tokenizer.word_index
- embedding_matrix = np.zeros((nb_words, embedding_dim))
- print("Embedding matrix shape: %s" % str(embedding_matrix.shape))
- for word, i in word_index.items():
- try:
- embedding_vector = word_vectors[word]
- if embedding_vector is not None:
- embedding_matrix[i] = embedding_vector
- except KeyError:
- print("vector not found for word - %s" % word)
- print('Null word embeddings: %d' % np.sum(np.sum(embedding_matrix, axis=1) == 0))
- return embedding_matrix
-
-
-def word_embed_meta_data(documents, embedding_dim):
- """
- Load tokenizer object for given vocabs list
- Args:
- documents (list): list of document
- embedding_dim (int): embedding dimension
- Returns:
- tokenizer (keras.preprocessing.text.Tokenizer): keras tokenizer object
- embedding_matrix (dict): dict with word_index and vector mapping
- """
- documents = [str(x).lower().split() for x in documents]
- tokenizer = Tokenizer()
- tokenizer.fit_on_texts(documents)
- word_vector = train_word2vec(documents, embedding_dim)
- embedding_matrix = create_embedding_matrix(tokenizer, word_vector, embedding_dim)
- del word_vector
- gc.collect()
- return tokenizer, embedding_matrix
-
-
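-# Illustrative sketch (not part of the original module): 'sentences1' and
-# 'sentences2' are placeholders for lists of raw sentence strings.
-#
-#     tokenizer, embedding_matrix = word_embed_meta_data(sentences1 + sentences2,
-#                                                        embedding_dim=50)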
-def create_train_dev_set(tokenizer, sentences_pair, is_similar, max_sequence_length, validation_split_ratio):
- """
- Create training and validation dataset
- Args:
- tokenizer (keras.preprocessing.text.Tokenizer): keras tokenizer object
- sentences_pair (list): list of tuple of sentences pairs
- is_similar (list): list containing labels if respective sentences in sentence1 and sentence2
- are same or not (1 if same else 0)
- max_sequence_length (int): max sequence length of sentences to apply padding
- validation_split_ratio (float): contain ratio to split training data into validation data
-
- Returns:
- train_data_1 (list): list of input features for training set from sentences1
- train_data_2 (list): list of input features for training set from sentences2
- labels_train (np.array): array containing similarity score for training data
- leaks_train(np.array): array of training leaks features
-
- val_data_1 (list): list of input features for validation set from sentences1
- val_data_2 (list): list of input features for validation set from sentences2
- labels_val (np.array): array containing similarity score for validation data
- leaks_val (np.array): array of validation leaks features
- """
- sentences1 = [x[0].lower() for x in sentences_pair]
- sentences2 = [x[1].lower() for x in sentences_pair]
- train_sequences_1 = tokenizer.texts_to_sequences(sentences1)
- train_sequences_2 = tokenizer.texts_to_sequences(sentences2)
- leaks = [[len(set(x1)), len(set(x2)), len(set(x1).intersection(x2))]
- for x1, x2 in zip(train_sequences_1, train_sequences_2)]
-
- train_padded_data_1 = pad_sequences(train_sequences_1, maxlen=max_sequence_length)
- train_padded_data_2 = pad_sequences(train_sequences_2, maxlen=max_sequence_length)
- train_labels = np.array(is_similar)
- leaks = np.array(leaks)
-
- shuffle_indices = np.random.permutation(np.arange(len(train_labels)))
- train_data_1_shuffled = train_padded_data_1[shuffle_indices]
- train_data_2_shuffled = train_padded_data_2[shuffle_indices]
- train_labels_shuffled = train_labels[shuffle_indices]
- leaks_shuffled = leaks[shuffle_indices]
-
- dev_idx = max(1, int(len(train_labels_shuffled) * validation_split_ratio))
-
- del train_padded_data_1
- del train_padded_data_2
- gc.collect()
-
- train_data_1, val_data_1 = train_data_1_shuffled[:-dev_idx], train_data_1_shuffled[-dev_idx:]
- train_data_2, val_data_2 = train_data_2_shuffled[:-dev_idx], train_data_2_shuffled[-dev_idx:]
- labels_train, labels_val = train_labels_shuffled[:-dev_idx], train_labels_shuffled[-dev_idx:]
- leaks_train, leaks_val = leaks_shuffled[:-dev_idx], leaks_shuffled[-dev_idx:]
-
- return train_data_1, train_data_2, labels_train, leaks_train, val_data_1, val_data_2, labels_val, leaks_val
-
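-# A minimal usage sketch (hypothetical pairs and labels; the length and split values are illustrative):
-#
-#   pairs = [("how are you", "how do you do"), ("good morning", "see you later")]
-#   labels = [1, 0]
-#   (train_1, train_2, y_train, leaks_train,
-#    val_1, val_2, y_val, leaks_val) = create_train_dev_set(tokenizer, pairs, labels,
-#                                                           max_sequence_length=20,
-#                                                           validation_split_ratio=0.1)
-#   # each leaks row is [unique tokens in s1, unique tokens in s2, shared unique tokens]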
-
-def create_test_data(tokenizer, test_sentences_pair, max_sequence_length):
- """
-    Create test dataset from sentence pairs
- Args:
- tokenizer (keras.preprocessing.text.Tokenizer): keras tokenizer object
- test_sentences_pair (list): list of tuple of sentences pairs
- max_sequence_length (int): max sequence length of sentences to apply padding
-
- Returns:
-        test_data_1 (list): padded input sequences for the test set from sentences1
-        test_data_2 (list): padded input sequences for the test set from sentences2
-        leaks_test (np.array): array of test leaks features
- """
- test_sentences1 = [str(x[0]).lower() for x in test_sentences_pair]
-    test_sentences2 = [str(x[1]).lower() for x in test_sentences_pair]
-
- test_sequences_1 = tokenizer.texts_to_sequences(test_sentences1)
- test_sequences_2 = tokenizer.texts_to_sequences(test_sentences2)
- leaks_test = [[len(set(x1)), len(set(x2)), len(set(x1).intersection(x2))]
- for x1, x2 in zip(test_sequences_1, test_sequences_2)]
-
- leaks_test = np.array(leaks_test)
- test_data_1 = pad_sequences(test_sequences_1, maxlen=max_sequence_length)
- test_data_2 = pad_sequences(test_sequences_2, maxlen=max_sequence_length)
-
- return test_data_1, test_data_2, leaks_test
diff --git a/spaces/Sakil/image_generator/app.py b/spaces/Sakil/image_generator/app.py
deleted file mode 100644
index 149135a488659ffe91f03ecbc60e06f4f75f121e..0000000000000000000000000000000000000000
--- a/spaces/Sakil/image_generator/app.py
+++ /dev/null
@@ -1,17 +0,0 @@
-import os
-# os.system('pip install gradio==2.8.0b22')
-import gradio as gr
-clip = gr.Interface.load("spaces/DrishtiSharma/Text-to-Image-search-using-CLIP")
-def text2image(text):
- image = clip(text)[0]
- return gr.processing_utils.decode_base64_to_image(image)
-
-iface = gr.Interface(
- text2image,inputs=gr.inputs.Textbox(lines=2, placeholder="Enter your text here"),
- outputs="image",
- examples=[["cat"],["Lion"],["Nature"],["water"],["house"]],
- theme="dark-peach",
- #css="https://www.w3schools.com/cssref/playit.asp?filename=playcss_background-color",
- title='Image Generator',
- description="This application supports in the creation of images from your text.")
-iface.launch(inline=False)
\ No newline at end of file
diff --git a/spaces/Sapphire-356/Video2MC/common/visualization.py b/spaces/Sapphire-356/Video2MC/common/visualization.py
deleted file mode 100644
index 7fbec868763b91b5088e984966ea25f5c217f191..0000000000000000000000000000000000000000
--- a/spaces/Sapphire-356/Video2MC/common/visualization.py
+++ /dev/null
@@ -1,253 +0,0 @@
-# Copyright (c) 2018-present, Facebook, Inc.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-#
-
-import time
-
-import cv2
-import matplotlib.pyplot as plt
-import numpy as np
-from matplotlib.animation import FuncAnimation, writers
-from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas
-from mpl_toolkits.mplot3d import Axes3D
-from tqdm import tqdm
-
-from common.utils import read_video
-
-
-def ckpt_time(ckpt=None, display=0, desc=''):
- if not ckpt:
- return time.time()
- else:
- if display:
- print(desc + ' consume time {:0.4f}'.format(time.time() - float(ckpt)))
- return time.time() - float(ckpt), time.time()
-
-
-def set_equal_aspect(ax, data):
- """
- Create white cubic bounding box to make sure that 3d axis is in equal aspect.
- :param ax: 3D axis
- :param data: shape of(frames, 3), generated from BVH using convert_bvh2dataset.py
- """
- X, Y, Z = data[..., 0], data[..., 1], data[..., 2]
-
- # Create cubic bounding box to simulate equal aspect ratio
- max_range = np.array([X.max() - X.min(), Y.max() - Y.min(), Z.max() - Z.min()]).max()
- Xb = 0.5 * max_range * np.mgrid[-1:2:2, -1:2:2, -1:2:2][0].flatten() + 0.5 * (X.max() + X.min())
- Yb = 0.5 * max_range * np.mgrid[-1:2:2, -1:2:2, -1:2:2][1].flatten() + 0.5 * (Y.max() + Y.min())
- Zb = 0.5 * max_range * np.mgrid[-1:2:2, -1:2:2, -1:2:2][2].flatten() + 0.5 * (Z.max() + Z.min())
-
- for xb, yb, zb in zip(Xb, Yb, Zb):
- ax.plot([xb], [yb], [zb], 'w')
-
-
-def downsample_tensor(X, factor):
- length = X.shape[0] // factor * factor
- return np.mean(X[:length].reshape(-1, factor, *X.shape[1:]), axis=1)
-
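-# For example (illustrative shapes): with X of shape (10, 17, 3) and factor=3, the
-# first 9 frames are kept and averaged in groups of 3, yielding shape (3, 17, 3).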
-
-def render_animation(keypoints, poses, skeleton, fps, bitrate, azim, output, progress, viewport,
- limit=-1, downsample=1, size=6, input_video_path=None, input_video_skip=0):
- """
- TODO
- Render an animation. The supported output modes are:
- -- 'interactive': display an interactive figure
- (also works on notebooks if associated with %matplotlib inline)
- -- 'html': render the animation as HTML5 video. Can be displayed in a notebook using HTML(...).
- -- 'filename.mp4': render and export the animation as an h264 video (requires ffmpeg).
- -- 'filename.gif': render and export the animation a gif file (requires imagemagick).
- """
- plt.ioff()
- fig = plt.figure(figsize=(size * (1 + len(poses)), size))
- ax_in = fig.add_subplot(1, 1 + len(poses), 1)
- ax_in.get_xaxis().set_visible(False)
- ax_in.get_yaxis().set_visible(False)
- ax_in.set_axis_off()
- ax_in.set_title('Input')
-
-    # prevent weird error: reference Axes3D so the unused import that registers the 3D projection is kept
- _ = Axes3D.__class__.__name__
-
- ax_3d = []
- lines_3d = []
- trajectories = []
- radius = 1.7
- for index, (title, data) in enumerate(poses.items()):
- ax = fig.add_subplot(1, 1 + len(poses), index + 2, projection='3d')
- ax.view_init(elev=15., azim=azim)
- ax.set_xlim3d([-radius / 2, radius / 2])
- ax.set_zlim3d([0, radius])
- ax.set_ylim3d([-radius / 2, radius / 2])
- # ax.set_aspect('equal')
- ax.set_xticklabels([])
- ax.set_yticklabels([])
- ax.set_zticklabels([])
- ax.dist = 12.5
- ax.set_title(title) # , pad=35
- ax_3d.append(ax)
- lines_3d.append([])
- trajectories.append(data[:, 0, [0, 1]])
- poses = list(poses.values())
-
- # Decode video
- if input_video_path is None:
- # Black background
- all_frames = np.zeros((keypoints.shape[0], viewport[1], viewport[0]), dtype='uint8')
- else:
- # Load video using ffmpeg
- all_frames = []
- for f in read_video(input_video_path, fps=None, skip=input_video_skip):
- all_frames.append(f)
-
- effective_length = min(keypoints.shape[0], len(all_frames))
- all_frames = all_frames[:effective_length]
-
- if downsample > 1:
- keypoints = downsample_tensor(keypoints, downsample)
- all_frames = downsample_tensor(np.array(all_frames), downsample).astype('uint8')
- for idx in range(len(poses)):
- poses[idx] = downsample_tensor(poses[idx], downsample)
- trajectories[idx] = downsample_tensor(trajectories[idx], downsample)
- fps /= downsample
-
- initialized = False
- image = None
- lines = []
- points = None
-
- if limit < 1:
- limit = len(all_frames)
- else:
- limit = min(limit, len(all_frames))
-
- parents = skeleton.parents()
- pbar = tqdm(total=limit)
- # probar = progress.tqdm(total=limit, desc="Step 3: 3D Rendering")
-
- def update_video(i):
- nonlocal initialized, image, lines, points
-
- for n, ax in enumerate(ax_3d):
- ax.set_xlim3d([-radius / 2 + trajectories[n][i, 0], radius / 2 + trajectories[n][i, 0]])
- ax.set_ylim3d([-radius / 2 + trajectories[n][i, 1], radius / 2 + trajectories[n][i, 1]])
-
- # Update 2D poses
- if not initialized:
- image = ax_in.imshow(all_frames[i], aspect='equal')
-
- for j, j_parent in enumerate(parents):
- if j_parent == -1:
- continue
-
- # if len(parents) == keypoints.shape[1] and 1 == 2:
- # # Draw skeleton only if keypoints match (otherwise we don't have the parents definition)
- # lines.append(ax_in.plot([keypoints[i, j, 0], keypoints[i, j_parent, 0]],
- # [keypoints[i, j, 1], keypoints[i, j_parent, 1]], color='pink'))
-
- col = 'red' if j in skeleton.joints_right() else 'black'
- for n, ax in enumerate(ax_3d):
- pos = poses[n][i]
- lines_3d[n].append(ax.plot([pos[j, 0], pos[j_parent, 0]],
- [pos[j, 1], pos[j_parent, 1]],
- [pos[j, 2], pos[j_parent, 2]], zdir='z', c=col))
-
- points = ax_in.scatter(*keypoints[i].T, 5, color='red', edgecolors='white', zorder=10)
-
- initialized = True
- else:
- image.set_data(all_frames[i])
-
- for j, j_parent in enumerate(parents):
- if j_parent == -1:
- continue
-
- # if len(parents) == keypoints.shape[1] and 1 == 2:
- # lines[j - 1][0].set_data([keypoints[i, j, 0], keypoints[i, j_parent, 0]],
- # [keypoints[i, j, 1], keypoints[i, j_parent, 1]])
-
- for n, ax in enumerate(ax_3d):
- pos = poses[n][i]
- lines_3d[n][j - 1][0].set_xdata(np.array([pos[j, 0], pos[j_parent, 0]])) # Hotfix matplotlib's bug. https://github.com/matplotlib/matplotlib/pull/20555
- lines_3d[n][j - 1][0].set_ydata([pos[j, 1], pos[j_parent, 1]])
- lines_3d[n][j - 1][0].set_3d_properties([pos[j, 2], pos[j_parent, 2]], zdir='z')
-
- points.set_offsets(keypoints[i])
-
- pbar.update()
- # probar.update()
-
- fig.tight_layout()
-
- anim = FuncAnimation(fig, update_video, frames=limit, interval=1000.0 / fps, repeat=False)
- if output.endswith('.mp4'):
- Writer = writers['ffmpeg']
- writer = Writer(fps=fps, metadata={}, bitrate=bitrate)
- anim.save(output, writer=writer)
- elif output.endswith('.gif'):
- anim.save(output, dpi=60, writer='imagemagick')
- else:
- raise ValueError('Unsupported output format (only .mp4 and .gif are supported)')
- pbar.close()
- plt.close()
-
-
-def render_animation_test(keypoints, poses, skeleton, fps, bitrate, azim, output, viewport, limit=-1, downsample=1, size=6, input_video_frame=None,
- input_video_skip=0, num=None):
- t0 = ckpt_time()
- fig = plt.figure(figsize=(12, 6))
- canvas = FigureCanvas(fig)
- fig.add_subplot(121)
- plt.imshow(input_video_frame)
- # 3D
- ax = fig.add_subplot(122, projection='3d')
- ax.view_init(elev=15., azim=azim)
-    # set axis ranges
- radius = 1.7
- ax.set_xlim3d([-radius / 2, radius / 2])
- ax.set_zlim3d([0, radius])
- ax.set_ylim3d([-radius / 2, radius / 2])
- ax.set_aspect('equal')
-    # hide axis tick labels
- ax.set_xticklabels([])
- ax.set_yticklabels([])
- ax.set_zticklabels([])
- ax.dist = 7.5
-
- # lxy add
- ax.set_xlabel('X Label')
- ax.set_ylabel('Y Label')
- ax.set_zlabel('Z Label')
-
- # array([-1, 0, 1, 2, 0, 4, 5, 0, 7, 8, 9, 8, 11, 12, 8, 14, 15])
- parents = skeleton.parents()
-
- pos = poses['Reconstruction'][-1]
- _, t1 = ckpt_time(t0, desc='1 ')
- for j, j_parent in enumerate(parents):
- if j_parent == -1:
- continue
-
- if len(parents) == keypoints.shape[1]:
- color_pink = 'pink'
- if j == 1 or j == 2:
- color_pink = 'black'
-
- col = 'red' if j in skeleton.joints_right() else 'black'
-        # draw the 3D skeleton
- ax.plot([pos[j, 0], pos[j_parent, 0]],
- [pos[j, 1], pos[j_parent, 1]],
- [pos[j, 2], pos[j_parent, 2]], zdir='z', c=col)
-
- # plt.savefig('test/3Dimage_{}.png'.format(1000+num))
- width, height = fig.get_size_inches() * fig.get_dpi()
- _, t2 = ckpt_time(t1, desc='2 ')
- canvas.draw() # draw the canvas, cache the renderer
-    image = np.frombuffer(canvas.tostring_rgb(), dtype='uint8').reshape(int(height), int(width), 3)  # frombuffer avoids the deprecated np.fromstring
- cv2.imshow('im', image)
- cv2.waitKey(5)
- _, t3 = ckpt_time(t2, desc='3 ')
- return image
diff --git a/spaces/Sense-X/uniformer_video_demo/transforms.py b/spaces/Sense-X/uniformer_video_demo/transforms.py
deleted file mode 100644
index 2483fdf8569e25978b922774e84cc2244315fe61..0000000000000000000000000000000000000000
--- a/spaces/Sense-X/uniformer_video_demo/transforms.py
+++ /dev/null
@@ -1,443 +0,0 @@
-import torchvision
-import random
-from PIL import Image, ImageOps
-import numpy as np
-import numbers
-import math
-import torch
-
-
-class GroupRandomCrop(object):
- def __init__(self, size):
- if isinstance(size, numbers.Number):
- self.size = (int(size), int(size))
- else:
- self.size = size
-
- def __call__(self, img_group):
-
- w, h = img_group[0].size
- th, tw = self.size
-
- out_images = list()
-
- x1 = random.randint(0, w - tw)
- y1 = random.randint(0, h - th)
-
- for img in img_group:
- assert(img.size[0] == w and img.size[1] == h)
- if w == tw and h == th:
- out_images.append(img)
- else:
- out_images.append(img.crop((x1, y1, x1 + tw, y1 + th)))
-
- return out_images
-
-
-class MultiGroupRandomCrop(object):
- def __init__(self, size, groups=1):
- if isinstance(size, numbers.Number):
- self.size = (int(size), int(size))
- else:
- self.size = size
- self.groups = groups
-
- def __call__(self, img_group):
-
- w, h = img_group[0].size
- th, tw = self.size
-
- out_images = list()
-
- for i in range(self.groups):
- x1 = random.randint(0, w - tw)
- y1 = random.randint(0, h - th)
-
- for img in img_group:
- assert(img.size[0] == w and img.size[1] == h)
- if w == tw and h == th:
- out_images.append(img)
- else:
- out_images.append(img.crop((x1, y1, x1 + tw, y1 + th)))
-
- return out_images
-
-
-class GroupCenterCrop(object):
- def __init__(self, size):
- self.worker = torchvision.transforms.CenterCrop(size)
-
- def __call__(self, img_group):
- return [self.worker(img) for img in img_group]
-
-
-class GroupRandomHorizontalFlip(object):
- """Randomly horizontally flips the given PIL.Image with a probability of 0.5
- """
-
- def __init__(self, is_flow=False):
- self.is_flow = is_flow
-
- def __call__(self, img_group, is_flow=False):
- v = random.random()
- if v < 0.5:
- ret = [img.transpose(Image.FLIP_LEFT_RIGHT) for img in img_group]
- if self.is_flow:
- for i in range(0, len(ret), 2):
- # invert flow pixel values when flipping
- ret[i] = ImageOps.invert(ret[i])
- return ret
- else:
- return img_group
-
-
-class GroupNormalize(object):
- def __init__(self, mean, std):
- self.mean = mean
- self.std = std
-
- def __call__(self, tensor):
- rep_mean = self.mean * (tensor.size()[0] // len(self.mean))
- rep_std = self.std * (tensor.size()[0] // len(self.std))
-
- # TODO: make efficient
- for t, m, s in zip(tensor, rep_mean, rep_std):
- t.sub_(m).div_(s)
-
- return tensor
-
-
-class GroupScale(object):
- """ Rescales the input PIL.Image to the given 'size'.
- 'size' will be the size of the smaller edge.
- For example, if height > width, then image will be
- rescaled to (size * height / width, size)
- size: size of the smaller edge
- interpolation: Default: PIL.Image.BILINEAR
- """
-
- def __init__(self, size, interpolation=Image.BILINEAR):
- self.worker = torchvision.transforms.Resize(size, interpolation)
-
- def __call__(self, img_group):
- return [self.worker(img) for img in img_group]
-
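-# Usage note (illustrative sizes): GroupScale(256) resizes every frame of a clip so
-# that its shorter edge becomes 256 px, e.g. a 640x480 frame ends up roughly 341x256.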
-
-class GroupOverSample(object):
- def __init__(self, crop_size, scale_size=None, flip=True):
- self.crop_size = crop_size if not isinstance(
- crop_size, int) else (crop_size, crop_size)
-
- if scale_size is not None:
- self.scale_worker = GroupScale(scale_size)
- else:
- self.scale_worker = None
- self.flip = flip
-
- def __call__(self, img_group):
-
- if self.scale_worker is not None:
- img_group = self.scale_worker(img_group)
-
- image_w, image_h = img_group[0].size
- crop_w, crop_h = self.crop_size
-
- offsets = GroupMultiScaleCrop.fill_fix_offset(
- False, image_w, image_h, crop_w, crop_h)
- oversample_group = list()
- for o_w, o_h in offsets:
- normal_group = list()
- flip_group = list()
- for i, img in enumerate(img_group):
- crop = img.crop((o_w, o_h, o_w + crop_w, o_h + crop_h))
- normal_group.append(crop)
- flip_crop = crop.copy().transpose(Image.FLIP_LEFT_RIGHT)
-
- if img.mode == 'L' and i % 2 == 0:
- flip_group.append(ImageOps.invert(flip_crop))
- else:
- flip_group.append(flip_crop)
-
- oversample_group.extend(normal_group)
- if self.flip:
- oversample_group.extend(flip_group)
- return oversample_group
-
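-# Note: with flip=True, GroupOverSample yields 10 crops per input frame
-# (5 fixed offsets: the 4 corners and the center, each in original and flipped form).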
-
-class GroupFullResSample(object):
- def __init__(self, crop_size, scale_size=None, flip=True):
- self.crop_size = crop_size if not isinstance(
- crop_size, int) else (crop_size, crop_size)
-
- if scale_size is not None:
- self.scale_worker = GroupScale(scale_size)
- else:
- self.scale_worker = None
- self.flip = flip
-
- def __call__(self, img_group):
-
- if self.scale_worker is not None:
- img_group = self.scale_worker(img_group)
-
- image_w, image_h = img_group[0].size
- crop_w, crop_h = self.crop_size
-
- w_step = (image_w - crop_w) // 4
- h_step = (image_h - crop_h) // 4
-
- offsets = list()
- offsets.append((0 * w_step, 2 * h_step)) # left
- offsets.append((4 * w_step, 2 * h_step)) # right
- offsets.append((2 * w_step, 2 * h_step)) # center
-
- oversample_group = list()
- for o_w, o_h in offsets:
- normal_group = list()
- flip_group = list()
- for i, img in enumerate(img_group):
- crop = img.crop((o_w, o_h, o_w + crop_w, o_h + crop_h))
- normal_group.append(crop)
- if self.flip:
- flip_crop = crop.copy().transpose(Image.FLIP_LEFT_RIGHT)
-
- if img.mode == 'L' and i % 2 == 0:
- flip_group.append(ImageOps.invert(flip_crop))
- else:
- flip_group.append(flip_crop)
-
- oversample_group.extend(normal_group)
- oversample_group.extend(flip_group)
- return oversample_group
-
-
-class GroupMultiScaleCrop(object):
-
- def __init__(self, input_size, scales=None, max_distort=1,
- fix_crop=True, more_fix_crop=True):
- self.scales = scales if scales is not None else [1, .875, .75, .66]
- self.max_distort = max_distort
- self.fix_crop = fix_crop
- self.more_fix_crop = more_fix_crop
- self.input_size = input_size if not isinstance(input_size, int) else [
- input_size, input_size]
- self.interpolation = Image.BILINEAR
-
- def __call__(self, img_group):
-
- im_size = img_group[0].size
-
- crop_w, crop_h, offset_w, offset_h = self._sample_crop_size(im_size)
- crop_img_group = [
- img.crop(
- (offset_w,
- offset_h,
- offset_w +
- crop_w,
- offset_h +
- crop_h)) for img in img_group]
- ret_img_group = [img.resize((self.input_size[0], self.input_size[1]), self.interpolation)
- for img in crop_img_group]
- return ret_img_group
-
- def _sample_crop_size(self, im_size):
- image_w, image_h = im_size[0], im_size[1]
-
- # find a crop size
- base_size = min(image_w, image_h)
- crop_sizes = [int(base_size * x) for x in self.scales]
- crop_h = [
- self.input_size[1] if abs(
- x - self.input_size[1]) < 3 else x for x in crop_sizes]
- crop_w = [
- self.input_size[0] if abs(
- x - self.input_size[0]) < 3 else x for x in crop_sizes]
-
- pairs = []
- for i, h in enumerate(crop_h):
- for j, w in enumerate(crop_w):
- if abs(i - j) <= self.max_distort:
- pairs.append((w, h))
-
- crop_pair = random.choice(pairs)
- if not self.fix_crop:
- w_offset = random.randint(0, image_w - crop_pair[0])
- h_offset = random.randint(0, image_h - crop_pair[1])
- else:
- w_offset, h_offset = self._sample_fix_offset(
- image_w, image_h, crop_pair[0], crop_pair[1])
-
- return crop_pair[0], crop_pair[1], w_offset, h_offset
-
- def _sample_fix_offset(self, image_w, image_h, crop_w, crop_h):
- offsets = self.fill_fix_offset(
- self.more_fix_crop, image_w, image_h, crop_w, crop_h)
- return random.choice(offsets)
-
- @staticmethod
- def fill_fix_offset(more_fix_crop, image_w, image_h, crop_w, crop_h):
- w_step = (image_w - crop_w) // 4
- h_step = (image_h - crop_h) // 4
-
- ret = list()
- ret.append((0, 0)) # upper left
- ret.append((4 * w_step, 0)) # upper right
- ret.append((0, 4 * h_step)) # lower left
- ret.append((4 * w_step, 4 * h_step)) # lower right
- ret.append((2 * w_step, 2 * h_step)) # center
-
- if more_fix_crop:
- ret.append((0, 2 * h_step)) # center left
- ret.append((4 * w_step, 2 * h_step)) # center right
- ret.append((2 * w_step, 4 * h_step)) # lower center
- ret.append((2 * w_step, 0 * h_step)) # upper center
-
- ret.append((1 * w_step, 1 * h_step)) # upper left quarter
- ret.append((3 * w_step, 1 * h_step)) # upper right quarter
- ret.append((1 * w_step, 3 * h_step)) # lower left quarter
-            ret.append((3 * w_step, 3 * h_step))  # lower right quarter
-
- return ret
-
-
-class GroupRandomSizedCrop(object):
- """Random crop the given PIL.Image to a random size of (0.08 to 1.0) of the original size
-    and a random aspect ratio of 3/4 to 4/3 of the original aspect ratio
- This is popularly used to train the Inception networks
- size: size of the smaller edge
- interpolation: Default: PIL.Image.BILINEAR
- """
-
- def __init__(self, size, interpolation=Image.BILINEAR):
- self.size = size
- self.interpolation = interpolation
-
- def __call__(self, img_group):
- for attempt in range(10):
- area = img_group[0].size[0] * img_group[0].size[1]
- target_area = random.uniform(0.08, 1.0) * area
- aspect_ratio = random.uniform(3. / 4, 4. / 3)
-
- w = int(round(math.sqrt(target_area * aspect_ratio)))
- h = int(round(math.sqrt(target_area / aspect_ratio)))
-
- if random.random() < 0.5:
- w, h = h, w
-
- if w <= img_group[0].size[0] and h <= img_group[0].size[1]:
- x1 = random.randint(0, img_group[0].size[0] - w)
- y1 = random.randint(0, img_group[0].size[1] - h)
- found = True
- break
- else:
- found = False
- x1 = 0
- y1 = 0
-
- if found:
- out_group = list()
- for img in img_group:
- img = img.crop((x1, y1, x1 + w, y1 + h))
- assert(img.size == (w, h))
- out_group.append(
- img.resize(
- (self.size, self.size), self.interpolation))
- return out_group
- else:
- # Fallback
- scale = GroupScale(self.size, interpolation=self.interpolation)
- crop = GroupRandomCrop(self.size)
- return crop(scale(img_group))
-
-
-class ConvertDataFormat(object):
- def __init__(self, model_type):
- self.model_type = model_type
-
- def __call__(self, images):
- if self.model_type == '2D':
- return images
- tc, h, w = images.size()
- t = tc // 3
- images = images.view(t, 3, h, w)
- images = images.permute(1, 0, 2, 3)
- return images
-
-
-class Stack(object):
-
- def __init__(self, roll=False):
- self.roll = roll
-
- def __call__(self, img_group):
- if img_group[0].mode == 'L':
- return np.concatenate([np.expand_dims(x, 2)
- for x in img_group], axis=2)
- elif img_group[0].mode == 'RGB':
- if self.roll:
- return np.concatenate([np.array(x)[:, :, ::-1]
- for x in img_group], axis=2)
- else:
- #print(np.concatenate(img_group, axis=2).shape)
- # print(img_group[0].shape)
- return np.concatenate(img_group, axis=2)
-
-
-class ToTorchFormatTensor(object):
- """ Converts a PIL.Image (RGB) or numpy.ndarray (H x W x C) in the range [0, 255]
- to a torch.FloatTensor of shape (C x H x W) in the range [0.0, 1.0] """
-
- def __init__(self, div=True):
- self.div = div
-
- def __call__(self, pic):
- if isinstance(pic, np.ndarray):
- # handle numpy array
- img = torch.from_numpy(pic).permute(2, 0, 1).contiguous()
- else:
- # handle PIL Image
- img = torch.ByteTensor(
- torch.ByteStorage.from_buffer(
- pic.tobytes()))
- img = img.view(pic.size[1], pic.size[0], len(pic.mode))
- # put it from HWC to CHW format
- # yikes, this transpose takes 80% of the loading time/CPU
- img = img.transpose(0, 1).transpose(0, 2).contiguous()
- return img.float().div(255) if self.div else img.float()
-
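-# For example (illustrative shapes): a stacked uint8 array of shape (224, 224, 9)
-# produced by Stack() from a 3-frame RGB clip becomes a float tensor of shape
-# (9, 224, 224) with values scaled to [0.0, 1.0].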
-
-class IdentityTransform(object):
-
- def __call__(self, data):
- return data
-
-
-if __name__ == "__main__":
- trans = torchvision.transforms.Compose([
- GroupScale(256),
- GroupRandomCrop(224),
- Stack(),
- ToTorchFormatTensor(),
- GroupNormalize(
- mean=[.485, .456, .406],
- std=[.229, .224, .225]
- )]
- )
-
- im = Image.open('../tensorflow-model-zoo.torch/lena_299.png')
-
- color_group = [im] * 3
- rst = trans(color_group)
-
- gray_group = [im.convert('L')] * 9
- gray_rst = trans(gray_group)
-
- trans2 = torchvision.transforms.Compose([
- GroupRandomSizedCrop(256),
- Stack(),
- ToTorchFormatTensor(),
- GroupNormalize(
- mean=[.485, .456, .406],
- std=[.229, .224, .225])
- ])
- print(trans2(color_group))
diff --git a/spaces/ServerX/PorcoDiaz/venv.sh b/spaces/ServerX/PorcoDiaz/venv.sh
deleted file mode 100644
index aa230992e892292cb8aa5924ecdafc5758f14e95..0000000000000000000000000000000000000000
--- a/spaces/ServerX/PorcoDiaz/venv.sh
+++ /dev/null
@@ -1 +0,0 @@
-python3.8 -m venv .venv
diff --git a/spaces/Shakeb100/GroomingGenie_AI/inpainting.py b/spaces/Shakeb100/GroomingGenie_AI/inpainting.py
deleted file mode 100644
index 798c3fd252f826762aee6970f867eee537249db8..0000000000000000000000000000000000000000
--- a/spaces/Shakeb100/GroomingGenie_AI/inpainting.py
+++ /dev/null
@@ -1,194 +0,0 @@
-import inspect
-from typing import List, Optional, Union
-
-import numpy as np
-import torch
-
-import PIL
-from diffusers import AutoencoderKL, DDIMScheduler, DiffusionPipeline, PNDMScheduler, UNet2DConditionModel
-from diffusers.pipelines.stable_diffusion import StableDiffusionSafetyChecker
-from tqdm.auto import tqdm
-from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
-
-
-def preprocess_image(image):
- w, h = image.size
- w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32
- image = image.resize((w, h), resample=PIL.Image.LANCZOS)
- image = np.array(image).astype(np.float32) / 255.0
- image = image[None].transpose(0, 3, 1, 2)
- image = torch.from_numpy(image)
- return 2.0 * image - 1.0
-
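-# For example (illustrative sizes): a 513x769 PIL image is snapped down to 512x768,
-# then converted to a float tensor of shape (1, 3, 768, 512) with values in [-1.0, 1.0].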
-
-def preprocess_mask(mask):
- mask = mask.convert("L")
- w, h = mask.size
- w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32
- mask = mask.resize((w // 8, h // 8), resample=PIL.Image.NEAREST)
- mask = np.array(mask).astype(np.float32) / 255.0
- mask = np.tile(mask, (4, 1, 1))
-    mask = mask[None].transpose(0, 1, 2, 3)  # add a batch dimension; the (0, 1, 2, 3) transpose is an identity permutation
- mask = 1 - mask # repaint white, keep black
- mask = torch.from_numpy(mask)
- return mask
-
-class StableDiffusionInpaintingPipeline(DiffusionPipeline):
- def __init__(
- self,
- vae: AutoencoderKL,
- text_encoder: CLIPTextModel,
- tokenizer: CLIPTokenizer,
- unet: UNet2DConditionModel,
- scheduler: Union[DDIMScheduler, PNDMScheduler],
- safety_checker: StableDiffusionSafetyChecker,
- feature_extractor: CLIPFeatureExtractor,
- ):
- super().__init__()
- scheduler = scheduler.set_format("pt")
- self.register_modules(
- vae=vae,
- text_encoder=text_encoder,
- tokenizer=tokenizer,
- unet=unet,
- scheduler=scheduler,
- safety_checker=safety_checker,
- feature_extractor=feature_extractor,
- )
-
- @torch.no_grad()
- def __call__(
- self,
- prompt: Union[str, List[str]],
- init_image: torch.FloatTensor,
- mask_image: torch.FloatTensor,
- strength: float = 0.8,
- num_inference_steps: Optional[int] = 50,
- guidance_scale: Optional[float] = 7.5,
- eta: Optional[float] = 0.0,
- generator: Optional[torch.Generator] = None,
- output_type: Optional[str] = "pil",
- ):
-
- if isinstance(prompt, str):
- batch_size = 1
- elif isinstance(prompt, list):
- batch_size = len(prompt)
- else:
- raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
-
- if strength < 0 or strength > 1:
- raise ValueError(f"The value of strength should in [0.0, 1.0] but is {strength}")
-
-        # set timesteps
-        accepts_offset = "offset" in set(inspect.signature(self.scheduler.set_timesteps).parameters.keys())
-        extra_set_kwargs = {}
-        offset = 0
-        if accepts_offset:
-            offset = 1
-            extra_set_kwargs["offset"] = 1
-
-        self.scheduler.set_timesteps(num_inference_steps, **extra_set_kwargs)
-
-        # preprocess image
-        init_image = preprocess_image(init_image).to(self.device)
-
-        # encode the init image into latents and scale the latents
-        init_latent_dist = self.vae.encode(init_image).latent_dist
-        init_latents = init_latent_dist.sample(generator=generator)
-        init_latents = 0.18215 * init_latents
-
-        # prepare init_latents noise to latents
-        init_latents = torch.cat([init_latents] * batch_size)
-        init_latents_orig = init_latents
-
-        # preprocess mask
-        mask = preprocess_mask(mask_image).to(self.device)
-        mask = torch.cat([mask] * batch_size)
-
-        # check sizes
-        if not mask.shape == init_latents.shape:
- raise ValueError(f"The mask and init_image should be the same size!")
-
-        # get the original timestep using init_timestep
-        init_timestep = int(num_inference_steps * strength) + offset
-        init_timestep = min(init_timestep, num_inference_steps)
-        timesteps = self.scheduler.timesteps[-init_timestep]
-        timesteps = torch.tensor([timesteps] * batch_size, dtype=torch.long, device=self.device)
-
-        # add noise to latents using the timesteps
-        noise = torch.randn(init_latents.shape, generator=generator, device=self.device)
-        init_latents = self.scheduler.add_noise(init_latents, noise, timesteps)
-
-        # get prompt text embeddings
-        text_input = self.tokenizer(
-            prompt,
-            padding="max_length",
-            max_length=self.tokenizer.model_max_length,
-            truncation=True,
-            return_tensors="pt",
-        )
-        text_embeddings = self.text_encoder(text_input.input_ids.to(self.device))[0]
-
-        # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
-        # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
-        # corresponds to doing no classifier free guidance.
-        do_classifier_free_guidance = guidance_scale > 1.0
-        # get unconditional embeddings for classifier free guidance
-        if do_classifier_free_guidance:
-            max_length = text_input.input_ids.shape[-1]
-            uncond_input = self.tokenizer(
-                [""] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt"
-            )
-            uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
-
-            # For classifier free guidance, we need to do two forward passes.
-            # Here we concatenate the unconditional and text embeddings into a single batch
-            # to avoid doing two forward passes
-            text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
-
-        # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
-        # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
-        # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
-        # and should be between [0, 1]
-        accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
-        extra_step_kwargs = {}
-        if accepts_eta:
-            extra_step_kwargs["eta"] = eta
-
-        latents = init_latents
-        t_start = max(num_inference_steps - init_timestep + offset, 0)
-        for i, t in tqdm(enumerate(self.scheduler.timesteps[t_start:])):
-            # expand the latents if we are doing classifier free guidance
-            latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
-
-            # predict the noise residual
-            noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings)["sample"]
-
-            # perform guidance
-            if do_classifier_free_guidance:
-                noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
-                noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
-
-            # compute the previous noisy sample x_t -> x_t-1
-            latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs)["prev_sample"]
-
-            # masking
-            init_latents_proper = self.scheduler.add_noise(init_latents_orig, noise, t)
-            latents = (init_latents_proper * mask) + (latents * (1 - mask))
-
-        # scale and decode the image latents with vae
-        latents = 1 / 0.18215 * latents
-        image = self.vae.decode(latents).sample
-
-        image = (image / 2 + 0.5).clamp(0, 1)
-        image = image.cpu().permute(0, 2, 3, 1).numpy()
-
-        # run safety checker
-        safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(self.device)
-        image, has_nsfw_concept = self.safety_checker(images=image, clip_input=safety_checker_input.pixel_values)
-
- if output_type == "pil":
- image = self.numpy_to_pil(image)
-
- return {"sample": image, "nsfw_content_detected": has_nsfw_concept}
\ No newline at end of file
diff --git a/spaces/Shawn37/UTR_LM/esm/model/esm2_secondarystructure.py b/spaces/Shawn37/UTR_LM/esm/model/esm2_secondarystructure.py
deleted file mode 100644
index a5f08e7b98a66213d9097b44002aa0f865feb9d8..0000000000000000000000000000000000000000
--- a/spaces/Shawn37/UTR_LM/esm/model/esm2_secondarystructure.py
+++ /dev/null
@@ -1,179 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from typing import Union
-import torch
-import torch.nn as nn
-
-import esm
-from esm.modules import ContactPredictionHead, ESM1bLayerNorm, RobertaLMHead, TransformerLayer
-# This module defines a PyTorch model named ESM2 (a subclass of nn.Module). __init__ sets hyperparameters such as num_layers, embed_dim and attention_heads, and builds the submodules: the token embedding layer embed_tokens, a stack of TransformerLayer blocks, a ContactPredictionHead for contact prediction, and the linear heads lm_head, supervised_linear and structure_linear. The forward pass takes a token sequence `tokens` and returns predicted logits together with optional representations, attentions and contacts.
-
-class ESM2(nn.Module):
- def __init__(
- self,
- num_layers: int = 33,
- embed_dim: int = 1280,
- attention_heads: int = 20,
- alphabet: Union[esm.data.Alphabet, str] = "ESM-1b",
- token_dropout: bool = True,
- ):
- super().__init__()
- self.num_layers = num_layers
- self.embed_dim = embed_dim
- self.attention_heads = attention_heads
- if not isinstance(alphabet, esm.data.Alphabet):
- alphabet = esm.data.Alphabet.from_architecture(alphabet)
- self.alphabet = alphabet
- self.alphabet_size = len(alphabet)
- self.padding_idx = alphabet.padding_idx
- self.mask_idx = alphabet.mask_idx
- self.cls_idx = alphabet.cls_idx
- self.eos_idx = alphabet.eos_idx
- self.prepend_bos = alphabet.prepend_bos
- self.append_eos = alphabet.append_eos
- self.token_dropout = token_dropout
-
- self._init_submodules()
-
- def _init_submodules(self):
- self.embed_scale = 1
- self.embed_tokens = nn.Embedding(
- self.alphabet_size,
- self.embed_dim,
- padding_idx=self.padding_idx,
- )
-
- self.layers = nn.ModuleList(
- [
- TransformerLayer(
- self.embed_dim,
- 4 * self.embed_dim,
- self.attention_heads,
- add_bias_kv=False,
- use_esm1b_layer_norm=True,
- use_rotary_embeddings=True,
- )
- for _ in range(self.num_layers)
- ]
- )
-
- self.contact_head = ContactPredictionHead(
- self.num_layers * self.attention_heads,
- self.prepend_bos,
- self.append_eos,
- eos_idx=self.eos_idx,
- )
- self.emb_layer_norm_after = ESM1bLayerNorm(self.embed_dim)
-
- self.lm_head = RobertaLMHead(
- embed_dim=self.embed_dim,
- output_dim=self.alphabet_size,
- weight=self.embed_tokens.weight,
- )
- self.supervised_linear = nn.Linear(self.embed_dim, 1)
- self.structure_linear = nn.Linear(self.embed_dim, 3)
- def forward(self, tokens, repr_layers=[], need_head_weights=True, return_contacts=True, return_representation=True, return_attentions_symm = False, return_attentions = False):
- if return_contacts:
- need_head_weights = True
-
- assert tokens.ndim == 2
- padding_mask = tokens.eq(self.padding_idx) # B, T
-
- x = self.embed_scale * self.embed_tokens(tokens)
-
- if self.token_dropout:
- x.masked_fill_((tokens == self.mask_idx).unsqueeze(-1), 0.0)
- #print(f'tokens = {tokens}')
- #print(f'self.mask_idx = {self.mask_idx}')
- #print('x.shape = ', x.shape)
- # x: B x T x C
- mask_ratio_train = 0.15 * 0.8
- src_lengths = (~padding_mask).sum(-1)
- #print(f'mask_ratio_train = {mask_ratio_train}')
- #print(f'padding_mask = {padding_mask}')
- #print(f'src_lengths = {src_lengths}')
- mask_ratio_observed = (tokens == self.mask_idx).sum(-1).to(x.dtype) / src_lengths
- #print('mask_ratio_observed = ',mask_ratio_observed)
- x = x * (1 - mask_ratio_train) / (1 - mask_ratio_observed)[:, None, None]
- #print(f'x.shape = {x.shape}:\n', x)
- if padding_mask is not None:
- x = x * (1 - padding_mask.unsqueeze(-1).type_as(x))
- #print(f'x.shape = {x.shape}:\n', x)
- repr_layers = set(repr_layers)
- hidden_representations = {}
- if 0 in repr_layers:
- hidden_representations[0] = x
-
- if need_head_weights:
- attn_weights = []
-
- # (B, T, E) => (T, B, E)
- x = x.transpose(0, 1)
-
- if not padding_mask.any():
- padding_mask = None
-
- for layer_idx, layer in enumerate(self.layers):
- x, attn = layer(
- x,
- self_attn_padding_mask=padding_mask,
- need_head_weights=need_head_weights,
- )
- if (layer_idx + 1) in repr_layers:
- hidden_representations[layer_idx + 1] = x.transpose(0, 1)
- if need_head_weights:
- # (H, B, T, T) => (B, H, T, T)
- attn_weights.append(attn.transpose(1, 0))
-# print(x.shape) # 73, 2, 1280
- x = self.emb_layer_norm_after(x)
- x = x.transpose(0, 1) # (T, B, E) => (B, T, E)
-
- # last hidden representation should have layer norm applied
- if (layer_idx + 1) in repr_layers:
- hidden_representations[layer_idx + 1] = x
- x_supervised = self.supervised_linear(x[:,0,:])
- x_structure = self.structure_linear(x)
- x = self.lm_head(x)
-
- if return_representation:
- result = {"logits": x, "logits_supervised": x_supervised, "logits_structure": x_structure, "representations": hidden_representations}
- else:
- result = {"logits": x, "logits_supervised": x_supervised, "logits_structure": x_structure}
- if need_head_weights:
- # attentions: B x L x H x T x T
- attentions = torch.stack(attn_weights, 1)
- if padding_mask is not None:
- attention_mask = 1 - padding_mask.type_as(attentions)
- attention_mask = attention_mask.unsqueeze(1) * attention_mask.unsqueeze(2)
- attentions = attentions * attention_mask[:, None, None, :, :]
- if return_attentions: result["attentions"] = attentions
- if return_contacts:
- attentions_symm, contacts = self.contact_head(tokens, attentions)
- result["contacts"] = contacts
- if return_attentions_symm: result["attentions_symm"] = attentions_symm
-
- return result
-
- def predict_contacts(self, tokens):
- return self(tokens, return_contacts=True)["contacts"]
-
- def predict_symmetric_attentions(self, tokens):
- return self(tokens, return_contacts=True)["attentions_symm"]
-
- def predict_attentions(self, tokens):
- return self(tokens, need_head_weights=True)["attentions"]
-
- def predict_representations(self, tokens):
- return self(tokens, return_representation=True)['representations']
-
- def predict_logits(self, tokens):
- return self(tokens)['logits']
-
- def predict_logits_supervised(self, tokens):
- return self(tokens)['logits_supervised']
-
- def predict_logits_structure(self, tokens):
- return self(tokens)['logits_structure']
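-
-# A minimal usage sketch (hyperparameters and the input sequence are illustrative, and it
-# assumes the fair-esm Alphabet batch converter is available, as suggested by the imports above):
-#
-#   model = ESM2(num_layers=6, embed_dim=320, attention_heads=20)
-#   batch_converter = model.alphabet.get_batch_converter()
-#   _, _, tokens = batch_converter([("seq1", "MKTAYIAKQR")])
-#   out = model(tokens, repr_layers=[6], return_contacts=True)
-#   logits, contacts = out["logits"], out["contacts"]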
diff --git a/spaces/SmonF/Dialogue_summarizer/README.md b/spaces/SmonF/Dialogue_summarizer/README.md
deleted file mode 100644
index 8bc7f6b448eb6e8a2590232843391a5a2475f69b..0000000000000000000000000000000000000000
--- a/spaces/SmonF/Dialogue_summarizer/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Dialogue Summarizer
-emoji: ⚡
-colorFrom: blue
-colorTo: gray
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/SpacesExamples/vscode/on_startup.sh b/spaces/SpacesExamples/vscode/on_startup.sh
deleted file mode 100644
index 448000271bbc7142681947fd1a447772f12ecfff..0000000000000000000000000000000000000000
--- a/spaces/SpacesExamples/vscode/on_startup.sh
+++ /dev/null
@@ -1,5 +0,0 @@
-#!/bin/bash
-# Write some commands here that will run on root user before startup.
-# For example, to clone transformers and install it in dev mode:
-# git clone https://github.com/huggingface/transformers.git
-# cd transformers && pip install -e ".[dev]"
\ No newline at end of file
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/lib/backgroundjobs.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/lib/backgroundjobs.py
deleted file mode 100644
index e7ad51eb6771838001c23e515ecc47d47111a70b..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/lib/backgroundjobs.py
+++ /dev/null
@@ -1,491 +0,0 @@
-# -*- coding: utf-8 -*-
-"""Manage background (threaded) jobs conveniently from an interactive shell.
-
-This module provides a BackgroundJobManager class. This is the main class
-meant for public usage, it implements an object which can create and manage
-new background jobs.
-
-It also provides the actual job classes managed by these BackgroundJobManager
-objects, see their docstrings below.
-
-
-This system was inspired by discussions with B. Granger and the
-BackgroundCommand class described in the book Python Scripting for
-Computational Science, by H. P. Langtangen:
-
-http://folk.uio.no/hpl/scripting
-
-(although ultimately no code from this text was used, as IPython's system is a
-separate implementation).
-
-An example notebook is provided in our documentation illustrating interactive
-use of the system.
-"""
-
-#*****************************************************************************
-# Copyright (C) 2005-2006 Fernando Perez
-#
-# Distributed under the terms of the BSD License. The full license is in
-# the file COPYING, distributed as part of this software.
-#*****************************************************************************
-
-# Code begins
-import sys
-import threading
-
-from IPython import get_ipython
-from IPython.core.ultratb import AutoFormattedTB
-from logging import error, debug
-
-
-class BackgroundJobManager(object):
- """Class to manage a pool of backgrounded threaded jobs.
-
- Below, we assume that 'jobs' is a BackgroundJobManager instance.
-
- Usage summary (see the method docstrings for details):
-
- jobs.new(...) -> start a new job
-
- jobs() or jobs.status() -> print status summary of all jobs
-
- jobs[N] -> returns job number N.
-
- foo = jobs[N].result -> assign to variable foo the result of job N
-
- jobs[N].traceback() -> print the traceback of dead job N
-
- jobs.remove(N) -> remove (finished) job N
-
- jobs.flush() -> remove all finished jobs
-
- As a convenience feature, BackgroundJobManager instances provide the
- utility result and traceback methods which retrieve the corresponding
- information from the jobs list:
-
- jobs.result(N) <--> jobs[N].result
- jobs.traceback(N) <--> jobs[N].traceback()
-
- While this appears minor, it allows you to use tab completion
- interactively on the job manager instance.
- """
-
- def __init__(self):
- # Lists for job management, accessed via a property to ensure they're
-        # up to date.
- self._running = []
- self._completed = []
- self._dead = []
- # A dict of all jobs, so users can easily access any of them
- self.all = {}
- # For reporting
- self._comp_report = []
- self._dead_report = []
- # Store status codes locally for fast lookups
- self._s_created = BackgroundJobBase.stat_created_c
- self._s_running = BackgroundJobBase.stat_running_c
- self._s_completed = BackgroundJobBase.stat_completed_c
- self._s_dead = BackgroundJobBase.stat_dead_c
- self._current_job_id = 0
-
- @property
- def running(self):
- self._update_status()
- return self._running
-
- @property
- def dead(self):
- self._update_status()
- return self._dead
-
- @property
- def completed(self):
- self._update_status()
- return self._completed
-
- def new(self, func_or_exp, *args, **kwargs):
- """Add a new background job and start it in a separate thread.
-
- There are two types of jobs which can be created:
-
- 1. Jobs based on expressions which can be passed to an eval() call.
- The expression must be given as a string. For example:
-
- job_manager.new('myfunc(x,y,z=1)'[,glob[,loc]])
-
- The given expression is passed to eval(), along with the optional
- global/local dicts provided. If no dicts are given, they are
- extracted automatically from the caller's frame.
-
- A Python statement is NOT a valid eval() expression. Basically, you
- can only use as an eval() argument something which can go on the right
- of an '=' sign and be assigned to a variable.
-
- For example,"print 'hello'" is not valid, but '2+3' is.
-
- 2. Jobs given a function object, optionally passing additional
- positional arguments:
-
- job_manager.new(myfunc, x, y)
-
- The function is called with the given arguments.
-
- If you need to pass keyword arguments to your function, you must
- supply them as a dict named kw:
-
- job_manager.new(myfunc, x, y, kw=dict(z=1))
-
- The reason for this asymmetry is that the new() method needs to
- maintain access to its own keywords, and this prevents name collisions
- between arguments to new() and arguments to your own functions.
-
- In both cases, the result is stored in the job.result field of the
- background job object.
-
- You can set `daemon` attribute of the thread by giving the keyword
- argument `daemon`.
-
- Notes and caveats:
-
- 1. All threads running share the same standard output. Thus, if your
- background jobs generate output, it will come out on top of whatever
- you are currently writing. For this reason, background jobs are best
- used with silent functions which simply return their output.
-
- 2. Threads also all work within the same global namespace, and this
- system does not lock interactive variables. So if you send job to the
- background which operates on a mutable object for a long time, and
- start modifying that same mutable object interactively (or in another
- backgrounded job), all sorts of bizarre behaviour will occur.
-
- 3. If a background job is spending a lot of time inside a C extension
- module which does not release the Python Global Interpreter Lock
- (GIL), this will block the IPython prompt. This is simply because the
- Python interpreter can only switch between threads at Python
- bytecodes. While the execution is inside C code, the interpreter must
- simply wait unless the extension module releases the GIL.
-
- 4. There is no way, due to limitations in the Python threads library,
- to kill a thread once it has started."""
-
- if callable(func_or_exp):
- kw = kwargs.get('kw',{})
- job = BackgroundJobFunc(func_or_exp,*args,**kw)
- elif isinstance(func_or_exp, str):
- if not args:
- frame = sys._getframe(1)
- glob, loc = frame.f_globals, frame.f_locals
- elif len(args)==1:
- glob = loc = args[0]
- elif len(args)==2:
- glob,loc = args
- else:
- raise ValueError(
- 'Expression jobs take at most 2 args (globals,locals)')
- job = BackgroundJobExpr(func_or_exp, glob, loc)
- else:
- raise TypeError('invalid args for new job')
-
- if kwargs.get('daemon', False):
- job.daemon = True
- job.num = self._current_job_id
- self._current_job_id += 1
- self.running.append(job)
- self.all[job.num] = job
- debug('Starting job # %s in a separate thread.' % job.num)
- job.start()
- return job
-
- def __getitem__(self, job_key):
- num = job_key if isinstance(job_key, int) else job_key.num
- return self.all[num]
-
- def __call__(self):
- """An alias to self.status(),
-
- This allows you to simply call a job manager instance much like the
- Unix `jobs` shell command."""
-
- return self.status()
-
- def _update_status(self):
- """Update the status of the job lists.
-
- This method moves finished jobs to one of two lists:
- - self.completed: jobs which completed successfully
- - self.dead: jobs which finished but died.
-
- It also copies those jobs to corresponding _report lists. These lists
- are used to report jobs completed/dead since the last update, and are
- then cleared by the reporting function after each call."""
-
- # Status codes
- srun, scomp, sdead = self._s_running, self._s_completed, self._s_dead
- # State lists, use the actual lists b/c the public names are properties
- # that call this very function on access
- running, completed, dead = self._running, self._completed, self._dead
-
- # Now, update all state lists
- for num, job in enumerate(running):
- stat = job.stat_code
- if stat == srun:
- continue
- elif stat == scomp:
- completed.append(job)
- self._comp_report.append(job)
- running[num] = False
- elif stat == sdead:
- dead.append(job)
- self._dead_report.append(job)
- running[num] = False
- # Remove dead/completed jobs from running list
- running[:] = filter(None, running)
-
- def _group_report(self,group,name):
- """Report summary for a given job group.
-
- Return True if the group had any elements."""
-
- if group:
- print('%s jobs:' % name)
- for job in group:
- print('%s : %s' % (job.num,job))
- print()
- return True
-
- def _group_flush(self,group,name):
- """Flush a given job group
-
- Return True if the group had any elements."""
-
- njobs = len(group)
- if njobs:
- plural = {1:''}.setdefault(njobs,'s')
- print('Flushing %s %s job%s.' % (njobs,name,plural))
- group[:] = []
- return True
-
- def _status_new(self):
- """Print the status of newly finished jobs.
-
- Return True if any new jobs are reported.
-
- This call resets its own state every time, so it only reports jobs
- which have finished since the last time it was called."""
-
- self._update_status()
- new_comp = self._group_report(self._comp_report, 'Completed')
- new_dead = self._group_report(self._dead_report,
- 'Dead, call jobs.traceback() for details')
- self._comp_report[:] = []
- self._dead_report[:] = []
- return new_comp or new_dead
-
- def status(self,verbose=0):
- """Print a status of all jobs currently being managed."""
-
- self._update_status()
- self._group_report(self.running,'Running')
- self._group_report(self.completed,'Completed')
- self._group_report(self.dead,'Dead')
- # Also flush the report queues
- self._comp_report[:] = []
- self._dead_report[:] = []
-
- def remove(self,num):
- """Remove a finished (completed or dead) job."""
-
- try:
- job = self.all[num]
- except KeyError:
- error('Job #%s not found' % num)
- else:
- stat_code = job.stat_code
- if stat_code == self._s_running:
- error('Job #%s is still running, it can not be removed.' % num)
- return
- elif stat_code == self._s_completed:
- self.completed.remove(job)
- elif stat_code == self._s_dead:
- self.dead.remove(job)
-
- def flush(self):
- """Flush all finished jobs (completed and dead) from lists.
-
- Running jobs are never flushed.
-
- It first calls _status_new(), to update info. If any jobs have
- completed since the last _status_new() call, the flush operation
- aborts."""
-
- # Remove the finished jobs from the master dict
- alljobs = self.all
- for job in self.completed+self.dead:
- del(alljobs[job.num])
-
- # Now flush these lists completely
- fl_comp = self._group_flush(self.completed, 'Completed')
- fl_dead = self._group_flush(self.dead, 'Dead')
- if not (fl_comp or fl_dead):
- print('No jobs to flush.')
-
- def result(self,num):
- """result(N) -> return the result of job N."""
- try:
- return self.all[num].result
- except KeyError:
- error('Job #%s not found' % num)
-
- def _traceback(self, job):
- num = job if isinstance(job, int) else job.num
- try:
- self.all[num].traceback()
- except KeyError:
- error('Job #%s not found' % num)
-
- def traceback(self, job=None):
- if job is None:
- self._update_status()
- for deadjob in self.dead:
- print("Traceback for: %r" % deadjob)
- self._traceback(deadjob)
- print()
- else:
- self._traceback(job)
-
-
-class BackgroundJobBase(threading.Thread):
- """Base class to build BackgroundJob classes.
-
- The derived classes must implement:
-
- - Their own __init__, since the one here raises NotImplementedError. The
- derived constructor must call self._init() at the end, to provide common
- initialization.
-
- - A strform attribute used in calls to __str__.
-
- - A call() method, which will make the actual execution call and must
- return a value to be held in the 'result' field of the job object.
- """
-
- # Class constants for status, in string and as numerical codes (when
- # updating jobs lists, we don't want to do string comparisons). This will
- # be done at every user prompt, so it has to be as fast as possible
- stat_created = 'Created'; stat_created_c = 0
- stat_running = 'Running'; stat_running_c = 1
- stat_completed = 'Completed'; stat_completed_c = 2
- stat_dead = 'Dead (Exception), call jobs.traceback() for details'
- stat_dead_c = -1
-
- def __init__(self):
- """Must be implemented in subclasses.
-
- Subclasses must call :meth:`_init` for standard initialisation.
- """
- raise NotImplementedError("This class can not be instantiated directly.")
-
- def _init(self):
- """Common initialization for all BackgroundJob objects"""
-
- for attr in ['call','strform']:
- assert hasattr(self,attr), "Missing attribute <%s>" % attr
-
- # The num tag can be set by an external job manager
- self.num = None
-
- self.status = BackgroundJobBase.stat_created
- self.stat_code = BackgroundJobBase.stat_created_c
- self.finished = False
- self.result = ''
-
- # reuse the ipython traceback handler if we can get to it, otherwise
- # make a new one
- try:
- make_tb = get_ipython().InteractiveTB.text
- except:
- make_tb = AutoFormattedTB(mode = 'Context',
- color_scheme='NoColor',
- tb_offset = 1).text
- # Note that the actual API for text() requires the three args to be
- # passed in, so we wrap it in a simple lambda.
- self._make_tb = lambda : make_tb(None, None, None)
-
- # Hold a formatted traceback if one is generated.
- self._tb = None
-
- threading.Thread.__init__(self)
-
- def __str__(self):
- return self.strform
-
- def __repr__(self):
-        return '<BackgroundJob #%d: %s>' % (self.num, self.strform)
-
- def traceback(self):
- print(self._tb)
-
- def run(self):
- try:
- self.status = BackgroundJobBase.stat_running
- self.stat_code = BackgroundJobBase.stat_running_c
- self.result = self.call()
- except:
- self.status = BackgroundJobBase.stat_dead
- self.stat_code = BackgroundJobBase.stat_dead_c
- self.finished = None
-            self.result = ('<BackgroundJob died, call jobs.traceback() for details>')
- self._tb = self._make_tb()
- else:
- self.status = BackgroundJobBase.stat_completed
- self.stat_code = BackgroundJobBase.stat_completed_c
- self.finished = True
-
-
-class BackgroundJobExpr(BackgroundJobBase):
- """Evaluate an expression as a background job (uses a separate thread)."""
-
- def __init__(self, expression, glob=None, loc=None):
- """Create a new job from a string which can be fed to eval().
-
- global/locals dicts can be provided, which will be passed to the eval
- call."""
-
- # fail immediately if the given expression can't be compiled
-        self.code = compile(expression,'<BackgroundJob compilation>','eval')
-
- glob = {} if glob is None else glob
- loc = {} if loc is None else loc
- self.expression = self.strform = expression
- self.glob = glob
- self.loc = loc
- self._init()
-
- def call(self):
- return eval(self.code,self.glob,self.loc)
-
-
-class BackgroundJobFunc(BackgroundJobBase):
- """Run a function call as a background job (uses a separate thread)."""
-
- def __init__(self, func, *args, **kwargs):
- """Create a new job from a callable object.
-
- Any positional arguments and keyword args given to this constructor
- after the initial callable are passed directly to it."""
-
- if not callable(func):
- raise TypeError(
- 'first argument to BackgroundJobFunc must be callable')
-
- self.func = func
- self.args = args
- self.kwargs = kwargs
- # The string form will only include the function passed, because
- # generating string representations of the arguments is a potentially
- # _very_ expensive operation (e.g. with large arrays).
- self.strform = str(func)
- self._init()
-
- def call(self):
- return self.func(*self.args, **self.kwargs)
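-
-
-# A brief interactive sketch (illustrative job only):
-#
-#   jobs = BackgroundJobManager()
-#   job = jobs.new(sum, range(10**6))   # runs sum(range(10**6)) in a background thread
-#   jobs.status()                       # prints the Running/Completed/Dead summary
-#   job.result                          # available once the job has completed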
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/altair/expr/core.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/altair/expr/core.py
deleted file mode 100644
index 9cc258c8b723613453d4033c85035e335a537318..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/altair/expr/core.py
+++ /dev/null
@@ -1,234 +0,0 @@
-from ..utils import SchemaBase
-
-
-class DatumType:
- """An object to assist in building Vega-Lite Expressions"""
-
- def __repr__(self):
- return "datum"
-
- def __getattr__(self, attr):
- if attr.startswith("__") and attr.endswith("__"):
- raise AttributeError(attr)
- return GetAttrExpression("datum", attr)
-
- def __getitem__(self, attr):
- return GetItemExpression("datum", attr)
-
- def __call__(self, datum, **kwargs):
- """Specify a datum for use in an encoding"""
- return dict(datum=datum, **kwargs)
-
-
-datum = DatumType()
-
-
-def _js_repr(val):
- """Return a javascript-safe string representation of val"""
- if val is True:
- return "true"
- elif val is False:
- return "false"
- elif val is None:
- return "null"
- elif isinstance(val, OperatorMixin):
- return val._to_expr()
- else:
- return repr(val)
-
-
-# Designed to work with Expression and VariableParameter
-class OperatorMixin:
- def _to_expr(self):
- return repr(self)
-
- def _from_expr(self, expr):
- return expr
-
- def __add__(self, other):
- comp_value = BinaryExpression("+", self, other)
- return self._from_expr(comp_value)
-
- def __radd__(self, other):
- comp_value = BinaryExpression("+", other, self)
- return self._from_expr(comp_value)
-
- def __sub__(self, other):
- comp_value = BinaryExpression("-", self, other)
- return self._from_expr(comp_value)
-
- def __rsub__(self, other):
- comp_value = BinaryExpression("-", other, self)
- return self._from_expr(comp_value)
-
- def __mul__(self, other):
- comp_value = BinaryExpression("*", self, other)
- return self._from_expr(comp_value)
-
- def __rmul__(self, other):
- comp_value = BinaryExpression("*", other, self)
- return self._from_expr(comp_value)
-
- def __truediv__(self, other):
- comp_value = BinaryExpression("/", self, other)
- return self._from_expr(comp_value)
-
- def __rtruediv__(self, other):
- comp_value = BinaryExpression("/", other, self)
- return self._from_expr(comp_value)
-
- __div__ = __truediv__
-
- __rdiv__ = __rtruediv__
-
- def __mod__(self, other):
- comp_value = BinaryExpression("%", self, other)
- return self._from_expr(comp_value)
-
- def __rmod__(self, other):
- comp_value = BinaryExpression("%", other, self)
- return self._from_expr(comp_value)
-
- def __pow__(self, other):
- # "**" Javascript operator is not supported in all browsers
- comp_value = FunctionExpression("pow", (self, other))
- return self._from_expr(comp_value)
-
- def __rpow__(self, other):
- # "**" Javascript operator is not supported in all browsers
- comp_value = FunctionExpression("pow", (other, self))
- return self._from_expr(comp_value)
-
- def __neg__(self):
- comp_value = UnaryExpression("-", self)
- return self._from_expr(comp_value)
-
- def __pos__(self):
- comp_value = UnaryExpression("+", self)
- return self._from_expr(comp_value)
-
- # comparison operators
-
- def __eq__(self, other):
- comp_value = BinaryExpression("===", self, other)
- return self._from_expr(comp_value)
-
- def __ne__(self, other):
- comp_value = BinaryExpression("!==", self, other)
- return self._from_expr(comp_value)
-
- def __gt__(self, other):
- comp_value = BinaryExpression(">", self, other)
- return self._from_expr(comp_value)
-
- def __lt__(self, other):
- comp_value = BinaryExpression("<", self, other)
- return self._from_expr(comp_value)
-
- def __ge__(self, other):
- comp_value = BinaryExpression(">=", self, other)
- return self._from_expr(comp_value)
-
- def __le__(self, other):
- comp_value = BinaryExpression("<=", self, other)
- return self._from_expr(comp_value)
-
- def __abs__(self):
- comp_value = FunctionExpression("abs", (self,))
- return self._from_expr(comp_value)
-
- # logical operators
-
- def __and__(self, other):
- comp_value = BinaryExpression("&&", self, other)
- return self._from_expr(comp_value)
-
- def __rand__(self, other):
- comp_value = BinaryExpression("&&", other, self)
- return self._from_expr(comp_value)
-
- def __or__(self, other):
- comp_value = BinaryExpression("||", self, other)
- return self._from_expr(comp_value)
-
- def __ror__(self, other):
- comp_value = BinaryExpression("||", other, self)
- return self._from_expr(comp_value)
-
- def __invert__(self):
- comp_value = UnaryExpression("!", self)
- return self._from_expr(comp_value)
-
-
-class Expression(OperatorMixin, SchemaBase):
- """Expression
-
- Base object for enabling build-up of Javascript expressions using
- a Python syntax. Calling ``repr(obj)`` will return a Javascript
- representation of the object and the operations it encodes.
- """
-
- _schema = {"type": "string"}
-
- def to_dict(self, *args, **kwargs):
- return repr(self)
-
- def __setattr__(self, attr, val):
- # We don't need the setattr magic defined in SchemaBase
- return object.__setattr__(self, attr, val)
-
- # item access
- def __getitem__(self, val):
- return GetItemExpression(self, val)
-
-
-class UnaryExpression(Expression):
- def __init__(self, op, val):
- super(UnaryExpression, self).__init__(op=op, val=val)
-
- def __repr__(self):
- return "({op}{val})".format(op=self.op, val=_js_repr(self.val))
-
-
-class BinaryExpression(Expression):
- def __init__(self, op, lhs, rhs):
- super(BinaryExpression, self).__init__(op=op, lhs=lhs, rhs=rhs)
-
- def __repr__(self):
- return "({lhs} {op} {rhs})".format(
- op=self.op, lhs=_js_repr(self.lhs), rhs=_js_repr(self.rhs)
- )
-
-
-class FunctionExpression(Expression):
- def __init__(self, name, args):
- super(FunctionExpression, self).__init__(name=name, args=args)
-
- def __repr__(self):
- args = ",".join(_js_repr(arg) for arg in self.args)
- return "{name}({args})".format(name=self.name, args=args)
-
-
-class ConstExpression(Expression):
- def __init__(self, name, doc):
- self.__doc__ = """{}: {}""".format(name, doc)
- super(ConstExpression, self).__init__(name=name, doc=doc)
-
- def __repr__(self):
- return str(self.name)
-
-
-class GetAttrExpression(Expression):
- def __init__(self, group, name):
- super(GetAttrExpression, self).__init__(group=group, name=name)
-
- def __repr__(self):
- return "{}.{}".format(self.group, self.name)
-
-
-class GetItemExpression(Expression):
- def __init__(self, group, name):
- super(GetItemExpression, self).__init__(group=group, name=name)
-
- def __repr__(self):
- return "{}[{!r}]".format(self.group, self.name)
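Taken together, the classes above let ordinary Python operators build a JavaScript expression tree whose `repr` is the Vega-Lite expression string; note that `**` is routed through the `pow()` function, as the comments in `OperatorMixin` explain. A small sketch with invented column names:

```python
from altair.expr.core import datum

# Arithmetic and comparisons nest into parenthesized binary expressions.
expr = (datum.price * datum.quantity) >= 100
print(repr(expr))     # ((datum.price * datum.quantity) >= 100)

# Exponentiation becomes a pow() function call instead of the JS ** operator.
scaled = datum.score ** 2
print(repr(scaled))   # pow(datum.score,2)
```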
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/altair/vegalite/v5/schema/channels.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/altair/vegalite/v5/schema/channels.py
deleted file mode 100644
index 07f9f43e8e1387a374e60ae99ee9a92e1549d1e1..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/altair/vegalite/v5/schema/channels.py
+++ /dev/null
@@ -1,17317 +0,0 @@
-# The contents of this file are automatically written by
-# tools/generate_schema_wrapper.py. Do not modify directly.
-
-import sys
-from . import core
-import pandas as pd
-from altair.utils.schemapi import Undefined, with_property_setters
-from altair.utils import parse_shorthand
-from typing import overload, List
-
-if sys.version_info >= (3, 8):
- from typing import Literal
-else:
- from typing_extensions import Literal
-
-
-class FieldChannelMixin:
- def to_dict(self, validate=True, ignore=(), context=None):
- context = context or {}
- shorthand = self._get('shorthand')
- field = self._get('field')
-
- if shorthand is not Undefined and field is not Undefined:
- raise ValueError("{} specifies both shorthand={} and field={}. "
- "".format(self.__class__.__name__, shorthand, field))
-
- if isinstance(shorthand, (tuple, list)):
- # If given a list of shorthands, then transform it to a list of classes
- kwds = self._kwds.copy()
- kwds.pop('shorthand')
- return [self.__class__(sh, **kwds).to_dict(validate=validate, ignore=ignore, context=context)
- for sh in shorthand]
-
- if shorthand is Undefined:
- parsed = {}
- elif isinstance(shorthand, str):
- parsed = parse_shorthand(shorthand, data=context.get('data', None))
- type_required = 'type' in self._kwds
- type_in_shorthand = 'type' in parsed
- type_defined_explicitly = self._get('type') is not Undefined
- if not type_required:
- # Secondary field names don't require a type argument in VegaLite 3+.
- # We still parse it out of the shorthand, but drop it here.
- parsed.pop('type', None)
- elif not (type_in_shorthand or type_defined_explicitly):
- if isinstance(context.get('data', None), pd.DataFrame):
- raise ValueError(
- 'Unable to determine data type for the field "{}";'
- " verify that the field name is not misspelled."
- " If you are referencing a field from a transform,"
- " also confirm that the data type is specified correctly.".format(shorthand)
- )
- else:
- raise ValueError("{} encoding field is specified without a type; "
- "the type cannot be automatically inferred because "
- "the data is not specified as a pandas.DataFrame."
- "".format(shorthand))
- else:
- # Shorthand is not a string; we pass the definition to field,
- # and do not do any parsing.
- parsed = {'field': shorthand}
- context["parsed_shorthand"] = parsed
-
- return super(FieldChannelMixin, self).to_dict(
- validate=validate,
- ignore=ignore,
- context=context
- )
-
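The `to_dict` logic above leans on `parse_shorthand` to split a `"name:TYPE"` string into field, type and (optionally) aggregate, and only attempts type inference when a `pandas.DataFrame` is present in the serialization context. A rough sketch of what that parsing step returns, as I understand `altair.utils.parse_shorthand` (treat the exact dictionaries as illustrative):

```python
from altair.utils import parse_shorthand

print(parse_shorthand("price:Q"))
# {'field': 'price', 'type': 'quantitative'}

print(parse_shorthand("mean(price):Q"))
# {'aggregate': 'mean', 'field': 'price', 'type': 'quantitative'}
```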
-
-class ValueChannelMixin:
- def to_dict(self, validate=True, ignore=(), context=None):
- context = context or {}
- condition = self._get('condition', Undefined)
- copy = self # don't copy unless we need to
- if condition is not Undefined:
- if isinstance(condition, core.SchemaBase):
- pass
- elif 'field' in condition and 'type' not in condition:
- kwds = parse_shorthand(condition['field'], context.get('data', None))
- copy = self.copy(deep=['condition'])
- copy['condition'].update(kwds)
- return super(ValueChannelMixin, copy).to_dict(validate=validate,
- ignore=ignore,
- context=context)
-
-
-class DatumChannelMixin:
- def to_dict(self, validate=True, ignore=(), context=None):
- context = context or {}
- datum = self._get('datum', Undefined)
- copy = self # don't copy unless we need to
- if datum is not Undefined:
- if isinstance(datum, core.SchemaBase):
- pass
- return super(DatumChannelMixin, copy).to_dict(validate=validate,
- ignore=ignore,
- context=context)
-
-
-@with_property_setters
-class Angle(FieldChannelMixin, core.FieldOrDatumDefWithConditionMarkPropFieldDefnumber):
- """Angle schema wrapper
-
- Mapping(required=[shorthand])
-
- Parameters
- ----------
-
- shorthand : string
- shorthand for field, aggregate, and type
- aggregate : :class:`Aggregate`
- Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``,
- ``"min"``, ``"max"``, ``"count"`` ).
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `aggregate `__
- documentation.
- bandPosition : float
- Relative position on a band of a stacked, binned, time unit, or band scale. For
- example, the marks will be positioned at the beginning of the band if set to ``0``,
- and at the middle of the band if set to ``0.5``.
- bin : anyOf(boolean, :class:`BinParams`, None)
- A flag for binning a ``quantitative`` field, `an object defining binning parameters
- `__, or indicating
- that the data for ``x`` or ``y`` channel are binned before they are imported into
- Vega-Lite ( ``"binned"`` ).
-
-
- If ``true``, default `binning parameters
- `__ will be applied.
-
- If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are
- already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end
- field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to
- binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also
- set the axis's `tickMinStep
- `__ property.
-
- **Default value:** ``false``
-
- **See also:** `bin `__
- documentation.
- condition : anyOf(:class:`ConditionalValueDefnumberExprRef`, List(:class:`ConditionalValueDefnumberExprRef`))
- One or more value definition(s) with `a parameter or a test predicate
- `__.
-
- **Note:** A field definition's ``condition`` property can only contain `conditional
- value definitions `__
- since Vega-Lite only allows at most one encoded field per encoding channel.
- field : :class:`Field`
- **Required.** A string defining the name of the field from which to pull a data
- value or an object defining iterated values from the `repeat
- `__ operator.
-
- **See also:** `field `__
- documentation.
-
- **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access
- nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If
- field names contain dots or brackets but are not nested, you can use ``\\`` to
- escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details
- about escaping in the `field documentation
- `__. 2) ``field`` is not required
- if ``aggregate`` is ``count``.
- legend : anyOf(:class:`Legend`, None)
- An object defining properties of the legend. If ``null``, the legend for the
- encoding channel will be removed.
-
- **Default value:** If undefined, default `legend properties
- `__ are applied.
-
- **See also:** `legend `__
- documentation.
- scale : anyOf(:class:`Scale`, None)
- An object defining properties of the channel's scale, which is the function that
- transforms values in the data domain (numbers, dates, strings, etc) to visual values
- (pixels, colors, sizes) of the encoding channels.
-
- If ``null``, the scale will be `disabled and the data value will be directly encoded
- `__.
-
- **Default value:** If undefined, default `scale properties
- `__ are applied.
-
- **See also:** `scale `__
- documentation.
- sort : :class:`Sort`
- Sort order for the encoded field.
-
- For continuous fields (quantitative or temporal), ``sort`` can be either
- ``"ascending"`` or ``"descending"``.
-
- For discrete fields, ``sort`` can be one of the following:
-
-
- * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in
- JavaScript.
- * `A string indicating an encoding channel name to sort by
- `__ (e.g.,
- ``"x"`` or ``"y"`` ) with an optional minus prefix for descending sort (e.g.,
- ``"-x"`` to sort by x-field, descending). This channel string is short-form of `a
- sort-by-encoding definition
- `__. For
- example, ``"sort": "-x"`` is equivalent to ``"sort": {"encoding": "x", "order":
- "descending"}``.
- * `A sort field definition
- `__ for sorting by
- another field.
- * `An array specifying the field values in preferred order
- `__. In this case, the
- sort order will obey the values in the array, followed by any unspecified values
- in their original order. For discrete time field, values in the sort array can be
- `date-time definition objects
- `__. In addition, for time
- units ``"month"`` and ``"day"``, the values can be the month or day names (case
- insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ).
- * ``null`` indicating no sort.
-
- **Default value:** ``"ascending"``
-
- **Note:** ``null`` and sorting by another channel is not supported for ``row`` and
- ``column``.
-
- **See also:** `sort `__
- documentation.
- timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`)
- Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal
-        field, or `a temporal field that gets cast as ordinal
- `__.
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `timeUnit `__
- documentation.
- title : anyOf(:class:`Text`, None)
- A title for the field. If ``null``, the title will be removed.
-
- **Default value:** derived from the field's name and transformation function (
- ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function,
- the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the
- field is binned or has a time unit applied, the applied function is shown in
- parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ).
- Otherwise, the title is simply the field name.
-
- **Notes** :
-
- 1) You can customize the default field title format by providing the `fieldTitle
- `__ property in
- the `config `__ or `fieldTitle
- function via the compile function's options
- `__.
-
- 2) If both field definition's ``title`` and axis, header, or legend ``title`` are
- defined, axis/header/legend title will be used.
- type : :class:`StandardType`
- The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or
- ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also
- be a ``"geojson"`` type for encoding `'geoshape'
- `__.
-
- Vega-Lite automatically infers data types in many cases as discussed below. However,
- type is required for a field if: (1) the field is not nominal and the field encoding
- has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale
- type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal
- scale for a field with ``bin`` or ``timeUnit``.
-
- **Default value:**
-
- 1) For a data ``field``, ``"nominal"`` is the default data type unless the field
- encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or
- ``timeUnit`` that satisfies the following criteria:
-
-
- * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin``
- or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is
- ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a
- quantitative scale `__.
- * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit``
- or (2) the specified scale type is a time or utc scale
- * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort
- order
- `__,
- (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding
- channel is ``order``.
-
- 2) For a constant value in data domain ( ``datum`` ):
-
-
- * ``"quantitative"`` if the datum is a number
- * ``"nominal"`` if the datum is a string
- * ``"temporal"`` if the datum is `a date time object
- `__
-
- **Note:**
-
-
- * Data ``type`` describes the semantics of the data rather than the primitive data
- types (number, string, etc.). The same primitive data type can have different
- types of measurement. For example, numeric data can represent quantitative,
- ordinal, or nominal data.
- * Data values for a temporal field can be either a date-time string (e.g.,
-          ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``, ``"2015"`` ) or a
- timestamp number (e.g., ``1552199579097`` ).
- * When using with `bin `__, the
- ``type`` property can be either ``"quantitative"`` (for using a linear bin scale)
- or `"ordinal" (for using an ordinal bin scale)
- `__.
- * When using with `timeUnit
- `__, the ``type`` property
- can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal"
- (for using an ordinal scale)
- `__.
- * When using with `aggregate
- `__, the ``type`` property
- refers to the post-aggregation data type. For example, we can calculate count
- ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct",
- "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``.
- * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have
- ``type`` as they must have exactly the same type as their primary channels (e.g.,
- ``x``, ``y`` ).
-
- **See also:** `type `__
- documentation.
- """
- _class_is_valid_at_instantiation = False
- _encoding_name = "angle"
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, argmax=Undefined, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, argmin=Undefined, **kwds) -> 'Angle':
- ...
-
- def bandPosition(self, _: float, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, _: bool, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, _: None, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, test=Undefined, value=Undefined, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, _: List[core.ConditionalValueDefnumberExprRef], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def field(self, _: str, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def field(self, repeat=Undefined, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def legend(self, aria=Undefined, clipHeight=Undefined, columnPadding=Undefined, columns=Undefined, cornerRadius=Undefined, description=Undefined, direction=Undefined, fillColor=Undefined, format=Undefined, formatType=Undefined, gradientLength=Undefined, gradientOpacity=Undefined, gradientStrokeColor=Undefined, gradientStrokeWidth=Undefined, gradientThickness=Undefined, gridAlign=Undefined, labelAlign=Undefined, labelBaseline=Undefined, labelColor=Undefined, labelExpr=Undefined, labelFont=Undefined, labelFontSize=Undefined, labelFontStyle=Undefined, labelFontWeight=Undefined, labelLimit=Undefined, labelOffset=Undefined, labelOpacity=Undefined, labelOverlap=Undefined, labelPadding=Undefined, labelSeparation=Undefined, legendX=Undefined, legendY=Undefined, offset=Undefined, orient=Undefined, padding=Undefined, rowPadding=Undefined, strokeColor=Undefined, symbolDash=Undefined, symbolDashOffset=Undefined, symbolFillColor=Undefined, symbolLimit=Undefined, symbolOffset=Undefined, symbolOpacity=Undefined, symbolSize=Undefined, symbolStrokeColor=Undefined, symbolStrokeWidth=Undefined, symbolType=Undefined, tickCount=Undefined, tickMinStep=Undefined, title=Undefined, titleAlign=Undefined, titleAnchor=Undefined, titleBaseline=Undefined, titleColor=Undefined, titleFont=Undefined, titleFontSize=Undefined, titleFontStyle=Undefined, titleFontWeight=Undefined, titleLimit=Undefined, titleLineHeight=Undefined, titleOpacity=Undefined, titleOrient=Undefined, titlePadding=Undefined, type=Undefined, values=Undefined, zindex=Undefined, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def legend(self, _: None, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def scale(self, align=Undefined, base=Undefined, bins=Undefined, clamp=Undefined, constant=Undefined, domain=Undefined, domainMax=Undefined, domainMid=Undefined, domainMin=Undefined, exponent=Undefined, interpolate=Undefined, nice=Undefined, padding=Undefined, paddingInner=Undefined, paddingOuter=Undefined, range=Undefined, rangeMax=Undefined, rangeMin=Undefined, reverse=Undefined, round=Undefined, scheme=Undefined, type=Undefined, zero=Undefined, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def scale(self, _: None, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[float], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[str], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[bool], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[core.DateTime], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: Literal["ascending", "descending"], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: Literal["x", "y", "color", "fill", "stroke", "strokeWidth", "size", "shape", "fillOpacity", "strokeOpacity", "opacity", "text"], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: Literal["-x", "-y", "-color", "-fill", "-stroke", "-strokeWidth", "-size", "-shape", "-fillOpacity", "-strokeOpacity", "-opacity", "-text"], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, field=Undefined, op=Undefined, order=Undefined, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, encoding=Undefined, order=Undefined, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: None, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: str, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: List[str], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: None, **kwds) -> 'Angle':
- ...
-
- def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'Angle':
- ...
-
-
- def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined,
- condition=Undefined, field=Undefined, legend=Undefined, scale=Undefined,
- sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds):
- super(Angle, self).__init__(shorthand=shorthand, aggregate=aggregate, bandPosition=bandPosition,
- bin=bin, condition=condition, field=field, legend=legend,
- scale=scale, sort=sort, timeUnit=timeUnit, title=title, type=type,
- **kwds)
-
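As a concrete illustration of the channel class above: in user code it is reached as `alt.Angle`, either through keyword arguments or through the chained property setters generated by `@with_property_setters`. A hedged sketch (the `wind` DataFrame and its columns are invented):

```python
import altair as alt
import pandas as pd

wind = pd.DataFrame({
    "x": [1, 2, 3],
    "y": [2, 1, 3],
    "direction": [0, 90, 225],  # degrees
})

chart = alt.Chart(wind).mark_point(shape="wedge", filled=True).encode(
    x="x:Q",
    y="y:Q",
    # Chained setter style; alt.Angle("direction:Q", scale=...) is equivalent.
    angle=alt.Angle("direction:Q").scale(domain=[0, 360], range=[0, 360]),
)
```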
-
-@with_property_setters
-class AngleDatum(DatumChannelMixin, core.FieldOrDatumDefWithConditionDatumDefnumber):
- """AngleDatum schema wrapper
-
- Mapping(required=[])
-
- Parameters
- ----------
-
- bandPosition : float
- Relative position on a band of a stacked, binned, time unit, or band scale. For
- example, the marks will be positioned at the beginning of the band if set to ``0``,
- and at the middle of the band if set to ``0.5``.
- condition : anyOf(:class:`ConditionalValueDefnumberExprRef`, List(:class:`ConditionalValueDefnumberExprRef`))
- One or more value definition(s) with `a parameter or a test predicate
- `__.
-
- **Note:** A field definition's ``condition`` property can only contain `conditional
- value definitions `__
- since Vega-Lite only allows at most one encoded field per encoding channel.
- datum : anyOf(:class:`PrimitiveValue`, :class:`DateTime`, :class:`ExprRef`, :class:`RepeatRef`)
- A constant value in data domain.
- title : anyOf(:class:`Text`, None)
- A title for the field. If ``null``, the title will be removed.
-
- **Default value:** derived from the field's name and transformation function (
- ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function,
- the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the
- field is binned or has a time unit applied, the applied function is shown in
- parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ).
- Otherwise, the title is simply the field name.
-
- **Notes** :
-
- 1) You can customize the default field title format by providing the `fieldTitle
- `__ property in
- the `config `__ or `fieldTitle
- function via the compile function's options
- `__.
-
- 2) If both field definition's ``title`` and axis, header, or legend ``title`` are
- defined, axis/header/legend title will be used.
- type : :class:`Type`
- The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or
- ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also
- be a ``"geojson"`` type for encoding `'geoshape'
- `__.
-
- Vega-Lite automatically infers data types in many cases as discussed below. However,
- type is required for a field if: (1) the field is not nominal and the field encoding
- has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale
- type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal
- scale for a field with ``bin`` or ``timeUnit``.
-
- **Default value:**
-
- 1) For a data ``field``, ``"nominal"`` is the default data type unless the field
- encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or
- ``timeUnit`` that satisfies the following criteria:
-
-
- * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin``
- or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is
- ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a
- quantitative scale `__.
- * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit``
- or (2) the specified scale type is a time or utc scale
- * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort
- order
- `__,
- (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding
- channel is ``order``.
-
- 2) For a constant value in data domain ( ``datum`` ):
-
-
- * ``"quantitative"`` if the datum is a number
- * ``"nominal"`` if the datum is a string
- * ``"temporal"`` if the datum is `a date time object
- `__
-
- **Note:**
-
-
- * Data ``type`` describes the semantics of the data rather than the primitive data
- types (number, string, etc.). The same primitive data type can have different
- types of measurement. For example, numeric data can represent quantitative,
- ordinal, or nominal data.
- * Data values for a temporal field can be either a date-time string (e.g.,
-          ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``, ``"2015"`` ) or a
- timestamp number (e.g., ``1552199579097`` ).
- * When using with `bin `__, the
- ``type`` property can be either ``"quantitative"`` (for using a linear bin scale)
- or `"ordinal" (for using an ordinal bin scale)
- `__.
- * When using with `timeUnit
- `__, the ``type`` property
- can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal"
- (for using an ordinal scale)
- `__.
- * When using with `aggregate
- `__, the ``type`` property
- refers to the post-aggregation data type. For example, we can calculate count
- ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct",
- "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``.
- * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have
- ``type`` as they must have exactly the same type as their primary channels (e.g.,
- ``x``, ``y`` ).
-
- **See also:** `type `__
- documentation.
- """
- _class_is_valid_at_instantiation = False
- _encoding_name = "angle"
-
- def bandPosition(self, _: float, **kwds) -> 'AngleDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, test=Undefined, value=Undefined, **kwds) -> 'AngleDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'AngleDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, _: List[core.ConditionalValueDefnumberExprRef], **kwds) -> 'AngleDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: str, **kwds) -> 'AngleDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: List[str], **kwds) -> 'AngleDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: None, **kwds) -> 'AngleDatum':
- ...
-
- def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal", "geojson"], **kwds) -> 'AngleDatum':
- ...
-
-
- def __init__(self, datum, bandPosition=Undefined, condition=Undefined, title=Undefined,
- type=Undefined, **kwds):
- super(AngleDatum, self).__init__(datum=datum, bandPosition=bandPosition, condition=condition,
- title=title, type=type, **kwds)
-
-
-@with_property_setters
-class AngleValue(ValueChannelMixin, core.ValueDefWithConditionMarkPropFieldOrDatumDefnumber):
- """AngleValue schema wrapper
-
- Mapping(required=[])
-
- Parameters
- ----------
-
- condition : anyOf(:class:`ConditionalMarkPropFieldOrDatumDef`, :class:`ConditionalValueDefnumberExprRef`, List(:class:`ConditionalValueDefnumberExprRef`))
- A field definition or one or more value definition(s) with a parameter predicate.
- value : anyOf(float, :class:`ExprRef`)
- A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient
- definition `__ for color,
- values between ``0`` to ``1`` for opacity).
- """
- _class_is_valid_at_instantiation = False
- _encoding_name = "angle"
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, test=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'AngleValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, bandPosition=Undefined, datum=Undefined, legend=Undefined, scale=Undefined, test=Undefined, title=Undefined, type=Undefined, **kwds) -> 'AngleValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, empty=Undefined, field=Undefined, legend=Undefined, param=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'AngleValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, bandPosition=Undefined, datum=Undefined, empty=Undefined, legend=Undefined, param=Undefined, scale=Undefined, title=Undefined, type=Undefined, **kwds) -> 'AngleValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, test=Undefined, value=Undefined, **kwds) -> 'AngleValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'AngleValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, _: List[core.ConditionalValueDefnumberExprRef], **kwds) -> 'AngleValue':
- ...
-
-
- def __init__(self, value, condition=Undefined, **kwds):
- super(AngleValue, self).__init__(value=value, condition=condition, **kwds)
-
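The three wrappers above differ only in where the angle comes from: `Angle` encodes a data field, `AngleDatum` a constant in the data domain (still passed through the scale), and `AngleValue` a constant in the visual domain used directly as degrees. Roughly, and assuming the usual top-level re-exports:

```python
import altair as alt

angle_from_field = alt.Angle("direction:Q")  # map a column through the scale
angle_from_datum = alt.AngleDatum(90)        # constant in the data domain
angle_as_value = alt.AngleValue(45)          # literal 45 degrees for every mark
```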
-
-@with_property_setters
-class Color(FieldChannelMixin, core.FieldOrDatumDefWithConditionMarkPropFieldDefGradientstringnull):
- """Color schema wrapper
-
- Mapping(required=[shorthand])
-
- Parameters
- ----------
-
- shorthand : string
- shorthand for field, aggregate, and type
- aggregate : :class:`Aggregate`
- Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``,
- ``"min"``, ``"max"``, ``"count"`` ).
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `aggregate `__
- documentation.
- bandPosition : float
- Relative position on a band of a stacked, binned, time unit, or band scale. For
- example, the marks will be positioned at the beginning of the band if set to ``0``,
- and at the middle of the band if set to ``0.5``.
- bin : anyOf(boolean, :class:`BinParams`, None)
- A flag for binning a ``quantitative`` field, `an object defining binning parameters
- `__, or indicating
- that the data for ``x`` or ``y`` channel are binned before they are imported into
- Vega-Lite ( ``"binned"`` ).
-
-
- If ``true``, default `binning parameters
- `__ will be applied.
-
- If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are
- already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end
- field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to
- binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also
- set the axis's `tickMinStep
- `__ property.
-
- **Default value:** ``false``
-
- **See also:** `bin `__
- documentation.
- condition : anyOf(:class:`ConditionalValueDefGradientstringnullExprRef`, List(:class:`ConditionalValueDefGradientstringnullExprRef`))
- One or more value definition(s) with `a parameter or a test predicate
- `__.
-
- **Note:** A field definition's ``condition`` property can only contain `conditional
- value definitions `__
- since Vega-Lite only allows at most one encoded field per encoding channel.
- field : :class:`Field`
- **Required.** A string defining the name of the field from which to pull a data
- value or an object defining iterated values from the `repeat
- `__ operator.
-
- **See also:** `field `__
- documentation.
-
- **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access
- nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If
- field names contain dots or brackets but are not nested, you can use ``\\`` to
- escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details
- about escaping in the `field documentation
- `__. 2) ``field`` is not required
- if ``aggregate`` is ``count``.
- legend : anyOf(:class:`Legend`, None)
- An object defining properties of the legend. If ``null``, the legend for the
- encoding channel will be removed.
-
- **Default value:** If undefined, default `legend properties
- `__ are applied.
-
- **See also:** `legend `__
- documentation.
- scale : anyOf(:class:`Scale`, None)
- An object defining properties of the channel's scale, which is the function that
- transforms values in the data domain (numbers, dates, strings, etc) to visual values
- (pixels, colors, sizes) of the encoding channels.
-
- If ``null``, the scale will be `disabled and the data value will be directly encoded
- `__.
-
- **Default value:** If undefined, default `scale properties
- `__ are applied.
-
- **See also:** `scale `__
- documentation.
- sort : :class:`Sort`
- Sort order for the encoded field.
-
- For continuous fields (quantitative or temporal), ``sort`` can be either
- ``"ascending"`` or ``"descending"``.
-
- For discrete fields, ``sort`` can be one of the following:
-
-
- * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in
- JavaScript.
- * `A string indicating an encoding channel name to sort by
- `__ (e.g.,
- ``"x"`` or ``"y"`` ) with an optional minus prefix for descending sort (e.g.,
- ``"-x"`` to sort by x-field, descending). This channel string is short-form of `a
- sort-by-encoding definition
- `__. For
- example, ``"sort": "-x"`` is equivalent to ``"sort": {"encoding": "x", "order":
- "descending"}``.
- * `A sort field definition
- `__ for sorting by
- another field.
- * `An array specifying the field values in preferred order
- `__. In this case, the
- sort order will obey the values in the array, followed by any unspecified values
- in their original order. For discrete time field, values in the sort array can be
- `date-time definition objects
- `__. In addition, for time
- units ``"month"`` and ``"day"``, the values can be the month or day names (case
- insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ).
- * ``null`` indicating no sort.
-
- **Default value:** ``"ascending"``
-
- **Note:** ``null`` and sorting by another channel is not supported for ``row`` and
- ``column``.
-
- **See also:** `sort `__
- documentation.
- timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`)
- Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal
-        field, or `a temporal field that gets cast as ordinal
- `__.
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `timeUnit `__
- documentation.
- title : anyOf(:class:`Text`, None)
- A title for the field. If ``null``, the title will be removed.
-
- **Default value:** derived from the field's name and transformation function (
- ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function,
- the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the
- field is binned or has a time unit applied, the applied function is shown in
- parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ).
- Otherwise, the title is simply the field name.
-
- **Notes** :
-
- 1) You can customize the default field title format by providing the `fieldTitle
- `__ property in
- the `config `__ or `fieldTitle
- function via the compile function's options
- `__.
-
- 2) If both field definition's ``title`` and axis, header, or legend ``title`` are
- defined, axis/header/legend title will be used.
- type : :class:`StandardType`
- The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or
- ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also
- be a ``"geojson"`` type for encoding `'geoshape'
- `__.
-
- Vega-Lite automatically infers data types in many cases as discussed below. However,
- type is required for a field if: (1) the field is not nominal and the field encoding
- has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale
- type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal
- scale for a field with ``bin`` or ``timeUnit``.
-
- **Default value:**
-
- 1) For a data ``field``, ``"nominal"`` is the default data type unless the field
- encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or
- ``timeUnit`` that satisfies the following criteria:
-
-
- * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin``
- or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is
- ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a
- quantitative scale `__.
- * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit``
- or (2) the specified scale type is a time or utc scale
- * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort
- order
- `__,
- (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding
- channel is ``order``.
-
- 2) For a constant value in data domain ( ``datum`` ):
-
-
- * ``"quantitative"`` if the datum is a number
- * ``"nominal"`` if the datum is a string
- * ``"temporal"`` if the datum is `a date time object
- `__
-
- **Note:**
-
-
- * Data ``type`` describes the semantics of the data rather than the primitive data
- types (number, string, etc.). The same primitive data type can have different
- types of measurement. For example, numeric data can represent quantitative,
- ordinal, or nominal data.
- * Data values for a temporal field can be either a date-time string (e.g.,
-          ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``, ``"2015"`` ) or a
- timestamp number (e.g., ``1552199579097`` ).
- * When using with `bin `__, the
- ``type`` property can be either ``"quantitative"`` (for using a linear bin scale)
- or `"ordinal" (for using an ordinal bin scale)
- `__.
- * When using with `timeUnit
- `__, the ``type`` property
- can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal"
- (for using an ordinal scale)
- `__.
- * When using with `aggregate
- `__, the ``type`` property
- refers to the post-aggregation data type. For example, we can calculate count
- ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct",
- "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``.
- * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have
- ``type`` as they must have exactly the same type as their primary channels (e.g.,
- ``x``, ``y`` ).
-
- **See also:** `type `__
- documentation.
- """
- _class_is_valid_at_instantiation = False
- _encoding_name = "color"
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, argmax=Undefined, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, argmin=Undefined, **kwds) -> 'Color':
- ...
-
- def bandPosition(self, _: float, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, _: bool, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, _: None, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, test=Undefined, value=Undefined, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, _: List[core.ConditionalValueDefGradientstringnullExprRef], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def field(self, _: str, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def field(self, repeat=Undefined, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def legend(self, aria=Undefined, clipHeight=Undefined, columnPadding=Undefined, columns=Undefined, cornerRadius=Undefined, description=Undefined, direction=Undefined, fillColor=Undefined, format=Undefined, formatType=Undefined, gradientLength=Undefined, gradientOpacity=Undefined, gradientStrokeColor=Undefined, gradientStrokeWidth=Undefined, gradientThickness=Undefined, gridAlign=Undefined, labelAlign=Undefined, labelBaseline=Undefined, labelColor=Undefined, labelExpr=Undefined, labelFont=Undefined, labelFontSize=Undefined, labelFontStyle=Undefined, labelFontWeight=Undefined, labelLimit=Undefined, labelOffset=Undefined, labelOpacity=Undefined, labelOverlap=Undefined, labelPadding=Undefined, labelSeparation=Undefined, legendX=Undefined, legendY=Undefined, offset=Undefined, orient=Undefined, padding=Undefined, rowPadding=Undefined, strokeColor=Undefined, symbolDash=Undefined, symbolDashOffset=Undefined, symbolFillColor=Undefined, symbolLimit=Undefined, symbolOffset=Undefined, symbolOpacity=Undefined, symbolSize=Undefined, symbolStrokeColor=Undefined, symbolStrokeWidth=Undefined, symbolType=Undefined, tickCount=Undefined, tickMinStep=Undefined, title=Undefined, titleAlign=Undefined, titleAnchor=Undefined, titleBaseline=Undefined, titleColor=Undefined, titleFont=Undefined, titleFontSize=Undefined, titleFontStyle=Undefined, titleFontWeight=Undefined, titleLimit=Undefined, titleLineHeight=Undefined, titleOpacity=Undefined, titleOrient=Undefined, titlePadding=Undefined, type=Undefined, values=Undefined, zindex=Undefined, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def legend(self, _: None, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def scale(self, align=Undefined, base=Undefined, bins=Undefined, clamp=Undefined, constant=Undefined, domain=Undefined, domainMax=Undefined, domainMid=Undefined, domainMin=Undefined, exponent=Undefined, interpolate=Undefined, nice=Undefined, padding=Undefined, paddingInner=Undefined, paddingOuter=Undefined, range=Undefined, rangeMax=Undefined, rangeMin=Undefined, reverse=Undefined, round=Undefined, scheme=Undefined, type=Undefined, zero=Undefined, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def scale(self, _: None, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[float], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[str], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[bool], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[core.DateTime], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: Literal["ascending", "descending"], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: Literal["x", "y", "color", "fill", "stroke", "strokeWidth", "size", "shape", "fillOpacity", "strokeOpacity", "opacity", "text"], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: Literal["-x", "-y", "-color", "-fill", "-stroke", "-strokeWidth", "-size", "-shape", "-fillOpacity", "-strokeOpacity", "-opacity", "-text"], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, field=Undefined, op=Undefined, order=Undefined, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, encoding=Undefined, order=Undefined, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: None, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: str, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: List[str], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: None, **kwds) -> 'Color':
- ...
-
- def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'Color':
- ...
-
-
- def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined,
- condition=Undefined, field=Undefined, legend=Undefined, scale=Undefined,
- sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds):
- super(Color, self).__init__(shorthand=shorthand, aggregate=aggregate, bandPosition=bandPosition,
- bin=bin, condition=condition, field=field, legend=legend,
- scale=scale, sort=sort, timeUnit=timeUnit, title=title, type=type,
- **kwds)
-
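The `condition` parameter documented above is what drives selection-based highlighting: combined with `alt.condition`, the Color channel switches between an encoded field and a fallback value. A sketch using the altair 5 parameter API (the DataFrame and the `brush` parameter are invented):

```python
import altair as alt
import pandas as pd

df = pd.DataFrame({
    "x": [1, 2, 3, 4],
    "y": [3, 1, 4, 2],
    "category": ["a", "a", "b", "b"],
})

brush = alt.selection_interval()

chart = alt.Chart(df).mark_point(size=100).encode(
    x="x:Q",
    y="y:Q",
    # Inside the brush: categorical color; outside: a constant fallback value.
    color=alt.condition(brush, alt.Color("category:N"), alt.value("lightgray")),
).add_params(brush)
```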
-
-@with_property_setters
-class ColorDatum(DatumChannelMixin, core.FieldOrDatumDefWithConditionDatumDefGradientstringnull):
- """ColorDatum schema wrapper
-
- Mapping(required=[])
-
- Parameters
- ----------
-
- bandPosition : float
- Relative position on a band of a stacked, binned, time unit, or band scale. For
- example, the marks will be positioned at the beginning of the band if set to ``0``,
- and at the middle of the band if set to ``0.5``.
- condition : anyOf(:class:`ConditionalValueDefGradientstringnullExprRef`, List(:class:`ConditionalValueDefGradientstringnullExprRef`))
- One or more value definition(s) with `a parameter or a test predicate
- `__.
-
- **Note:** A field definition's ``condition`` property can only contain `conditional
- value definitions `__
- since Vega-Lite only allows at most one encoded field per encoding channel.
- datum : anyOf(:class:`PrimitiveValue`, :class:`DateTime`, :class:`ExprRef`, :class:`RepeatRef`)
- A constant value in data domain.
- title : anyOf(:class:`Text`, None)
- A title for the field. If ``null``, the title will be removed.
-
- **Default value:** derived from the field's name and transformation function (
- ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function,
- the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the
- field is binned or has a time unit applied, the applied function is shown in
- parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ).
- Otherwise, the title is simply the field name.
-
- **Notes** :
-
- 1) You can customize the default field title format by providing the `fieldTitle
- `__ property in
- the `config `__ or `fieldTitle
- function via the compile function's options
- `__.
-
- 2) If both field definition's ``title`` and axis, header, or legend ``title`` are
- defined, axis/header/legend title will be used.
- type : :class:`Type`
- The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or
- ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also
- be a ``"geojson"`` type for encoding `'geoshape'
- `__.
-
- Vega-Lite automatically infers data types in many cases as discussed below. However,
- type is required for a field if: (1) the field is not nominal and the field encoding
- has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale
- type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal
- scale for a field with ``bin`` or ``timeUnit``.
-
- **Default value:**
-
- 1) For a data ``field``, ``"nominal"`` is the default data type unless the field
- encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or
- ``timeUnit`` that satisfies the following criteria:
-
-
- * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin``
- or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is
- ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a
- quantitative scale `__.
- * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit``
- or (2) the specified scale type is a time or utc scale
- * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort
- order
- `__,
- (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding
- channel is ``order``.
-
- 2) For a constant value in data domain ( ``datum`` ):
-
-
- * ``"quantitative"`` if the datum is a number
- * ``"nominal"`` if the datum is a string
- * ``"temporal"`` if the datum is `a date time object
- `__
-
- **Note:**
-
-
- * Data ``type`` describes the semantics of the data rather than the primitive data
- types (number, string, etc.). The same primitive data type can have different
- types of measurement. For example, numeric data can represent quantitative,
- ordinal, or nominal data.
- * Data values for a temporal field can be either a date-time string (e.g.,
-          ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``, ``"2015"`` ) or a
- timestamp number (e.g., ``1552199579097`` ).
- * When using with `bin `__, the
- ``type`` property can be either ``"quantitative"`` (for using a linear bin scale)
- or `"ordinal" (for using an ordinal bin scale)
- `__.
- * When using with `timeUnit
- `__, the ``type`` property
- can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal"
- (for using an ordinal scale)
- `__.
- * When using with `aggregate
- `__, the ``type`` property
- refers to the post-aggregation data type. For example, we can calculate count
- ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct",
- "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``.
- * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have
- ``type`` as they must have exactly the same type as their primary channels (e.g.,
- ``x``, ``y`` ).
-
- **See also:** `type `__
- documentation.
- """
- _class_is_valid_at_instantiation = False
- _encoding_name = "color"
-
- def bandPosition(self, _: float, **kwds) -> 'ColorDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, test=Undefined, value=Undefined, **kwds) -> 'ColorDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'ColorDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, _: List[core.ConditionalValueDefGradientstringnullExprRef], **kwds) -> 'ColorDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: str, **kwds) -> 'ColorDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: List[str], **kwds) -> 'ColorDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: None, **kwds) -> 'ColorDatum':
- ...
-
- def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal", "geojson"], **kwds) -> 'ColorDatum':
- ...
-
-
- def __init__(self, datum, bandPosition=Undefined, condition=Undefined, title=Undefined,
- type=Undefined, **kwds):
- super(ColorDatum, self).__init__(datum=datum, bandPosition=bandPosition, condition=condition,
- title=title, type=type, **kwds)
-
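As a rough illustration of the ``ColorDatum`` channel documented above, here is a minimal sketch. It assumes Altair 5 (where the channel classes from this module are re-exported at the top level, e.g. ``alt.ColorDatum``) plus pandas; the sample frame and the ``"series A"`` label are invented for illustration only.

```python
import altair as alt
import pandas as pd

# A single short series; the datum supplies its label in the data domain.
df = pd.DataFrame({"x": [1, 2, 3, 4], "y": [3, 1, 4, 2]})

chart = (
    alt.Chart(df)
    .mark_line()
    .encode(
        x="x:Q",
        y="y:Q",
        # Datum-based color: the constant lives in the data domain, so it is
        # run through the color scale (and can appear in a legend), unlike a
        # raw CSS color supplied as a ColorValue.
        color=alt.ColorDatum("series A"),
    )
)

print(chart.to_json())  # serializes and validates the spec without a renderer
```

Because the constant sits in the data domain rather than the visual domain, its default ``type`` is inferred as ``"nominal"`` per the rules in the docstring above; ``ColorValue`` (next class) is the visual-domain counterpart.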
-
-@with_property_setters
-class ColorValue(ValueChannelMixin, core.ValueDefWithConditionMarkPropFieldOrDatumDefGradientstringnull):
- """ColorValue schema wrapper
-
- Mapping(required=[])
-
- Parameters
- ----------
-
- condition : anyOf(:class:`ConditionalMarkPropFieldOrDatumDef`, :class:`ConditionalValueDefGradientstringnullExprRef`, List(:class:`ConditionalValueDefGradientstringnullExprRef`))
- A field definition or one or more value definition(s) with a parameter predicate.
- value : anyOf(:class:`Gradient`, string, None, :class:`ExprRef`)
- A constant value in the visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient
- definition `__ for color,
- values between ``0`` to ``1`` for opacity).
- """
- _class_is_valid_at_instantiation = False
- _encoding_name = "color"
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, test=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'ColorValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, bandPosition=Undefined, datum=Undefined, legend=Undefined, scale=Undefined, test=Undefined, title=Undefined, type=Undefined, **kwds) -> 'ColorValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, empty=Undefined, field=Undefined, legend=Undefined, param=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'ColorValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, bandPosition=Undefined, datum=Undefined, empty=Undefined, legend=Undefined, param=Undefined, scale=Undefined, title=Undefined, type=Undefined, **kwds) -> 'ColorValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, test=Undefined, value=Undefined, **kwds) -> 'ColorValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'ColorValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, _: List[core.ConditionalValueDefGradientstringnullExprRef], **kwds) -> 'ColorValue':
- ...
-
-
- def __init__(self, value, condition=Undefined, **kwds):
- super(ColorValue, self).__init__(value=value, condition=condition, **kwds)
-
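A minimal sketch of the ``ColorValue``-with-``condition`` structure described above. It assumes Altair 5 and pandas; the hover selection and the two colors are illustrative choices, and ``alt.condition`` with two value fallbacks is simply one way to produce the value-plus-condition encoding this class wraps.

```python
import altair as alt
import pandas as pd

df = pd.DataFrame({"x": list(range(10)), "y": [v % 4 for v in range(10)]})

# A point selection driven by hover; its predicate feeds the color condition.
hover = alt.selection_point(on="mouseover")

chart = (
    alt.Chart(df)
    .mark_circle(size=120)
    .encode(
        x="x:Q",
        y="y:Q",
        # When the predicate matches, use "firebrick"; otherwise fall back to
        # the constant visual value "steelblue".
        color=alt.condition(hover, alt.value("firebrick"), alt.value("steelblue")),
    )
    .add_params(hover)
)

print(chart.to_json())
```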
-
-@with_property_setters
-class Column(FieldChannelMixin, core.RowColumnEncodingFieldDef):
- """Column schema wrapper
-
- Mapping(required=[shorthand])
-
- Parameters
- ----------
-
- shorthand : string
- shorthand for field, aggregate, and type
- aggregate : :class:`Aggregate`
- Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``,
- ``"min"``, ``"max"``, ``"count"`` ).
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `aggregate `__
- documentation.
- align : :class:`LayoutAlign`
- The alignment to apply to row/column facet's subplot. The supported string values
- are ``"all"``, ``"each"``, and ``"none"``.
-
-
- * For ``"none"``, a flow layout will be used, in which adjacent subviews are simply
- placed one after the other.
- * For ``"each"``, subviews will be aligned into a clean grid structure, but each row
- or column may be of variable size.
- * For ``"all"``, subviews will be aligned and each row or column will be sized
- identically based on the maximum observed size. String values for this property
- will be applied to both grid rows and columns.
-
- **Default value:** ``"all"``.
- bandPosition : float
- Relative position on a band of a stacked, binned, time unit, or band scale. For
- example, the marks will be positioned at the beginning of the band if set to ``0``,
- and at the middle of the band if set to ``0.5``.
- bin : anyOf(boolean, :class:`BinParams`, None)
- A flag for binning a ``quantitative`` field, `an object defining binning parameters
- `__, or indicating
- that the data for ``x`` or ``y`` channel are binned before they are imported into
- Vega-Lite ( ``"binned"`` ).
-
-
- If ``true``, default `binning parameters
- `__ will be applied.
-
- If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are
- already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end
- field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to
- binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also
- set the axis's `tickMinStep
- `__ property.
-
- **Default value:** ``false``
-
- **See also:** `bin `__
- documentation.
- center : boolean
- Boolean flag indicating if facet's subviews should be centered relative to their
- respective rows or columns.
-
- **Default value:** ``false``
- field : :class:`Field`
- **Required.** A string defining the name of the field from which to pull a data
- value or an object defining iterated values from the `repeat
- `__ operator.
-
- **See also:** `field `__
- documentation.
-
- **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access
- nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If
- field names contain dots or brackets but are not nested, you can use ``\\`` to
- escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details
- about escaping in the `field documentation
- `__. 2) ``field`` is not required
- if ``aggregate`` is ``count``.
- header : anyOf(:class:`Header`, None)
- An object defining properties of a facet's header.
- sort : anyOf(:class:`SortArray`, :class:`SortOrder`, :class:`EncodingSortField`, None)
- Sort order for the encoded field.
-
- For continuous fields (quantitative or temporal), ``sort`` can be either
- ``"ascending"`` or ``"descending"``.
-
- For discrete fields, ``sort`` can be one of the following:
-
-
- * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in
- JavaScript.
- * `A sort field definition
- `__ for sorting by
- another field.
- * `An array specifying the field values in preferred order
- `__. In this case, the
- sort order will obey the values in the array, followed by any unspecified values
- in their original order. For discrete time field, values in the sort array can be
- `date-time definition objects
- `__. In addition, for time
- units ``"month"`` and ``"day"``, the values can be the month or day names (case
- insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ).
- * ``null`` indicating no sort.
-
- **Default value:** ``"ascending"``
-
- **Note:** ``null`` is not supported for ``row`` and ``column``.
- spacing : float
- The spacing in pixels between facet's sub-views.
-
- **Default value** : Depends on ``"spacing"`` property of `the view composition
- configuration `__ (
- ``20`` by default)
- timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`)
- Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal
- field, or `a temporal field that gets cast as ordinal
- `__.
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `timeUnit `__
- documentation.
- title : anyOf(:class:`Text`, None)
- A title for the field. If ``null``, the title will be removed.
-
- **Default value:** derived from the field's name and transformation function (
- ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function,
- the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the
- field is binned or has a time unit applied, the applied function is shown in
- parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ).
- Otherwise, the title is simply the field name.
-
- **Notes** :
-
- 1) You can customize the default field title format by providing the `fieldTitle
- `__ property in
- the `config `__ or `fieldTitle
- function via the compile function's options
- `__.
-
- 2) If both field definition's ``title`` and axis, header, or legend ``title`` are
- defined, axis/header/legend title will be used.
- type : :class:`StandardType`
- The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or
- ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also
- be a ``"geojson"`` type for encoding `'geoshape'
- `__.
-
- Vega-Lite automatically infers data types in many cases as discussed below. However,
- type is required for a field if: (1) the field is not nominal and the field encoding
- has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale
- type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal
- scale for a field with ``bin`` or ``timeUnit``.
-
- **Default value:**
-
- 1) For a data ``field``, ``"nominal"`` is the default data type unless the field
- encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or
- ``timeUnit`` that satisfies the following criteria:
-
-
- * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin``
- or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is
- ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a
- quantitative scale `__.
- * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit``
- or (2) the specified scale type is a time or utc scale
- * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort
- order
- `__,
- (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding
- channel is ``order``.
-
- 2) For a constant value in the data domain ( ``datum`` ):
-
-
- * ``"quantitative"`` if the datum is a number
- * ``"nominal"`` if the datum is a string
- * ``"temporal"`` if the datum is `a date time object
- `__
-
- **Note:**
-
-
- * Data ``type`` describes the semantics of the data rather than the primitive data
- types (number, string, etc.). The same primitive data type can have different
- types of measurement. For example, numeric data can represent quantitative,
- ordinal, or nominal data.
- * Data values for a temporal field can be either a date-time string (e.g.,
- ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a
- timestamp number (e.g., ``1552199579097`` ).
- * When used with `bin `__, the
- ``type`` property can be either ``"quantitative"`` (for using a linear bin scale)
- or `"ordinal" (for using an ordinal bin scale)
- `__.
- * When used with `timeUnit
- `__, the ``type`` property
- can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal"
- (for using an ordinal scale)
- `__.
- * When used with `aggregate
- `__, the ``type`` property
- refers to the post-aggregation data type. For example, we can calculate count
- ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct",
- "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``.
- * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have
- ``type`` as they must have exactly the same type as their primary channels (e.g.,
- ``x``, ``y`` ).
-
- **See also:** `type `__
- documentation.
- """
- _class_is_valid_at_instantiation = False
- _encoding_name = "column"
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, argmax=Undefined, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, argmin=Undefined, **kwds) -> 'Column':
- ...
-
- def align(self, _: Literal["all", "each", "none"], **kwds) -> 'Column':
- ...
-
- def bandPosition(self, _: float, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, _: bool, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, _: None, **kwds) -> 'Column':
- ...
-
- def center(self, _: bool, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def field(self, _: str, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def field(self, repeat=Undefined, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def header(self, format=Undefined, formatType=Undefined, labelAlign=Undefined, labelAnchor=Undefined, labelAngle=Undefined, labelBaseline=Undefined, labelColor=Undefined, labelExpr=Undefined, labelFont=Undefined, labelFontSize=Undefined, labelFontStyle=Undefined, labelFontWeight=Undefined, labelLimit=Undefined, labelLineHeight=Undefined, labelOrient=Undefined, labelPadding=Undefined, labels=Undefined, orient=Undefined, title=Undefined, titleAlign=Undefined, titleAnchor=Undefined, titleAngle=Undefined, titleBaseline=Undefined, titleColor=Undefined, titleFont=Undefined, titleFontSize=Undefined, titleFontStyle=Undefined, titleFontWeight=Undefined, titleLimit=Undefined, titleLineHeight=Undefined, titleOrient=Undefined, titlePadding=Undefined, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def header(self, _: None, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[float], **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[str], **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[bool], **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[core.DateTime], **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: Literal["ascending", "descending"], **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, field=Undefined, op=Undefined, order=Undefined, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: None, **kwds) -> 'Column':
- ...
-
- def spacing(self, _: float, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: str, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: List[str], **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: None, **kwds) -> 'Column':
- ...
-
- def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'Column':
- ...
-
-
- def __init__(self, shorthand=Undefined, aggregate=Undefined, align=Undefined,
- bandPosition=Undefined, bin=Undefined, center=Undefined, field=Undefined,
- header=Undefined, sort=Undefined, spacing=Undefined, timeUnit=Undefined,
- title=Undefined, type=Undefined, **kwds):
- super(Column, self).__init__(shorthand=shorthand, aggregate=aggregate, align=align,
- bandPosition=bandPosition, bin=bin, center=center, field=field,
- header=header, sort=sort, spacing=spacing, timeUnit=timeUnit,
- title=title, type=type, **kwds)
-
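A minimal sketch of column faceting with the ``sort``, ``spacing``, and ``header`` properties documented above. It assumes Altair 5 and pandas; the ``month``/``region``/``sales`` frame is made up for illustration.

```python
import altair as alt
import pandas as pd

df = pd.DataFrame(
    {
        "month": ["Jan", "Feb", "Jan", "Feb", "Jan", "Feb"],
        "region": ["North", "North", "South", "South", "West", "West"],
        "sales": [10, 12, 7, 9, 14, 11],
    }
)

chart = (
    alt.Chart(df)
    .mark_bar()
    .encode(
        x="month:N",
        y="sales:Q",
        # Column facets the chart into one sub-view per region: sort fixes the
        # facet order, spacing is the pixel gap between sub-views, and header
        # styles the facet labels.
        column=alt.Column(
            "region:N",
            sort=["North", "South", "West"],
            spacing=30,
            header=alt.Header(titleOrient="bottom"),
        ),
    )
)

print(chart.to_json())
```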
-
-@with_property_setters
-class Description(FieldChannelMixin, core.StringFieldDefWithCondition):
- """Description schema wrapper
-
- Mapping(required=[shorthand])
-
- Parameters
- ----------
-
- shorthand : string
- shorthand for field, aggregate, and type
- aggregate : :class:`Aggregate`
- Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``,
- ``"min"``, ``"max"``, ``"count"`` ).
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `aggregate `__
- documentation.
- bandPosition : float
- Relative position on a band of a stacked, binned, time unit, or band scale. For
- example, the marks will be positioned at the beginning of the band if set to ``0``,
- and at the middle of the band if set to ``0.5``.
- bin : anyOf(boolean, :class:`BinParams`, string, None)
- A flag for binning a ``quantitative`` field, `an object defining binning parameters
- `__, or indicating
- that the data for ``x`` or ``y`` channel are binned before they are imported into
- Vega-Lite ( ``"binned"`` ).
-
-
- If ``true``, default `binning parameters
- `__ will be applied.
-
- If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are
- already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end
- field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to
- binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also
- set the axis's `tickMinStep
- `__ property.
-
- **Default value:** ``false``
-
- **See also:** `bin `__
- documentation.
- condition : anyOf(:class:`ConditionalValueDefstringExprRef`, List(:class:`ConditionalValueDefstringExprRef`))
- One or more value definition(s) with `a parameter or a test predicate
- `__.
-
- **Note:** A field definition's ``condition`` property can only contain `conditional
- value definitions `__
- since Vega-Lite only allows at most one encoded field per encoding channel.
- field : :class:`Field`
- **Required.** A string defining the name of the field from which to pull a data
- value or an object defining iterated values from the `repeat
- `__ operator.
-
- **See also:** `field `__
- documentation.
-
- **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access
- nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If
- field names contain dots or brackets but are not nested, you can use ``\\`` to
- escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details
- about escaping in the `field documentation
- `__. 2) ``field`` is not required
- if ``aggregate`` is ``count``.
- format : anyOf(string, :class:`Dict`)
- When used with the default ``"number"`` and ``"time"`` format type, the text
- formatting pattern for labels of guides (axes, legends, headers) and text marks.
-
-
- * If the format type is ``"number"`` (e.g., for quantitative fields), this is D3's
- `number format pattern `__.
- * If the format type is ``"time"`` (e.g., for temporal fields), this is D3's `time
- format pattern `__.
-
- See the `format documentation `__
- for more examples.
-
- When used with a `custom formatType
- `__, this
- value will be passed as ``format`` alongside ``datum.value`` to the registered
- function.
-
- **Default value:** Derived from `numberFormat
- `__ config for number
- format and from `timeFormat
- `__ config for time
- format.
- formatType : string
- The format type for labels. One of ``"number"``, ``"time"``, or a `registered custom
- format type
- `__.
-
- **Default value:**
-
-
- * ``"time"`` for temporal fields and ordinal and nominal fields with ``timeUnit``.
- * ``"number"`` for quantitative fields as well as ordinal and nominal fields without
- ``timeUnit``.
- timeUnit : anyOf(:class:`TimeUnit`, :class:`TimeUnitParams`)
- Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal
- field, or `a temporal field that gets cast as ordinal
- `__.
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `timeUnit `__
- documentation.
- title : anyOf(:class:`Text`, None)
- A title for the field. If ``null``, the title will be removed.
-
- **Default value:** derived from the field's name and transformation function (
- ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function,
- the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the
- field is binned or has a time unit applied, the applied function is shown in
- parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ).
- Otherwise, the title is simply the field name.
-
- **Notes** :
-
- 1) You can customize the default field title format by providing the `fieldTitle
- `__ property in
- the `config `__ or `fieldTitle
- function via the compile function's options
- `__.
-
- 2) If both field definition's ``title`` and axis, header, or legend ``title`` are
- defined, axis/header/legend title will be used.
- type : :class:`StandardType`
- The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or
- ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also
- be a ``"geojson"`` type for encoding `'geoshape'
-