diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/GTA IV EFLC Crack Razor1911 Download Get the Full Experience of GTA 4 with this Easy and Safe Method.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/GTA IV EFLC Crack Razor1911 Download Get the Full Experience of GTA 4 with this Easy and Safe Method.md deleted file mode 100644 index c055047e604c8424aa7979bf736b418a876c2550..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/GTA IV EFLC Crack Razor1911 Download Get the Full Experience of GTA 4 with this Easy and Safe Method.md +++ /dev/null @@ -1,113 +0,0 @@ - -

GTA IV EFLC Crack Razor1911 Download

-

Grand Theft Auto IV: Episodes from Liberty City (GTA IV EFLC) is a standalone expansion pack for the popular open-world action game Grand Theft Auto IV. It contains two separate stories, The Lost and Damned and The Ballad of Gay Tony, that add new characters, missions, weapons, vehicles, and music to the original game. However, to play GTA IV EFLC, you need to have a valid disc or online activation, which can be inconvenient or problematic for some players. That's why many people look for a crack that can bypass these requirements and let them enjoy the game without any hassle.

-

gta iv eflc crack razor1911 download


Download File ★★★★★ https://byltly.com/2uKzb7



-

One of the most famous and reliable cracks for GTA IV EFLC is made by Razor1911, a group of skilled hackers and programmers who have cracked many games in the past. Their crack replaces some of the original game files with modified ones that allow you to run the game without needing a disc or online activation. It also removes the need for Rockstar Games Social Club or Windows Live, which are often considered annoying or unnecessary by many players. With Razor1911's crack, you can play GTA IV EFLC as if it were a regular offline game.

-

How to download and install the crack

-

Downloading and installing Razor1911's crack for GTA IV EFLC is not very difficult, but you need to follow some steps carefully. First of all, you need to have GTA IV EFLC installed on your PC, preferably with patch 1.0.2.0. You also need to have enough space on your hard drive to backup the original files and copy the cracked ones. Here are the steps you need to take:

-
    -
  1. Download the crack file from one of these sources:
    -https://libertycity.net/files/gta-4/62708-krjak-ot-razor1911-dlja-jepizodov.html
    -https://archive.org/details/gta4-Razor1911
    -The file size is about 4.4 MB and it contains two files: data.dll and LaunchEFLC.exe.
  2. -
  3. Extract the crack file using a program like WinRAR or 7-Zip. You should see two files: data.dll and LaunchEFLC.exe.
  4. -
  5. Go to your GTA IV EFLC installation folder, which is usually located at C:\Program Files (x86)\Rockstar Games\EFLC or C:\Program Files\Rockstar Games\EFLC.
  6. -
  7. Make a backup of your original files by copying them to another folder or renaming them. The files you need to backup are data.dll and LaunchEFLC.exe.
  8. -
  9. Copy the cracked files (data.dll and LaunchEFLC.exe) from the extracted folder to your GTA IV EFLC installation folder and overwrite the original files.
  10. -
  11. Run the game by double-clicking on LaunchEFLC.exe. You should see a message saying "Razor 1911" before the game starts.
  12. -
-

Congratulations! You have successfully installed Razor1911's crack for GTA IV EFLC. You can now play the game without needing a disc or online activation.

-

Benefits of using the crack

-

Using Razor1911's crack for GTA IV EFLC has several benefits that can enhance your gaming experience. Here are some of them:

- -

Risks and drawbacks of using the crack

-

While using Razor1911's crack for GTA IV EFLC has many benefits, it also has some risks and drawbacks that you should be aware of before using it. Here are some of them:

- -

Conclusion

-

Razor1911's crack for GTA IV EFLC is a useful tool that can help you play GTA IV EFLC without needing a disc or online activation. It also offers some advantages such as improved game performance and compatibility with mods, as well as the ability to switch between old and new radio stations with Radio Downgrader. However, it also comes with some risks and drawbacks such as possible legal issues and malware infections, loss of some original radio tracks and online features, and incompatibility with some patches and updates.

-

gta iv episodes from liberty city razor1911 crack download
-how to install gta iv eflc crack by razor1911
-gta iv eflc razor1911 crack fix
-gta iv eflc crack razor1911 free download
-gta iv eflc razor1911 crack only
-download gta iv eflc crack razor1911 rar
-gta iv eflc crack razor1911 working 100
-gta iv eflc crack razor1911 windows 10
-gta iv eflc crack razor1911 kickass
-gta iv eflc crack razor1911 no cd
-gta iv eflc crack razor1911 steam
-gta iv eflc crack razor1911 update
-gta iv eflc crack razor1911 error
-gta iv eflc crack razor1911 tutorial
-gta iv eflc crack razor1911 mega
-gta iv eflc crack razor1911 mediafire
-gta iv eflc crack razor1911 skidrow
-gta iv eflc crack razor1911 reloaded
-gta iv eflc crack razor1911 patch
-gta iv eflc crack razor1911 direct link
-gta iv eflc crack razor1911 iso
-gta iv eflc crack razor1911 torrent
-gta iv eflc crack razor1911 full version
-gta iv eflc crack razor1911 online
-gta iv eflc crack razor1911 multiplayer
-gta iv eflc crack razor1911 mods
-gta iv eflc crack razor1911 cheats
-gta iv eflc crack razor1911 trainer
-gta iv eflc crack razor1911 gameplay
-gta iv eflc crack razor1911 graphics
-gta iv eflc crack razor1911 sound fix
-gta iv eflc crack razor1911 lag fix
-gta iv eflc crack razor1911 activation bypass
-gta iv eflc crack razor1911 serial key
-gta iv eflc crack razor1911 product key
-gta iv eflc crack razor1911 license key
-gta iv eflc crack razor1911 keygen
-gta iv eflc crack razor1911 generator
-gta iv eflc crack razor1911 download pc
-gta iv eflc crack razor1911 download mac
-gta iv eflc crack razor1911 download android
-gta iv eflc crack razor1911 download ios
-gta iv eflc crack razor1911 download apk
-gta iv eflc crack razor1911 download zip
-gta iv eflc crack razor1911 download utorrent
-gta iv eflc crack razor1911 download google drive
-gta iv eflc crack razor1911 download dropbox
-gta iv eflc crack razor1911 download zippyshare
-gta iv eflc crack razor1911 download 4shared

-

If you decide to use Razor1911's crack for GTA IV EFLC, make sure you download it from trusted sources, scan it with an antivirus program, back up your original files, follow the installation instructions carefully, and use it at your own risk. Also, remember that this article is not an endorsement or promotion of piracy or illegal activities; it is only meant for informational purposes.

-

We hope this article has helped you understand what Razor1911's crack for GTA IV EFLC does and how to use it properly. If you have any feedback or questions about this topic, feel free to leave a comment below.

-

Frequently Asked Questions

-
    -
  1. What is GTA IV EFLC?
    - GTA IV EFLC stands for Grand Theft Auto IV: Episodes from Liberty City, which is a standalone expansion pack for Grand Theft Auto IV that contains two separate stories: The Lost and Damned and The Ballad of Gay Tony.
  2. -
  3. What is Razor1911?
    - Razor1911 is a group of skilled hackers and programmers who have cracked many games in the past. Their crack for GTA IV EFLC replaces some of the original game files with modified ones that allow you to run the game without needing a disc or online activation.
  4. -
  5. How to download and install Razor1911's crack for GTA IV EFLC?
    - You can download Razor1911's crack for GTA IV EFLC from one of these sources:
    -https://libertycity.net/files/gta-4/62708-krjak-ot-razor1911-dlja-jepizodov.html
    -https://archive.org/details/gta4-Razor1911
    -The file size is about 4.4 MB and it contains two files: data.dll and LaunchEFLC.exe. To install the crack, you need to back up your original files and replace them with the cracked ones. Then, you can launch the game by double-clicking on LaunchEFLC.exe.
  6. -
  7. What are the benefits of using Razor1911's crack for GTA IV EFLC?
    - Some of the benefits of using Razor1911's crack for GTA IV EFLC are:
    -- You don't need a disc or online activation to play the game.
    -- You don't need Rockstar Games Social Club or Windows Live to play the game.
    -- You can improve your game performance and compatibility with mods by using DXVK & DxWrapper.
    -- You can switch between old and new radio stations with Radio Downgrader.
  8. -
  9. What are the risks and drawbacks of using Razor1911's crack for GTA IV EFLC?
    - Some of the risks and drawbacks of using Razor1911's crack for GTA IV EFLC are:
    -- You may face legal issues or malware infections if you download the crack from untrusted sources or use it for piracy purposes.
    -- You may lose some original radio tracks and online features if you use the crack.
    -- You may encounter incompatibility issues with some patches and updates if you use the crack.
  10. -
-

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Gta Eflc Crack [UPD]-razor1911 Update Patch V1.1.1.0.epub.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Gta Eflc Crack [UPD]-razor1911 Update Patch V1.1.1.0.epub.md deleted file mode 100644 index be623e8031fb5e8a361855a6ed52f722087aa245..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Gta Eflc Crack [UPD]-razor1911 Update Patch V1.1.1.0.epub.md +++ /dev/null @@ -1,39 +0,0 @@ - -

How to Install GTA EFLC Crack-Razor1911 Update Patch V1.1.1.0

-

GTA EFLC is a standalone expansion pack for the popular Grand Theft Auto IV game, which features two new stories: The Lost and Damned and The Ballad of Gay Tony. However, some players may encounter issues with the game's performance or compatibility, especially on newer systems. That's why you may need to install the GTA EFLC Crack-Razor1911 Update Patch V1.1.1.0, which fixes some bugs and improves the game's stability.

-

In this article, we will show you how to download and install the GTA EFLC Crack-Razor1911 Update Patch V1.1.1.0 in a few simple steps.

-

Gta Eflc Crack-razor1911 Update Patch V1.1.1.0.epub


DOWNLOAD ✶✶✶ https://byltly.com/2uKvok



-

Step 1: Download the GTA EFLC Crack-Razor1911 Update Patch V1.1.1.0

-

The first thing you need to do is to download the GTA EFLC Crack-Razor1911 Update Patch V1.1.1.0 from a reliable source. You can find it on various torrent sites or file-sharing platforms, but make sure you scan it for viruses before opening it.

-

The file name should be Gta Eflc Crack-razor1911 Update Patch V1.1.1.0.epub, and it should be around 200 MB in size.

-

Step 2: Extract the GTA EFLC Crack-Razor1911 Update Patch V1.1.1.0

-

Once you have downloaded the file, you need to extract it using a program like WinRAR or 7-Zip. You can right-click on the file and choose Extract Here or Extract to Gta Eflc Crack-razor1911 Update Patch V1.1.1.0.

-

You should see a folder named Gta Eflc Crack-razor1911 Update Patch V1.1.1.0, which contains several files and subfolders.

-

Step 3: Copy the GTA EFLC Crack-Razor1911 Update Patch V1.1.1.0 Files

-

The next step is to copy the files from the extracted folder to your GTA EFLC installation directory. You can find it by right-clicking on the GTA EFLC shortcut on your desktop and choosing Open File Location.

-

You should see a folder named EFLC, which contains the game's files and folders.

-

Now, you need to copy all the files and folders from the Gta Eflc Crack-razor1911 Update Patch V1.1.1.0 folder and paste them into the EFLC folder, replacing any existing files if prompted.

-

-

Step 4: Run the GTA EFLC Crack-Razor1911 Update Patch V1.1.1.0 Launcher

-

The final step is to run the GTA EFLC Crack-Razor1911 Update Patch V1.1.1.0 Launcher, which will apply the patch and launch the game.

-

You can find it in the EFLC folder, and it should be named EFLC.exe.

-

Double-click on it and wait for it to load.

-

You should see a message saying GTA IV: Episodes from Liberty City v 1.1.1.0 (Razor1911), followed by the game's menu.

-

Congratulations!

-

You have successfully installed the GTA EFLC Crack-Razor1911 Update Patch V1.1.1.0 on your PC.

-

Now you can enjoy playing GTA EFLC with improved performance and stability.

- -

Troubleshooting

-

If you encounter any problems with the GTA EFLC Crack-Razor1911 Update Patch V1.1.1.0, here are some possible solutions:

- -

Disclaimer

-

This article is for educational purposes only. We do not condone or encourage piracy or illegal downloading of any software or content. We are not affiliated with or endorsed by Rockstar Games, Razor1911, or any other parties involved in the development or distribution of GTA EFLC or GTA EFLC Crack-Razor1911 Update Patch V1.1.1.0.

-

Use this article and the GTA EFLC Crack-Razor1911 Update Patch V1.1.1.0 at your own risk. We are not responsible for any damage or loss that may occur as a result of following this article or using the GTA EFLC Crack-Razor1911 Update Patch V1.1.1.0.

7b8c122e87
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Derivations In Physics Class 12 Cbse Pdf Download !!BETTER!!.md b/spaces/1gistliPinn/ChatGPT4/Derivations In Physics Class 12 Cbse Pdf Download !!BETTER!!.md deleted file mode 100644 index 7430c1ac27d163d06aa41711b9f0c7d6254161fb..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Derivations In Physics Class 12 Cbse Pdf Download !!BETTER!!.md +++ /dev/null @@ -1,80 +0,0 @@ -## Derivations In Physics Class 12 Cbse Pdf Download - - - - - - ![Derivations In Physics Class 12 Cbse Pdf Download !!BETTER!!](https://cdn1.byjus.com/wp-content/uploads/2021/09/OG-images-Instance-Wise-Physics.png) - - - - - -**Derivations In Physics Class 12 Cbse Pdf Download [https://lomasmavi.blogspot.com/?c=2txmNI](https://lomasmavi.blogspot.com/?c=2txmNI)** - - - - - - - - - - - - - -# How to Master Derivations in Physics Class 12 CBSE with PDF Download - - - -Physics is one of the most important subjects for students who are preparing for the CBSE board exams. Physics requires a lot of conceptual understanding and problem-solving skills, especially when it comes to derivations. Derivations are the mathematical expressions that show how a physical quantity or law is derived from the basic principles of physics. Derivations are essential for scoring high marks in physics, as they test your knowledge of the concepts and their applications. - - - -However, many students find derivations difficult and confusing, and often struggle to remember them or write them correctly in the exams. If you are one of those students who want to master derivations in physics class 12 CBSE, then this article is for you. In this article, we will provide you with some tips and tricks to learn and practice derivations effectively, as well as a PDF download link where you can access all the important derivations for physics class 12 CBSE. - - - -## Why are Derivations Important for Physics Class 12 CBSE? - - - -Derivations are important for physics class 12 CBSE for several reasons. Some of them are: - - - -- Derivations help you understand the concepts and principles of physics in depth. By deriving a formula or a law, you can see how it is related to the fundamental concepts and assumptions of physics, and how it can be applied to different situations. - -- Derivations help you improve your analytical and logical thinking skills. By deriving a formula or a law, you can learn how to use mathematical tools such as calculus, trigonometry, algebra, and vectors to manipulate physical quantities and equations. - -- Derivations help you score high marks in the exams. Derivations are often asked in the CBSE board exams, either directly or indirectly. By knowing how to derive a formula or a law, you can answer any question related to it with confidence and accuracy. - - - -## How to Learn Derivations in Physics Class 12 CBSE? - - - -Learning derivations in physics class 12 CBSE is not as hard as it may seem. You just need to follow some simple steps and strategies. Here are some of them: - - - -1. Understand the concept behind the derivation. Before you start deriving a formula or a law, make sure you understand what it means and what it is used for. Read the theory carefully and try to visualize the physical situation or phenomenon that it describes. 
For example, if you want to derive the expression for the electric potential due to a point charge, you should first understand what electric potential is, how it is measured, and how it depends on the charge and distance. - -2. Write down the given information and the required result. Before you start deriving a formula or a law, write down what is given and what is required in the derivation. This will help you organize your thoughts and plan your steps. For example, if you want to derive the expression for the electric potential due to a point charge, you should write down that the given information is the charge Q and the distance r from the charge, and the required result is the electric potential V at that point. - -3. Identify the basic principles and equations that are relevant to the derivation. Before you start deriving a formula or a law, identify which basic principles and equations of physics are relevant to the derivation. This will help you choose the right approach and avoid unnecessary complications. For example, if you want to derive the expression for the electric potential due to a point charge, you should identify that the relevant principle is Coulomb's law, which gives the force between two point charges, and the relevant equation is V = W/q0, -where W is the work done by an external agent in bringing a unit positive charge from infinity to that point. - -4. Follow a logical sequence of steps to derive the formula or law. Once you have identified the basic principles and equations that are relevant to the derivation, follow a logical sequence of steps to derive -the formula or law. Use appropriate symbols and units for all physical quantities and equations. Show all your calculations clearly and neatly. Explain each step with words if necessary. For example, if you want to dfd1c89656 - - - - - - - - - diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Anthony De Mello The Way To Love Pdf.md b/spaces/1gistliPinn/ChatGPT4/Examples/Anthony De Mello The Way To Love Pdf.md deleted file mode 100644 index f01b9fe57ae0f8e87e70228379b7789015b02c5b..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Anthony De Mello The Way To Love Pdf.md +++ /dev/null @@ -1,7 +0,0 @@ - -

Anthony de Mello was born in 1931 in Bombay (now Mumbai), India. He joined the Jesuits and became a priest, spiritual teacher, and psychotherapist, leading retreats and directing the Sadhana Institute of Pastoral Counselling in Pune. He died in 1987. He authored many books on spirituality, including Sadhana: A Way to God, The Song of the Bird, One Minute Wisdom, Awareness, and The Way to Love (published in the USA by Doubleday).

-

The Way to Love is the title given to the first of three books of spiritual reflections by Anthony de Mello, a renowned spiritual teacher and author. The Way to Love (Doubleday) is a compilation of many of his teachings on spirituality, including insights on spirituality in the Catholic Church, the spiritual life and writing, the art of prayer, and the development of the spiritual director. The Way to Love is a convenient and easy read with anecdotes, stories, and short meditations, and a subtle blend of humor, intellectual depth, and simplicity.

-

Anthony De Mello The Way To Love Pdf


Download ✺✺✺ https://imgfil.com/2uxYP2



-

No More Anxiety is the second book in the trilogy of spiritual reflections by Anthony de Mello. No More Anxiety (Doubleday) is a compilation of many of his teachings on spirituality, including insights on spirituality in the Catholic Church, the spiritual life and writing, the art of prayer, and the development of the spiritual director. No More Anxiety is a convenient and easy read with anecdotes, stories, and short meditations, and a subtle blend of humor, intellectual depth, and simplicity.

899543212b
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Descargar Libro Ecuaciones Diferenciales Moises Lazaro LINK.md b/spaces/1gistliPinn/ChatGPT4/Examples/Descargar Libro Ecuaciones Diferenciales Moises Lazaro LINK.md deleted file mode 100644 index c6d88523e5994a53a48b9e34ad9ba999edc66b13..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Descargar Libro Ecuaciones Diferenciales Moises Lazaro LINK.md +++ /dev/null @@ -1,6 +0,0 @@ -

descargar libro ecuaciones diferenciales moises lazaro


Download https://imgfil.com/2uxXtP



- -Descargar Libro Ecuaciones Diferenciales Moises Lazaro March 27th, 2019 - Descargar Solucionario Del Libro Calculo Integral Moises Lazaro Descargar libros ... 4d29de3e1b
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Explorations In Basic Biology 12th Edition Answer 45 VERIFIED.md b/spaces/1gistliPinn/ChatGPT4/Examples/Explorations In Basic Biology 12th Edition Answer 45 VERIFIED.md deleted file mode 100644 index 760e49d7802f36308064ef12ac030c18553d0a4e..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Explorations In Basic Biology 12th Edition Answer 45 VERIFIED.md +++ /dev/null @@ -1,15 +0,0 @@ -
-

How to Find the Answer to Exercise 45 in Explorations in Basic Biology 12th Edition

-

Explorations in Basic Biology is a self-contained laboratory manual designed for one- or two-semester introductory biology courses for non-biology and mixed biology majors. The exercises are appropriate for three-hour laboratory sessions, but are also adaptable to a two-hour laboratory format. The manual covers various topics, such as microscopy, cell structure, enzymes, photosynthesis, genetics, evolution, ecology, and animal diversity.

-

One of the exercises in the manual is Exercise 45: The Human Heart. This exercise introduces the students to the structure and function of the human heart, and guides them through the dissection of a sheep heart. The exercise also includes questions that test the students' understanding of the concepts and skills learned.

-

explorations in basic biology 12th edition answer 45


Download File ☆☆☆☆☆ https://imgfil.com/2uxZ5v



-

One of the questions in Exercise 45 is question 45: "What is the function of the chordae tendineae?" This question refers to the thin strands of connective tissue that attach the atrioventricular valves to the papillary muscles in the ventricles. The function of the chordae tendineae is to prevent the valves from being pushed back into the atria when the ventricles contract. This ensures that blood flows in one direction through the heart.

-

To find the answer to this question, students can refer to Figure 45.5 in the manual, which shows a diagram of a sheep heart with labels for the chordae tendineae and other structures. Students can also consult other sources, such as textbooks, websites, or videos, that explain the anatomy and physiology of the heart. For example, one possible source is [^4^], which provides a PDF version of Explorations in Basic Biology 12th Edition that can be downloaded for free.

-

By using these resources, students can learn more about the human heart and its function, and be able to answer question 45 and other questions in Exercise 45.

- -

In addition to question 45, Exercise 45 also includes questions that ask students to compare the sheep heart and the human heart, to identify the blood vessels that enter and exit the heart, to trace the pathway of blood through the heart, and to measure the heart rate and blood pressure of a human subject. These questions help students to apply their knowledge of the heart to different scenarios and contexts.

-

Exercise 45 is one of the many exercises in Explorations in Basic Biology 12th Edition that provide students with a hands-on experience of learning biology. By performing laboratory activities, students not only learn basic biological information but also gain experience practicing laboratory techniques. The manual also provides clear background information and directions for conducting laboratory activities, as well as guidelines for writing a scientific paper.

-

-

Explorations in Basic Biology 12th Edition is a useful resource for students who want to learn more about biology and its applications. The manual can be used as a standalone text or as a supplement to other biology texts. The manual is also suitable for instructors who want to offer their students a variety of options and activities for learning biology.

d5da3c52bf
-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Clash of Warpath Mod Apk A Review of the Best Strategy Game with Unlimited Money and Gems.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Clash of Warpath Mod Apk A Review of the Best Strategy Game with Unlimited Money and Gems.md deleted file mode 100644 index ae60323199be104f2faec39947cb9f858549dc2c..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Clash of Warpath Mod Apk A Review of the Best Strategy Game with Unlimited Money and Gems.md +++ /dev/null @@ -1,101 +0,0 @@ - -

Clash of Warpath Mod APK (Unlimited Money and Gems)

-

Are you looking for a thrilling and addictive strategy game that will keep you hooked for hours? Do you want to experience the thrill of building your own base, army, and empire in a fantasy world? Do you want to have unlimited resources and access to all the premium features and items in the game? If you answered yes to any of these questions, then you should definitely try Clash of Warpath Mod APK.

-

What is Clash of Warpath?

-

Clash of Warpath is a real-time strategy game that lets you create your own kingdom, recruit powerful heroes, train your troops, and fight against other players from around the world. You can also join alliances, chat with other players, trade resources, and participate in events and quests. The game has stunning graphics, smooth gameplay, and a rich storyline that will immerse you in a fantasy world full of magic, adventure, and war.

-

clash of warpath mod apk (unlimited money and gems)


Download File --->>> https://urlin.us/2uSXLH



-

Features of Clash of Warpath

-

Build your base and army

-

In Clash of Warpath, you can build your own base from scratch, customize it with various buildings, defenses, traps, and decorations. You can also upgrade your buildings to improve their functions and efficiency. You can also recruit and train different types of troops, such as infantry, cavalry, archers, mages, and siege units. Each troop has its own strengths, weaknesses, and skills that you can use strategically in battle.

-

Fight epic battles and conquer territories

-

Clash of Warpath is not just about building your base and army, but also about fighting epic battles and conquering territories. You can attack other players' bases, loot their resources, destroy their defenses, and capture their heroes. You can also defend your own base from enemy attacks, set up traps, deploy your troops, and use your heroes' abilities to repel them. You can also explore the map, capture resource points, occupy strategic locations, and expand your territory.

-

Join alliances and chat with other players

-

Clash of Warpath is more fun when you play with other players. You can join or create an alliance, chat with other members, share resources, help each other out, and coordinate attacks. You can also compete with other alliances in alliance wars, alliance tournaments, alliance quests, and alliance events. You can also make friends or enemies with other players, send them messages, gifts, or challenges.

-

Why download Clash of Warpath Mod APK?

-

Unlimited money and gems

-

One of the main reasons why you should download Clash of Warpath Mod APK is because it gives you unlimited money and gems. Money and gems are the main currencies in the game that you can use to buy various items, such as buildings, troops, heroes, equipment, chests, boosts, etc. However, money and gems are not easy to come by in the game. You have to complete tasks, win battles, or spend real money to get them. But with Clash of Warpath Mod APK, you don't have to worry about that anymore. You can have as much money and gems as you want without spending a dime.

-

Unlock premium features and items

-

Another reason why you should download Clash of Warpath Mod APK is because it unlocks all the premium features and items in the game.

Some of the premium features and items that you can unlock with Clash of Warpath Mod APK are:

- -

Enjoy the game without ads or restrictions

-

The last reason why you should download Clash of Warpath Mod APK is because it lets you enjoy the game without ads or restrictions. Ads can be annoying and distracting when you are playing the game. They can also consume your data and battery. But with Clash of Warpath Mod APK, you don't have to see any ads in the game. You can play the game smoothly and uninterrupted. Moreover, Clash of Warpath Mod APK also removes any restrictions or limitations in the game. You don't have to worry about running out of resources, energy, or time. You can play the game as much as you want and how you want.

-

How to download and install Clash of Warpath Mod APK?

-

If you are convinced by now that Clash of Warpath Mod APK is the best version of the game for you, then you might be wondering how to download and install it on your device. Don't worry, it's very easy and simple. Just follow these steps:

-

* Warpath mod apk download free with unlimited money and gems
-* How to install Warpath mod apk + OBB for Android devices
-* Warpath hack mod apk latest version v6.20.41 with unlimited resources
-* Warpath mod apk unlimited everything: money, gems, gold, oil, etc.
-* Warpath cheats mod apk: tips and tricks to win every battle
-* Warpath mod apk offline mode: play without internet connection
-* Warpath mod apk no root required: easy and safe installation
-* Warpath mod apk unlimited troops: recruit and upgrade your army
-* Warpath mod apk unlimited weapons: unlock and customize your arsenal
-* Warpath mod apk unlimited skills: activate and use special abilities
-* Warpath mod apk unlimited missions: explore and conquer the world map
-* Warpath mod apk unlimited events: participate and win rewards
-* Warpath mod apk unlimited alliances: join and cooperate with other players
-* Warpath mod apk unlimited chat: communicate and chat with friends
-* Warpath mod apk unlimited fun: enjoy the best real-time strategy game
-* Warpath modded apk free download for Android phones and tablets
-* Warpath cracked apk with unlimited money and gems for free
-* Warpath premium apk with all features unlocked and unlimited resources
-* Warpath pro apk with advanced settings and options for better performance
-* Warpath full apk with all content and updates available for download
-* Warpath hacked apk with cheat codes and mods for easy gameplay
-* Warpath patched apk with bug fixes and improvements for smooth experience
-* Warpath unlocked apk with all items and characters accessible for use
-* Warpath updated apk with new features and enhancements for more fun
-* Warpath latest apk with the most recent version and changes for download
-* Download Warpath mod apk (unlimited money and gems) from Reddit
-* Download Warpath mod apk (unlimited money and gems) from APKPure
-* Download Warpath mod apk (unlimited money and gems) from APKMirror
-* Download Warpath mod apk (unlimited money and gems) from APKMody
-* Download Warpath mod apk (unlimited money and gems) from APKDone
-* Download Warpath mod apk (unlimited money and gems) from ModDroid
-* Download Warpath mod apk (unlimited money and gems) from HappyMod
-* Download Warpath mod apk (unlimited money and gems) from PandaHelper
-* Download Warpath mod apk (unlimited money and gems) from ACMarket
-* Download Warpath mod apk (unlimited money and gems) from APKCombo

-

Step 1: Download the APK file from a trusted source

-

The first step is to download the APK file of Clash of Warpath Mod APK from a trusted source. There are many websites that offer modded APK files for various games and apps, but not all of them are safe and reliable. Some of them might contain viruses, malware, or spyware that can harm your device or steal your personal information. Therefore, you should be careful and choose a reputable website that provides authentic and updated modded APK files. One such website is [clashofwarpathmodapk.com], where you can find the latest version of Clash of Warpath Mod APK with unlimited money and gems.

-

Step 2: Enable unknown sources on your device

-

The second step is to enable unknown sources on your device. This is necessary because Clash of Warpath Mod APK is not available on the official app store, so you have to install it from an external source. To do this, you have to allow your device to install apps from unknown sources. This is a simple process that varies depending on your device model and operating system. But generally, you can follow these steps:

- -

Step 3: Install the APK file and launch the game

-

The third step is to install the APK file and launch the game. This is also very easy and quick. Just follow these steps:

- -

Step 4: Enjoy the mod features and have fun

-

The final step is to enjoy the mod features and have fun. Once you launch the game, you will notice that you have unlimited money and gems in your account. You can use them to buy anything you want in the game, such as buildings, troops, heroes, equipment, chests, boosts, etc. You will also notice that all the premium features and items are unlocked and available for you to use. You will also enjoy the game without any ads or restrictions. You can play the game as much as you want and how you want. You can build your base and army, fight epic battles and conquer territories, join alliances and chat with other players, and have fun with the game.

-

Conclusion

-

Clash of Warpath is a real-time strategy game that lets you create your own kingdom, recruit powerful heroes, train your troops, and fight against other players from around the world. It is a thrilling and addictive game that will keep you hooked for hours. However, if you want to have more fun and excitement with the game, you should download Clash of Warpath Mod APK. This modded version of the game gives you unlimited money and gems, unlocks all the premium features and items, and lets you enjoy the game without ads or restrictions. It is easy and simple to download and install on your device. Just follow the steps above and you will be ready to play the game with the mod features. So what are you waiting for? Download Clash of Warpath Mod APK now and have fun!

-

FAQs

-

Here are some frequently asked questions about Clash of Warpath Mod APK:

-

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download SmartGaga Garena 3.0 The Best Free Fire Emulator for Low End PC.md b/spaces/1phancelerku/anime-remove-background/Download SmartGaga Garena 3.0 The Best Free Fire Emulator for Low End PC.md deleted file mode 100644 index b15ec3483ee5e0b07fce40baacc10e3384a2d58d..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download SmartGaga Garena 3.0 The Best Free Fire Emulator for Low End PC.md +++ /dev/null @@ -1,129 +0,0 @@ - -

Download Smartgaga Garena 3.0: The Best Emulator for Free Fire on Low-End PC

-

If you are a fan of Free Fire, one of the most popular battle royale games on mobile devices, you might want to play it on your PC for a better gaming experience. However, not all PCs can run Free Fire smoothly, especially if they have low-end specifications. That's why you need an emulator that can optimize Free Fire for your PC without compromising the performance or quality.

-

download smartgaga garena 3.0


Download Zip >>> https://jinyurl.com/2uNSb3



-

One of the best emulators that you can use for Free Fire is Smartgaga Garena 3.0. This is a special version of Smartgaga that is designed specifically for Free Fire players who have low-end PCs. In this article, we will tell you everything you need to know about Smartgaga Garena 3.0, including its features, how to download and install it, how to optimize it for Free Fire, and its pros and cons.

-

Features of Smartgaga Garena 3.0

-

Smartgaga Garena 3.0 is not just an ordinary emulator. It is a super lite and fast emulator that can run Free Fire smoothly on any PC, even if it has only 1GB RAM or no graphics card. Here are some of the features that make Smartgaga Garena 3.0 stand out from other emulators:

- -

How to Download and Install Smartgaga Garena 3.0

-

Downloading and installing Smartgaga Garena 3.0 is very easy and simple. You just need to follow these steps:

-
    -
  1. Step 1: Go to the official website of Smartgaga Garena 3.0: The official website of Smartgaga Garena 3.0 is https://smartgagagarena.com/. You can also search for "Smartgaga Garena" on Google or any other search engine and click on the first result.
  2. -
  3. Step 2: Click on the download button and wait for the file to be downloaded: On the homepage of the website, you will see a big download button that says "Download Smartgaga Garena". Click on it and wait for the file to be downloaded. The file size is about 300 MB, so it may take some time depending on your internet speed.
  4. -
  5. Step 3: Run the installer and follow the instructions on the screen: After the file is downloaded, locate it on your PC and double-click on it to run the installer. Follow the instructions on the screen to install Smartgaga Garena 3.0 on your PC. The installation process is very quick and easy.
  6. -
  7. Step 4: Launch the emulator and log in with your Google account: After the installation is completed, launch the emulator from your desktop or start menu. You will see a welcome screen that asks you to log in with your Google account. Log in with your Google account to access the Play Store and other Google services.
  8. -
  9. Step 5: Download Free Fire from the Play Store or use the APK file: Once you are logged in, you can download Free Fire from the Play Store or use the APK file if you have it. To download Free Fire from the Play Store, click on the Play Store icon on the emulator's home screen and search for "Free Fire". Click on the install button and wait for the game to be downloaded and installed. To use the APK file, drag and drop it onto the emulator's window and follow the prompts to install it.
  10. -
-

How to Optimize Smartgaga Garena 3.0 for Free Fire

-

To get the best gaming experience with Free Fire on Smartgaga Garena 3.0, you need to optimize some settings and controls according to your PC specifications and preference. Here are some steps that you can follow to optimize Smartgaga Garena 3.0 for Free Fire:

-
    -
  1. Step 1: Adjust the resolution and frame rate according to your PC specifications: To adjust the resolution and frame rate of Free Fire on Smartgaga Garena 3.0, click on the gear icon on the top right corner of the emulator's window and go to "Settings". Under "Display", you can choose from different resolutions and frame rates that suit your PC specifications. The higher the resolution and frame rate, the better the graphics quality, but also the more resources required. Choose a balance between quality and performance that works for you.
  2. -
  3. Step 2: Enable or disable the virtualization technology (VT) depending on your CPU support: To enable or disable VT on Smartgaga Garena 3.0, go to "Settings" again and under "Engine", you will see an option called "Enable VT". VT is a feature that can improve the performance and stability of the emulator, but it may not be supported by some CPUs. To check whether your CPU supports VT, you can use a tool like https://www.intel.com/content/www/us/en/support/articles/000005486/processors.html, or run the quick command-line check sketched just after this list. If your CPU supports VT, you can enable it in Smartgaga Garena 3.0 and also in your BIOS settings. If your CPU does not support VT, you can disable it in Smartgaga Garena 3.0 and still enjoy the emulator.
  4. -
  5. Step 3: Change the graphics quality and sensitivity settings according to your preference: To change the graphics quality and sensitivity settings of Free Fire on Smartgaga Garena 3.0, launch the game and go to "Settings". Under "Graphics", you can choose from different graphics modes, such as Smooth, Standard, Ultra, and HDR. You can also adjust the brightness, shadows, anti-aliasing, and other options. Under "Sensitivity", you can adjust the sensitivity of the mouse and the touch screen for different scenarios, such as general, red dot, scope, and vehicle.
  6. -
  7. Step 4: Map the keyboard and mouse buttons for better control and accuracy: To map the keyboard and mouse buttons for Free Fire on Smartgaga Garena 3.0, click on the keyboard icon on the right side of the emulator's window and go to "Keymapping". You will see a list of preset keys that correspond to different actions in the game, such as move, aim, shoot, jump, crouch, reload, and more. You can modify these keys or add new ones according to your preference. You can also save different profiles for different games or modes.
  8. -
-
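As mentioned in Step 2 above, you can also check VT support from the command line. The small Python sketch below is only an illustrative helper and is not part of Smartgaga Garena 3.0; it assumes a Windows PC where the built-in systeminfo command is available, and it simply prints the virtualization-related lines that Windows reports.

```python
# Hypothetical helper (not part of Smartgaga Garena 3.0): print the virtualization (VT)
# status that Windows reports, by parsing the output of the built-in `systeminfo` command.
import subprocess

def check_vt_support() -> None:
    # On most Windows 10/11 systems, `systeminfo` includes a "Hyper-V Requirements" block
    # with lines such as "Virtualization Enabled In Firmware: Yes".
    output = subprocess.run(
        ["systeminfo"], capture_output=True, text=True, check=True
    ).stdout
    vt_lines = [line.strip() for line in output.splitlines() if "Virtualization" in line]
    if vt_lines:
        for line in vt_lines:
            print(line)
    else:
        print("No virtualization info reported - check for VT-x/AMD-V (SVM) in your BIOS/UEFI.")

if __name__ == "__main__":
    check_vt_support()
```

If the output shows that virtualization is not enabled in firmware, you can usually turn it on from your BIOS/UEFI menu (often listed as Intel VT-x or AMD SVM) before switching on the "Enable VT" option in the emulator.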

Pros and Cons of Smartgaga Garena 3.0

-

Smartgaga Garena 3.0 is undoubtedly one of the best emulators for Free Fire on low-end PC, but it also has some drawbacks that you should be aware of. Here are some of the pros and cons of Smartgaga Garena 3.0:

- - - - - - - - - -
ProsCons
    -
  • Lightweight, fast, stable, customizable, compatible, free
  • -
  • Can run Free Fire smoothly on any PC with low-end specifications
  • -
  • Has a turbo engine that can boost the speed and performance of Free Fire
  • -
  • Has a low latency mode that can reduce the ping and improve the network stability of Free Fire
  • -
  • Allows you to customize the settings and controls of Free Fire according to your preference
  • -
  • Can run on any Windows operating system from Windows 7 to Windows 11
  • -
  • Supports both 32-bit and 64-bit systems
  • -
    -
  • May encounter some bugs or errors occasionally
  • -
  • May require VT for some PCs to run smoothly
  • -
  • May not support some games or apps other than Free Fire
  • -
-

Conclusion

-

In conclusion, Smartgaga Garena 3.0 is a great emulator for Free Fire players who have low-end PCs. It can run Free Fire smoothly and stably on any PC with minimal resources. It also has many features that can enhance the gaming experience of Free Fire, such as customizable settings and controls, turbo engine, low latency mode, and more. It is also compatible with any Windows operating system from Windows 7 to Windows 11.

-

If you want to play Free Fire on your PC without any lag or stuttering, you should download Smartgaga Garena 3.0 from its official website and follow the steps that we have provided in this article. You will not regret it!

-

FAQs

-

Here are some of the frequently asked questions about Smartgaga Garena 3.0:

-

-

download smartgaga garena 3.0 best emulator for free fire
-download smartgaga garena 3.0 low end pc emulator
-download smartgaga garena 3.0 ob37 updated emulator
-download smartgaga garena 3.0 super lite emulator
-download smartgaga garena 3.0 for windows 10
-download smartgaga garena 3.0 for windows 7
-download smartgaga garena 3.0 for windows 11
-download smartgaga garena 3.0 for pc without graphics card
-download smartgaga garena 3.0 for pc with 1gb ram
-download smartgaga garena 3.0 for pc with 2gb ram
-download smartgaga garena 3.0 for free fire max
-download smartgaga garena 3.0 for free fire ob37
-download smartgaga garena 3.0 for free fire low end pc
-download smartgaga garena 3.0 for free fire headshot
-download smartgaga garena 3.0 for free fire gameplay
-download smartgaga garena 3.0 apk for android
-download smartgaga garena 3.0 offline installer
-download smartgaga garena 3.0 latest version
-download smartgaga garena 3.0 new version
-download smartgaga garena 3.0 best version
-how to download smartgaga garena 3.0 on pc
-how to download smartgaga garena 3.0 on laptop
-how to download smartgaga garena 3.0 on mac
-how to download smartgaga garena 3.0 on windows
-how to download smartgaga garena 3.0 and install it
-how to download smartgaga garena 3.0 and play free fire
-how to download smartgaga garena 3.0 and fix lag
-how to download smartgaga garena 3.0 and fix error
-how to download smartgaga garena 3.0 and fix mouse problem
-how to download smartgaga garena 3.0 and fix key mapping problem
-where to download smartgaga garena 3.0 for free
-where to download smartgaga garena 3.0 for pc
-where to download smartgaga garena 3.0 for laptop
-where to download smartgaga garena 3.0 for windows
-where to download smartgaga garena 3.0 apk file
-where to download smartgaga garena 3.0 offline installer file
-where to download smartgaga garena 3.0 latest version file
-where to download smartgaga garena 3.0 new version file
-where to download smartgaga garena 3.0 best version file
-why download smartgaga garena 3.0 emulator for free fire
-why download smartgaga garena 3.0 emulator for low end pc
-why download smartgaga garena 3.0 emulator for ob37 update
-why download smartgaga garena 3.0 emulator for super lite performance
-why download smartgaga garena 3.0 emulator for windows compatibility
-why download smartgaga garena 3.0 emulator for no graphics card requirement
-why download smartgaga garena 3.0 emulator for smooth gameplay
-why download smartgaga garena 3.0 emulator for easy installation
-why download smartgaga garena 3.0 emulator for best settings
-why download smartgaga garena 3.0 emulator for best sensitivity

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/7hao/bingo/src/components/ui/voice/index.tsx b/spaces/7hao/bingo/src/components/ui/voice/index.tsx deleted file mode 100644 index 4adcb632226bfced8b97092782811edf08b56569..0000000000000000000000000000000000000000 --- a/spaces/7hao/bingo/src/components/ui/voice/index.tsx +++ /dev/null @@ -1,28 +0,0 @@ -import './index.scss' - -export interface VoiceProps extends CSSPropertyRule { - num?: number; - duration?: number; -} -export default function Voice({ duration = 400, num = 7, ...others }) { - return ( -
- {Array.from({ length: num }).map((_, index) => { - const randomDuration = Math.random() * 100 + duration - const initialDelay = Math.random() * 2 * duration - const initialScale = Math.sin((index + 1) * Math.PI / num) - return ( -
- ) - })} -
- ) -} diff --git a/spaces/A00001/bingothoo/src/components/ui/dialog.tsx b/spaces/A00001/bingothoo/src/components/ui/dialog.tsx deleted file mode 100644 index 925e77fe7858fb218b5115b4e225174a886e0f02..0000000000000000000000000000000000000000 --- a/spaces/A00001/bingothoo/src/components/ui/dialog.tsx +++ /dev/null @@ -1,128 +0,0 @@ -'use client' - -import * as React from 'react' -import * as DialogPrimitive from '@radix-ui/react-dialog' - -import { cn } from '@/lib/utils' -import { IconClose } from '@/components/ui/icons' - -const Dialog = DialogPrimitive.Root - -const DialogTrigger = DialogPrimitive.Trigger - -const DialogPortal = ({ - className, - children, - ...props -}: DialogPrimitive.DialogPortalProps) => ( - -
- {children} -
-
-) -DialogPortal.displayName = DialogPrimitive.Portal.displayName - -const DialogOverlay = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DialogOverlay.displayName = DialogPrimitive.Overlay.displayName - -const DialogContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - - {children} - - - Close - - - -)) -DialogContent.displayName = DialogPrimitive.Content.displayName - -const DialogHeader = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
-) -DialogHeader.displayName = 'DialogHeader' - -const DialogFooter = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
-) -DialogFooter.displayName = 'DialogFooter' - -const DialogTitle = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DialogTitle.displayName = DialogPrimitive.Title.displayName - -const DialogDescription = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DialogDescription.displayName = DialogPrimitive.Description.displayName - -export { - Dialog, - DialogTrigger, - DialogContent, - DialogHeader, - DialogFooter, - DialogTitle, - DialogDescription -} diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/Changelog_EN.md b/spaces/AI-Hobbyist/Hoyo-RVC/Changelog_EN.md deleted file mode 100644 index 8e2a5d1d5177c6e369a6052883b2b27262278fdb..0000000000000000000000000000000000000000 --- a/spaces/AI-Hobbyist/Hoyo-RVC/Changelog_EN.md +++ /dev/null @@ -1,83 +0,0 @@ -### 2023-06-18 -- New pretrained v2 models: 32k and 48k -- Fix non-f0 model inference errors -- For training-set exceeding 1 hour, do automatic minibatch-kmeans to reduce feature shape, so that index training, adding, and searching will be much faster. -- Provide a toy vocal2guitar huggingface space -- Auto delete outlier short cut training-set audios -- Onnx export tab - -Failed experiments: -- ~~Feature retrieval: add temporal feature retrieval: not effective~~ -- ~~Feature retrieval: add PCAR dimensionality reduction: searching is even slower~~ -- ~~Random data augmentation when training: not effective~~ - -todolist: -- Vocos-RVC (tiny vocoder) -- Crepe support for training -- Half precision crepe inference -- F0 editor support - -### 2023-05-28 -- Add v2 jupyter notebook, korean changelog, fix some environment requirments -- Add voiceless consonant and breath protection mode -- Support crepe-full pitch detect -- UVR5 vocal separation: support dereverb models and de-echo models -- Add experiment name and version on the name of index -- Support users to manually select export format of output audios when batch voice conversion processing and UVR5 vocal separation -- v1 32k model training is no more supported - -### 2023-05-13 -- Clear the redundant codes in the old version of runtime in the one-click-package: infer_pack and uvr5_pack -- Fix pseudo multiprocessing bug in training set preprocessing -- Adding median filtering radius adjustment for harvest pitch recognize algorithm -- Support post processing resampling for exporting audio -- Multi processing "n_cpu" setting for training is changed from "f0 extraction" to "data preprocessing and f0 extraction" -- Automatically detect the index paths under the logs folder and provide a drop-down list function -- Add "Frequently Asked Questions and Answers" on the tab page (you can also refer to github RVC wiki) -- When inference, harvest pitch is cached when using same input audio path (purpose: using harvest pitch extraction, the entire pipeline will go through a long and repetitive pitch extraction process. If caching is not used, users who experiment with different timbre, index, and pitch median filtering radius settings will experience a very painful waiting process after the first inference) - -### 2023-05-14 -- Use volume envelope of input to mix or replace the volume envelope of output (can alleviate the problem of "input muting and output small amplitude noise". 
If the input audio background noise is high, it is not recommended to turn it on, and it is not turned on by default (1 can be considered as not turned on) -- Support saving extracted small models at a specified frequency (if you want to see the performance under different epochs, but do not want to save all large checkpoints and manually extract small models by ckpt-processing every time, this feature will be very practical) -- Resolve the issue of "connection errors" caused by the server's global proxy by setting environment variables -- Supports pre-trained v2 models (currently only 40k versions are publicly available for testing, and the other two sampling rates have not been fully trained yet) -- Limit excessive volume exceeding 1 before inference -- Slightly adjusted the settings of training-set preprocessing - - -####################### - -History changelogs: - -### 2023-04-09 -- Fixed training parameters to improve GPU utilization rate: A100 increased from 25% to around 90%, V100: 50% to around 90%, 2060S: 60% to around 85%, P40: 25% to around 95%; significantly improved training speed -- Changed parameter: total batch_size is now per GPU batch_size -- Changed total_epoch: maximum limit increased from 100 to 1000; default increased from 10 to 20 -- Fixed issue of ckpt extraction recognizing pitch incorrectly, causing abnormal inference -- Fixed issue of distributed training saving ckpt for each rank -- Applied nan feature filtering for feature extraction -- Fixed issue with silent input/output producing random consonants or noise (old models need to retrain with a new dataset) - -### 2023-04-16 Update -- Added local real-time voice changing mini-GUI, start by double-clicking go-realtime-gui.bat -- Applied filtering for frequency bands below 50Hz during training and inference -- Lowered the minimum pitch extraction of pyworld from the default 80 to 50 for training and inference, allowing male low-pitched voices between 50-80Hz not to be muted -- WebUI supports changing languages according to system locale (currently supporting en_US, ja_JP, zh_CN, zh_HK, zh_SG, zh_TW; defaults to en_US if not supported) -- Fixed recognition of some GPUs (e.g., V100-16G recognition failure, P4 recognition failure) - -### 2023-04-28 Update -- Upgraded faiss index settings for faster speed and higher quality -- Removed dependency on total_npy; future model sharing will not require total_npy input -- Unlocked restrictions for the 16-series GPUs, providing 4GB inference settings for 4GB VRAM GPUs -- Fixed bug in UVR5 vocal accompaniment separation for certain audio formats -- Real-time voice changing mini-GUI now supports non-40k and non-lazy pitch models - -### Future Plans: -Features: -- Add option: extract small models for each epoch save -- Add option: export additional mp3 to the specified path during inference -- Support multi-person training tab (up to 4 people) - -Base model: -- Collect breathing wav files to add to the training dataset to fix the issue of distorted breath sounds -- We are currently training a base model with an extended singing dataset, which will be released in the future diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/export_onnx.py b/spaces/AI-Hobbyist/Hoyo-RVC/export_onnx.py deleted file mode 100644 index 95376d4294ebc4d8972c5ab4a72454419f3e8cdf..0000000000000000000000000000000000000000 --- a/spaces/AI-Hobbyist/Hoyo-RVC/export_onnx.py +++ /dev/null @@ -1,54 +0,0 @@ -from infer_pack.models_onnx import SynthesizerTrnMsNSFsidM -import torch - -if __name__ == "__main__": - MoeVS = True 
# 模型是否为MoeVoiceStudio(原MoeSS)使用 - - ModelPath = "Shiroha/shiroha.pth" # 模型路径 - ExportedPath = "model.onnx" # 输出路径 - hidden_channels = 256 # hidden_channels,为768Vec做准备 - cpt = torch.load(ModelPath, map_location="cpu") - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - print(*cpt["config"]) - - test_phone = torch.rand(1, 200, hidden_channels) # hidden unit - test_phone_lengths = torch.tensor([200]).long() # hidden unit 长度(貌似没啥用) - test_pitch = torch.randint(size=(1, 200), low=5, high=255) # 基频(单位赫兹) - test_pitchf = torch.rand(1, 200) # nsf基频 - test_ds = torch.LongTensor([0]) # 说话人ID - test_rnd = torch.rand(1, 192, 200) # 噪声(加入随机因子) - - device = "cpu" # 导出时设备(不影响使用模型) - - net_g = SynthesizerTrnMsNSFsidM( - *cpt["config"], is_half=False - ) # fp32导出(C++要支持fp16必须手动将内存重新排列所以暂时不用fp16) - net_g.load_state_dict(cpt["weight"], strict=False) - input_names = ["phone", "phone_lengths", "pitch", "pitchf", "ds", "rnd"] - output_names = [ - "audio", - ] - # net_g.construct_spkmixmap(n_speaker) 多角色混合轨道导出 - torch.onnx.export( - net_g, - ( - test_phone.to(device), - test_phone_lengths.to(device), - test_pitch.to(device), - test_pitchf.to(device), - test_ds.to(device), - test_rnd.to(device), - ), - ExportedPath, - dynamic_axes={ - "phone": [1], - "pitch": [1], - "pitchf": [1], - "rnd": [2], - }, - do_constant_folding=False, - opset_version=16, - verbose=False, - input_names=input_names, - output_names=output_names, - ) diff --git a/spaces/AIConsultant/MusicGen/audiocraft/models/audiogen.py b/spaces/AIConsultant/MusicGen/audiocraft/models/audiogen.py deleted file mode 100644 index 6adefb97401c10422c9711d222c0857f5593dceb..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/audiocraft/models/audiogen.py +++ /dev/null @@ -1,276 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Main model for using AudioGen. This will combine all the required components -and provide easy access to the generation API. -""" - -import typing as tp - -import torch - -from .encodec import CompressionModel -from .lm import LMModel -from .builders import get_debug_compression_model, get_debug_lm_model -from .loaders import load_compression_model, load_lm_model -from ..data.audio_utils import convert_audio -from ..modules.conditioners import ConditioningAttributes -from ..utils.autocast import TorchAutocast - - -class AudioGen: - """AudioGen main model with convenient generation API. - - Args: - name (str): name of the model. - compression_model (CompressionModel): Compression model - used to map audio to invertible discrete representations. - lm (LMModel): Language model over discrete representations. - max_duration (float, optional): maximum duration the model can produce, - otherwise, inferred from the training params. 
- """ - def __init__(self, name: str, compression_model: CompressionModel, lm: LMModel, - max_duration: tp.Optional[float] = None): - self.name = name - self.compression_model = compression_model - self.lm = lm - if max_duration is None: - if hasattr(lm, 'cfg'): - max_duration = lm.cfg.dataset.segment_duration # type: ignore - else: - raise ValueError("You must provide max_duration when building directly AudioGen") - assert max_duration is not None - self.max_duration: float = max_duration - self.device = next(iter(lm.parameters())).device - self.generation_params: dict = {} - self.set_generation_params(duration=5) # 5 seconds by default - self._progress_callback: tp.Optional[tp.Callable[[int, int], None]] = None - if self.device.type == 'cpu': - self.autocast = TorchAutocast(enabled=False) - else: - self.autocast = TorchAutocast( - enabled=True, device_type=self.device.type, dtype=torch.float16) - - @property - def frame_rate(self) -> float: - """Roughly the number of AR steps per seconds.""" - return self.compression_model.frame_rate - - @property - def sample_rate(self) -> int: - """Sample rate of the generated audio.""" - return self.compression_model.sample_rate - - @property - def audio_channels(self) -> int: - """Audio channels of the generated audio.""" - return self.compression_model.channels - - @staticmethod - def get_pretrained(name: str = 'facebook/audiogen-medium', device=None): - """Return pretrained model, we provide a single model for now: - - facebook/audiogen-medium (1.5B), text to sound, - # see: https://huggingface.co/facebook/audiogen-medium - """ - if device is None: - if torch.cuda.device_count(): - device = 'cuda' - else: - device = 'cpu' - - if name == 'debug': - # used only for unit tests - compression_model = get_debug_compression_model(device, sample_rate=16000) - lm = get_debug_lm_model(device) - return AudioGen(name, compression_model, lm, max_duration=10) - - compression_model = load_compression_model(name, device=device) - lm = load_lm_model(name, device=device) - assert 'self_wav' not in lm.condition_provider.conditioners, \ - "AudioGen do not support waveform conditioning for now" - return AudioGen(name, compression_model, lm) - - def set_generation_params(self, use_sampling: bool = True, top_k: int = 250, - top_p: float = 0.0, temperature: float = 1.0, - duration: float = 10.0, cfg_coef: float = 3.0, - two_step_cfg: bool = False, extend_stride: float = 2): - """Set the generation parameters for AudioGen. - - Args: - use_sampling (bool, optional): Use sampling if True, else do argmax decoding. Defaults to True. - top_k (int, optional): top_k used for sampling. Defaults to 250. - top_p (float, optional): top_p used for sampling, when set to 0 top_k is used. Defaults to 0.0. - temperature (float, optional): Softmax temperature parameter. Defaults to 1.0. - duration (float, optional): Duration of the generated waveform. Defaults to 10.0. - cfg_coef (float, optional): Coefficient used for classifier free guidance. Defaults to 3.0. - two_step_cfg (bool, optional): If True, performs 2 forward for Classifier Free Guidance, - instead of batching together the two. This has some impact on how things - are padded but seems to have little impact in practice. - extend_stride: when doing extended generation (i.e. more than 10 seconds), by how much - should we extend the audio each time. Larger values will mean less context is - preserved, and shorter value will require extra computations. 
- """ - assert extend_stride < self.max_duration, "Cannot stride by more than max generation duration." - self.extend_stride = extend_stride - self.duration = duration - self.generation_params = { - 'use_sampling': use_sampling, - 'temp': temperature, - 'top_k': top_k, - 'top_p': top_p, - 'cfg_coef': cfg_coef, - 'two_step_cfg': two_step_cfg, - } - - def set_custom_progress_callback(self, progress_callback: tp.Optional[tp.Callable[[int, int], None]] = None): - """Override the default progress callback.""" - self._progress_callback = progress_callback - - def generate(self, descriptions: tp.List[str], progress: bool = False) -> torch.Tensor: - """Generate samples conditioned on text. - - Args: - descriptions (list of str): A list of strings used as text conditioning. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - """ - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, None) - assert prompt_tokens is None - return self._generate_tokens(attributes, prompt_tokens, progress) - - def generate_continuation(self, prompt: torch.Tensor, prompt_sample_rate: int, - descriptions: tp.Optional[tp.List[tp.Optional[str]]] = None, - progress: bool = False) -> torch.Tensor: - """Generate samples conditioned on audio prompts. - - Args: - prompt (torch.Tensor): A batch of waveforms used for continuation. - Prompt should be [B, C, T], or [C, T] if only one sample is generated. - prompt_sample_rate (int): Sampling rate of the given audio waveforms. - descriptions (list of str, optional): A list of strings used as text conditioning. Defaults to None. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - """ - if prompt.dim() == 2: - prompt = prompt[None] - if prompt.dim() != 3: - raise ValueError("prompt should have 3 dimensions: [B, C, T] (C = 1).") - prompt = convert_audio(prompt, prompt_sample_rate, self.sample_rate, self.audio_channels) - if descriptions is None: - descriptions = [None] * len(prompt) - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, prompt) - assert prompt_tokens is not None - return self._generate_tokens(attributes, prompt_tokens, progress) - - @torch.no_grad() - def _prepare_tokens_and_attributes( - self, - descriptions: tp.Sequence[tp.Optional[str]], - prompt: tp.Optional[torch.Tensor], - ) -> tp.Tuple[tp.List[ConditioningAttributes], tp.Optional[torch.Tensor]]: - """Prepare model inputs. - - Args: - descriptions (list of str): A list of strings used as text conditioning. - prompt (torch.Tensor): A batch of waveforms used for continuation. - """ - attributes = [ - ConditioningAttributes(text={'description': description}) - for description in descriptions] - - if prompt is not None: - if descriptions is not None: - assert len(descriptions) == len(prompt), "Prompt and nb. descriptions doesn't match" - prompt = prompt.to(self.device) - prompt_tokens, scale = self.compression_model.encode(prompt) - assert scale is None - else: - prompt_tokens = None - return attributes, prompt_tokens - - def _generate_tokens(self, attributes: tp.List[ConditioningAttributes], - prompt_tokens: tp.Optional[torch.Tensor], progress: bool = False) -> torch.Tensor: - """Generate discrete audio tokens given audio prompt and/or conditions. - - Args: - attributes (list of ConditioningAttributes): Conditions used for generation (here text). - prompt_tokens (torch.Tensor, optional): Audio prompt used for continuation. 
- progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - Returns: - torch.Tensor: Generated audio, of shape [B, C, T], T is defined by the generation params. - """ - i = 0 - prompt_list = attributes[0].text['description'] - total_gen_len = int(self.duration * self.frame_rate) - max_prompt_len = int(min(self.duration, self.max_duration) * self.frame_rate) - current_gen_offset: int = 0 - - def _progress_callback(generated_tokens: int, tokens_to_generate: int): - generated_tokens += current_gen_offset - if self._progress_callback is not None: - # Note that total_gen_len might be quite wrong depending on the - # codebook pattern used, but with delay it is almost accurate. - self._progress_callback(generated_tokens, total_gen_len) - else: - print(f'{generated_tokens: 6d} / {total_gen_len: 6d}', end='\r') - - if prompt_tokens is not None: - assert max_prompt_len >= prompt_tokens.shape[-1], \ - "Prompt is longer than audio to generate" - - callback = None - if progress: - callback = _progress_callback - - if self.duration <= self.max_duration: - # generate by sampling from LM, simple case. - with self.autocast: - attributes[0].text['description'] = prompt_list[0] - gen_tokens = self.lm.generate( - prompt_tokens, attributes, - callback=callback, max_gen_len=total_gen_len, **self.generation_params) - - else: - all_tokens = [] - if prompt_tokens is None: - prompt_length = 0 - else: - all_tokens.append(prompt_tokens) - prompt_length = prompt_tokens.shape[-1] - - stride_tokens = int(self.frame_rate * self.extend_stride) - - while current_gen_offset + prompt_length < total_gen_len: - time_offset = current_gen_offset / self.frame_rate - chunk_duration = min(self.duration - time_offset, self.max_duration) - max_gen_len = int(chunk_duration * self.frame_rate) - with self.autocast: - if i >= len(prompt_list): - i = len(prompt_list) - 1 - attributes[0].text['description'] = prompt_list[i] - gen_tokens = self.lm.generate( - prompt_tokens, attributes, - callback=callback, max_gen_len=max_gen_len, **self.generation_params) - i = i + 1 - if prompt_tokens is None: - all_tokens.append(gen_tokens) - else: - all_tokens.append(gen_tokens[:, :, prompt_tokens.shape[-1]:]) - prompt_tokens = gen_tokens[:, :, stride_tokens:] - prompt_length = prompt_tokens.shape[-1] - current_gen_offset += stride_tokens - - gen_tokens = torch.cat(all_tokens, dim=-1) - - # generate audio - assert gen_tokens.dim() == 3 - with torch.no_grad(): - gen_audio = self.compression_model.decode(gen_tokens, None) - return gen_audio - - def to(self, device: str): - self.compression_model.to(device) - self.lm.to(device) - return self \ No newline at end of file diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/data_gen/tts/emotion/params_data.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/data_gen/tts/emotion/params_data.py deleted file mode 100644 index bdb1716ed45617f2b127a7fb8885afe6cc74fb71..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/data_gen/tts/emotion/params_data.py +++ /dev/null @@ -1,29 +0,0 @@ - -## Mel-filterbank -mel_window_length = 25 # In milliseconds -mel_window_step = 10 # In milliseconds -mel_n_channels = 40 - - -## Audio -sampling_rate = 16000 -# Number of spectrogram frames in a partial utterance -partials_n_frames = 160 # 1600 ms -# Number of spectrogram frames at inference -inference_n_frames = 80 # 800 ms - - -## Voice Activation Detection -# Window size of the VAD. Must be either 10, 20 or 30 milliseconds. -# This sets the granularity of the VAD. 
Should not need to be changed. -vad_window_length = 30 # In milliseconds -# Number of frames to average together when performing the moving average smoothing. -# The larger this value, the larger the VAD variations must be to not get smoothed out. -vad_moving_average_width = 8 -# Maximum number of consecutive silent frames a segment can have. -vad_max_silence_length = 6 - - -## Audio volume normalization -audio_norm_target_dBFS = -30 - diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/vocoder/bigvgan/alias_free_torch/filter.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/vocoder/bigvgan/alias_free_torch/filter.py deleted file mode 100644 index 7ad6ea87c1f10ddd94c544037791d7a4634d5ae1..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/vocoder/bigvgan/alias_free_torch/filter.py +++ /dev/null @@ -1,95 +0,0 @@ -# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0 -# LICENSE is in incl_licenses directory. - -import torch -import torch.nn as nn -import torch.nn.functional as F -import math - -if 'sinc' in dir(torch): - sinc = torch.sinc -else: - # This code is adopted from adefossez's julius.core.sinc under the MIT License - # https://adefossez.github.io/julius/julius/core.html - # LICENSE is in incl_licenses directory. - def sinc(x: torch.Tensor): - """ - Implementation of sinc, i.e. sin(pi * x) / (pi * x) - __Warning__: Different to julius.sinc, the input is multiplied by `pi`! - """ - return torch.where(x == 0, - torch.tensor(1., device=x.device, dtype=x.dtype), - torch.sin(math.pi * x) / math.pi / x) - - -# This code is adopted from adefossez's julius.lowpass.LowPassFilters under the MIT License -# https://adefossez.github.io/julius/julius/lowpass.html -# LICENSE is in incl_licenses directory. -def kaiser_sinc_filter1d(cutoff, half_width, kernel_size): # return filter [1,1,kernel_size] - even = (kernel_size % 2 == 0) - half_size = kernel_size // 2 - - #For kaiser window - delta_f = 4 * half_width - A = 2.285 * (half_size - 1) * math.pi * delta_f + 7.95 - if A > 50.: - beta = 0.1102 * (A - 8.7) - elif A >= 21.: - beta = 0.5842 * (A - 21)**0.4 + 0.07886 * (A - 21.) - else: - beta = 0. - window = torch.kaiser_window(kernel_size, beta=beta, periodic=False) - - # ratio = 0.5/cutoff -> 2 * cutoff = 1 / ratio - if even: - time = (torch.arange(-half_size, half_size) + 0.5) - else: - time = torch.arange(kernel_size) - half_size - if cutoff == 0: - filter_ = torch.zeros_like(time) - else: - filter_ = 2 * cutoff * window * sinc(2 * cutoff * time) - # Normalize filter to have sum = 1, otherwise we will have a small leakage - # of the constant component in the input signal. - filter_ /= filter_.sum() - filter = filter_.view(1, 1, kernel_size) - - return filter - - -class LowPassFilter1d(nn.Module): - def __init__(self, - cutoff=0.5, - half_width=0.6, - stride: int = 1, - padding: bool = True, - padding_mode: str = 'replicate', - kernel_size: int = 12): - # kernel_size should be even number for stylegan3 setup, - # in this implementation, odd number is also possible. 
- super().__init__() - if cutoff < -0.: - raise ValueError("Minimum cutoff must be larger than zero.") - if cutoff > 0.5: - raise ValueError("A cutoff above 0.5 does not make sense.") - self.kernel_size = kernel_size - self.even = (kernel_size % 2 == 0) - self.pad_left = kernel_size // 2 - int(self.even) - self.pad_right = kernel_size // 2 - self.stride = stride - self.padding = padding - self.padding_mode = padding_mode - filter = kaiser_sinc_filter1d(cutoff, half_width, kernel_size) - self.register_buffer("filter", filter) - - #input [B, C, T] - def forward(self, x): - _, C, _ = x.shape - - if self.padding: - x = F.pad(x, (self.pad_left, self.pad_right), - mode=self.padding_mode) - out = F.conv1d(x, self.filter.expand(C, -1, -1), - stride=self.stride, groups=C) - - return out \ No newline at end of file diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/data_gen/tts/txt_processors/en.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/data_gen/tts/txt_processors/en.py deleted file mode 100644 index 6455d20404702fc8ced8571c2f187a744f7a20a9..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/data_gen/tts/txt_processors/en.py +++ /dev/null @@ -1,78 +0,0 @@ -import re -import unicodedata - -from g2p_en import G2p -from g2p_en.expand import normalize_numbers -from nltk import pos_tag -from nltk.tokenize import TweetTokenizer - -from text_to_speech.data_gen.tts.txt_processors.base_text_processor import BaseTxtProcessor, register_txt_processors -from text_to_speech.utils.text.text_encoder import PUNCS, is_sil_phoneme - - -class EnG2p(G2p): - word_tokenize = TweetTokenizer().tokenize - - def __call__(self, text): - # preprocessing - words = EnG2p.word_tokenize(text) - tokens = pos_tag(words) # tuples of (word, tag) - - # steps - prons = [] - for word, pos in tokens: - if re.search("[a-z]", word) is None: - pron = [word] - - elif word in self.homograph2features: # Check homograph - pron1, pron2, pos1 = self.homograph2features[word] - if pos.startswith(pos1): - pron = pron1 - else: - pron = pron2 - elif word in self.cmu: # lookup CMU dict - pron = self.cmu[word][0] - else: # predict for oov - pron = self.predict(word) - - prons.extend(pron) - prons.extend([" "]) - - return prons[:-1] - - -@register_txt_processors('en') -class TxtProcessor(BaseTxtProcessor): - g2p = EnG2p() - - @staticmethod - def preprocess_text(text): - text = normalize_numbers(text) - text = ''.join(char for char in unicodedata.normalize('NFD', text) - if unicodedata.category(char) != 'Mn') # Strip accents - text = text.lower() - text = re.sub("[\'\"()]+", "", text) - text = re.sub("[-]+", " ", text) - text = re.sub(f"[^ a-z{PUNCS}]", "", text) - text = re.sub(f" ?([{PUNCS}]) ?", r"\1", text) # !! -> ! - text = re.sub(f"([{PUNCS}])+", r"\1", text) # !! -> ! 
- text = text.replace("i.e.", "that is") - text = text.replace("i.e.", "that is") - text = text.replace("etc.", "etc") - text = re.sub(f"([{PUNCS}])", r" \1 ", text) - text = re.sub(rf"\s+", r" ", text) - return text - - @classmethod - def process(cls, txt, preprocess_args): - txt = cls.preprocess_text(txt).strip() - phs = cls.g2p(txt) - txt_struct = [[w, []] for w in txt.split(" ")] - i_word = 0 - for p in phs: - if p == ' ': - i_word += 1 - else: - txt_struct[i_word][1].append(p) - txt_struct = cls.postprocess(txt_struct, preprocess_args) - return txt_struct, txt diff --git a/spaces/ALSv/FSW/CONTRIBUTING.md b/spaces/ALSv/FSW/CONTRIBUTING.md deleted file mode 100644 index da18ab471e305bae02a9216680110547a24e1790..0000000000000000000000000000000000000000 --- a/spaces/ALSv/FSW/CONTRIBUTING.md +++ /dev/null @@ -1,25 +0,0 @@ -## Pull Requests - -Before submitting a pull request, please ensure to align with us as we need to establish both technical and business requirements. - - -### Do - -- ...consider to fix bugs over adding features -- ...one pull request for one feature or improvement -- ...consult us about implementation details -- ...proper testing before you submit your code -- ...resolve failed CI pipelines - - -### Don't - -- ...introduce fundamental changes in terms of software architecture -- ...introduce OOP - we accept functional programming only -- ...ignore given requirements or try to work around them -- ...submit code to a development branch without consulting us -- ...submit massive amount of code changes -- ...submit a proof of concept -- ...submit code that is using undocumented and private APIs -- ...solve third party issues in our project -- ...comment what your code does - use proper naming instead diff --git a/spaces/ARTeLab/DTM_Estimation_SRandD/models/modelNetA.py b/spaces/ARTeLab/DTM_Estimation_SRandD/models/modelNetA.py deleted file mode 100644 index 40b91f3bbf9bb829499b22f7b3e27be2d9e9d907..0000000000000000000000000000000000000000 --- a/spaces/ARTeLab/DTM_Estimation_SRandD/models/modelNetA.py +++ /dev/null @@ -1,381 +0,0 @@ -# Copyright 2021 Dakewe Biotech Corporation. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== - -# ============================================================================== -# File description: Realize the model definition function. -# ============================================================================== -import torch -import torch.nn as nn -import torch.nn.functional as F -import torchvision.models as models -from torch import Tensor - -__all__ = [ - "ResidualDenseBlock", "ResidualResidualDenseBlock", - "Discriminator", "Generator", - "DownSamplingNetwork" -] - - -class ResidualDenseBlock(nn.Module): - """Achieves densely connected convolutional layers. - `Densely Connected Convolutional Networks" ` paper. - - Args: - channels (int): The number of channels in the input image. 
- growths (int): The number of channels that increase in each layer of convolution. - """ - - def __init__(self, channels: int, growths: int) -> None: - super(ResidualDenseBlock, self).__init__() - self.conv1 = nn.Conv2d(channels + growths * 0, growths, (3, 3), (1, 1), (1, 1)) - self.conv2 = nn.Conv2d(channels + growths * 1, growths, (3, 3), (1, 1), (1, 1)) - self.conv3 = nn.Conv2d(channels + growths * 2, growths, (3, 3), (1, 1), (1, 1)) - self.conv4 = nn.Conv2d(channels + growths * 3, growths, (3, 3), (1, 1), (1, 1)) - self.conv5 = nn.Conv2d(channels + growths * 4, channels, (3, 3), (1, 1), (1, 1)) - - self.leaky_relu = nn.LeakyReLU(0.2, True) - self.identity = nn.Identity() - - def forward(self, x: Tensor) -> Tensor: - identity = x - - out1 = self.leaky_relu(self.conv1(x)) - out2 = self.leaky_relu(self.conv2(torch.cat([x, out1], 1))) - out3 = self.leaky_relu(self.conv3(torch.cat([x, out1, out2], 1))) - out4 = self.leaky_relu(self.conv4(torch.cat([x, out1, out2, out3], 1))) - out5 = self.identity(self.conv5(torch.cat([x, out1, out2, out3, out4], 1))) - out = out5 * 0.2 + identity - - return out - - - -class ResidualDenseBlock(nn.Module): - """Achieves densely connected convolutional layers. - `Densely Connected Convolutional Networks" ` paper. - - Args: - channels (int): The number of channels in the input image. - growths (int): The number of channels that increase in each layer of convolution. - """ - - def __init__(self, channels: int, growths: int) -> None: - super(ResidualDenseBlock, self).__init__() - self.conv1 = nn.Conv2d(channels + growths * 0, growths, (3, 3), (1, 1), (1, 1)) - self.conv2 = nn.Conv2d(channels + growths * 1, growths, (3, 3), (1, 1), (1, 1)) - self.conv3 = nn.Conv2d(channels + growths * 2, growths, (3, 3), (1, 1), (1, 1)) - self.conv4 = nn.Conv2d(channels + growths * 3, growths, (3, 3), (1, 1), (1, 1)) - self.conv5 = nn.Conv2d(channels + growths * 4, channels, (3, 3), (1, 1), (1, 1)) - - self.leaky_relu = nn.LeakyReLU(0.2, True) - self.identity = nn.Identity() - - def forward(self, x: Tensor) -> Tensor: - identity = x - - out1 = self.leaky_relu(self.conv1(x)) - out2 = self.leaky_relu(self.conv2(torch.cat([x, out1], 1))) - out3 = self.leaky_relu(self.conv3(torch.cat([x, out1, out2], 1))) - out4 = self.leaky_relu(self.conv4(torch.cat([x, out1, out2, out3], 1))) - out5 = self.identity(self.conv5(torch.cat([x, out1, out2, out3, out4], 1))) - out = out5 * 0.2 + identity - - return out - - - -class MiniResidualDenseBlock(nn.Module): - """Achieves densely connected convolutional layers. - `Densely Connected Convolutional Networks" ` paper. - - Args: - channels (int): The number of channels in the input image. - growths (int): The number of channels that increase in each layer of convolution. 
- """ - - def __init__(self, channels: int, growths: int) -> None: - super(MiniResidualDenseBlock, self).__init__() - self.conv1 = nn.Conv2d(channels + growths * 0, growths, (3, 3), (1, 1), (1, 1)) - self.conv2 = nn.Conv2d(channels + growths * 1, growths, (3, 3), (1, 1), (1, 1)) - self.conv3 = nn.Conv2d(channels + growths * 2, growths, (3, 3), (1, 1), (1, 1)) - self.conv4 = nn.Conv2d(channels + growths * 3, growths, (3, 3), (1, 1), (1, 1)) - self.conv5 = nn.Conv2d(channels + growths * 4, channels, (3, 3), (1, 1), (1, 1)) - - self.leaky_relu = nn.LeakyReLU(0.2, True) - - def forward(self, x: Tensor) -> Tensor: - identity = x - - out1 = self.leaky_relu(self.conv1(x)) - out2 = self.leaky_relu(self.conv2(torch.cat([x, out1], 1))) - out3 = self.leaky_relu(self.conv3(torch.cat([x, out1, out2], 1))) - out4 = self.leaky_relu(self.conv4(torch.cat([x, out1, out2, out3], 1))) - out5 = self.leaky_relu(self.conv5(torch.cat([x, out1, out2, out3, out4], 1))) - out = out5 * 0.2 + identity - - return out - - - -class ResidualResidualDenseBlock(nn.Module): - """Multi-layer residual dense convolution block. - - Args: - channels (int): The number of channels in the input image. - growths (int): The number of channels that increase in each layer of convolution. - """ - - def __init__(self, channels: int, growths: int) -> None: - super(ResidualResidualDenseBlock, self).__init__() - self.rdb1 = ResidualDenseBlock(channels, growths) - self.rdb2 = ResidualDenseBlock(channels, growths) - self.rdb3 = ResidualDenseBlock(channels, growths) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - identity = x - - out = self.rdb1(x) - out = self.rdb2(out) - out = self.rdb3(out) - out = out * 0.2 + identity - - return out - - -class MiniResidualResidualDenseBlock(nn.Module): - """Multi-layer residual dense convolution block. - - Args: - channels (int): The number of channels in the input image. - growths (int): The number of channels that increase in each layer of convolution. - """ - - def __init__(self, channels: int, growths: int) -> None: - super(MiniResidualResidualDenseBlock, self).__init__() - self.M_rdb1 = MiniResidualDenseBlock(channels, growths) - self.M_rdb2 = MiniResidualDenseBlock(channels, growths) - self.M_rdb3 = MiniResidualDenseBlock(channels, growths) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - identity = x - out = self.M_rdb1(x) - out = self.M_rdb2(out) - out = self.M_rdb3(out) - out = out * 0.2 + identity - return out - - - -class Discriminator(nn.Module): - def __init__(self) -> None: - super(Discriminator, self).__init__() - self.features = nn.Sequential( - # input size. (3) x 512 x 512 - nn.Conv2d(2, 32, (3, 3), (1, 1), (1, 1), bias=True), - nn.LeakyReLU(0.2, True), - nn.Conv2d(32, 64, (4, 4), (2, 2), (1, 1), bias=False), - nn.BatchNorm2d(64), - nn.LeakyReLU(0.2, True), - nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1), bias=False), - nn.BatchNorm2d(64), - nn.LeakyReLU(0.2, True), - # state size. (128) x 256 x 256 - nn.Conv2d(64, 128, (4, 4), (2, 2), (1, 1), bias=False), - nn.BatchNorm2d(128), - nn.LeakyReLU(0.2, True), - nn.Conv2d(128, 128, (3, 3), (1, 1), (1, 1), bias=False), - nn.BatchNorm2d(128), - nn.LeakyReLU(0.2, True), - # state size. 
(256) x 64 x 64 - nn.Conv2d(128, 256, (4, 4), (2, 2), (1, 1), bias=False), - nn.BatchNorm2d(256), - nn.LeakyReLU(0.2, True), - nn.Conv2d(256, 256, (3, 3), (1, 1), (1, 1), bias=False), - nn.BatchNorm2d(256), - nn.LeakyReLU(0.2, True), - nn.Conv2d(256, 256, (4, 4), (2, 2), (1, 1), bias=False), - nn.BatchNorm2d(256), - nn.LeakyReLU(0.2, True), - nn.Conv2d(256, 256, (3, 3), (1, 1), (1, 1), bias=False), - nn.BatchNorm2d(256), - nn.LeakyReLU(0.2, True), - # state size. (512) x 16 x 16 - nn.Conv2d(256, 256, (4, 4), (2, 2), (1, 1), bias=False), - nn.BatchNorm2d(256), - nn.LeakyReLU(0.2, True), - - nn.Conv2d(256, 256, (4, 4), (2, 2), (1, 1), bias=False), - nn.BatchNorm2d(256), - nn.LeakyReLU(0.2, True), - # state size. (512) x 8 x 8 - ) - - self.classifier = nn.Sequential( - nn.Linear(256 * 8 * 8, 100), - nn.LeakyReLU(0.2, True), - nn.Linear(100, 1), - ) - - def forward(self, x: Tensor) -> Tensor: - out = self.features(x) - out = torch.flatten(out, 1) - out = self.classifier(out) - return out - -class Generator(nn.Module): - def __init__(self) -> None: - super(Generator, self).__init__() - #RLNet - self.RLNetconv_block1 = nn.Conv2d(1, 64, (3, 3), (1, 1), (1, 1)) - RLNettrunk = [] - for _ in range(4): - RLNettrunk += [ResidualResidualDenseBlock(64, 32)] - self.RLNettrunk = nn.Sequential(*RLNettrunk) - self.RLNetconv_block2 = nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1)) - self.RLNetconv_block3 = nn.Sequential( - nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1)), - nn.LeakyReLU(0.2, True) - ) - self.RLNetconv_block4 = nn.Sequential( - nn.Conv2d(64, 1, (3, 3), (1, 1), (1, 1)), - nn.Tanh() - ) - - ############################################################################# - #Generator - self.conv_block1 = nn.Conv2d(1, 64, (3, 3), (1, 1), (1, 1)) - - trunk = [] - for _ in range(16): - trunk += [ResidualResidualDenseBlock(64, 32)] - self.trunk = nn.Sequential(*trunk) - - # After the feature extraction network, reconnect a layer of convolutional blocks. - self.conv_block2 = nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1)) - - - # Upsampling convolutional layer. - self.upsampling = nn.Sequential( - nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1)), - nn.LeakyReLU(0.2, True) - ) - - # Reconnect a layer of convolution block after upsampling. 
- self.conv_block3 = nn.Sequential( - nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1)), - nn.LeakyReLU(0.2, True) - ) - - self.conv_block4 = nn.Sequential( - nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1)), - #nn.Sigmoid() - ) - - self.conv_block0_branch0 = nn.Sequential( - nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1)), - nn.LeakyReLU(0.2, True), - nn.Conv2d(64, 128, (3, 3), (1, 1), (1, 1)), - nn.LeakyReLU(0.2, True), - nn.Conv2d(128, 128, (3, 3), (1, 1), (1, 1)), - nn.LeakyReLU(0.2, True), - nn.Conv2d(128, 64, (3, 3), (1, 1), (1, 1)), - nn.Tanh() - ) - - self.conv_block0_branch1 = nn.Sequential( - nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1)), - nn.LeakyReLU(0.2, True), - nn.Conv2d(64, 128, (3, 3), (1, 1), (1, 1)), - nn.LeakyReLU(0.2, True), - nn.Conv2d(128, 128, (3, 3), (1, 1), (1, 1)), - nn.LeakyReLU(0.2, True), - nn.Conv2d(128, 64, (3, 3), (1, 1), (1, 1)), - nn.Tanh() - ) - - self.conv_block1_branch0 = nn.Sequential( - nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1)), - nn.LeakyReLU(0.2, True), - nn.Conv2d(64, 1, (3, 3), (1, 1), (1, 1)), - #nn.LeakyReLU(0.2, True), - #nn.Conv2d(32, 1, (3, 3), (1, 1), (1, 1)), - nn.Sigmoid() - ) - - - - self.conv_block1_branch1 = nn.Sequential( - nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1)), - nn.LeakyReLU(0.2, True), - nn.Conv2d(64, 1, (3, 3), (1, 1), (1, 1)), - nn.Sigmoid()) - - - - - def _forward_impl(self, x: Tensor) -> Tensor: - #RLNet - out1 = self.RLNetconv_block1(x) - out = self.RLNettrunk(out1) - out2 = self.RLNetconv_block2(out) - out = out1 + out2 - out = self.RLNetconv_block3(out) - out = self.RLNetconv_block4(out) - rlNet_out = out + x - - #Generator - out1 = self.conv_block1(rlNet_out) - out = self.trunk(out1) - out2 = self.conv_block2(out) - out = out1 + out2 - out = self.upsampling(F.interpolate(out, scale_factor=2, mode="bicubic")) - out = self.upsampling(F.interpolate(out, scale_factor=2, mode="bicubic")) - out = self.conv_block3(out) - # - out = self.conv_block4(out) - - #demResidual = out[:, 1:2, :, :] - #grayResidual = out[:, 0:1, :, :] - - # out = self.trunkRGB(out_4) - # - # out_dem = out[:, 3:4, :, :] * 0.2 + demResidual # DEM images extracted - # out_rgb = out[:, 0:3, :, :] * 0.2 + rgbResidual # RGB images extracted - - #ra0 - #out_rgb= rgbResidual + self.conv_block0_branch0(rgbResidual) - - out_dem = out + self.conv_block0_branch1(out) #out+ tanh() - out_gray = out + self.conv_block0_branch0(out) #out+ tanh() - - out_gray = self.conv_block1_branch0(out_gray) #sigmoid() - out_dem = self.conv_block1_branch1(out_dem) #sigmoid() - - return out_gray, out_dem, rlNet_out - - - def forward(self, x: Tensor) -> Tensor: - return self._forward_impl(x) - - def _initialize_weights(self) -> None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - m.weight.data *= 0.1 - elif isinstance(m, nn.BatchNorm2d): - nn.init.constant_(m.weight, 1) - m.weight.data *= 0.1 diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb32-60e_deepfashion2_short_sleeved_shirt_256x192/__init__.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb32-60e_deepfashion2_short_sleeved_shirt_256x192/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/extensions/dataset_info.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/extensions/dataset_info.py deleted file mode 
100644 index ef0d62e43089770797ef565d2153c8d42e4956c5..0000000000000000000000000000000000000000 --- a/spaces/ATang0729/Forecast4Muses/Model/Model6/extensions/dataset_info.py +++ /dev/null @@ -1,104 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np - - -class DatasetInfo: - - def __init__(self, dataset_info): - self._dataset_info = dataset_info - self.dataset_name = self._dataset_info['dataset_name'] - self.paper_info = self._dataset_info['paper_info'] - self.keypoint_info = self._dataset_info['keypoint_info'] - self.skeleton_info = self._dataset_info['skeleton_info'] - self.joint_weights = np.array( - self._dataset_info['joint_weights'], dtype=np.float32)[:, None] - - self.sigmas = np.array(self._dataset_info['sigmas']) - - self._parse_keypoint_info() - self._parse_skeleton_info() - - def _parse_skeleton_info(self): - """Parse skeleton information. - - - link_num (int): number of links. - - skeleton (list((2,))): list of links (id). - - skeleton_name (list((2,))): list of links (name). - - pose_link_color (np.ndarray): the color of the link for - visualization. - """ - self.link_num = len(self.skeleton_info.keys()) - self.pose_link_color = [] - - self.skeleton_name = [] - self.skeleton = [] - for skid in self.skeleton_info.keys(): - link = self.skeleton_info[skid]['link'] - self.skeleton_name.append(link) - self.skeleton.append([ - self.keypoint_name2id[link[0]], self.keypoint_name2id[link[1]] - ]) - self.pose_link_color.append(self.skeleton_info[skid].get( - 'color', [255, 128, 0])) - self.pose_link_color = np.array(self.pose_link_color) - - def _parse_keypoint_info(self): - """Parse keypoint information. - - - keypoint_num (int): number of keypoints. - - keypoint_id2name (dict): mapping keypoint id to keypoint name. - - keypoint_name2id (dict): mapping keypoint name to keypoint id. - - upper_body_ids (list): a list of keypoints that belong to the - upper body. - - lower_body_ids (list): a list of keypoints that belong to the - lower body. - - flip_index (list): list of flip index (id) - - flip_pairs (list((2,))): list of flip pairs (id) - - flip_index_name (list): list of flip index (name) - - flip_pairs_name (list((2,))): list of flip pairs (name) - - pose_kpt_color (np.ndarray): the color of the keypoint for - visualization. 
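        Illustrative shape of one ``keypoint_info`` entry consumed by this
        method (field names follow the mmpose ``dataset_info`` convention;
        the concrete values are placeholders):

            keypoint_info = {
                0: dict(name='nose', color=[51, 153, 255],
                        type='upper', swap=''),
            }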
- """ - - self.keypoint_num = len(self.keypoint_info.keys()) - self.keypoint_id2name = {} - self.keypoint_name2id = {} - - self.pose_kpt_color = [] - self.upper_body_ids = [] - self.lower_body_ids = [] - - self.flip_index_name = [] - self.flip_pairs_name = [] - - for kid in self.keypoint_info.keys(): - - keypoint_name = self.keypoint_info[kid]['name'] - self.keypoint_id2name[kid] = keypoint_name - self.keypoint_name2id[keypoint_name] = kid - self.pose_kpt_color.append(self.keypoint_info[kid].get( - 'color', [255, 128, 0])) - - type = self.keypoint_info[kid].get('type', '') - if type == 'upper': - self.upper_body_ids.append(kid) - elif type == 'lower': - self.lower_body_ids.append(kid) - else: - pass - - swap_keypoint = self.keypoint_info[kid].get('swap', '') - if swap_keypoint == keypoint_name or swap_keypoint == '': - self.flip_index_name.append(keypoint_name) - else: - self.flip_index_name.append(swap_keypoint) - if [swap_keypoint, keypoint_name] not in self.flip_pairs_name: - self.flip_pairs_name.append([keypoint_name, swap_keypoint]) - - self.flip_pairs = [[ - self.keypoint_name2id[pair[0]], self.keypoint_name2id[pair[1]] - ] for pair in self.flip_pairs_name] - self.flip_index = [ - self.keypoint_name2id[name] for name in self.flip_index_name - ] - self.pose_kpt_color = np.array(self.pose_kpt_color) diff --git a/spaces/Abhay1210/prompt-generator_V1/README.md b/spaces/Abhay1210/prompt-generator_V1/README.md deleted file mode 100644 index 1912cd1860be08a67b37276c3946a308ab155b86..0000000000000000000000000000000000000000 --- a/spaces/Abhay1210/prompt-generator_V1/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Prompt-generator V1 -emoji: 🏆 -colorFrom: purple -colorTo: yellow -sdk: gradio -sdk_version: 3.38.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Abhilashvj/planogram-compliance/models/experimental.py b/spaces/Abhilashvj/planogram-compliance/models/experimental.py deleted file mode 100644 index 42644a82ae1f183e33eec7a363bb6c4c971e310c..0000000000000000000000000000000000000000 --- a/spaces/Abhilashvj/planogram-compliance/models/experimental.py +++ /dev/null @@ -1,147 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Experimental modules -""" -import math - -import numpy as np -import torch -import torch.nn as nn - -from utils.downloads import attempt_download - - -class Sum(nn.Module): - # Weighted sum of 2 or more layers https://arxiv.org/abs/1911.09070 - def __init__(self, n, weight=False): # n: number of inputs - super().__init__() - self.weight = weight # apply weights boolean - self.iter = range(n - 1) # iter object - if weight: - self.w = nn.Parameter( - -torch.arange(1.0, n) / 2, requires_grad=True - ) # layer weights - - def forward(self, x): - y = x[0] # no weight - if self.weight: - w = torch.sigmoid(self.w) * 2 - for i in self.iter: - y = y + x[i + 1] * w[i] - else: - for i in self.iter: - y = y + x[i + 1] - return y - - -class MixConv2d(nn.Module): - # Mixed Depth-wise Conv https://arxiv.org/abs/1907.09595 - def __init__( - self, c1, c2, k=(1, 3), s=1, equal_ch=True - ): # ch_in, ch_out, kernel, stride, ch_strategy - super().__init__() - n = len(k) # number of convolutions - if equal_ch: # equal c_ per group - i = torch.linspace(0, n - 1e-6, c2).floor() # c2 indices - c_ = [(i == g).sum() for g in range(n)] # intermediate channels - else: # equal weight.numel() per group - b = [c2] + [0] * n - a = np.eye(n + 1, n, k=-1) - a -= 
np.roll(a, 1, axis=1) - a *= np.array(k) ** 2 - a[0] = 1 - c_ = np.linalg.lstsq(a, b, rcond=None)[ - 0 - ].round() # solve for equal weight indices, ax = b - - self.m = nn.ModuleList( - [ - nn.Conv2d( - c1, - int(c_), - k, - s, - k // 2, - groups=math.gcd(c1, int(c_)), - bias=False, - ) - for k, c_ in zip(k, c_) - ] - ) - self.bn = nn.BatchNorm2d(c2) - self.act = nn.SiLU() - - def forward(self, x): - return self.act(self.bn(torch.cat([m(x) for m in self.m], 1))) - - -class Ensemble(nn.ModuleList): - # Ensemble of models - def __init__(self): - super().__init__() - - def forward(self, x, augment=False, profile=False, visualize=False): - y = [module(x, augment, profile, visualize)[0] for module in self] - # y = torch.stack(y).max(0)[0] # max ensemble - # y = torch.stack(y).mean(0) # mean ensemble - y = torch.cat(y, 1) # nms ensemble - return y, None # inference, train output - - -def attempt_load(weights, device=None, inplace=True, fuse=True): - # Loads an ensemble of models weights=[a,b,c] or a single model weights=[a] or weights=a - from models.yolo import Detect, Model - - model = Ensemble() - for w in weights if isinstance(weights, list) else [weights]: - ckpt = torch.load(attempt_download(w), map_location="cpu") # load - ckpt = ( - (ckpt.get("ema") or ckpt["model"]).to(device).float() - ) # FP32 model - - # Model compatibility updates - if not hasattr(ckpt, "stride"): - ckpt.stride = torch.tensor([32.0]) - if hasattr(ckpt, "names") and isinstance(ckpt.names, (list, tuple)): - ckpt.names = dict(enumerate(ckpt.names)) # convert to dict - - model.append( - ckpt.fuse().eval() - if fuse and hasattr(ckpt, "fuse") - else ckpt.eval() - ) # model in eval mode - - # Module compatibility updates - for m in model.modules(): - t = type(m) - if t in ( - nn.Hardswish, - nn.LeakyReLU, - nn.ReLU, - nn.ReLU6, - nn.SiLU, - Detect, - Model, - ): - m.inplace = inplace # torch 1.7.0 compatibility - if t is Detect and not isinstance(m.anchor_grid, list): - delattr(m, "anchor_grid") - setattr(m, "anchor_grid", [torch.zeros(1)] * m.nl) - elif t is nn.Upsample and not hasattr(m, "recompute_scale_factor"): - m.recompute_scale_factor = None # torch 1.11.0 compatibility - - # Return model - if len(model) == 1: - return model[-1] - - # Return detection ensemble - print(f"Ensemble created with {weights}\n") - for k in "names", "nc", "yaml": - setattr(model, k, getattr(model[0], k)) - model.stride = model[ - torch.argmax(torch.tensor([m.stride.max() for m in model])).int() - ].stride # max stride - assert all( - model[0].nc == m.nc for m in model - ), f"Models have different class counts: {[m.nc for m in model]}" - return model diff --git a/spaces/AgentVerse/agentVerse/agentverse_command/main_tasksolving_cli.py b/spaces/AgentVerse/agentVerse/agentverse_command/main_tasksolving_cli.py deleted file mode 100644 index 46bf2b461c429f625ba4a686859e197d135060ca..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/agentverse_command/main_tasksolving_cli.py +++ /dev/null @@ -1,34 +0,0 @@ -import os -import logging - -# from agentverse.agentverse import AgentVerse -from agentverse.tasksolving import TaskSolving -from agentverse.gui import GUI -from agentverse.logging import logger -from argparse import ArgumentParser - -parser = ArgumentParser() - -parser.add_argument( - "--task", - type=str, - default="tasksolving/brainstorming", -) -parser.add_argument("--debug", action="store_true") -parser.add_argument( - "--tasks_dir", - type=str, - default=os.path.join(os.path.dirname(__file__), "..", "agentverse", 
"tasks"), -) -args = parser.parse_args() - -logger.set_level(logging.DEBUG if args.debug else logging.INFO) - - -def cli_main(): - agentversepipeline = TaskSolving.from_task(args.task, args.tasks_dir) - agentversepipeline.run() - - -if __name__ == "__main__": - cli_main() diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/rotate/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/rotate/Factory.js deleted file mode 100644 index d30752208f74d78e84fdcf3d9e0ea2350a82280c..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/rotate/Factory.js +++ /dev/null @@ -1,11 +0,0 @@ -import Rotate from './Rotate.js'; -import ObjectFactory from '../ObjectFactory.js'; -import SetValue from '../../../plugins/utils/object/SetValue.js'; - -ObjectFactory.register('rotate', function (config) { - return new Rotate(this.scene, config); -}); - -SetValue(window, 'RexPlugins.UI.Rotate', Rotate); - -export default Rotate; \ No newline at end of file diff --git a/spaces/AlanMars/QYL-AI-Space/modules/models/StableLM.py b/spaces/AlanMars/QYL-AI-Space/modules/models/StableLM.py deleted file mode 100644 index f4affc3699e335f1e42bf5fc8c93e92a41d027fe..0000000000000000000000000000000000000000 --- a/spaces/AlanMars/QYL-AI-Space/modules/models/StableLM.py +++ /dev/null @@ -1,93 +0,0 @@ -import torch -from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, StoppingCriteria, StoppingCriteriaList, TextIteratorStreamer -import time -import numpy as np -from torch.nn import functional as F -import os -from .base_model import BaseLLMModel -from threading import Thread - -STABLELM_MODEL = None -STABLELM_TOKENIZER = None - - -class StopOnTokens(StoppingCriteria): - def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool: - stop_ids = [50278, 50279, 50277, 1, 0] - for stop_id in stop_ids: - if input_ids[0][-1] == stop_id: - return True - return False - - -class StableLM_Client(BaseLLMModel): - def __init__(self, model_name, user_name="") -> None: - super().__init__(model_name=model_name, user=user_name) - global STABLELM_MODEL, STABLELM_TOKENIZER - print(f"Starting to load StableLM to memory") - if model_name == "StableLM": - model_name = "stabilityai/stablelm-tuned-alpha-7b" - else: - model_name = f"models/{model_name}" - if STABLELM_MODEL is None: - STABLELM_MODEL = AutoModelForCausalLM.from_pretrained( - model_name, torch_dtype=torch.float16).cuda() - if STABLELM_TOKENIZER is None: - STABLELM_TOKENIZER = AutoTokenizer.from_pretrained(model_name) - self.generator = pipeline( - 'text-generation', model=STABLELM_MODEL, tokenizer=STABLELM_TOKENIZER, device=0) - print(f"Sucessfully loaded StableLM to the memory") - self.system_prompt = """StableAssistant -- StableAssistant is A helpful and harmless Open Source AI Language Model developed by Stability and CarperAI. -- StableAssistant is excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user. -- StableAssistant is more than just an information source, StableAssistant is also able to write poetry, short stories, and make jokes. 
-- StableAssistant will refuse to participate in anything that could harm a human.""" - self.max_generation_token = 1024 - self.top_p = 0.95 - self.temperature = 1.0 - - def _get_stablelm_style_input(self): - history = self.history + [{"role": "assistant", "content": ""}] - print(history) - messages = self.system_prompt + \ - "".join(["".join(["<|USER|>"+history[i]["content"], "<|ASSISTANT|>"+history[i + 1]["content"]]) - for i in range(0, len(history), 2)]) - return messages - - def _generate(self, text, bad_text=None): - stop = StopOnTokens() - result = self.generator(text, max_new_tokens=self.max_generation_token, num_return_sequences=1, num_beams=1, do_sample=True, - temperature=self.temperature, top_p=self.top_p, top_k=1000, stopping_criteria=StoppingCriteriaList([stop])) - return result[0]["generated_text"].replace(text, "") - - def get_answer_at_once(self): - messages = self._get_stablelm_style_input() - return self._generate(messages), len(messages) - - def get_answer_stream_iter(self): - stop = StopOnTokens() - messages = self._get_stablelm_style_input() - - # model_inputs = tok([messages], return_tensors="pt")['input_ids'].cuda()[:, :4096-1024] - model_inputs = STABLELM_TOKENIZER( - [messages], return_tensors="pt").to("cuda") - streamer = TextIteratorStreamer( - STABLELM_TOKENIZER, timeout=10., skip_prompt=True, skip_special_tokens=True) - generate_kwargs = dict( - model_inputs, - streamer=streamer, - max_new_tokens=self.max_generation_token, - do_sample=True, - top_p=self.top_p, - top_k=1000, - temperature=self.temperature, - num_beams=1, - stopping_criteria=StoppingCriteriaList([stop]) - ) - t = Thread(target=STABLELM_MODEL.generate, kwargs=generate_kwargs) - t.start() - - partial_text = "" - for new_text in streamer: - partial_text += new_text - yield partial_text diff --git a/spaces/Alycer/VITS-Umamusume-voice-synthesizer/text/sanskrit.py b/spaces/Alycer/VITS-Umamusume-voice-synthesizer/text/sanskrit.py deleted file mode 100644 index 0223aaac384a2f850f5bc20651fc18eb964607d0..0000000000000000000000000000000000000000 --- a/spaces/Alycer/VITS-Umamusume-voice-synthesizer/text/sanskrit.py +++ /dev/null @@ -1,62 +0,0 @@ -import re -from indic_transliteration import sanscript - - -# List of (iast, ipa) pairs: -_iast_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('a', 'ə'), - ('ā', 'aː'), - ('ī', 'iː'), - ('ū', 'uː'), - ('ṛ', 'ɹ`'), - ('ṝ', 'ɹ`ː'), - ('ḷ', 'l`'), - ('ḹ', 'l`ː'), - ('e', 'eː'), - ('o', 'oː'), - ('k', 'k⁼'), - ('k⁼h', 'kʰ'), - ('g', 'g⁼'), - ('g⁼h', 'gʰ'), - ('ṅ', 'ŋ'), - ('c', 'ʧ⁼'), - ('ʧ⁼h', 'ʧʰ'), - ('j', 'ʥ⁼'), - ('ʥ⁼h', 'ʥʰ'), - ('ñ', 'n^'), - ('ṭ', 't`⁼'), - ('t`⁼h', 't`ʰ'), - ('ḍ', 'd`⁼'), - ('d`⁼h', 'd`ʰ'), - ('ṇ', 'n`'), - ('t', 't⁼'), - ('t⁼h', 'tʰ'), - ('d', 'd⁼'), - ('d⁼h', 'dʰ'), - ('p', 'p⁼'), - ('p⁼h', 'pʰ'), - ('b', 'b⁼'), - ('b⁼h', 'bʰ'), - ('y', 'j'), - ('ś', 'ʃ'), - ('ṣ', 's`'), - ('r', 'ɾ'), - ('l̤', 'l`'), - ('h', 'ɦ'), - ("'", ''), - ('~', '^'), - ('ṃ', '^') -]] - - -def devanagari_to_ipa(text): - text = text.replace('ॐ', 'ओम्') - text = re.sub(r'\s*।\s*$', '.', text) - text = re.sub(r'\s*।\s*', ', ', text) - text = re.sub(r'\s*॥', '.', text) - text = sanscript.transliterate(text, sanscript.DEVANAGARI, sanscript.IAST) - for regex, replacement in _iast_to_ipa: - text = re.sub(regex, replacement, text) - text = re.sub('(.)[`ː]*ḥ', lambda x: x.group(0) - [:-1]+'h'+x.group(1)+'*', text) - return text diff --git a/spaces/Ameaou/academic-chatgpt3.1/Dockerfile b/spaces/Ameaou/academic-chatgpt3.1/Dockerfile deleted file mode 100644 index 
da5053dbc7fc0accbd7b10fab87ca72feced8fe8..0000000000000000000000000000000000000000 --- a/spaces/Ameaou/academic-chatgpt3.1/Dockerfile +++ /dev/null @@ -1,20 +0,0 @@ -# 此Dockerfile适用于“无本地模型”的环境构建,如果需要使用chatglm等本地模型,请参考 docs/Dockerfile+ChatGLM -# 如何构建: 先修改 `config.py`, 然后 docker build -t gpt-academic . -# 如何运行: docker run --rm -it --net=host gpt-academic -FROM python:3.11 - -RUN echo '[global]' > /etc/pip.conf && \ - echo 'index-url = https://mirrors.aliyun.com/pypi/simple/' >> /etc/pip.conf && \ - echo 'trusted-host = mirrors.aliyun.com' >> /etc/pip.conf - - -WORKDIR /gpt -COPY requirements.txt . -RUN pip3 install -r requirements.txt - -COPY . . - -# 可选步骤,用于预热模块 -RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()' - -CMD ["python3", "-u", "main.py"] diff --git a/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/test_project/cpp/libJPG/jpge.h b/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/test_project/cpp/libJPG/jpge.h deleted file mode 100644 index a46c805ab80aab491f7f9508b3a008b149866bee..0000000000000000000000000000000000000000 --- a/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/test_project/cpp/libJPG/jpge.h +++ /dev/null @@ -1,172 +0,0 @@ - -// jpge.h - C++ class for JPEG compression. -// Public domain, Rich Geldreich -// Alex Evans: Added RGBA support, linear memory allocator. -#ifndef JPEG_ENCODER_H -#define JPEG_ENCODER_H - -#include - -namespace jpge -{ - typedef unsigned char uint8; - typedef signed short int16; - typedef signed int int32; - typedef unsigned short uint16; - typedef unsigned int uint32; - typedef unsigned int uint; - - // JPEG chroma subsampling factors. Y_ONLY (grayscale images) and H2V2 (color images) are the most common. - enum subsampling_t { Y_ONLY = 0, H1V1 = 1, H2V1 = 2, H2V2 = 3 }; - - // JPEG compression parameters structure. - struct params - { - inline params() : m_quality(85), m_subsampling(H2V2), m_no_chroma_discrim_flag(false), m_two_pass_flag(false) { } - - inline bool check_valid() const - { - if ((m_quality < 1) || (m_quality > 100)) return false; - if ((uint)m_subsampling > (uint)H2V2) return false; - return true; - } - - // Quality: 1-100, higher is better. Typical values are around 50-95. - int m_quality; - - // m_subsampling: - // 0 = Y (grayscale) only - // 1 = YCbCr, no subsampling (H1V1, YCbCr 1x1x1, 3 blocks per MCU) - // 2 = YCbCr, H2V1 subsampling (YCbCr 2x1x1, 4 blocks per MCU) - // 3 = YCbCr, H2V2 subsampling (YCbCr 4x1x1, 6 blocks per MCU-- very common) - subsampling_t m_subsampling; - - // Disables CbCr discrimination - only intended for testing. - // If true, the Y quantization table is also used for the CbCr channels. - bool m_no_chroma_discrim_flag; - - bool m_two_pass_flag; - }; - - // Writes JPEG image to a file. - // num_channels must be 1 (Y) or 3 (RGB), image pitch must be width*num_channels. - bool compress_image_to_jpeg_file(const char *pFilename, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params()); - - // Writes JPEG image to memory buffer. - // On entry, buf_size is the size of the output buffer pointed at by pBuf, which should be at least ~1024 bytes. - // If return value is true, buf_size will be set to the size of the compressed data. - bool compress_image_to_jpeg_file_in_memory(void *pBuf, int64_t &buf_size, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params()); - - // Output stream abstract class - used by the jpeg_encoder class to write to the output stream. 
- // put_buf() is generally called with len==JPGE_OUT_BUF_SIZE bytes, but for headers it'll be called with smaller amounts. - class output_stream - { - public: - virtual ~output_stream() { }; - virtual bool put_buf(const void* Pbuf, int64_t len) = 0; - template inline bool put_obj(const T& obj) { return put_buf(&obj, sizeof(T)); } - }; - - // Lower level jpeg_encoder class - useful if more control is needed than the above helper functions. - class jpeg_encoder - { - public: - jpeg_encoder(); - ~jpeg_encoder(); - - // Initializes the compressor. - // pStream: The stream object to use for writing compressed data. - // params - Compression parameters structure, defined above. - // width, height - Image dimensions. - // channels - May be 1, or 3. 1 indicates grayscale, 3 indicates RGB source data. - // Returns false on out of memory or if a stream write fails. - bool init(output_stream *pStream, int64_t width, int64_t height, int64_t src_channels, const params &comp_params = params()); - - const params &get_params() const { return m_params; } - - // Deinitializes the compressor, freeing any allocated memory. May be called at any time. - void deinit(); - - uint get_total_passes() const { return m_params.m_two_pass_flag ? 2 : 1; } - inline uint get_cur_pass() { return m_pass_num; } - - // Call this method with each source scanline. - // width * src_channels bytes per scanline is expected (RGB or Y format). - // You must call with NULL after all scanlines are processed to finish compression. - // Returns false on out of memory or if a stream write fails. - bool process_scanline(const void* pScanline); - - private: - jpeg_encoder(const jpeg_encoder &); - jpeg_encoder &operator =(const jpeg_encoder &); - - typedef int32 sample_array_t; - - output_stream *m_pStream; - params m_params; - uint8 m_num_components; - uint8 m_comp_h_samp[3], m_comp_v_samp[3]; - int m_image_x, m_image_y, m_image_bpp, m_image_bpl; - int m_image_x_mcu, m_image_y_mcu; - int m_image_bpl_xlt, m_image_bpl_mcu; - int m_mcus_per_row; - int m_mcu_x, m_mcu_y; - uint8 *m_mcu_lines[16]; - uint8 m_mcu_y_ofs; - sample_array_t m_sample_array[64]; - int16 m_coefficient_array[64]; - int32 m_quantization_tables[2][64]; - uint m_huff_codes[4][256]; - uint8 m_huff_code_sizes[4][256]; - uint8 m_huff_bits[4][17]; - uint8 m_huff_val[4][256]; - uint32 m_huff_count[4][256]; - int m_last_dc_val[3]; - enum { JPGE_OUT_BUF_SIZE = 2048 }; - uint8 m_out_buf[JPGE_OUT_BUF_SIZE]; - uint8 *m_pOut_buf; - uint m_out_buf_left; - uint32 m_bit_buffer; - uint m_bits_in; - uint8 m_pass_num; - bool m_all_stream_writes_succeeded; - - void optimize_huffman_table(int table_num, int table_len); - void emit_byte(uint8 i); - void emit_word(uint i); - void emit_marker(int marker); - void emit_jfif_app0(); - void emit_dqt(); - void emit_sof(); - void emit_dht(uint8 *bits, uint8 *val, int index, bool ac_flag); - void emit_dhts(); - void emit_sos(); - void emit_markers(); - void compute_huffman_table(uint *codes, uint8 *code_sizes, uint8 *bits, uint8 *val); - void compute_quant_table(int32 *dst, int16 *src); - void adjust_quant_table(int32 *dst, int32 *src); - void first_pass_init(); - bool second_pass_init(); - bool jpg_open(int p_x_res, int p_y_res, int src_channels); - void load_block_8_8_grey(int x); - void load_block_8_8(int x, int y, int c); - void load_block_16_8(int x, int c); - void load_block_16_8_8(int x, int c); - void load_quantized_coefficients(int component_num); - void flush_output_buffer(); - void put_bits(uint bits, uint len); - void 
code_coefficients_pass_one(int component_num); - void code_coefficients_pass_two(int component_num); - void code_block(int component_num); - void process_mcu_row(); - bool terminate_pass_one(); - bool terminate_pass_two(); - bool process_end_of_image(); - void load_mcu(const void* src); - void clear(); - void init(); - }; - -} // namespace jpge - -#endif // JPEG_ENCODER \ No newline at end of file diff --git a/spaces/Andy1621/uniformer_image_detection/configs/fcos/README.md b/spaces/Andy1621/uniformer_image_detection/configs/fcos/README.md deleted file mode 100644 index c6209d2fac53c5611dc39b726cff1a853137b161..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/fcos/README.md +++ /dev/null @@ -1,35 +0,0 @@ -# FCOS: Fully Convolutional One-Stage Object Detection - -## Introduction - -[ALGORITHM] - -```latex -@article{tian2019fcos, - title={FCOS: Fully Convolutional One-Stage Object Detection}, - author={Tian, Zhi and Shen, Chunhua and Chen, Hao and He, Tong}, - journal={arXiv preprint arXiv:1904.01355}, - year={2019} -} -``` - -## Results and Models - -| Backbone | Style | GN | MS train | Tricks | DCN | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download | -|:---------:|:-------:|:-------:|:--------:|:-------:|:-------:|:-------:|:--------:|:--------------:|:------:|:------:|:--------:| -| R-50 | caffe | Y | N | N | N | 1x | 3.6 | 22.7 | 36.6 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fcos/fcos_r50_caffe_fpn_gn-head_1x_coco.py) | [model](https://openmmlab.oss-cn-hangzhou.aliyuncs.com/mmdetection/v2.0/fcos/fcos_r50_caffe_fpn_gn-head_1x_coco/fcos_r50_caffe_fpn_gn-head_1x_coco-821213aa.pth) | [log](https://openmmlab.oss-cn-hangzhou.aliyuncs.com/mmdetection/v2.0/fcos/fcos_r50_caffe_fpn_gn-head_1x_coco/20201227_180009.log.json) | -| R-50 | caffe | Y | N | Y | N | 1x | 3.7 | - | 38.7 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fcos/fcos_center-normbbox-centeronreg-giou_r50_caffe_fpn_gn-head_1x_coco.py) | [model](https://openmmlab.oss-cn-hangzhou.aliyuncs.com/mmdetection/v2.0/fcos/fcos_center-normbbox-centeronreg-giou_r50_caffe_fpn_gn-head_1x_coco/fcos_center-normbbox-centeronreg-giou_r50_caffe_fpn_gn-head_1x_coco-0a0d75a8.pth) | [log](https://openmmlab.oss-cn-hangzhou.aliyuncs.com/mmdetection/v2.0/fcos/fcos_center-normbbox-centeronreg-giou_r50_caffe_fpn_gn-head_1x_coco/20210105_135818.log.json)| -| R-50 | caffe | Y | N | Y | Y | 1x | 3.8 | - | 42.3 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fcos/fcos_center-normbbox-centeronreg-giou_r50_caffe_fpn_gn-head_dcn_1x_coco.py) | [model](https://openmmlab.oss-cn-hangzhou.aliyuncs.com/mmdetection/v2.0/fcos/fcos_center-normbbox-centeronreg-giou_r50_caffe_fpn_gn-head_dcn_1x_coco/fcos_center-normbbox-centeronreg-giou_r50_caffe_fpn_gn-head_dcn_1x_coco-ae4d8b3d.pth) | [log](https://openmmlab.oss-cn-hangzhou.aliyuncs.com/mmdetection/v2.0/fcos/fcos_center-normbbox-centeronreg-giou_r50_caffe_fpn_gn-head_dcn_1x_coco/20210105_224556.log.json)| -| R-101 | caffe | Y | N | N | N | 1x | 5.5 | 17.3 | 39.1 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fcos/fcos_r101_caffe_fpn_gn-head_1x_coco.py) | [model](https://openmmlab.oss-cn-hangzhou.aliyuncs.com/mmdetection/v2.0/fcos/fcos_r101_caffe_fpn_gn-head_1x_coco/fcos_r101_caffe_fpn_gn-head_1x_coco-0e37b982.pth) | [log](https://openmmlab.oss-cn-hangzhou.aliyuncs.com/mmdetection/v2.0/fcos/fcos_r101_caffe_fpn_gn-head_1x_coco/20210103_155046.log.json) | - 
-| Backbone | Style | GN | MS train | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download | -|:---------:|:-------:|:-------:|:--------:|:-------:|:--------:|:--------------:|:------:|:------:|:--------:| -| R-50 | caffe | Y | Y | 2x | 2.6 | 22.9 | 38.5 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fcos/fcos_r50_caffe_fpn_gn-head_mstrain_640-800_2x_coco.py) | [model](https://openmmlab.oss-cn-hangzhou.aliyuncs.com/mmdetection/v2.0/fcos/fcos_r50_caffe_fpn_gn-head_mstrain_640-800_2x_coco/fcos_r50_caffe_fpn_gn-head_mstrain_640-800_2x_coco-d92ceeea.pth) | [log](https://openmmlab.oss-cn-hangzhou.aliyuncs.com/mmdetection/v2.0/fcos/fcos_r50_caffe_fpn_gn-head_mstrain_640-800_2x_coco/20201227_161900.log.json) | -| R-101 | caffe | Y | Y | 2x | 5.5 | 17.3 | 40.8 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fcos/fcos_r101_caffe_fpn_gn-head_mstrain_640-800_2x_coco.py) | [model](https://openmmlab.oss-cn-hangzhou.aliyuncs.com/mmdetection/v2.0/fcos/fcos_r101_caffe_fpn_gn-head_mstrain_640-800_2x_coco/fcos_r101_caffe_fpn_gn-head_mstrain_640-800_2x_coco-511424d6.pth) | [log](https://openmmlab.oss-cn-hangzhou.aliyuncs.com/mmdetection/v2.0/fcos/fcos_r101_caffe_fpn_gn-head_mstrain_640-800_2x_coco/20210103_155046.log.json) | -| X-101 | pytorch | Y | Y | 2x | 10.0 | 9.7 | 42.6 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fcos/fcos_x101_64x4d_fpn_gn-head_mstrain_640-800_2x_coco.py) | [model](https://openmmlab.oss-cn-hangzhou.aliyuncs.com/mmdetection/v2.0/fcos/fcos_x101_64x4d_fpn_gn-head_mstrain_640-800_2x_coco/fcos_x101_64x4d_fpn_gn-head_mstrain_640-800_2x_coco-ede514a8.pth) | [log](https://openmmlab.oss-cn-hangzhou.aliyuncs.com/mmdetection/v2.0/fcos/fcos_x101_64x4d_fpn_gn-head_mstrain_640-800_2x_coco/20210114_133041.log.json) | - -**Notes:** - -- The X-101 backbone is X-101-64x4d. -- Tricks means setting `norm_on_bbox`, `centerness_on_reg`, `center_sampling` as `True`. -- DCN means using `DCNv2` in both backbone and head. 
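For readers reproducing these rows, the "Tricks" column corresponds to a few `bbox_head` switches in the mmdetection config. The snippet below is only a rough sketch, not part of the deleted README: the option names are assumptions based on `FCOSHead` and the `fcos_center-normbbox-centeronreg-giou` style configs, so verify the exact keys against your mmdetection version.

```python
# Rough sketch of the "Tricks" variant layered on the R-50 baseline config.
# Option names are assumptions; check your mmdetection version for exact keys.
_base_ = './fcos_r50_caffe_fpn_gn-head_1x_coco.py'

model = dict(
    bbox_head=dict(
        norm_on_bbox=True,        # normalize bbox regression targets by FPN stride
        centerness_on_reg=True,   # predict centerness from the regression branch
        center_sampling=True,     # only sample positives near ground-truth centers
        center_sample_radius=1.5,
        loss_bbox=dict(type='GIoULoss', loss_weight=1.0)))
```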
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/gfl/gfl_x101_32x4d_fpn_dconv_c4-c5_mstrain_2x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/gfl/gfl_x101_32x4d_fpn_dconv_c4-c5_mstrain_2x_coco.py deleted file mode 100644 index a2370e234dfec0099aaf74c46a3a85052d882385..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/gfl/gfl_x101_32x4d_fpn_dconv_c4-c5_mstrain_2x_coco.py +++ /dev/null @@ -1,17 +0,0 @@ -_base_ = './gfl_r50_fpn_mstrain_2x_coco.py' -model = dict( - type='GFL', - pretrained='open-mmlab://resnext101_32x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=32, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - dcn=dict(type='DCN', deform_groups=1, fallback_on_stride=False), - stage_with_dcn=(False, False, True, True), - norm_eval=True, - style='pytorch')) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/gcnet/gcnet_r50-d8_512x512_20k_voc12aug.py b/spaces/Andy1621/uniformer_image_segmentation/configs/gcnet/gcnet_r50-d8_512x512_20k_voc12aug.py deleted file mode 100644 index a496204bdb061d975c40cb7ef2aaada40e020a13..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/gcnet/gcnet_r50-d8_512x512_20k_voc12aug.py +++ /dev/null @@ -1,7 +0,0 @@ -_base_ = [ - '../_base_/models/gcnet_r50-d8.py', - '../_base_/datasets/pascal_voc12_aug.py', '../_base_/default_runtime.py', - '../_base_/schedules/schedule_20k.py' -] -model = dict( - decode_head=dict(num_classes=21), auxiliary_head=dict(num_classes=21)) diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/openai/defaults.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/openai/defaults.py deleted file mode 100644 index 2ebade82721d10ded175256c9dd6cdc800b2ed69..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/openai/defaults.py +++ /dev/null @@ -1,74 +0,0 @@ -import copy - -# Slightly different defaults for OpenAI's API -# Data type is important, Ex. 
use 0.0 for a float 0 -default_req_params = { - 'max_new_tokens': 16, # 'Inf' for chat - 'auto_max_new_tokens': False, - 'max_tokens_second': 0, - 'temperature': 1.0, - 'top_p': 1.0, - 'top_k': 1, # choose 20 for chat in absence of another default - 'repetition_penalty': 1.18, - 'repetition_penalty_range': 0, - 'encoder_repetition_penalty': 1.0, - 'suffix': None, - 'stream': False, - 'echo': False, - 'seed': -1, - # 'n' : default(body, 'n', 1), # 'n' doesn't have a direct map - 'truncation_length': 2048, # first use shared.settings value - 'add_bos_token': True, - 'do_sample': True, - 'typical_p': 1.0, - 'epsilon_cutoff': 0.0, # In units of 1e-4 - 'eta_cutoff': 0.0, # In units of 1e-4 - 'tfs': 1.0, - 'top_a': 0.0, - 'min_length': 0, - 'no_repeat_ngram_size': 0, - 'num_beams': 1, - 'penalty_alpha': 0.0, - 'length_penalty': 1.0, - 'early_stopping': False, - 'mirostat_mode': 0, - 'mirostat_tau': 5.0, - 'mirostat_eta': 0.1, - 'grammar_string': '', - 'guidance_scale': 1, - 'negative_prompt': '', - 'ban_eos_token': False, - 'custom_token_bans': '', - 'skip_special_tokens': True, - 'custom_stopping_strings': '', - # 'logits_processor' - conditionally passed - # 'stopping_strings' - temporarily used - # 'logprobs' - temporarily used - # 'requested_model' - temporarily used -} - - -def get_default_req_params(): - return copy.deepcopy(default_req_params) - - -def default(dic, key, default): - ''' - little helper to get defaults if arg is present but None and should be the same type as default. - ''' - val = dic.get(key, default) - if not isinstance(val, type(default)): - # maybe it's just something like 1 instead of 1.0 - try: - v = type(default)(val) - if type(val)(v) == val: # if it's the same value passed in, it's ok. - return v - except: - pass - - val = default - return val - - -def clamp(value, minvalue, maxvalue): - return max(minvalue, min(value, maxvalue)) diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/openai/moderations.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/openai/moderations.py deleted file mode 100644 index 1d2d4c1dacb79dfa327bfbb3a93419cfddb49d98..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/openai/moderations.py +++ /dev/null @@ -1,68 +0,0 @@ -import time - -import numpy as np -from extensions.openai.embeddings import get_embeddings -from numpy.linalg import norm - -moderations_disabled = False # return 0/false -category_embeddings = None -antonym_embeddings = None -categories = ["sexual", "hate", "harassment", "self-harm", "sexual/minors", "hate/threatening", "violence/graphic", "self-harm/intent", "self-harm/instructions", "harassment/threatening", "violence"] -flag_threshold = 0.5 - - -def get_category_embeddings() -> dict: - global category_embeddings, categories - if category_embeddings is None: - embeddings = get_embeddings(categories).tolist() - category_embeddings = dict(zip(categories, embeddings)) - - return category_embeddings - - -def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float: - return np.dot(a, b) / (norm(a) * norm(b)) - - -# seems most openai like with all-mpnet-base-v2 -def mod_score(a: np.ndarray, b: np.ndarray) -> float: - return 2.0 * np.dot(a, b) - - -def moderations(input): - global category_embeddings, categories, flag_threshold, moderations_disabled - results = { - "id": f"modr-{int(time.time()*1e9)}", - "model": "text-moderation-001", - "results": [], - } - - if moderations_disabled: - results['results'] = [{ - 
'categories': dict([(C, False) for C in categories]), - 'category_scores': dict([(C, 0.0) for C in categories]), - 'flagged': False, - }] - return results - - category_embeddings = get_category_embeddings() - - # input, string or array - if isinstance(input, str): - input = [input] - - for in_str in input: - for ine in get_embeddings([in_str]): - category_scores = dict([(C, mod_score(category_embeddings[C], ine)) for C in categories]) - category_flags = dict([(C, bool(category_scores[C] > flag_threshold)) for C in categories]) - flagged = any(category_flags.values()) - - results['results'].extend([{ - 'flagged': flagged, - 'categories': category_flags, - 'category_scores': category_scores, - }]) - - print(results) - - return results diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/perplexity_colors/script.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/perplexity_colors/script.py deleted file mode 100644 index 2a986ac40b4194a5751015241d82046ce95cbca2..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/perplexity_colors/script.py +++ /dev/null @@ -1,309 +0,0 @@ -import time - -import gradio -import numpy as np -import torch -from transformers import LogitsProcessor - -from modules import html_generator, shared - -params = { - 'active': True, - 'color_by_perplexity': False, - 'color_by_probability': False, - 'ppl_scale': 15.0, # No slider for this right now, because I don't think it really needs to be changed. Very large perplexity scores don't show up often. - 'probability_dropdown': False, - 'verbose': False # For debugging mostly -} - - -class PerplexityLogits(LogitsProcessor): - def __init__(self, verbose=False): - self.generated_token_ids = [] - self.selected_probs = [] - self.top_token_ids_list = [] - self.top_probs_list = [] - self.perplexities_list = [] - self.last_probs = None - self.verbose = verbose - - def __call__(self, input_ids, scores): - # t0 = time.time() - probs = torch.softmax(scores, dim=-1, dtype=torch.float) - log_probs = torch.nan_to_num(torch.log(probs)) # Note: This is to convert log(0) nan to 0, but probs*log_probs makes this 0 not affect the perplexity. - entropy = -torch.sum(probs * log_probs) - entropy = entropy.cpu().numpy() - perplexity = round(float(np.exp(entropy)), 4) - self.perplexities_list.append(perplexity) - last_token_id = int(input_ids[0][-1].cpu().numpy().item()) - # Store the generated tokens (not sure why this isn't accessible in the output endpoint!) - self.generated_token_ids.append(last_token_id) - # Get last probability, and add to the list if it wasn't there - if len(self.selected_probs) > 0: - # Is the selected token in the top tokens? 
- if self.verbose: - print('Probs: Token after', shared.tokenizer.decode(last_token_id)) - print('Probs:', [shared.tokenizer.decode(token_id) for token_id in self.top_token_ids_list[-1][0]]) - print('Probs:', [round(float(prob), 4) for prob in self.top_probs_list[-1][0]]) - if last_token_id in self.top_token_ids_list[-1][0]: - idx = self.top_token_ids_list[-1][0].index(last_token_id) - self.selected_probs.append(self.top_probs_list[-1][0][idx]) - else: - self.top_token_ids_list[-1][0].append(last_token_id) - last_prob = round(float(self.last_probs[last_token_id]), 4) - self.top_probs_list[-1][0].append(last_prob) - self.selected_probs.append(last_prob) - else: - self.selected_probs.append(1.0) # Placeholder for the last token of the prompt - - if self.verbose: - pplbar = "-" - if not np.isnan(perplexity): - pplbar = "*" * round(perplexity) - print(f"PPL: Token after {shared.tokenizer.decode(last_token_id)}\t{perplexity:.2f}\t{pplbar}") - - # Get top 5 probabilities - top_tokens_and_probs = torch.topk(probs, 5) - top_probs = top_tokens_and_probs.values.cpu().numpy().astype(float).tolist() - top_token_ids = top_tokens_and_probs.indices.cpu().numpy().astype(int).tolist() - - self.top_token_ids_list.append(top_token_ids) - self.top_probs_list.append(top_probs) - - probs = probs.cpu().numpy().flatten() - self.last_probs = probs # Need to keep this as a reference for top probs - - # t1 = time.time() - # print(f"PPL Processor: {(t1-t0):.3f} s") - # About 1 ms, though occasionally up to around 100 ms, not sure why... - # Doesn't actually modify the logits! - return scores - - -# Stores the perplexity and top probabilities -ppl_logits_processor = None - - -def logits_processor_modifier(logits_processor_list, input_ids): - global ppl_logits_processor - if params['active']: - ppl_logits_processor = PerplexityLogits(verbose=params['verbose']) - logits_processor_list.append(ppl_logits_processor) - - -def output_modifier(text): - global ppl_logits_processor - # t0 = time.time() - - if not params['active']: - return text - - # TODO: It's probably more efficient to do this above rather than modifying all these lists - # Remove last element of perplexities_list, top_token_ids_list, top_tokens_list, top_probs_list since everything is off by one because this extension runs before generation - perplexities = ppl_logits_processor.perplexities_list[:-1] - top_token_ids_list = ppl_logits_processor.top_token_ids_list[:-1] - top_tokens_list = [[shared.tokenizer.decode(token_id) for token_id in top_token_ids[0]] for top_token_ids in top_token_ids_list] - top_probs_list = ppl_logits_processor.top_probs_list[:-1] - # Remove first element of generated_token_ids, generated_tokens, selected_probs because they are for the last token of the prompt - gen_token_ids = ppl_logits_processor.generated_token_ids[1:] - gen_tokens = [shared.tokenizer.decode(token_id) for token_id in gen_token_ids] - sel_probs = ppl_logits_processor.selected_probs[1:] - - end_part = '
' if params['probability_dropdown'] else '' # Helps with finding the index after replacing part of the text. - - i = 0 - for token, prob, ppl, top_tokens, top_probs in zip(gen_tokens, sel_probs, perplexities, top_tokens_list, top_probs_list): - color = 'ffffff' - if params['color_by_probability'] and params['color_by_perplexity']: - color = probability_perplexity_color_scale(prob, ppl) - elif params['color_by_perplexity']: - color = perplexity_color_scale(ppl) - elif params['color_by_probability']: - color = probability_color_scale(prob) - if token in text[i:]: - if params['probability_dropdown']: - text = text[:i] + text[i:].replace(token, add_dropdown_html(token, color, top_tokens, top_probs[0], ppl), 1) - else: - text = text[:i] + text[i:].replace(token, add_color_html(token, color), 1) - i += text[i:].find(end_part) + len(end_part) - - # Use full perplexity list for calculating the average here. - print('Average perplexity:', round(np.mean(ppl_logits_processor.perplexities_list[:-1]), 4)) - # t1 = time.time() - # print(f"Modifier: {(t1-t0):.3f} s") - # About 50 ms - return text - - -def probability_color_scale(prob): - ''' - Green-yellow-red color scale - ''' - - rv = 0 - gv = 0 - if prob <= 0.5: - rv = 'ff' - gv = hex(int(255 * prob * 2))[2:] - if len(gv) < 2: - gv = '0' * (2 - len(gv)) + gv - else: - rv = hex(int(255 - 255 * (prob - 0.5) * 2))[2:] - gv = 'ff' - if len(rv) < 2: - rv = '0' * (2 - len(rv)) + rv - - return rv + gv + '00' - - -def perplexity_color_scale(ppl): - ''' - Red component only, white for 0 perplexity (sorry if you're not in dark mode) - ''' - value = hex(max(int(255.0 - params['ppl_scale'] * (float(ppl) - 1.0)), 0))[2:] - if len(value) < 2: - value = '0' * (2 - len(value)) + value - - return 'ff' + value + value - - -def probability_perplexity_color_scale(prob, ppl): - ''' - Green-yellow-red for probability and blue component for perplexity - ''' - - rv = 0 - gv = 0 - bv = hex(min(max(int(params['ppl_scale'] * (float(ppl) - 1.0)), 0), 255))[2:] - if len(bv) < 2: - bv = '0' * (2 - len(bv)) + bv - - if prob <= 0.5: - rv = 'ff' - gv = hex(int(255 * prob * 2))[2:] - if len(gv) < 2: - gv = '0' * (2 - len(gv)) + gv - else: - rv = hex(int(255 - 255 * (prob - 0.5) * 2))[2:] - gv = 'ff' - if len(rv) < 2: - rv = '0' * (2 - len(rv)) + rv - - return rv + gv + bv - - -def add_color_html(token, color): - return f'{token}' - - -# TODO: Major issue: Applying this to too many tokens will cause a permanent slowdown in generation speed until the messages are removed from the history. -# I think the issue is from HTML elements taking up space in the visible history, and things like history deepcopy add latency proportional to the size of the history. -# Potential solution is maybe to modify the main generation code to send just the internal text and not the visible history, to avoid moving too much around. -# I wonder if we can also avoid using deepcopy here. -def add_dropdown_html(token, color, top_tokens, top_probs, perplexity=0): - html = f'
{token}
' - return html # About 750 characters per token... - - -def custom_css(): - return """ - .dropdown { - display: none; - position: absolute; - z-index: 50; - background-color: var(--block-background-fill); - box-shadow: 0px 8px 16px 0px rgba(0,0,0,0.2); - width: max-content; - overflow: visible; - padding: 5px; - border-radius: 10px; - border: 1px solid var(--border-color-primary); - } - - .dropdown-content { - border: none; - z-index: 50; - } - - .dropdown-content tr.selected { - background-color: var(--block-label-background-fill); - } - - .dropdown-content td { - color: var(--body-text-color); - } - - .hoverable { - color: var(--body-text-color); - position: relative; - display: inline-block; - overflow: visible; - font-size: 15px; - line-height: 1.75; - margin: 0; - padding: 0; - } - - .hoverable:hover .dropdown { - display: block; - } - - pre { - white-space: pre-wrap; - } - - # TODO: This makes the hover menus extend outside the bounds of the chat area, which is good. - # However, it also makes the scrollbar disappear, which is bad. - # The scroll bar needs to still be present. So for now, we can't see dropdowns that extend past the edge of the chat area. - #.chat { - # overflow-y: auto; - #} - """ - - -# Monkeypatch applied to html_generator.py -# We simply don't render markdown into HTML. We wrap everything in
<pre> tags to preserve whitespace
-# formatting. If you're coloring tokens by perplexity or probability, or especially if you're using
-# the probability dropdown, you probably care more about seeing the tokens the model actually outputted
-# rather than rendering ```code blocks``` or *italics*.
-def convert_to_markdown(string):
-    return '<pre>
' + string + '</pre>
' - - -html_generator.convert_to_markdown = convert_to_markdown - - -def ui(): - def update_active_check(x): - params.update({'active': x}) - - def update_color_by_ppl_check(x): - params.update({'color_by_perplexity': x}) - - def update_color_by_prob_check(x): - params.update({'color_by_probability': x}) - - def update_prob_dropdown_check(x): - params.update({'probability_dropdown': x}) - - active_check = gradio.Checkbox(value=True, label="Compute probabilities and perplexity scores", info="Activate this extension. Note that this extension currently does not work with exllama or llama.cpp.") - color_by_ppl_check = gradio.Checkbox(value=False, label="Color by perplexity", info="Higher perplexity is more red. If also showing probability, higher perplexity has more blue component.") - color_by_prob_check = gradio.Checkbox(value=False, label="Color by probability", info="Green-yellow-red linear scale, with 100% green, 50% yellow, 0% red.") - prob_dropdown_check = gradio.Checkbox(value=False, label="Probability dropdown", info="Hover over a token to show a dropdown of top token probabilities. Currently slightly buggy with whitespace between tokens.") - - active_check.change(update_active_check, active_check, None) - color_by_ppl_check.change(update_color_by_ppl_check, color_by_ppl_check, None) - color_by_prob_check.change(update_color_by_prob_check, color_by_prob_check, None) - prob_dropdown_check.change(update_prob_dropdown_check, prob_dropdown_check, None) diff --git a/spaces/Apex-X/GODROOP/roop/processors/frame/__init__.py b/spaces/Apex-X/GODROOP/roop/processors/frame/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Archan/ArXivAudio/search.py b/spaces/Archan/ArXivAudio/search.py deleted file mode 100644 index 95c35a301d7e14a995dcab9b2bdc625b04df3745..0000000000000000000000000000000000000000 --- a/spaces/Archan/ArXivAudio/search.py +++ /dev/null @@ -1,21 +0,0 @@ -import arxiv - - -def search(query="", max_results=10, sort_by="Relevance", sort_order="Descending"): - - sr_by_dict = {"Relevance": arxiv.SortCriterion.Relevance, "Last Updated Date": - arxiv.SortCriterion.LastUpdatedDate, "Submitted Date": arxiv.SortCriterion.SubmittedDate} - sr_or_dict = {"Descending": arxiv.SortOrder.Descending, - "Ascending": arxiv.SortOrder.Ascending} - - search = arxiv.Search( - query=query, - max_results=max_results, - sort_by=sr_by_dict[sort_by], - sort_order=sr_or_dict[sort_order]) - src_lst = [] - for i in search.results(): - id = i.entry_id.split("/") - src_lst.append(i.title+" - " + str(id[-1])) - - return src_lst \ No newline at end of file diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/req/req_file.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/req/req_file.py deleted file mode 100644 index f717c1ccc79f7581f1293b3fcf1a0764def7a84a..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/req/req_file.py +++ /dev/null @@ -1,552 +0,0 @@ -""" -Requirements file parsing -""" - -import logging -import optparse -import os -import re -import shlex -import urllib.parse -from optparse import Values -from typing import ( - TYPE_CHECKING, - Any, - Callable, - Dict, - Generator, - Iterable, - List, - Optional, - Tuple, -) - -from pip._internal.cli import cmdoptions -from pip._internal.exceptions import InstallationError, 
RequirementsFileParseError -from pip._internal.models.search_scope import SearchScope -from pip._internal.network.session import PipSession -from pip._internal.network.utils import raise_for_status -from pip._internal.utils.encoding import auto_decode -from pip._internal.utils.urls import get_url_scheme - -if TYPE_CHECKING: - # NoReturn introduced in 3.6.2; imported only for type checking to maintain - # pip compatibility with older patch versions of Python 3.6 - from typing import NoReturn - - from pip._internal.index.package_finder import PackageFinder - -__all__ = ["parse_requirements"] - -ReqFileLines = Iterable[Tuple[int, str]] - -LineParser = Callable[[str], Tuple[str, Values]] - -SCHEME_RE = re.compile(r"^(http|https|file):", re.I) -COMMENT_RE = re.compile(r"(^|\s+)#.*$") - -# Matches environment variable-style values in '${MY_VARIABLE_1}' with the -# variable name consisting of only uppercase letters, digits or the '_' -# (underscore). This follows the POSIX standard defined in IEEE Std 1003.1, -# 2013 Edition. -ENV_VAR_RE = re.compile(r"(?P\$\{(?P[A-Z0-9_]+)\})") - -SUPPORTED_OPTIONS: List[Callable[..., optparse.Option]] = [ - cmdoptions.index_url, - cmdoptions.extra_index_url, - cmdoptions.no_index, - cmdoptions.constraints, - cmdoptions.requirements, - cmdoptions.editable, - cmdoptions.find_links, - cmdoptions.no_binary, - cmdoptions.only_binary, - cmdoptions.prefer_binary, - cmdoptions.require_hashes, - cmdoptions.pre, - cmdoptions.trusted_host, - cmdoptions.use_new_feature, -] - -# options to be passed to requirements -SUPPORTED_OPTIONS_REQ: List[Callable[..., optparse.Option]] = [ - cmdoptions.global_options, - cmdoptions.hash, - cmdoptions.config_settings, -] - -# the 'dest' string values -SUPPORTED_OPTIONS_REQ_DEST = [str(o().dest) for o in SUPPORTED_OPTIONS_REQ] - -logger = logging.getLogger(__name__) - - -class ParsedRequirement: - def __init__( - self, - requirement: str, - is_editable: bool, - comes_from: str, - constraint: bool, - options: Optional[Dict[str, Any]] = None, - line_source: Optional[str] = None, - ) -> None: - self.requirement = requirement - self.is_editable = is_editable - self.comes_from = comes_from - self.options = options - self.constraint = constraint - self.line_source = line_source - - -class ParsedLine: - def __init__( - self, - filename: str, - lineno: int, - args: str, - opts: Values, - constraint: bool, - ) -> None: - self.filename = filename - self.lineno = lineno - self.opts = opts - self.constraint = constraint - - if args: - self.is_requirement = True - self.is_editable = False - self.requirement = args - elif opts.editables: - self.is_requirement = True - self.is_editable = True - # We don't support multiple -e on one line - self.requirement = opts.editables[0] - else: - self.is_requirement = False - - -def parse_requirements( - filename: str, - session: PipSession, - finder: Optional["PackageFinder"] = None, - options: Optional[optparse.Values] = None, - constraint: bool = False, -) -> Generator[ParsedRequirement, None, None]: - """Parse a requirements file and yield ParsedRequirement instances. - - :param filename: Path or url of requirements file. - :param session: PipSession instance. - :param finder: Instance of pip.index.PackageFinder. - :param options: cli options. - :param constraint: If true, parsing a constraint file rather than - requirements file. 
- """ - line_parser = get_line_parser(finder) - parser = RequirementsFileParser(session, line_parser) - - for parsed_line in parser.parse(filename, constraint): - parsed_req = handle_line( - parsed_line, options=options, finder=finder, session=session - ) - if parsed_req is not None: - yield parsed_req - - -def preprocess(content: str) -> ReqFileLines: - """Split, filter, and join lines, and return a line iterator - - :param content: the content of the requirements file - """ - lines_enum: ReqFileLines = enumerate(content.splitlines(), start=1) - lines_enum = join_lines(lines_enum) - lines_enum = ignore_comments(lines_enum) - lines_enum = expand_env_variables(lines_enum) - return lines_enum - - -def handle_requirement_line( - line: ParsedLine, - options: Optional[optparse.Values] = None, -) -> ParsedRequirement: - # preserve for the nested code path - line_comes_from = "{} {} (line {})".format( - "-c" if line.constraint else "-r", - line.filename, - line.lineno, - ) - - assert line.is_requirement - - if line.is_editable: - # For editable requirements, we don't support per-requirement - # options, so just return the parsed requirement. - return ParsedRequirement( - requirement=line.requirement, - is_editable=line.is_editable, - comes_from=line_comes_from, - constraint=line.constraint, - ) - else: - # get the options that apply to requirements - req_options = {} - for dest in SUPPORTED_OPTIONS_REQ_DEST: - if dest in line.opts.__dict__ and line.opts.__dict__[dest]: - req_options[dest] = line.opts.__dict__[dest] - - line_source = f"line {line.lineno} of {line.filename}" - return ParsedRequirement( - requirement=line.requirement, - is_editable=line.is_editable, - comes_from=line_comes_from, - constraint=line.constraint, - options=req_options, - line_source=line_source, - ) - - -def handle_option_line( - opts: Values, - filename: str, - lineno: int, - finder: Optional["PackageFinder"] = None, - options: Optional[optparse.Values] = None, - session: Optional[PipSession] = None, -) -> None: - if opts.hashes: - logger.warning( - "%s line %s has --hash but no requirement, and will be ignored.", - filename, - lineno, - ) - - if options: - # percolate options upward - if opts.require_hashes: - options.require_hashes = opts.require_hashes - if opts.features_enabled: - options.features_enabled.extend( - f for f in opts.features_enabled if f not in options.features_enabled - ) - - # set finder options - if finder: - find_links = finder.find_links - index_urls = finder.index_urls - no_index = finder.search_scope.no_index - if opts.no_index is True: - no_index = True - index_urls = [] - if opts.index_url and not no_index: - index_urls = [opts.index_url] - if opts.extra_index_urls and not no_index: - index_urls.extend(opts.extra_index_urls) - if opts.find_links: - # FIXME: it would be nice to keep track of the source - # of the find_links: support a find-links local path - # relative to a requirements file. 
- value = opts.find_links[0] - req_dir = os.path.dirname(os.path.abspath(filename)) - relative_to_reqs_file = os.path.join(req_dir, value) - if os.path.exists(relative_to_reqs_file): - value = relative_to_reqs_file - find_links.append(value) - - if session: - # We need to update the auth urls in session - session.update_index_urls(index_urls) - - search_scope = SearchScope( - find_links=find_links, - index_urls=index_urls, - no_index=no_index, - ) - finder.search_scope = search_scope - - if opts.pre: - finder.set_allow_all_prereleases() - - if opts.prefer_binary: - finder.set_prefer_binary() - - if session: - for host in opts.trusted_hosts or []: - source = f"line {lineno} of {filename}" - session.add_trusted_host(host, source=source) - - -def handle_line( - line: ParsedLine, - options: Optional[optparse.Values] = None, - finder: Optional["PackageFinder"] = None, - session: Optional[PipSession] = None, -) -> Optional[ParsedRequirement]: - """Handle a single parsed requirements line; This can result in - creating/yielding requirements, or updating the finder. - - :param line: The parsed line to be processed. - :param options: CLI options. - :param finder: The finder - updated by non-requirement lines. - :param session: The session - updated by non-requirement lines. - - Returns a ParsedRequirement object if the line is a requirement line, - otherwise returns None. - - For lines that contain requirements, the only options that have an effect - are from SUPPORTED_OPTIONS_REQ, and they are scoped to the - requirement. Other options from SUPPORTED_OPTIONS may be present, but are - ignored. - - For lines that do not contain requirements, the only options that have an - effect are from SUPPORTED_OPTIONS. Options from SUPPORTED_OPTIONS_REQ may - be present, but are ignored. These lines may contain multiple options - (although our docs imply only one is supported), and all our parsed and - affect the finder. 
- """ - - if line.is_requirement: - parsed_req = handle_requirement_line(line, options) - return parsed_req - else: - handle_option_line( - line.opts, - line.filename, - line.lineno, - finder, - options, - session, - ) - return None - - -class RequirementsFileParser: - def __init__( - self, - session: PipSession, - line_parser: LineParser, - ) -> None: - self._session = session - self._line_parser = line_parser - - def parse( - self, filename: str, constraint: bool - ) -> Generator[ParsedLine, None, None]: - """Parse a given file, yielding parsed lines.""" - yield from self._parse_and_recurse(filename, constraint) - - def _parse_and_recurse( - self, filename: str, constraint: bool - ) -> Generator[ParsedLine, None, None]: - for line in self._parse_file(filename, constraint): - if not line.is_requirement and ( - line.opts.requirements or line.opts.constraints - ): - # parse a nested requirements file - if line.opts.requirements: - req_path = line.opts.requirements[0] - nested_constraint = False - else: - req_path = line.opts.constraints[0] - nested_constraint = True - - # original file is over http - if SCHEME_RE.search(filename): - # do a url join so relative paths work - req_path = urllib.parse.urljoin(filename, req_path) - # original file and nested file are paths - elif not SCHEME_RE.search(req_path): - # do a join so relative paths work - req_path = os.path.join( - os.path.dirname(filename), - req_path, - ) - - yield from self._parse_and_recurse(req_path, nested_constraint) - else: - yield line - - def _parse_file( - self, filename: str, constraint: bool - ) -> Generator[ParsedLine, None, None]: - _, content = get_file_content(filename, self._session) - - lines_enum = preprocess(content) - - for line_number, line in lines_enum: - try: - args_str, opts = self._line_parser(line) - except OptionParsingError as e: - # add offending line - msg = f"Invalid requirement: {line}\n{e.msg}" - raise RequirementsFileParseError(msg) - - yield ParsedLine( - filename, - line_number, - args_str, - opts, - constraint, - ) - - -def get_line_parser(finder: Optional["PackageFinder"]) -> LineParser: - def parse_line(line: str) -> Tuple[str, Values]: - # Build new parser for each line since it accumulates appendable - # options. - parser = build_parser() - defaults = parser.get_default_values() - defaults.index_url = None - if finder: - defaults.format_control = finder.format_control - - args_str, options_str = break_args_options(line) - - try: - options = shlex.split(options_str) - except ValueError as e: - raise OptionParsingError(f"Could not split options: {options_str}") from e - - opts, _ = parser.parse_args(options, defaults) - - return args_str, opts - - return parse_line - - -def break_args_options(line: str) -> Tuple[str, str]: - """Break up the line into an args and options string. We only want to shlex - (and then optparse) the options, not the args. args can contain markers - which are corrupted by shlex. 
- """ - tokens = line.split(" ") - args = [] - options = tokens[:] - for token in tokens: - if token.startswith("-") or token.startswith("--"): - break - else: - args.append(token) - options.pop(0) - return " ".join(args), " ".join(options) - - -class OptionParsingError(Exception): - def __init__(self, msg: str) -> None: - self.msg = msg - - -def build_parser() -> optparse.OptionParser: - """ - Return a parser for parsing requirement lines - """ - parser = optparse.OptionParser(add_help_option=False) - - option_factories = SUPPORTED_OPTIONS + SUPPORTED_OPTIONS_REQ - for option_factory in option_factories: - option = option_factory() - parser.add_option(option) - - # By default optparse sys.exits on parsing errors. We want to wrap - # that in our own exception. - def parser_exit(self: Any, msg: str) -> "NoReturn": - raise OptionParsingError(msg) - - # NOTE: mypy disallows assigning to a method - # https://github.com/python/mypy/issues/2427 - parser.exit = parser_exit # type: ignore - - return parser - - -def join_lines(lines_enum: ReqFileLines) -> ReqFileLines: - """Joins a line ending in '\' with the previous line (except when following - comments). The joined line takes on the index of the first line. - """ - primary_line_number = None - new_line: List[str] = [] - for line_number, line in lines_enum: - if not line.endswith("\\") or COMMENT_RE.match(line): - if COMMENT_RE.match(line): - # this ensures comments are always matched later - line = " " + line - if new_line: - new_line.append(line) - assert primary_line_number is not None - yield primary_line_number, "".join(new_line) - new_line = [] - else: - yield line_number, line - else: - if not new_line: - primary_line_number = line_number - new_line.append(line.strip("\\")) - - # last line contains \ - if new_line: - assert primary_line_number is not None - yield primary_line_number, "".join(new_line) - - # TODO: handle space after '\'. - - -def ignore_comments(lines_enum: ReqFileLines) -> ReqFileLines: - """ - Strips comments and filter empty lines. - """ - for line_number, line in lines_enum: - line = COMMENT_RE.sub("", line) - line = line.strip() - if line: - yield line_number, line - - -def expand_env_variables(lines_enum: ReqFileLines) -> ReqFileLines: - """Replace all environment variables that can be retrieved via `os.getenv`. - - The only allowed format for environment variables defined in the - requirement file is `${MY_VARIABLE_1}` to ensure two things: - - 1. Strings that contain a `$` aren't accidentally (partially) expanded. - 2. Ensure consistency across platforms for requirement files. - - These points are the result of a discussion on the `github pull - request #3514 `_. - - Valid characters in variable names follow the `POSIX standard - `_ and are limited - to uppercase letter, digits and the `_` (underscore). - """ - for line_number, line in lines_enum: - for env_var, var_name in ENV_VAR_RE.findall(line): - value = os.getenv(var_name) - if not value: - continue - - line = line.replace(env_var, value) - - yield line_number, line - - -def get_file_content(url: str, session: PipSession) -> Tuple[str, str]: - """Gets the content of a file; it may be a filename, file: URL, or - http: URL. Returns (location, content). Content is unicode. - Respects # -*- coding: declarations on the retrieved files. - - :param url: File path or url. - :param session: PipSession instance. - """ - scheme = get_url_scheme(url) - - # Pip has special support for file:// URLs (LocalFSAdapter). 
- if scheme in ["http", "https", "file"]: - resp = session.get(url) - raise_for_status(resp) - return resp.url, resp.text - - # Assume this is a bare path. - try: - with open(url, "rb") as f: - content = auto_decode(f.read()) - except OSError as exc: - raise InstallationError(f"Could not open requirements file: {exc}") - return url, content diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/contrib/ntlmpool.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/contrib/ntlmpool.py deleted file mode 100644 index 471665754e9f199f07f90107ebb350c38b378100..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/contrib/ntlmpool.py +++ /dev/null @@ -1,130 +0,0 @@ -""" -NTLM authenticating pool, contributed by erikcederstran - -Issue #10, see: http://code.google.com/p/urllib3/issues/detail?id=10 -""" -from __future__ import absolute_import - -import warnings -from logging import getLogger - -from ntlm import ntlm - -from .. import HTTPSConnectionPool -from ..packages.six.moves.http_client import HTTPSConnection - -warnings.warn( - "The 'urllib3.contrib.ntlmpool' module is deprecated and will be removed " - "in urllib3 v2.0 release, urllib3 is not able to support it properly due " - "to reasons listed in issue: https://github.com/urllib3/urllib3/issues/2282. " - "If you are a user of this module please comment in the mentioned issue.", - DeprecationWarning, -) - -log = getLogger(__name__) - - -class NTLMConnectionPool(HTTPSConnectionPool): - """ - Implements an NTLM authentication version of an urllib3 connection pool - """ - - scheme = "https" - - def __init__(self, user, pw, authurl, *args, **kwargs): - """ - authurl is a random URL on the server that is protected by NTLM. - user is the Windows user, probably in the DOMAIN\\username format. - pw is the password for the user. - """ - super(NTLMConnectionPool, self).__init__(*args, **kwargs) - self.authurl = authurl - self.rawuser = user - user_parts = user.split("\\", 1) - self.domain = user_parts[0].upper() - self.user = user_parts[1] - self.pw = pw - - def _new_conn(self): - # Performs the NTLM handshake that secures the connection. The socket - # must be kept open while requests are performed. - self.num_connections += 1 - log.debug( - "Starting NTLM HTTPS connection no. 
%d: https://%s%s", - self.num_connections, - self.host, - self.authurl, - ) - - headers = {"Connection": "Keep-Alive"} - req_header = "Authorization" - resp_header = "www-authenticate" - - conn = HTTPSConnection(host=self.host, port=self.port) - - # Send negotiation message - headers[req_header] = "NTLM %s" % ntlm.create_NTLM_NEGOTIATE_MESSAGE( - self.rawuser - ) - log.debug("Request headers: %s", headers) - conn.request("GET", self.authurl, None, headers) - res = conn.getresponse() - reshdr = dict(res.headers) - log.debug("Response status: %s %s", res.status, res.reason) - log.debug("Response headers: %s", reshdr) - log.debug("Response data: %s [...]", res.read(100)) - - # Remove the reference to the socket, so that it can not be closed by - # the response object (we want to keep the socket open) - res.fp = None - - # Server should respond with a challenge message - auth_header_values = reshdr[resp_header].split(", ") - auth_header_value = None - for s in auth_header_values: - if s[:5] == "NTLM ": - auth_header_value = s[5:] - if auth_header_value is None: - raise Exception( - "Unexpected %s response header: %s" % (resp_header, reshdr[resp_header]) - ) - - # Send authentication message - ServerChallenge, NegotiateFlags = ntlm.parse_NTLM_CHALLENGE_MESSAGE( - auth_header_value - ) - auth_msg = ntlm.create_NTLM_AUTHENTICATE_MESSAGE( - ServerChallenge, self.user, self.domain, self.pw, NegotiateFlags - ) - headers[req_header] = "NTLM %s" % auth_msg - log.debug("Request headers: %s", headers) - conn.request("GET", self.authurl, None, headers) - res = conn.getresponse() - log.debug("Response status: %s %s", res.status, res.reason) - log.debug("Response headers: %s", dict(res.headers)) - log.debug("Response data: %s [...]", res.read()[:100]) - if res.status != 200: - if res.status == 401: - raise Exception("Server rejected request: wrong username or password") - raise Exception("Wrong server response: %s %s" % (res.status, res.reason)) - - res.fp = None - log.debug("Connection established") - return conn - - def urlopen( - self, - method, - url, - body=None, - headers=None, - retries=3, - redirect=True, - assert_same_host=True, - ): - if headers is None: - headers = {} - headers["Connection"] = "Keep-Alive" - return super(NTLMConnectionPool, self).urlopen( - method, url, body, headers, retries, redirect, assert_same_host - ) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/packaging/version.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/packaging/version.py deleted file mode 100644 index de9a09a4ed3b078b37e7490a6686f660ae935aca..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/packaging/version.py +++ /dev/null @@ -1,504 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. 
- -import collections -import itertools -import re -import warnings -from typing import Callable, Iterator, List, Optional, SupportsInt, Tuple, Union - -from ._structures import Infinity, InfinityType, NegativeInfinity, NegativeInfinityType - -__all__ = ["parse", "Version", "LegacyVersion", "InvalidVersion", "VERSION_PATTERN"] - -InfiniteTypes = Union[InfinityType, NegativeInfinityType] -PrePostDevType = Union[InfiniteTypes, Tuple[str, int]] -SubLocalType = Union[InfiniteTypes, int, str] -LocalType = Union[ - NegativeInfinityType, - Tuple[ - Union[ - SubLocalType, - Tuple[SubLocalType, str], - Tuple[NegativeInfinityType, SubLocalType], - ], - ..., - ], -] -CmpKey = Tuple[ - int, Tuple[int, ...], PrePostDevType, PrePostDevType, PrePostDevType, LocalType -] -LegacyCmpKey = Tuple[int, Tuple[str, ...]] -VersionComparisonMethod = Callable[ - [Union[CmpKey, LegacyCmpKey], Union[CmpKey, LegacyCmpKey]], bool -] - -_Version = collections.namedtuple( - "_Version", ["epoch", "release", "dev", "pre", "post", "local"] -) - - -def parse(version: str) -> Union["LegacyVersion", "Version"]: - """ - Parse the given version string and return either a :class:`Version` object - or a :class:`LegacyVersion` object depending on if the given version is - a valid PEP 440 version or a legacy version. - """ - try: - return Version(version) - except InvalidVersion: - return LegacyVersion(version) - - -class InvalidVersion(ValueError): - """ - An invalid version was found, users should refer to PEP 440. - """ - - -class _BaseVersion: - _key: Union[CmpKey, LegacyCmpKey] - - def __hash__(self) -> int: - return hash(self._key) - - # Please keep the duplicated `isinstance` check - # in the six comparisons hereunder - # unless you find a way to avoid adding overhead function calls. - def __lt__(self, other: "_BaseVersion") -> bool: - if not isinstance(other, _BaseVersion): - return NotImplemented - - return self._key < other._key - - def __le__(self, other: "_BaseVersion") -> bool: - if not isinstance(other, _BaseVersion): - return NotImplemented - - return self._key <= other._key - - def __eq__(self, other: object) -> bool: - if not isinstance(other, _BaseVersion): - return NotImplemented - - return self._key == other._key - - def __ge__(self, other: "_BaseVersion") -> bool: - if not isinstance(other, _BaseVersion): - return NotImplemented - - return self._key >= other._key - - def __gt__(self, other: "_BaseVersion") -> bool: - if not isinstance(other, _BaseVersion): - return NotImplemented - - return self._key > other._key - - def __ne__(self, other: object) -> bool: - if not isinstance(other, _BaseVersion): - return NotImplemented - - return self._key != other._key - - -class LegacyVersion(_BaseVersion): - def __init__(self, version: str) -> None: - self._version = str(version) - self._key = _legacy_cmpkey(self._version) - - warnings.warn( - "Creating a LegacyVersion has been deprecated and will be " - "removed in the next major release", - DeprecationWarning, - ) - - def __str__(self) -> str: - return self._version - - def __repr__(self) -> str: - return f"" - - @property - def public(self) -> str: - return self._version - - @property - def base_version(self) -> str: - return self._version - - @property - def epoch(self) -> int: - return -1 - - @property - def release(self) -> None: - return None - - @property - def pre(self) -> None: - return None - - @property - def post(self) -> None: - return None - - @property - def dev(self) -> None: - return None - - @property - def local(self) -> None: - return None - - 
@property - def is_prerelease(self) -> bool: - return False - - @property - def is_postrelease(self) -> bool: - return False - - @property - def is_devrelease(self) -> bool: - return False - - -_legacy_version_component_re = re.compile(r"(\d+ | [a-z]+ | \.| -)", re.VERBOSE) - -_legacy_version_replacement_map = { - "pre": "c", - "preview": "c", - "-": "final-", - "rc": "c", - "dev": "@", -} - - -def _parse_version_parts(s: str) -> Iterator[str]: - for part in _legacy_version_component_re.split(s): - part = _legacy_version_replacement_map.get(part, part) - - if not part or part == ".": - continue - - if part[:1] in "0123456789": - # pad for numeric comparison - yield part.zfill(8) - else: - yield "*" + part - - # ensure that alpha/beta/candidate are before final - yield "*final" - - -def _legacy_cmpkey(version: str) -> LegacyCmpKey: - - # We hardcode an epoch of -1 here. A PEP 440 version can only have a epoch - # greater than or equal to 0. This will effectively put the LegacyVersion, - # which uses the defacto standard originally implemented by setuptools, - # as before all PEP 440 versions. - epoch = -1 - - # This scheme is taken from pkg_resources.parse_version setuptools prior to - # it's adoption of the packaging library. - parts: List[str] = [] - for part in _parse_version_parts(version.lower()): - if part.startswith("*"): - # remove "-" before a prerelease tag - if part < "*final": - while parts and parts[-1] == "*final-": - parts.pop() - - # remove trailing zeros from each series of numeric parts - while parts and parts[-1] == "00000000": - parts.pop() - - parts.append(part) - - return epoch, tuple(parts) - - -# Deliberately not anchored to the start and end of the string, to make it -# easier for 3rd party code to reuse -VERSION_PATTERN = r""" - v? - (?: - (?:(?P[0-9]+)!)? # epoch - (?P[0-9]+(?:\.[0-9]+)*) # release segment - (?P
<pre>                                          # pre-release
-            [-_\.]?
-            (?P<pre_l>(a|b|c|rc|alpha|beta|pre|preview))
-            [-_\.]?
-            (?P<pre_n>[0-9]+)?
-        )?
-        (?P<post>                                         # post release
-            (?:-(?P<post_n1>[0-9]+))
-            |
-            (?:
-                [-_\.]?
-                (?P<post_l>post|rev|r)
-                [-_\.]?
-                (?P<post_n2>[0-9]+)?
-            )
-        )?
-        (?P<dev>                                          # dev release
-            [-_\.]?
-            (?P<dev_l>dev)
-            [-_\.]?
-            (?P<dev_n>[0-9]+)?
-        )?
-    )
-    (?:\+(?P<local>[a-z0-9]+(?:[-_\.][a-z0-9]+)*))?       # local version
-"""
-
-
-class Version(_BaseVersion):
-
-    _regex = re.compile(r"^\s*" + VERSION_PATTERN + r"\s*$", re.VERBOSE | re.IGNORECASE)
-
-    def __init__(self, version: str) -> None:
-
-        # Validate the version and parse it into pieces
-        match = self._regex.search(version)
-        if not match:
-            raise InvalidVersion(f"Invalid version: '{version}'")
-
-        # Store the parsed out pieces of the version
-        self._version = _Version(
-            epoch=int(match.group("epoch")) if match.group("epoch") else 0,
-            release=tuple(int(i) for i in match.group("release").split(".")),
-            pre=_parse_letter_version(match.group("pre_l"), match.group("pre_n")),
-            post=_parse_letter_version(
-                match.group("post_l"), match.group("post_n1") or match.group("post_n2")
-            ),
-            dev=_parse_letter_version(match.group("dev_l"), match.group("dev_n")),
-            local=_parse_local_version(match.group("local")),
-        )
-
-        # Generate a key which will be used for sorting
-        self._key = _cmpkey(
-            self._version.epoch,
-            self._version.release,
-            self._version.pre,
-            self._version.post,
-            self._version.dev,
-            self._version.local,
-        )
-
-    def __repr__(self) -> str:
-        return f"<Version('{self}')>"
-
-    def __str__(self) -> str:
-        parts = []
-
-        # Epoch
-        if self.epoch != 0:
-            parts.append(f"{self.epoch}!")
-
-        # Release segment
-        parts.append(".".join(str(x) for x in self.release))
-
-        # Pre-release
-        if self.pre is not None:
-            parts.append("".join(str(x) for x in self.pre))
-
-        # Post-release
-        if self.post is not None:
-            parts.append(f".post{self.post}")
-
-        # Development release
-        if self.dev is not None:
-            parts.append(f".dev{self.dev}")
-
-        # Local version segment
-        if self.local is not None:
-            parts.append(f"+{self.local}")
-
-        return "".join(parts)
-
-    @property
-    def epoch(self) -> int:
-        _epoch: int = self._version.epoch
-        return _epoch
-
-    @property
-    def release(self) -> Tuple[int, ...]:
-        _release: Tuple[int, ...] = self._version.release
-        return _release
-
-    @property
-    def pre(self) -> Optional[Tuple[str, int]]:
-        _pre: Optional[Tuple[str, int]] = self._version.pre
-        return _pre
-
-    @property
-    def post(self) -> Optional[int]:
-        return self._version.post[1] if self._version.post else None
-
-    @property
-    def dev(self) -> Optional[int]:
-        return self._version.dev[1] if self._version.dev else None
-
-    @property
-    def local(self) -> Optional[str]:
-        if self._version.local:
-            return ".".join(str(x) for x in self._version.local)
-        else:
-            return None
-
-    @property
-    def public(self) -> str:
-        return str(self).split("+", 1)[0]
-
-    @property
-    def base_version(self) -> str:
-        parts = []
-
-        # Epoch
-        if self.epoch != 0:
-            parts.append(f"{self.epoch}!")
-
-        # Release segment
-        parts.append(".".join(str(x) for x in self.release))
-
-        return "".join(parts)
-
-    @property
-    def is_prerelease(self) -> bool:
-        return self.dev is not None or self.pre is not None
-
-    @property
-    def is_postrelease(self) -> bool:
-        return self.post is not None
-
-    @property
-    def is_devrelease(self) -> bool:
-        return self.dev is not None
-
-    @property
-    def major(self) -> int:
-        return self.release[0] if len(self.release) >= 1 else 0
-
-    @property
-    def minor(self) -> int:
-        return self.release[1] if len(self.release) >= 2 else 0
-
-    @property
-    def micro(self) -> int:
-        return self.release[2] if len(self.release) >= 3 else 0
-
-
-def _parse_letter_version(
-    letter: str, number: Union[str, bytes, SupportsInt]
-) -> Optional[Tuple[str, int]]:
-
-    if letter:
-        # We consider there to be an implicit 0 in a pre-release if there is
-        # not a numeral associated with it.
-        if number is None:
-            number = 0
-
-        # We normalize any letters to their lower case form
-        letter = letter.lower()
-
-        # We consider some words to be alternate spellings of other words and
-        # in those cases we want to normalize the spellings to our preferred
-        # spelling.
-        if letter == "alpha":
-            letter = "a"
-        elif letter == "beta":
-            letter = "b"
-        elif letter in ["c", "pre", "preview"]:
-            letter = "rc"
-        elif letter in ["rev", "r"]:
-            letter = "post"
-
-        return letter, int(number)
-    if not letter and number:
-        # We assume if we are given a number, but we are not given a letter
-        # then this is using the implicit post release syntax (e.g. 1.0-1)
-        letter = "post"
-
-        return letter, int(number)
-
-    return None
-
-
-_local_version_separators = re.compile(r"[\._-]")
-
-
-def _parse_local_version(local: str) -> Optional[LocalType]:
-    """
-    Takes a string like abc.1.twelve and turns it into ("abc", 1, "twelve").
-    """
-    if local is not None:
-        return tuple(
-            part.lower() if not part.isdigit() else int(part)
-            for part in _local_version_separators.split(local)
-        )
-    return None
-
-
-def _cmpkey(
-    epoch: int,
-    release: Tuple[int, ...],
-    pre: Optional[Tuple[str, int]],
-    post: Optional[Tuple[str, int]],
-    dev: Optional[Tuple[str, int]],
-    local: Optional[Tuple[SubLocalType]],
-) -> CmpKey:
-
-    # When we compare a release version, we want to compare it with all of the
-    # trailing zeros removed. So we'll reverse the list, drop all the now-leading
-    # zeros until we come to something non-zero, then re-reverse the rest back into
-    # the correct order and use that tuple as our sorting key.
-    _release = tuple(
-        reversed(list(itertools.dropwhile(lambda x: x == 0, reversed(release))))
-    )
-
-    # We need to "trick" the sorting algorithm to put 1.0.dev0 before 1.0a0.
-    # We'll do this by abusing the pre segment, but we _only_ want to do this
-    # if there is not a pre or a post segment. If we have one of those then
-    # the normal sorting rules will handle this case correctly.
-    if pre is None and post is None and dev is not None:
-        _pre: PrePostDevType = NegativeInfinity
-    # Versions without a pre-release (except as noted above) should sort after
-    # those with one.
-    elif pre is None:
-        _pre = Infinity
-    else:
-        _pre = pre
-
-    # Versions without a post segment should sort before those with one.
-    if post is None:
-        _post: PrePostDevType = NegativeInfinity
-
-    else:
-        _post = post
-
-    # Versions without a development segment should sort after those with one.
-    if dev is None:
-        _dev: PrePostDevType = Infinity
-
-    else:
-        _dev = dev
-
-    if local is None:
-        # Versions without a local segment should sort before those with one.
-        _local: LocalType = NegativeInfinity
-    else:
-        # Versions with a local segment need that segment parsed to implement
-        # the sorting rules in PEP440.
-        # - Alpha numeric segments sort before numeric segments
-        # - Alpha numeric segments sort lexicographically
-        # - Numeric segments sort numerically
-        # - Shorter versions sort before longer versions when the prefixes
-        #   match exactly
-        _local = tuple(
-            (i, "") if isinstance(i, int) else (NegativeInfinity, i) for i in local
-        )
-
-    return epoch, _release, _pre, _post, _dev, _local
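As a quick orientation to what this deleted module provided, here is a small usage sketch (not part of the original file). It assumes the standalone `packaging` distribution of this same pre-22.0 code rather than the vendored import path, where `parse()` still falls back to `LegacyVersion` for non-PEP 440 strings:

```python
from packaging.version import Version, parse

v = Version("1.4.0.dev1+ubuntu.2")
print(v.release)        # (1, 4, 0)
print(v.is_devrelease)  # True
print(v.local)          # ubuntu.2
print(v.base_version)   # 1.4.0

# Pre-releases sort before their corresponding final release.
print(Version("1.0a1") < Version("1.0"))          # True

# Non-PEP 440 strings fall back to LegacyVersion (emitting a DeprecationWarning).
print(type(parse("2.0-alpha-banana")).__name__)   # LegacyVersion
```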
diff --git a/spaces/BartPoint/VoiceChange_Beta/infer_pack/attentions.py b/spaces/BartPoint/VoiceChange_Beta/infer_pack/attentions.py
deleted file mode 100644
index 77cb63ffccf3e33badf22d50862a64ba517b487f..0000000000000000000000000000000000000000
--- a/spaces/BartPoint/VoiceChange_Beta/infer_pack/attentions.py
+++ /dev/null
@@ -1,417 +0,0 @@
-import copy
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from infer_pack import commons
-from infer_pack import modules
-from infer_pack.modules import LayerNorm
-
-
-class Encoder(nn.Module):
-    def __init__(
-        self,
-        hidden_channels,
-        filter_channels,
-        n_heads,
-        n_layers,
-        kernel_size=1,
-        p_dropout=0.0,
-        window_size=10,
-        **kwargs
-    ):
-        super().__init__()
-        self.hidden_channels = hidden_channels
-        self.filter_channels = filter_channels
-        self.n_heads = n_heads
-        self.n_layers = n_layers
-        self.kernel_size = kernel_size
-        self.p_dropout = p_dropout
-        self.window_size = window_size
-
-        self.drop = nn.Dropout(p_dropout)
-        self.attn_layers = nn.ModuleList()
-        self.norm_layers_1 = nn.ModuleList()
-        self.ffn_layers = nn.ModuleList()
-        self.norm_layers_2 = nn.ModuleList()
-        for i in range(self.n_layers):
-            self.attn_layers.append(
-                MultiHeadAttention(
-                    hidden_channels,
-                    hidden_channels,
-                    n_heads,
-                    p_dropout=p_dropout,
-                    window_size=window_size,
-                )
-            )
-            self.norm_layers_1.append(LayerNorm(hidden_channels))
-            self.ffn_layers.append(
-                FFN(
-                    hidden_channels,
-                    hidden_channels,
-                    filter_channels,
-                    kernel_size,
-                    p_dropout=p_dropout,
-                )
-            )
-            self.norm_layers_2.append(LayerNorm(hidden_channels))
-
-    def forward(self, x, x_mask):
-        attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
-        x = x * x_mask
-        for i in range(self.n_layers):
-            y = self.attn_layers[i](x, x, attn_mask)
-            y = self.drop(y)
-            x = self.norm_layers_1[i](x + y)
-
-            y = self.ffn_layers[i](x, x_mask)
-            y = self.drop(y)
-            x = self.norm_layers_2[i](x + y)
-        x = x * x_mask
-        return x
-
-
-class Decoder(nn.Module):
-    def __init__(
-        self,
-        hidden_channels,
-        filter_channels,
-        n_heads,
-        n_layers,
-        kernel_size=1,
-        p_dropout=0.0,
-        proximal_bias=False,
-        proximal_init=True,
-        **kwargs
-    ):
-        super().__init__()
-        self.hidden_channels = hidden_channels
-        self.filter_channels = filter_channels
-        self.n_heads = n_heads
-        self.n_layers = n_layers
-        self.kernel_size = kernel_size
-        self.p_dropout = p_dropout
-        self.proximal_bias = proximal_bias
-        self.proximal_init = proximal_init
-
-        self.drop = nn.Dropout(p_dropout)
-        self.self_attn_layers = nn.ModuleList()
-        self.norm_layers_0 = nn.ModuleList()
-        self.encdec_attn_layers = nn.ModuleList()
-        self.norm_layers_1 = nn.ModuleList()
-        self.ffn_layers = nn.ModuleList()
-        self.norm_layers_2 = nn.ModuleList()
-        for i in range(self.n_layers):
-            self.self_attn_layers.append(
-                MultiHeadAttention(
-                    hidden_channels,
-                    hidden_channels,
-                    n_heads,
-                    p_dropout=p_dropout,
-                    proximal_bias=proximal_bias,
-                    proximal_init=proximal_init,
-                )
-            )
-            self.norm_layers_0.append(LayerNorm(hidden_channels))
-            self.encdec_attn_layers.append(
-                MultiHeadAttention(
-                    hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout
-                )
-            )
-            self.norm_layers_1.append(LayerNorm(hidden_channels))
-            self.ffn_layers.append(
-                FFN(
-                    hidden_channels,
-                    hidden_channels,
-                    filter_channels,
-                    kernel_size,
-                    p_dropout=p_dropout,
-                    causal=True,
-                )
-            )
-            self.norm_layers_2.append(LayerNorm(hidden_channels))
-
-    def forward(self, x, x_mask, h, h_mask):
-        """
-        x: decoder input
-        h: encoder output
-        """
-        self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(
-            device=x.device, dtype=x.dtype
-        )
-        encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
-        x = x * x_mask
-        for i in range(self.n_layers):
-            y = self.self_attn_layers[i](x, x, self_attn_mask)
-            y = self.drop(y)
-            x = self.norm_layers_0[i](x + y)
-
-            y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
-            y = self.drop(y)
-            x = self.norm_layers_1[i](x + y)
-
-            y = self.ffn_layers[i](x, x_mask)
-            y = self.drop(y)
-            x = self.norm_layers_2[i](x + y)
-        x = x * x_mask
-        return x
-
-
-class MultiHeadAttention(nn.Module):
-    def __init__(
-        self,
-        channels,
-        out_channels,
-        n_heads,
-        p_dropout=0.0,
-        window_size=None,
-        heads_share=True,
-        block_length=None,
-        proximal_bias=False,
-        proximal_init=False,
-    ):
-        super().__init__()
-        assert channels % n_heads == 0
-
-        self.channels = channels
-        self.out_channels = out_channels
-        self.n_heads = n_heads
-        self.p_dropout = p_dropout
-        self.window_size = window_size
-        self.heads_share = heads_share
-        self.block_length = block_length
-        self.proximal_bias = proximal_bias
-        self.proximal_init = proximal_init
-        self.attn = None
-
-        self.k_channels = channels // n_heads
-        self.conv_q = nn.Conv1d(channels, channels, 1)
-        self.conv_k = nn.Conv1d(channels, channels, 1)
-        self.conv_v = nn.Conv1d(channels, channels, 1)
-        self.conv_o = nn.Conv1d(channels, out_channels, 1)
-        self.drop = nn.Dropout(p_dropout)
-
-        if window_size is not None:
-            n_heads_rel = 1 if heads_share else n_heads
-            rel_stddev = self.k_channels**-0.5
-            self.emb_rel_k = nn.Parameter(
-                torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels)
-                * rel_stddev
-            )
-            self.emb_rel_v = nn.Parameter(
-                torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels)
-                * rel_stddev
-            )
-
-        nn.init.xavier_uniform_(self.conv_q.weight)
-        nn.init.xavier_uniform_(self.conv_k.weight)
-        nn.init.xavier_uniform_(self.conv_v.weight)
-        if proximal_init:
-            with torch.no_grad():
-                self.conv_k.weight.copy_(self.conv_q.weight)
-                self.conv_k.bias.copy_(self.conv_q.bias)
-
-    def forward(self, x, c, attn_mask=None):
-        q = self.conv_q(x)
-        k = self.conv_k(c)
-        v = self.conv_v(c)
-
-        x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
-        x = self.conv_o(x)
-        return x
-
-    def attention(self, query, key, value, mask=None):
-        # reshape [b, d, t] -> [b, n_h, t, d_k]
-        b, d, t_s, t_t = (*key.size(), query.size(2))
-        query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
-        key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-        value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
-        scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
-        if self.window_size is not None:
-            assert (
-                t_s == t_t
-            ), "Relative attention is only available for self-attention."
-            key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
-            rel_logits = self._matmul_with_relative_keys(
-                query / math.sqrt(self.k_channels), key_relative_embeddings
-            )
-            scores_local = self._relative_position_to_absolute_position(rel_logits)
-            scores = scores + scores_local
-        if self.proximal_bias:
-            assert t_s == t_t, "Proximal bias is only available for self-attention."
-            scores = scores + self._attention_bias_proximal(t_s).to(
-                device=scores.device, dtype=scores.dtype
-            )
-        if mask is not None:
-            scores = scores.masked_fill(mask == 0, -1e4)
-            if self.block_length is not None:
-                assert (
-                    t_s == t_t
-                ), "Local attention is only available for self-attention."
-                block_mask = (
-                    torch.ones_like(scores)
-                    .triu(-self.block_length)
-                    .tril(self.block_length)
-                )
-                scores = scores.masked_fill(block_mask == 0, -1e4)
-        p_attn = F.softmax(scores, dim=-1)  # [b, n_h, t_t, t_s]
-        p_attn = self.drop(p_attn)
-        output = torch.matmul(p_attn, value)
-        if self.window_size is not None:
-            relative_weights = self._absolute_position_to_relative_position(p_attn)
-            value_relative_embeddings = self._get_relative_embeddings(
-                self.emb_rel_v, t_s
-            )
-            output = output + self._matmul_with_relative_values(
-                relative_weights, value_relative_embeddings
-            )
-        output = (
-            output.transpose(2, 3).contiguous().view(b, d, t_t)
-        )  # [b, n_h, t_t, d_k] -> [b, d, t_t]
-        return output, p_attn
-
-    def _matmul_with_relative_values(self, x, y):
-        """
-        x: [b, h, l, m]
-        y: [h or 1, m, d]
-        ret: [b, h, l, d]
-        """
-        ret = torch.matmul(x, y.unsqueeze(0))
-        return ret
-
-    def _matmul_with_relative_keys(self, x, y):
-        """
-        x: [b, h, l, d]
-        y: [h or 1, m, d]
-        ret: [b, h, l, m]
-        """
-        ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
-        return ret
-
-    def _get_relative_embeddings(self, relative_embeddings, length):
-        max_relative_position = 2 * self.window_size + 1
-        # Pad first before slice to avoid using cond ops.
-        pad_length = max(length - (self.window_size + 1), 0)
-        slice_start_position = max((self.window_size + 1) - length, 0)
-        slice_end_position = slice_start_position + 2 * length - 1
-        if pad_length > 0:
-            padded_relative_embeddings = F.pad(
-                relative_embeddings,
-                commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]),
-            )
-        else:
-            padded_relative_embeddings = relative_embeddings
-        used_relative_embeddings = padded_relative_embeddings[
-            :, slice_start_position:slice_end_position
-        ]
-        return used_relative_embeddings
-
-    def _relative_position_to_absolute_position(self, x):
-        """
-        x: [b, h, l, 2*l-1]
-        ret: [b, h, l, l]
-        """
-        batch, heads, length, _ = x.size()
-        # Concat columns of pad to shift from relative to absolute indexing.
-        x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]]))
-
-        # Concat extra elements so that the total adds up to shape (len+1, 2*len-1).
-        x_flat = x.view([batch, heads, length * 2 * length])
-        x_flat = F.pad(
-            x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]])
-        )
-
-        # Reshape and slice out the padded elements.
-        x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[
-            :, :, :length, length - 1 :
-        ]
-        return x_final
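The index shifting in `_relative_position_to_absolute_position` is easier to see in isolation. The following standalone sketch is an illustration, not part of the original file: it reproduces the same pad-flatten-reshape trick with plain `F.pad` instead of the `commons.convert_pad_shape` helper, turning `[b, h, l, 2*l-1]` relative logits into `[b, h, l, l]` absolute logits.

```python
import torch
import torch.nn.functional as F

def rel_to_abs(x: torch.Tensor) -> torch.Tensor:
    # x: [b, h, l, 2*l - 1] relative logits -> [b, h, l, l] absolute logits.
    b, h, l, _ = x.size()
    x = F.pad(x, (0, 1))                 # pad one column: [b, h, l, 2*l]
    x_flat = x.view(b, h, l * 2 * l)     # flatten the last two dims
    x_flat = F.pad(x_flat, (0, l - 1))   # total length (l + 1) * (2*l - 1)
    return x_flat.view(b, h, l + 1, 2 * l - 1)[:, :, :l, l - 1:]

x = torch.arange(2 * 3 - 1, dtype=torch.float32).repeat(1, 1, 3, 1)  # [1, 1, 3, 5]
print(rel_to_abs(x).shape)  # torch.Size([1, 1, 3, 3])
```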
-
-    def _absolute_position_to_relative_position(self, x):
-        """
-        x: [b, h, l, l]
-        ret: [b, h, l, 2*l-1]
-        """
-        batch, heads, length, _ = x.size()
-        # pad along the column dimension
-        x = F.pad(
-            x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]])
-        )
-        x_flat = x.view([batch, heads, length**2 + length * (length - 1)])
-        # add 0's in the beginning that will skew the elements after reshape
-        x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
-        x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:]
-        return x_final
-
-    def _attention_bias_proximal(self, length):
-        """Bias for self-attention to encourage attention to close positions.
-        Args:
-          length: an integer scalar.
-        Returns:
-          a Tensor with shape [1, 1, length, length]
-        """
-        r = torch.arange(length, dtype=torch.float32)
-        diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
-        return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
-    def __init__(
-        self,
-        in_channels,
-        out_channels,
-        filter_channels,
-        kernel_size,
-        p_dropout=0.0,
-        activation=None,
-        causal=False,
-    ):
-        super().__init__()
-        self.in_channels = in_channels
-        self.out_channels = out_channels
-        self.filter_channels = filter_channels
-        self.kernel_size = kernel_size
-        self.p_dropout = p_dropout
-        self.activation = activation
-        self.causal = causal
-
-        if causal:
-            self.padding = self._causal_padding
-        else:
-            self.padding = self._same_padding
-
-        self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
-        self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
-        self.drop = nn.Dropout(p_dropout)
-
-    def forward(self, x, x_mask):
-        x = self.conv_1(self.padding(x * x_mask))
-        if self.activation == "gelu":
-            x = x * torch.sigmoid(1.702 * x)
-        else:
-            x = torch.relu(x)
-        x = self.drop(x)
-        x = self.conv_2(self.padding(x * x_mask))
-        return x * x_mask
-
-    def _causal_padding(self, x):
-        if self.kernel_size == 1:
-            return x
-        pad_l = self.kernel_size - 1
-        pad_r = 0
-        padding = [[0, 0], [0, 0], [pad_l, pad_r]]
-        x = F.pad(x, commons.convert_pad_shape(padding))
-        return x
-
-    def _same_padding(self, x):
-        if self.kernel_size == 1:
-            return x
-        pad_l = (self.kernel_size - 1) // 2
-        pad_r = self.kernel_size // 2
-        padding = [[0, 0], [0, 0], [pad_l, pad_r]]
-        x = F.pad(x, commons.convert_pad_shape(padding))
-        return x
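For context, here is a minimal smoke test of the `Encoder` defined above. It is a sketch under assumptions: it presumes the `infer_pack` package from this Space is importable as-is, and the hyperparameters are illustrative rather than the values the Space actually uses; shapes follow the `[batch, channels, time]` convention used throughout the file.

```python
import torch
from infer_pack.attentions import Encoder  # assumes this Space's package layout

batch, channels, time = 2, 192, 100
encoder = Encoder(
    hidden_channels=channels,
    filter_channels=768,
    n_heads=2,
    n_layers=6,
    kernel_size=3,
    p_dropout=0.1,
)

x = torch.randn(batch, channels, time)   # [b, d, t] input features
x_mask = torch.ones(batch, 1, time)      # [b, 1, t] validity mask

out = encoder(x, x_mask)
assert out.shape == (batch, channels, time)
```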
diff --git a/spaces/Benson/text-generation/Examples/1happybirthday.com En Descarga Tamil.md b/spaces/Benson/text-generation/Examples/1happybirthday.com En Descarga Tamil.md
deleted file mode 100644
index d53cae25922f6712960aaf95a150cd6c297d496c..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/1happybirthday.com En Descarga Tamil.md	
+++ /dev/null
@@ -1,72 +0,0 @@
-
-

Plantas vs Zombies Descargar 1: Cómo jugar el clásico juego de defensa de la torre en su PC

-

Introducción

-

Plants vs Zombies es uno de los juegos de defensa de torres más populares y adictivos jamás creados. Fue desarrollado por PopCap Games y lanzado en 2009 para Windows y Mac OS X. El juego ha ganado varios premios y ha sido elogiado por su humor, jugabilidad y gráficos.

-

En Plants vs Zombies, tienes que proteger tu casa de las olas de zombies que quieren comerse tu cerebro. Haces esto plantando varios tipos de plantas que pueden disparar, explotar o ralentizar a los zombies. El juego tiene 50 niveles en el modo Aventura, además de otros modos como Supervivencia, Puzzle y Mini-Games. También puedes desbloquear diferentes plantas, zombies y logros a medida que avanzas.

-

1happybirthday.com en descarga tamil


DOWNLOAD ○○○ https://bltlly.com/2v6JJo



-

Si eres un fan de Plants vs Zombies, o si quieres probarlo por primera vez, es posible que te estés preguntando cómo jugarlo en tu PC. En este artículo, te mostraremos dos formas fáciles de descargar e instalar Plants vs Zombies en tu PC, para que puedas disfrutar de este clásico juego en una pantalla más grande.

-

Cómo descargar e instalar Plants vs Zombies en PC

-

Opción 1: Descarga desde Google Play Store usando el emulador de BlueStacks

-

Una de las formas más fáciles de jugar Plants vs Zombies en tu PC es usar un emulador de Android como BlueStacks. BlueStacks es un software que te permite ejecutar aplicaciones y juegos Android en tu PC. Puedes descargarlo gratis desde [BlueStacks.com]( 2 ).

-

Paso 1: Descargar e instalar BlueStacks en su PC

-

Vaya a [BlueStacks.com]( 2 ) y haga clic en el botón de descarga. La descarga se iniciará automáticamente. Una vez finalizada la descarga, ejecute el archivo de instalación y siga las instrucciones para instalar BlueStacks en su PC.

-

Paso 2: Inicie BlueStacks e inicie sesión con su cuenta de Google

- -

Paso 3: Búsqueda de plantas vs zombies en la Google Play Store

-

Una vez que haya iniciado sesión, verá la pantalla de inicio de BlueStacks. En la esquina superior derecha, verá un icono de búsqueda. Haz clic en él y escribe "Plants vs Zombies" en la barra de búsqueda. Verás una lista de resultados. Haga clic en el que dice "Plants vs. Zombies=" por ELECTRONIC ARTS.

-

Paso 4: Instalar plantas vs zombies y disfrutar jugando en su PC

-

Serás llevado a la página de aplicaciones de Plants vs Zombies en la Google Play Store. Haga clic en el botón de instalación y espere a que la instalación termine

Después de que la instalación se haya completado, verá un botón abierto. Haga clic en él y podrá jugar Plants vs Zombies en su PC. También puede encontrar el icono del juego en la pantalla de inicio de BlueStacks o en el escritorio. Puede utilizar el ratón y el teclado para controlar el juego, o personalizar la configuración a su preferencia.

-

Opción 2: Descargar desde Filehippo.com usando un archivo de instalación

-

Otra forma de jugar Plants vs Zombies en tu PC es descargarlo desde un sitio web que ofrece archivos de instalación para juegos de PC. Uno de los sitios web que puedes utilizar es [Filehippo.com]. Filehippo.com es una fuente confiable y confiable de descargas de software libre para Windows, Mac y Android. Puedes descargar Plants vs Zombies de Filehippo.com gratis y sin virus ni malware.

-

Paso 1: Ir a Filehippo.com y buscar plantas vs zombies

-

Abra su navegador web y vaya a [Filehippo.com]. En la esquina superior derecha, verá un cuadro de búsqueda. Escribe "Plants vs Zombies" en el cuadro de búsqueda y pulsa enter. Verás una lista de resultados. Haga clic en el que dice "Plants vs. Zombies Game Of The Year Edition 1.2.0.1073 for PC Windows".

-

-

Paso 2: Haga clic en el botón de descarga y guarde el archivo de instalación en su PC

- -

Paso 3: Ejecute el archivo de instalación y siga las instrucciones para instalar Plants vs Zombies en su PC

-

Una vez completada la descarga, vaya a la ubicación donde guardó el archivo de instalación y haga doble clic en él. Aparecerá una ventana pidiéndole que confirme si desea ejecutar el archivo. Haz clic en sí y sigue las instrucciones para instalar Plants vs Zombies en tu PC. Es posible que tenga que aceptar los términos y condiciones y elegir una carpeta de destino para el juego.

-

Paso 4: Plantas de lanzamiento vs zombies y divertirse jugando en su PC

-

Una vez completada la instalación, verá un icono de acceso directo para Plants vs Zombies en su escritorio o menú de inicio. Haz clic en él y podrás jugar Plants vs Zombies en tu PC. Puedes usar el ratón y el teclado para controlar el juego, o ajustar la configuración a tu gusto.

-

Conclusión

-

Plants vs Zombies es un clásico juego de torre de defensa que puedes jugar en tu PC usando un emulador de Android como BlueStacks o un archivo de instalación de Filehippo.com. Ambos métodos son fáciles y gratuitos, y te permiten disfrutar de este divertido y adictivo juego en una pantalla más grande. Ya sea que quieras revivir tus recuerdos de infancia o descubrir este juego por primera vez, Plants vs Zombies es una gran opción para cualquiera que ame la estrategia, el humor y los zombies.

-

Si estás listo para jugar Plants vs Zombies en tu PC, elige una de las opciones de arriba y sigue los pasos que te proporcionamos. Usted será capaz de descargar e instalar Plants vs Zombies en ningún momento, y empezar a plantar sus defensas contra los invasores muertos vivientes. Diviértete!

-

Preguntas frecuentes

-
    -
  • ¿Es libre Plants vs Zombies?
  • -

    Sí, Plants vs Zombies es gratis para descargar y jugar en tu PC usando BlueStacks o Filehippo.com. Sin embargo, puede haber algunas compras en la aplicación o anuncios en el juego que puedes ignorar o comprar.

    -
  • ¿Son seguras las plantas contra los zombis?
  • - -
  • ¿Cuáles son los requisitos del sistema para Plantas vs Zombies?
  • -

    Los requisitos mínimos del sistema para Plantas vs Zombies son:

| Componente | Requisito mínimo |
| --- | --- |
| OS | Windows XP/Vista/7/8/10 |
| CPU | Procesador de 1,2 GHz |
| RAM | 512 MB |
| HDD | 65 MB de espacio libre |
| Gráficos | DirectX 8 o posterior |
| Sonido | Tarjeta de sonido compatible con DirectX |

    Los requisitos de sistema recomendados para Plants vs Zombies son:

| Componente | Requisito recomendado |
| --- | --- |
| OS | Windows XP/Vista/7/8/10 |
| CPU | Procesador de 1,5 GHz |
| RAM | 1 GB |
| HDD | 65 MB de espacio libre |
| Gráficos | DirectX 9 o posterior |
| Sonido | Tarjeta de sonido compatible con DirectX |
  • ¿Cuántas plantas y zombies hay en Plants vs Zombies?
  • -

    Hay 49 plantas diferentes y 26 zombis diferentes en Plants vs Zombies. Cada planta y zombi tiene sus propias habilidades y características únicas. Puedes desbloquear más plantas y zombies mientras juegas el juego y completas los niveles.

    -
  • ¿Cuáles son los otros modos en Plants vs Zombies?
  • -

    Además del modo Aventura, que tiene 50 niveles, también hay otros modos en Plants vs Zombies que puedes jugar para más diversión y desafío. Estos modos son:

    -
      -
    • Modo de supervivencia: Tienes que sobrevivir a interminables oleadas de zombies con recursos limitados.
    • -
    • Modo de rompecabezas: Tienes que resolver varios puzzles que involucran plantas y zombies.
    • -Modo de minijuegos: Tienes que jugar varios minijuegos que tienen diferentes reglas y objetivos. -
    • Modo de jardín zen: Tienes que crecer y cuidar de tus propias plantas en un jardín relajante.
    • -
    • Crazy Dave’s Shop: Puedes comprar varios artículos y mejoras de Crazy Dave, el vecino excéntrico que te ayuda a lo largo del juego.
    • -
    - -

    Sí, hay una secuela de Plants vs Zombies llamada Plants vs. Zombies 2: It’s About Time. Fue lanzado en 2013 para dispositivos iOS y Android. La secuela cuenta con nuevas plantas, zombies, mundos, niveles y modos. También tiene un tema de viaje en el tiempo que le permite visitar diferentes períodos históricos y luchar contra zombies allí.

    64aa2da5cf
    -
    -
    \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descarga Quickbooks 2017 Premier.md b/spaces/Benson/text-generation/Examples/Descarga Quickbooks 2017 Premier.md deleted file mode 100644 index 1eb04538483d26fefcc4116d29c5289823610561..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descarga Quickbooks 2017 Premier.md +++ /dev/null @@ -1,96 +0,0 @@ - -

    Tamil Movie Download 2017: Cómo ver las mejores películas tamiles del año en línea

    -

    Las películas tamiles son una forma de cine indio que se origina en el estado de Tamil Nadu en el sur de la India. Son conocidos por su rica cultura, historia y diversidad, así como por su estilo, música y acción únicos. Las películas tamiles tienen una enorme base de fans no solo en la India, sino también en otros países como Sri Lanka, Malasia, Singapur y Oriente Medio.

    -

    descarga quickbooks 2017 premier


    DOWNLOAD === https://bltlly.com/2v6Jg3



    -

    2017 fue un gran año para el cine tamil, ya que produjo algunas de las películas más aclamadas y comercialmente exitosas en los últimos tiempos. Algunas de estas películas incluyen Vikram Vedha, un thriller criminal que explora el dilema moral entre un policía y un gángster; Baahubali 2: The Conclusion, una fantasía épica que rompió récords de taquilla en toda la India; Mersal, un drama político que provocó controversia y debate; y Vivegam, un thriller de espías lleno de acción que mostró el poder estelar de Ajith Kumar.

    -

    Cómo descargar películas Tamil de forma legal y segura

    -

    Si quieres ver estas increíbles películas Tamil en línea, es posible que tenga la tentación de buscar descargas gratuitas o baratas de varios sitios web. Sin embargo, esto no es una buena idea, ya que puede exponerlo a muchos riesgos y problemas. Estas son algunas de las razones por las que debería evitar los sitios ilegales o piratas y optar por servicios de streaming legales y seguros en su lugar.

    -

    Los beneficios de usar un servicio de streaming

    -

    Un servicio de streaming es una plataforma que te permite ver películas y programas online sin necesidad de descargarlos. Puede acceder a ellos en cualquier momento y en cualquier lugar, siempre y cuando tenga una conexión a Internet y un dispositivo compatible. Algunos de los beneficios de usar un servicio de streaming son:

    -

    -
      -
    • Puedes ver vídeos de alta calidad sin búfer ni interrupciones.
    • -
    • Puede elegir entre una amplia gama de géneros, idiomas y categorías.
    • -
    • Puedes disfrutar de contenido exclusivo que no está disponible en otros lugares.
    • - -
    • Puede evitar problemas legales y sanciones que pueden surgir de descargar o compartir contenido con derechos de autor.
    • -
    -

    Lista de algunos de los mejores servicios de streaming para películas en tamil

    -

    Hay muchos servicios de streaming que ofrecen películas Tamil en línea, pero algunos de ellos son mejores que otros. Estos son algunos de los mejores que puedes probar:

| Servicio de streaming | Características | Precio |
| --- | --- | --- |
| Hotstar | - Ofrece más de 1000 películas tamiles, incluyendo nuevos lanzamientos y clásicos<br>- También tiene películas en hindi, inglés, malayalam, telugu, kannada, bengalí y marathi<br>- Tiene deportes en vivo, noticias, programas de televisión y originales<br>- Tiene subtítulos y opciones de doblaje<br>- Tiene función de visualización sin conexión | - Gratis para contenido limitado<br>- Rs. 299 por mes o Rs. 1499 por año para contenido premium |
| Amazon Prime Video | - Ofrece más de 500 películas tamiles, incluyendo nuevos lanzamientos y exclusivas<br>- También tiene hindi, inglés, malayalam, telugu, kannada, bengalí, marathi y otras películas regionales<br>- Tiene deportes en vivo, noticias, programas de televisión y originales<br>- Tiene subtítulos y opciones de doblaje<br>- Tiene función de visualización sin conexión | - Gratis durante 30 días de prueba<br>- Rs. 129 por mes o Rs. 999 por año para contenido ilimitado |
| Netflix | - Ofrece más de 300 películas tamiles, incluyendo nuevos lanzamientos y originales<br>- También tiene hindi, inglés, malayalam, telugu, kannada, bengalí, marathi y otras películas regionales e internacionales<br>- Tiene programas de televisión, documentales y especiales<br>- Tiene subtítulos y opciones de doblaje<br>- Tiene función de visualización sin conexión | - Rs. 199 por mes para el plan móvil (un dispositivo)<br>- Rs. 499 por mes para el plan básico (un dispositivo)<br>- Rs. 649 por mes para el plan estándar (dos dispositivos)<br>- Rs. 799 por mes para el plan premium (cuatro dispositivos) |
    -

    Los riesgos de usar sitios ilegales o pirateados

    -

    Si bien los servicios de streaming son la mejor manera de ver películas tamiles en línea, algunas personas todavía pueden recurrir a sitios ilegales o piratas que ofrecen descargas gratuitas o baratas. Sin embargo, esta es una práctica muy arriesgada e irresponsable que puede tener graves consecuencias. Estos son algunos de los riesgos y problemas de usar estos sitios:

    -
      -
    • Puede descargar virus, malware o spyware que pueden dañar su dispositivo o robar su información personal.
    • -
    • Usted puede enfrentar acciones legales o multas por violar los derechos de propiedad intelectual de los cineastas y distribuidores.
    • -
    • Puedes comprometer la calidad y la seguridad de tu conexión a Internet exponiéndola a hackers o ciberataques.
    • -
    • Puede perderse la experiencia original y auténtica de ver películas tamiles, ya que están destinados a ser vistos.
    • -
    • Usted puede contribuir a la pérdida de ingresos y puestos de trabajo para la industria cinematográfica tamil y sus trabajadores.
    • -
    -

    Lista de algunos de los peligros y desventajas comunes de usar tales sitios

    -

    Hay muchos sitios ilegales o pirateados que dicen ofrecer descargas de películas tamiles, pero la mayoría de ellos son poco fiables, inseguros o poco éticos. Aquí están algunos de los peligros y desventajas comunes de usar tales sitios:

| Sitio ilegal o pirata | Peligros e inconvenientes |
| --- | --- |
| Tamilrockers | - Uno de los sitios más notorios que filtra nuevas películas tamiles en línea<br>- A menudo cambia su nombre de dominio para evadir a las autoridades<br>- Contiene anuncios emergentes, redirecciones y malware<br>- Se enfrenta a acciones legales y prohibiciones de varios gobiernos e ISPs<br>- No respeta el trabajo duro y la creatividad de los cineastas y actores |
| Isaimini | - Un sitio que proporciona descargas de películas tamiles en varios formatos y tamaños<br>- Tiene un diseño y una interfaz desactualizados con enlaces rotos<br>- Contiene anuncios de spam, redirecciones y malware<br>- Enfrenta acciones legales y prohibiciones de varios gobiernos e ISPs<br>- Perjudica la reputación y la imagen de la industria cinematográfica tamil |

    Cómo disfrutar de películas tamiles con subtítulos y doblaje

    -

    Otro aspecto de ver películas tamiles en línea es la elección de subtítulos y doblaje. Los subtítulos son el texto que aparece en la parte inferior de la pantalla que traduce el diálogo a otro idioma. El doblaje es el proceso de reemplazar la voz original de los actores por otro lenguaje. Tanto los subtítulos como el doblaje tienen sus pros y sus contras, dependiendo de su preferencia y conveniencia. Estas son algunas de las ventajas y desventajas de ver películas tamiles con subtítulos y doblaje.

    -

    Las ventajas de ver películas en tamil con subtítulos

    -

    Los subtítulos son una gran manera de disfrutar de las películas tamiles si no estás familiarizado con el idioma o no quieres aprenderlo. Algunas de las ventajas de ver películas en tamil con subtítulos son:

    -
      -
    • Puedes entender mejor el diálogo y la historia.
    • -
    • Puedes apreciar los matices, emociones y expresiones de los actores.
    • -
    • Puedes aprender nuevas palabras, frases y expresiones en tamil.
    • -
    • Puede evitar perder cualquier detalle o chistes importantes que puedan perderse en la traducción.
    • -
    • Puedes ver la película en su forma original y calidad.
    • -
    -

    Lista de algunos de los mejores sitios para encontrar subtítulos para películas en tamil

    -

    Hay muchos sitios que ofrecen subtítulos para películas tamiles en línea, pero algunos de ellos son mejores que otros. Estos son algunos de los mejores que puedes probar:

| Sitio de subtítulos | Características |
| --- | --- |
| Opensubtitles | - Otro sitio popular y confiable para encontrar subtítulos para películas y programas<br>- Tiene una gran colección de subtítulos en varios idiomas, incluyendo tamil<br>- Tiene una interfaz simple y rápida y función de búsqueda<br>- Permite a los usuarios subir, calificar y comentar subtítulos<br>- Tiene un blog para noticias y actualizaciones sobre subtítulos |
| YIFY Subtitles | - Un sitio que se especializa en encontrar subtítulos para películas YIFY, que son películas de alta calidad y de tamaño pequeño<br>- Tiene una selección decente de subtítulos en varios idiomas, incluyendo tamil<br>- Tiene una interfaz elegante y moderna y función de búsqueda<br>- Permite a los usuarios subir, calificar y comentar subtítulos<br>- Tiene una sección de preguntas frecuentes para ayuda y soporte |

    Las desventajas de ver películas tamiles con doblaje

    -

    El doblaje es una práctica común que se utiliza para hacer películas accesibles a un público más amplio que no habla el idioma original. Sin embargo, el doblaje también puede tener algunos inconvenientes que pueden afectar el disfrute y la apreciación de las películas tamiles. Algunas de las desventajas de ver películas tamiles con doblaje son:

    -
      -
    • Puede perder la voz y el tono originales de los actores.
    • -
    • Puedes perderte los matices culturales y lingüísticos y las referencias que son específicas de Tamil.
    • -
    • Puedes distraerte o molestarte por el desajuste entre los movimientos de los labios y la voz.
    • -
    • Puede encontrarse con mala calidad o doblaje inexacto que puede arruinar el estado de ánimo y el significado de la película.
    • -
    • Se puede faltar el respeto a la visión artística y la intención de los cineastas y actores.
    • -
    -

    Lista de algunas de las razones por las que el doblaje puede arruinar la experiencia original

    -

    Hay muchos ejemplos de cómo el doblaje puede arruinar la experiencia original de ver películas tamiles. Estos son algunos de ellos:

| Película tamil | Razón por la que el doblaje puede arruinarla |
| --- | --- |
| Baahubali 2: La Conclusión | - La película es un espectáculo visual que muestra la grandeza y belleza de la cultura e historia tamil<br>- El doblaje puede diluir la autenticidad y riqueza de la lengua y la música que son parte integral de la película<br>- El doblaje también puede reducir la intensidad y la emoción de las secuencias de acción y el clímax |
| Mersal | - La película es un drama político que aborda algunos de los problemas sociales y económicos en la India<br>- El doblaje puede perder la relevancia y resonancia de algunos de los diálogos y canciones que están dirigidos a la audiencia tamil<br>- El doblaje también puede cambiar o censurar algunos de los aspectos controvertidos o sensibles de la película |
    -

    Conclusión

    -

    Las películas tamiles son una gran fuente de entretenimiento, educación e inspiración para millones de personas en todo el mundo. Ofrecen una variedad de géneros, temas y estilos que se adaptan a diferentes gustos y preferencias. Sin embargo, si quieres ver películas Tamil en línea, debes tener cuidado con la forma en que las descargas. Debe evitar los sitios ilegales o piratas que pueden dañar su dispositivo, su seguridad y su conciencia. Debe optar por servicios de transmisión legales y seguros que puedan proporcionarle videos de alta calidad, una amplia gama de opciones y contenido exclusivo. También debe considerar ver películas tamiles con subtítulos en lugar de doblaje, ya que puede mejorar su comprensión y apreciación de la lengua, la cultura y el arte del cine tamil.

    - -

    ¿Qué estás esperando? Regístrate para Hotstar hoy y comienza a ver tus películas favoritas de Tamil en línea. ¡No te arrepentirás!

    -

    Preguntas frecuentes

    -
      -
    1. ¿Cuáles son algunas de las mejores películas tamiles de 2017?
    2. -

      Some of the best Tamil movies of 2017 are Vikram Vedha, Baahubali 2: The Conclusion, Mersal, Vivegam, Aruvi, Theeran Adhigaaram Ondru, Thupparivaalan, Velaikkaran, Meesaya Murukku, and Vikram Vedha. 2. ¿Cómo puedo descargar películas Tamil de forma legal y segura?

      -

      Puede descargar películas tamiles de forma legal y segura mediante el uso de un servicio de transmisión que ofrece películas tamiles en línea, como Hotstar, Amazon Prime Video, Netflix o ZEE5. Estos servicios de transmisión tienen la licencia y el permiso para distribuir películas tamiles en línea, y también proporcionan videos de alta calidad, una amplia gama de opciones y contenido exclusivo. También puede evitar los riesgos y problemas de usar sitios ilegales o pirateados que pueden dañar su dispositivo, su seguridad y su conciencia.

      -
    3. ¿Cómo puedo ver películas en tamil con subtítulos o doblaje?
    4. -

      Puedes ver películas en tamil con subtítulos o doblaje eligiendo la opción que se adapte a tu preferencia y conveniencia. La mayoría de los servicios de streaming ofrecen subtítulos o opciones de doblaje para películas tamiles, dependiendo de la disponibilidad y la demanda. También puede encontrar subtítulos para películas tamiles en algunos sitios que se especializan en proporcionar subtítulos para películas y programas, como Subscene, Opensubtitles o YIFY Subtitles. Sin embargo, debe tener cuidado con la calidad y precisión de los subtítulos o doblajes, ya que pueden afectar su disfrute y apreciación de la película.

      -
    5. ¿Cuáles son las ventajas y desventajas de ver películas tamiles con subtítulos o doblaje?
    6. - -
    7. ¿Por qué debería evitar sitios ilegales o piratas para descargar películas Tamil?
    8. -

      Debes evitar sitios ilegales o pirateados para descargar películas tamiles porque pueden exponerte a muchos riesgos y problemas. Algunos de los riesgos y problemas de usar estos sitios son que puede descargar virus, malware o spyware que pueden dañar su dispositivo o robar su información personal, enfrentar acciones legales o multas por violar los derechos de propiedad intelectual de los cineastas y distribuidores, comprometer la calidad y la seguridad de su conexión a Internet al exponerla a hackers o ciberataques, perderse la experiencia original y auténtica de ver películas tamiles como están destinadas a ser vistas, y contribuir a la pérdida de ingresos y puestos de trabajo para la industria cinematográfica tamil y sus trabajadores.

      -
    9. ¿Qué es Hotstar y por qué debería usarlo para ver películas Tamil en línea?
    10. -

      Hotstar es uno de los mejores servicios de streaming para ver películas tamiles en línea, ya que ofrece más de 1000 películas tamiles, incluidos nuevos lanzamientos y clásicos. También puedes ver películas en hindi, inglés, malayalam, telugu, kannada, bengalí, marathi y otras películas regionales en Hotstar. Puede disfrutar de deportes en vivo, noticias, programas de televisión y originales en Hotstar también. Puede ver Hotstar con subtítulos o opciones de doblaje, según su preferencia. También puede descargar vídeos para verlos sin conexión en Hotstar. Puede obtener Hotstar gratis para contenido limitado, o para Rs. 299 por mes o Rs. 1499 por año para contenido premium.

      64aa2da5cf
      -
      -
      \ No newline at end of file diff --git a/spaces/BernardoOlisan/vqganclip/CLIP/clip/model.py b/spaces/BernardoOlisan/vqganclip/CLIP/clip/model.py deleted file mode 100644 index f2c95c481724270116998b90de64cee8ef58c94e..0000000000000000000000000000000000000000 --- a/spaces/BernardoOlisan/vqganclip/CLIP/clip/model.py +++ /dev/null @@ -1,432 +0,0 @@ -from collections import OrderedDict -from typing import Tuple, Union - -import numpy as np -import torch -import torch.nn.functional as F -from torch import nn - - -class Bottleneck(nn.Module): - expansion = 4 - - def __init__(self, inplanes, planes, stride=1): - super().__init__() - - # all conv layers have stride 1. an avgpool is performed after the second convolution when stride > 1 - self.conv1 = nn.Conv2d(inplanes, planes, 1, bias=False) - self.bn1 = nn.BatchNorm2d(planes) - - self.conv2 = nn.Conv2d(planes, planes, 3, padding=1, bias=False) - self.bn2 = nn.BatchNorm2d(planes) - - self.avgpool = nn.AvgPool2d(stride) if stride > 1 else nn.Identity() - - self.conv3 = nn.Conv2d(planes, planes * self.expansion, 1, bias=False) - self.bn3 = nn.BatchNorm2d(planes * self.expansion) - - self.relu = nn.ReLU(inplace=True) - self.downsample = None - self.stride = stride - - if stride > 1 or inplanes != planes * Bottleneck.expansion: - # downsampling layer is prepended with an avgpool, and the subsequent convolution has stride 1 - self.downsample = nn.Sequential(OrderedDict([ - ("-1", nn.AvgPool2d(stride)), - ("0", nn.Conv2d(inplanes, planes * self.expansion, 1, stride=1, bias=False)), - ("1", nn.BatchNorm2d(planes * self.expansion)) - ])) - - def forward(self, x: torch.Tensor): - identity = x - - out = self.relu(self.bn1(self.conv1(x))) - out = self.relu(self.bn2(self.conv2(out))) - out = self.avgpool(out) - out = self.bn3(self.conv3(out)) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - out = self.relu(out) - return out - - -class AttentionPool2d(nn.Module): - def __init__(self, spacial_dim: int, embed_dim: int, num_heads: int, output_dim: int = None): - super().__init__() - self.positional_embedding = nn.Parameter(torch.randn(spacial_dim ** 2 + 1, embed_dim) / embed_dim ** 0.5) - self.k_proj = nn.Linear(embed_dim, embed_dim) - self.q_proj = nn.Linear(embed_dim, embed_dim) - self.v_proj = nn.Linear(embed_dim, embed_dim) - self.c_proj = nn.Linear(embed_dim, output_dim or embed_dim) - self.num_heads = num_heads - - def forward(self, x): - x = x.reshape(x.shape[0], x.shape[1], x.shape[2] * x.shape[3]).permute(2, 0, 1) # NCHW -> (HW)NC - x = torch.cat([x.mean(dim=0, keepdim=True), x], dim=0) # (HW+1)NC - x = x + self.positional_embedding[:, None, :].to(x.dtype) # (HW+1)NC - x, _ = F.multi_head_attention_forward( - query=x, key=x, value=x, - embed_dim_to_check=x.shape[-1], - num_heads=self.num_heads, - q_proj_weight=self.q_proj.weight, - k_proj_weight=self.k_proj.weight, - v_proj_weight=self.v_proj.weight, - in_proj_weight=None, - in_proj_bias=torch.cat([self.q_proj.bias, self.k_proj.bias, self.v_proj.bias]), - bias_k=None, - bias_v=None, - add_zero_attn=False, - dropout_p=0, - out_proj_weight=self.c_proj.weight, - out_proj_bias=self.c_proj.bias, - use_separate_proj_weight=True, - training=self.training, - need_weights=False - ) - - return x[0] - - -class ModifiedResNet(nn.Module): - """ - A ResNet class that is similar to torchvision's but contains the following changes: - - There are now 3 "stem" convolutions as opposed to 1, with an average pool instead of a max pool. 
- - Performs anti-aliasing strided convolutions, where an avgpool is prepended to convolutions with stride > 1 - - The final pooling layer is a QKV attention instead of an average pool - """ - - def __init__(self, layers, output_dim, heads, input_resolution=224, width=64): - super().__init__() - self.output_dim = output_dim - self.input_resolution = input_resolution - - # the 3-layer stem - self.conv1 = nn.Conv2d(3, width // 2, kernel_size=3, stride=2, padding=1, bias=False) - self.bn1 = nn.BatchNorm2d(width // 2) - self.conv2 = nn.Conv2d(width // 2, width // 2, kernel_size=3, padding=1, bias=False) - self.bn2 = nn.BatchNorm2d(width // 2) - self.conv3 = nn.Conv2d(width // 2, width, kernel_size=3, padding=1, bias=False) - self.bn3 = nn.BatchNorm2d(width) - self.avgpool = nn.AvgPool2d(2) - self.relu = nn.ReLU(inplace=True) - - # residual layers - self._inplanes = width # this is a *mutable* variable used during construction - self.layer1 = self._make_layer(width, layers[0]) - self.layer2 = self._make_layer(width * 2, layers[1], stride=2) - self.layer3 = self._make_layer(width * 4, layers[2], stride=2) - self.layer4 = self._make_layer(width * 8, layers[3], stride=2) - - embed_dim = width * 32 # the ResNet feature dimension - self.attnpool = AttentionPool2d(input_resolution // 32, embed_dim, heads, output_dim) - - def _make_layer(self, planes, blocks, stride=1): - layers = [Bottleneck(self._inplanes, planes, stride)] - - self._inplanes = planes * Bottleneck.expansion - for _ in range(1, blocks): - layers.append(Bottleneck(self._inplanes, planes)) - - return nn.Sequential(*layers) - - def forward(self, x): - def stem(x): - for conv, bn in [(self.conv1, self.bn1), (self.conv2, self.bn2), (self.conv3, self.bn3)]: - x = self.relu(bn(conv(x))) - x = self.avgpool(x) - return x - - x = x.type(self.conv1.weight.dtype) - x = stem(x) - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.layer4(x) - x = self.attnpool(x) - - return x - - -class LayerNorm(nn.LayerNorm): - """Subclass torch's LayerNorm to handle fp16.""" - - def forward(self, x: torch.Tensor): - orig_type = x.dtype - ret = super().forward(x.type(torch.float32)) - return ret.type(orig_type) - - -class QuickGELU(nn.Module): - def forward(self, x: torch.Tensor): - return x * torch.sigmoid(1.702 * x) - - -class ResidualAttentionBlock(nn.Module): - def __init__(self, d_model: int, n_head: int, attn_mask: torch.Tensor = None): - super().__init__() - - self.attn = nn.MultiheadAttention(d_model, n_head) - self.ln_1 = LayerNorm(d_model) - self.mlp = nn.Sequential(OrderedDict([ - ("c_fc", nn.Linear(d_model, d_model * 4)), - ("gelu", QuickGELU()), - ("c_proj", nn.Linear(d_model * 4, d_model)) - ])) - self.ln_2 = LayerNorm(d_model) - self.attn_mask = attn_mask - - def attention(self, x: torch.Tensor): - self.attn_mask = self.attn_mask.to(dtype=x.dtype, device=x.device) if self.attn_mask is not None else None - return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0] - - def forward(self, x: torch.Tensor): - x = x + self.attention(self.ln_1(x)) - x = x + self.mlp(self.ln_2(x)) - return x - - -class Transformer(nn.Module): - def __init__(self, width: int, layers: int, heads: int, attn_mask: torch.Tensor = None): - super().__init__() - self.width = width - self.layers = layers - self.resblocks = nn.Sequential(*[ResidualAttentionBlock(width, heads, attn_mask) for _ in range(layers)]) - - def forward(self, x: torch.Tensor): - return self.resblocks(x) - - -class VisionTransformer(nn.Module): - def __init__(self, 
input_resolution: int, patch_size: int, width: int, layers: int, heads: int, output_dim: int): - super().__init__() - self.input_resolution = input_resolution - self.output_dim = output_dim - self.conv1 = nn.Conv2d(in_channels=3, out_channels=width, kernel_size=patch_size, stride=patch_size, bias=False) - - scale = width ** -0.5 - self.class_embedding = nn.Parameter(scale * torch.randn(width)) - self.positional_embedding = nn.Parameter(scale * torch.randn((input_resolution // patch_size) ** 2 + 1, width)) - self.ln_pre = LayerNorm(width) - - self.transformer = Transformer(width, layers, heads) - - self.ln_post = LayerNorm(width) - self.proj = nn.Parameter(scale * torch.randn(width, output_dim)) - - def forward(self, x: torch.Tensor): - x = self.conv1(x) # shape = [*, width, grid, grid] - x = x.reshape(x.shape[0], x.shape[1], -1) # shape = [*, width, grid ** 2] - x = x.permute(0, 2, 1) # shape = [*, grid ** 2, width] - x = torch.cat([self.class_embedding.to(x.dtype) + torch.zeros(x.shape[0], 1, x.shape[-1], dtype=x.dtype, device=x.device), x], dim=1) # shape = [*, grid ** 2 + 1, width] - x = x + self.positional_embedding.to(x.dtype) - x = self.ln_pre(x) - - x = x.permute(1, 0, 2) # NLD -> LND - x = self.transformer(x) - x = x.permute(1, 0, 2) # LND -> NLD - - x = self.ln_post(x[:, 0, :]) - - if self.proj is not None: - x = x @ self.proj - - return x - - -class CLIP(nn.Module): - def __init__(self, - embed_dim: int, - # vision - image_resolution: int, - vision_layers: Union[Tuple[int, int, int, int], int], - vision_width: int, - vision_patch_size: int, - # text - context_length: int, - vocab_size: int, - transformer_width: int, - transformer_heads: int, - transformer_layers: int - ): - super().__init__() - - self.context_length = context_length - - if isinstance(vision_layers, (tuple, list)): - vision_heads = vision_width * 32 // 64 - self.visual = ModifiedResNet( - layers=vision_layers, - output_dim=embed_dim, - heads=vision_heads, - input_resolution=image_resolution, - width=vision_width - ) - else: - vision_heads = vision_width // 64 - self.visual = VisionTransformer( - input_resolution=image_resolution, - patch_size=vision_patch_size, - width=vision_width, - layers=vision_layers, - heads=vision_heads, - output_dim=embed_dim - ) - - self.transformer = Transformer( - width=transformer_width, - layers=transformer_layers, - heads=transformer_heads, - attn_mask=self.build_attention_mask() - ) - - self.vocab_size = vocab_size - self.token_embedding = nn.Embedding(vocab_size, transformer_width) - self.positional_embedding = nn.Parameter(torch.empty(self.context_length, transformer_width)) - self.ln_final = LayerNorm(transformer_width) - - self.text_projection = nn.Parameter(torch.empty(transformer_width, embed_dim)) - self.logit_scale = nn.Parameter(torch.ones([]) * np.log(1 / 0.07)) - - self.initialize_parameters() - - def initialize_parameters(self): - nn.init.normal_(self.token_embedding.weight, std=0.02) - nn.init.normal_(self.positional_embedding, std=0.01) - - if isinstance(self.visual, ModifiedResNet): - if self.visual.attnpool is not None: - std = self.visual.attnpool.c_proj.in_features ** -0.5 - nn.init.normal_(self.visual.attnpool.q_proj.weight, std=std) - nn.init.normal_(self.visual.attnpool.k_proj.weight, std=std) - nn.init.normal_(self.visual.attnpool.v_proj.weight, std=std) - nn.init.normal_(self.visual.attnpool.c_proj.weight, std=std) - - for resnet_block in [self.visual.layer1, self.visual.layer2, self.visual.layer3, self.visual.layer4]: - for name, param in 
resnet_block.named_parameters(): - if name.endswith("bn3.weight"): - nn.init.zeros_(param) - - proj_std = (self.transformer.width ** -0.5) * ((2 * self.transformer.layers) ** -0.5) - attn_std = self.transformer.width ** -0.5 - fc_std = (2 * self.transformer.width) ** -0.5 - for block in self.transformer.resblocks: - nn.init.normal_(block.attn.in_proj_weight, std=attn_std) - nn.init.normal_(block.attn.out_proj.weight, std=proj_std) - nn.init.normal_(block.mlp.c_fc.weight, std=fc_std) - nn.init.normal_(block.mlp.c_proj.weight, std=proj_std) - - if self.text_projection is not None: - nn.init.normal_(self.text_projection, std=self.transformer.width ** -0.5) - - def build_attention_mask(self): - # lazily create causal attention mask, with full attention between the vision tokens - # pytorch uses additive attention mask; fill with -inf - mask = torch.empty(self.context_length, self.context_length) - mask.fill_(float("-inf")) - mask.triu_(1) # zero out the lower diagonal - return mask - - @property - def dtype(self): - return self.visual.conv1.weight.dtype - - def encode_image(self, image): - return self.visual(image.type(self.dtype)) - - def encode_text(self, text): - x = self.token_embedding(text).type(self.dtype) # [batch_size, n_ctx, d_model] - - x = x + self.positional_embedding.type(self.dtype) - x = x.permute(1, 0, 2) # NLD -> LND - x = self.transformer(x) - x = x.permute(1, 0, 2) # LND -> NLD - x = self.ln_final(x).type(self.dtype) - - # x.shape = [batch_size, n_ctx, transformer.width] - # take features from the eot embedding (eot_token is the highest number in each sequence) - x = x[torch.arange(x.shape[0]), text.argmax(dim=-1)] @ self.text_projection - - return x - - def forward(self, image, text): - image_features = self.encode_image(image) - text_features = self.encode_text(text) - - # normalized features - image_features = image_features / image_features.norm(dim=-1, keepdim=True) - text_features = text_features / text_features.norm(dim=-1, keepdim=True) - - # cosine similarity as logits - logit_scale = self.logit_scale.exp() - logits_per_image = logit_scale * image_features @ text_features.t() - logits_per_text = logit_scale * text_features @ image_features.t() - - # shape = [global_batch_size, global_batch_size] - return logits_per_image, logits_per_text - - -def convert_weights(model: nn.Module): - """Convert applicable model parameters to fp16""" - - def _convert_weights_to_fp16(l): - if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Linear)): - l.weight.data = l.weight.data.half() - if l.bias is not None: - l.bias.data = l.bias.data.half() - - if isinstance(l, nn.MultiheadAttention): - for attr in [*[f"{s}_proj_weight" for s in ["in", "q", "k", "v"]], "in_proj_bias", "bias_k", "bias_v"]: - tensor = getattr(l, attr) - if tensor is not None: - tensor.data = tensor.data.half() - - for name in ["text_projection", "proj"]: - if hasattr(l, name): - attr = getattr(l, name) - if attr is not None: - attr.data = attr.data.half() - - model.apply(_convert_weights_to_fp16) - - -def build_model(state_dict: dict): - vit = "visual.proj" in state_dict - - if vit: - vision_width = state_dict["visual.conv1.weight"].shape[0] - vision_layers = len([k for k in state_dict.keys() if k.startswith("visual.") and k.endswith(".attn.in_proj_weight")]) - vision_patch_size = state_dict["visual.conv1.weight"].shape[-1] - grid_size = round((state_dict["visual.positional_embedding"].shape[0] - 1) ** 0.5) - image_resolution = vision_patch_size * grid_size - else: - counts: list = [len(set(k.split(".")[2] for k in 
state_dict if k.startswith(f"visual.layer{b}"))) for b in [1, 2, 3, 4]] - vision_layers = tuple(counts) - vision_width = state_dict["visual.layer1.0.conv1.weight"].shape[0] - output_width = round((state_dict["visual.attnpool.positional_embedding"].shape[0] - 1) ** 0.5) - vision_patch_size = None - assert output_width ** 2 + 1 == state_dict["visual.attnpool.positional_embedding"].shape[0] - image_resolution = output_width * 32 - - embed_dim = state_dict["text_projection"].shape[1] - context_length = state_dict["positional_embedding"].shape[0] - vocab_size = state_dict["token_embedding.weight"].shape[0] - transformer_width = state_dict["ln_final.weight"].shape[0] - transformer_heads = transformer_width // 64 - transformer_layers = len(set(k.split(".")[2] for k in state_dict if k.startswith(f"transformer.resblocks"))) - - model = CLIP( - embed_dim, - image_resolution, vision_layers, vision_width, vision_patch_size, - context_length, vocab_size, transformer_width, transformer_heads, transformer_layers - ) - - for key in ["input_resolution", "context_length", "vocab_size"]: - if key in state_dict: - del state_dict[key] - - convert_weights(model) - model.load_state_dict(state_dict) - return model.eval() diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/idna/codec.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/idna/codec.py deleted file mode 100644 index 1ca9ba62c208527b796b49306f4b8c95eb868a51..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/idna/codec.py +++ /dev/null @@ -1,112 +0,0 @@ -from .core import encode, decode, alabel, ulabel, IDNAError -import codecs -import re -from typing import Tuple, Optional - -_unicode_dots_re = re.compile('[\u002e\u3002\uff0e\uff61]') - -class Codec(codecs.Codec): - - def encode(self, data: str, errors: str = 'strict') -> Tuple[bytes, int]: - if errors != 'strict': - raise IDNAError('Unsupported error handling \"{}\"'.format(errors)) - - if not data: - return b"", 0 - - return encode(data), len(data) - - def decode(self, data: bytes, errors: str = 'strict') -> Tuple[str, int]: - if errors != 'strict': - raise IDNAError('Unsupported error handling \"{}\"'.format(errors)) - - if not data: - return '', 0 - - return decode(data), len(data) - -class IncrementalEncoder(codecs.BufferedIncrementalEncoder): - def _buffer_encode(self, data: str, errors: str, final: bool) -> Tuple[str, int]: # type: ignore - if errors != 'strict': - raise IDNAError('Unsupported error handling \"{}\"'.format(errors)) - - if not data: - return "", 0 - - labels = _unicode_dots_re.split(data) - trailing_dot = '' - if labels: - if not labels[-1]: - trailing_dot = '.' - del labels[-1] - elif not final: - # Keep potentially unfinished label until the next call - del labels[-1] - if labels: - trailing_dot = '.' - - result = [] - size = 0 - for label in labels: - result.append(alabel(label)) - if size: - size += 1 - size += len(label) - - # Join with U+002E - result_str = '.'.join(result) + trailing_dot # type: ignore - size += len(trailing_dot) - return result_str, size - -class IncrementalDecoder(codecs.BufferedIncrementalDecoder): - def _buffer_decode(self, data: str, errors: str, final: bool) -> Tuple[str, int]: # type: ignore - if errors != 'strict': - raise IDNAError('Unsupported error handling \"{}\"'.format(errors)) - - if not data: - return ('', 0) - - labels = _unicode_dots_re.split(data) - trailing_dot = '' - if labels: - if not labels[-1]: - trailing_dot = '.' 
- del labels[-1] - elif not final: - # Keep potentially unfinished label until the next call - del labels[-1] - if labels: - trailing_dot = '.' - - result = [] - size = 0 - for label in labels: - result.append(ulabel(label)) - if size: - size += 1 - size += len(label) - - result_str = '.'.join(result) + trailing_dot - size += len(trailing_dot) - return (result_str, size) - - -class StreamWriter(Codec, codecs.StreamWriter): - pass - - -class StreamReader(Codec, codecs.StreamReader): - pass - - -def getregentry() -> codecs.CodecInfo: - # Compatibility as a search_function for codecs.register() - return codecs.CodecInfo( - name='idna', - encode=Codec().encode, # type: ignore - decode=Codec().decode, # type: ignore - incrementalencoder=IncrementalEncoder, - incrementaldecoder=IncrementalDecoder, - streamwriter=StreamWriter, - streamreader=StreamReader, - ) diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/install_scripts.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/install_scripts.py deleted file mode 100644 index aeb0e4240cd92c77e0ba96e6651487625cd9f391..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/install_scripts.py +++ /dev/null @@ -1,70 +0,0 @@ -from distutils import log -import distutils.command.install_scripts as orig -from distutils.errors import DistutilsModuleError -import os -import sys - -from pkg_resources import Distribution, PathMetadata -from .._path import ensure_directory - - -class install_scripts(orig.install_scripts): - """Do normal script install, plus any egg_info wrapper scripts""" - - def initialize_options(self): - orig.install_scripts.initialize_options(self) - self.no_ep = False - - def run(self): - import setuptools.command.easy_install as ei - - self.run_command("egg_info") - if self.distribution.scripts: - orig.install_scripts.run(self) # run first to set up self.outfiles - else: - self.outfiles = [] - if self.no_ep: - # don't install entry point scripts into .egg file! - return - - ei_cmd = self.get_finalized_command("egg_info") - dist = Distribution( - ei_cmd.egg_base, PathMetadata(ei_cmd.egg_base, ei_cmd.egg_info), - ei_cmd.egg_name, ei_cmd.egg_version, - ) - bs_cmd = self.get_finalized_command('build_scripts') - exec_param = getattr(bs_cmd, 'executable', None) - try: - bw_cmd = self.get_finalized_command("bdist_wininst") - is_wininst = getattr(bw_cmd, '_is_running', False) - except (ImportError, DistutilsModuleError): - is_wininst = False - writer = ei.ScriptWriter - if is_wininst: - exec_param = "python.exe" - writer = ei.WindowsScriptWriter - if exec_param == sys.executable: - # In case the path to the Python executable contains a space, wrap - # it so it's not split up. 
- exec_param = [exec_param] - # resolve the writer to the environment - writer = writer.best() - cmd = writer.command_spec_class.best().from_param(exec_param) - for args in writer.get_args(dist, cmd.as_header()): - self.write_script(*args) - - def write_script(self, script_name, contents, mode="t", *ignored): - """Write an executable file to the scripts directory""" - from setuptools.command.easy_install import chmod, current_umask - - log.info("Installing %s script to %s", script_name, self.install_dir) - target = os.path.join(self.install_dir, script_name) - self.outfiles.append(target) - - mask = current_umask() - if not self.dry_run: - ensure_directory(target) - f = open(target, "w" + mode) - f.write(contents) - f.close() - chmod(target, 0o777 - mask) diff --git a/spaces/BilalSardar/Text-To-image-AllModels/app.py b/spaces/BilalSardar/Text-To-image-AllModels/app.py deleted file mode 100644 index 4c9be36af58c47d288dae6872445bec71e3afa04..0000000000000000000000000000000000000000 --- a/spaces/BilalSardar/Text-To-image-AllModels/app.py +++ /dev/null @@ -1,44 +0,0 @@ -from diffusers import StableDiffusionPipeline -import torch - -modelieo=['nitrosocke/Arcane-Diffusion', - 'dreamlike-art/dreamlike-diffusion-1.0', - 'nitrosocke/archer-diffusion', - 'Linaqruf/anything-v3.0', - 'nitrosocke/mo-di-diffusion', - 'nitrosocke/classic-anim-diffusion', - 'dallinmackay/Van-Gogh-diffusion', - 'wavymulder/wavyfusion', - 'wavymulder/Analog-Diffusion', - 'nitrosocke/redshift-diffusion', - 'prompthero/midjourney-v4-diffusion', - 'hakurei/waifu-diffusion', - 'DGSpitzer/Cyberpunk-Anime-Diffusion', - 'nitrosocke/elden-ring-diffusion', - 'naclbit/trinart_stable_diffusion_v2', - 'nitrosocke/spider-verse-diffusion', - 'Fictiverse/Stable_Diffusion_BalloonArt_Model', - 'dallinmackay/Tron-Legacy-diffusion', - 'lambdalabs/sd-pokemon-diffusers', - 'AstraliteHeart/pony-diffusion', - 'nousr/robo-diffusion'] - - -def TextToImage(Prompt,model): - model_id = model - pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) - pipe = pipe.to("cpu") - - prompt = Prompt - image = pipe(prompt).images[0] - - return image - - -import gradio as gr -interface = gr.Interface(fn=TextToImage, - inputs=["text", gr.Dropdown(modelieo)], - outputs="image", - title='Text to Image') - -interface.launch() \ No newline at end of file diff --git a/spaces/Boadiwaa/Recipes/openai/api_resources/answer.py b/spaces/Boadiwaa/Recipes/openai/api_resources/answer.py deleted file mode 100644 index 33de3cb7e9901cfa35a17dd9998da5aca72f9e3f..0000000000000000000000000000000000000000 --- a/spaces/Boadiwaa/Recipes/openai/api_resources/answer.py +++ /dev/null @@ -1,12 +0,0 @@ -from openai.openai_object import OpenAIObject - - -class Answer(OpenAIObject): - @classmethod - def get_url(self): - return "/answers" - - @classmethod - def create(cls, **params): - instance = cls() - return instance.request("post", cls.get_url(), params) diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/densepose/structures.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/densepose/structures.py deleted file mode 100644 index 41bdb164d696d707adc7f4a15498cf9bab644276..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/densepose/structures.py +++ /dev/null @@ -1,578 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved -import base64 -import numpy as np -from io import BytesIO -import torch -from PIL import Image -from torch.nn import functional as F - - -class DensePoseTransformData(object): - - # Horizontal symmetry label transforms used for horizontal flip - MASK_LABEL_SYMMETRIES = [0, 1, 3, 2, 5, 4, 7, 6, 9, 8, 11, 10, 13, 12, 14] - # fmt: off - POINT_LABEL_SYMMETRIES = [ 0, 1, 2, 4, 3, 6, 5, 8, 7, 10, 9, 12, 11, 14, 13, 16, 15, 18, 17, 20, 19, 22, 21, 24, 23] # noqa - # fmt: on - - def __init__(self, uv_symmetries): - self.mask_label_symmetries = DensePoseTransformData.MASK_LABEL_SYMMETRIES - self.point_label_symmetries = DensePoseTransformData.POINT_LABEL_SYMMETRIES - self.uv_symmetries = uv_symmetries - - @staticmethod - def load(fpath): - import scipy.io - - uv_symmetry_map = scipy.io.loadmat(fpath) - uv_symmetry_map_torch = {} - for key in ["U_transforms", "V_transforms"]: - uv_symmetry_map_torch[key] = [] - map_src = uv_symmetry_map[key] - map_dst = uv_symmetry_map_torch[key] - for i in range(map_src.shape[1]): - map_dst.append(torch.from_numpy(map_src[0, i]).to(dtype=torch.float)) - uv_symmetry_map_torch[key] = torch.stack(map_dst, dim=0).to( - device=torch.cuda.current_device() - ) - transform_data = DensePoseTransformData(uv_symmetry_map_torch) - return transform_data - - -class DensePoseDataRelative(object): - """ - Dense pose relative annotations that can be applied to any bounding box: - x - normalized X coordinates [0, 255] of annotated points - y - normalized Y coordinates [0, 255] of annotated points - i - body part labels 0,...,24 for annotated points - u - body part U coordinates [0, 1] for annotated points - v - body part V coordinates [0, 1] for annotated points - segm - 256x256 segmentation mask with values 0,...,14 - To obtain absolute x and y data wrt some bounding box one needs to first - divide the data by 256, multiply by the respective bounding box size - and add bounding box offset: - x_img = x0 + x_norm * w / 256.0 - y_img = y0 + y_norm * h / 256.0 - Segmentation masks are typically sampled to get image-based masks. 
- """ - - # Key for normalized X coordinates in annotation dict - X_KEY = "dp_x" - # Key for normalized Y coordinates in annotation dict - Y_KEY = "dp_y" - # Key for U part coordinates in annotation dict - U_KEY = "dp_U" - # Key for V part coordinates in annotation dict - V_KEY = "dp_V" - # Key for I point labels in annotation dict - I_KEY = "dp_I" - # Key for segmentation mask in annotation dict - S_KEY = "dp_masks" - # Number of body parts in segmentation masks - N_BODY_PARTS = 14 - # Number of parts in point labels - N_PART_LABELS = 24 - MASK_SIZE = 256 - - def __init__(self, annotation, cleanup=False): - is_valid, reason_not_valid = DensePoseDataRelative.validate_annotation(annotation) - assert is_valid, "Invalid DensePose annotations: {}".format(reason_not_valid) - self.x = torch.as_tensor(annotation[DensePoseDataRelative.X_KEY]) - self.y = torch.as_tensor(annotation[DensePoseDataRelative.Y_KEY]) - self.i = torch.as_tensor(annotation[DensePoseDataRelative.I_KEY]) - self.u = torch.as_tensor(annotation[DensePoseDataRelative.U_KEY]) - self.v = torch.as_tensor(annotation[DensePoseDataRelative.V_KEY]) - self.segm = DensePoseDataRelative.extract_segmentation_mask(annotation) - self.device = torch.device("cpu") - if cleanup: - DensePoseDataRelative.cleanup_annotation(annotation) - - def to(self, device): - if self.device == device: - return self - new_data = DensePoseDataRelative.__new__(DensePoseDataRelative) - new_data.x = self.x - new_data.x = self.x.to(device) - new_data.y = self.y.to(device) - new_data.i = self.i.to(device) - new_data.u = self.u.to(device) - new_data.v = self.v.to(device) - new_data.segm = self.segm.to(device) - new_data.device = device - return new_data - - @staticmethod - def extract_segmentation_mask(annotation): - import pycocotools.mask as mask_utils - - poly_specs = annotation[DensePoseDataRelative.S_KEY] - segm = torch.zeros((DensePoseDataRelative.MASK_SIZE,) * 2, dtype=torch.float32) - for i in range(DensePoseDataRelative.N_BODY_PARTS): - poly_i = poly_specs[i] - if poly_i: - mask_i = mask_utils.decode(poly_i) - segm[mask_i > 0] = i + 1 - return segm - - @staticmethod - def validate_annotation(annotation): - for key in [ - DensePoseDataRelative.X_KEY, - DensePoseDataRelative.Y_KEY, - DensePoseDataRelative.I_KEY, - DensePoseDataRelative.U_KEY, - DensePoseDataRelative.V_KEY, - DensePoseDataRelative.S_KEY, - ]: - if key not in annotation: - return False, "no {key} data in the annotation".format(key=key) - return True, None - - @staticmethod - def cleanup_annotation(annotation): - for key in [ - DensePoseDataRelative.X_KEY, - DensePoseDataRelative.Y_KEY, - DensePoseDataRelative.I_KEY, - DensePoseDataRelative.U_KEY, - DensePoseDataRelative.V_KEY, - DensePoseDataRelative.S_KEY, - ]: - if key in annotation: - del annotation[key] - - def apply_transform(self, transforms, densepose_transform_data): - self._transform_pts(transforms, densepose_transform_data) - self._transform_segm(transforms, densepose_transform_data) - - def _transform_pts(self, transforms, dp_transform_data): - import detectron2.data.transforms as T - - # NOTE: This assumes that HorizFlipTransform is the only one that does flip - do_hflip = sum(isinstance(t, T.HFlipTransform) for t in transforms.transforms) % 2 == 1 - if do_hflip: - self.x = self.segm.size(1) - self.x - self._flip_iuv_semantics(dp_transform_data) - - def _flip_iuv_semantics(self, dp_transform_data: DensePoseTransformData) -> None: - i_old = self.i.clone() - uv_symmetries = dp_transform_data.uv_symmetries - pt_label_symmetries = 
dp_transform_data.point_label_symmetries - for i in range(self.N_PART_LABELS): - if i + 1 in i_old: - annot_indices_i = i_old == i + 1 - if pt_label_symmetries[i + 1] != i + 1: - self.i[annot_indices_i] = pt_label_symmetries[i + 1] - u_loc = (self.u[annot_indices_i] * 255).long() - v_loc = (self.v[annot_indices_i] * 255).long() - self.u[annot_indices_i] = uv_symmetries["U_transforms"][i][v_loc, u_loc].to( - device=self.u.device - ) - self.v[annot_indices_i] = uv_symmetries["V_transforms"][i][v_loc, u_loc].to( - device=self.v.device - ) - - def _transform_segm(self, transforms, dp_transform_data): - import detectron2.data.transforms as T - - # NOTE: This assumes that HorizFlipTransform is the only one that does flip - do_hflip = sum(isinstance(t, T.HFlipTransform) for t in transforms.transforms) % 2 == 1 - if do_hflip: - self.segm = torch.flip(self.segm, [1]) - self._flip_segm_semantics(dp_transform_data) - - def _flip_segm_semantics(self, dp_transform_data): - old_segm = self.segm.clone() - mask_label_symmetries = dp_transform_data.mask_label_symmetries - for i in range(self.N_BODY_PARTS): - if mask_label_symmetries[i + 1] != i + 1: - self.segm[old_segm == i + 1] = mask_label_symmetries[i + 1] - - -def normalized_coords_transform(x0, y0, w, h): - """ - Coordinates transform that maps top left corner to (-1, -1) and bottom - right corner to (1, 1). Used for torch.grid_sample to initialize the - grid - """ - - def f(p): - return (2 * (p[0] - x0) / w - 1, 2 * (p[1] - y0) / h - 1) - - return f - - -class DensePoseOutput(object): - def __init__(self, S, I, U, V, confidences): - """ - Args: - S (`torch.Tensor`): coarse segmentation tensor of size (N, A, H, W) - I (`torch.Tensor`): fine segmentation tensor of size (N, C, H, W) - U (`torch.Tensor`): U coordinates for each fine segmentation label of size (N, C, H, W) - V (`torch.Tensor`): V coordinates for each fine segmentation label of size (N, C, H, W) - confidences (dict of str -> `torch.Tensor`) estimated confidence model parameters - """ - self.S = S - self.I = I # noqa: E741 - self.U = U - self.V = V - self.confidences = confidences - self._check_output_dims(S, I, U, V) - - def _check_output_dims(self, S, I, U, V): - assert ( - len(S.size()) == 4 - ), "Segmentation output should have 4 " "dimensions (NCHW), but has size {}".format( - S.size() - ) - assert ( - len(I.size()) == 4 - ), "Segmentation output should have 4 " "dimensions (NCHW), but has size {}".format( - S.size() - ) - assert ( - len(U.size()) == 4 - ), "Segmentation output should have 4 " "dimensions (NCHW), but has size {}".format( - S.size() - ) - assert ( - len(V.size()) == 4 - ), "Segmentation output should have 4 " "dimensions (NCHW), but has size {}".format( - S.size() - ) - assert len(S) == len(I), ( - "Number of output segmentation planes {} " - "should be equal to the number of output part index " - "planes {}".format(len(S), len(I)) - ) - assert S.size()[2:] == I.size()[2:], ( - "Output segmentation plane size {} " - "should be equal to the output part index " - "plane size {}".format(S.size()[2:], I.size()[2:]) - ) - assert I.size() == U.size(), ( - "Part index output shape {} " - "should be the same as U coordinates output shape {}".format(I.size(), U.size()) - ) - assert I.size() == V.size(), ( - "Part index output shape {} " - "should be the same as V coordinates output shape {}".format(I.size(), V.size()) - ) - - def resize(self, image_size_hw): - # do nothing - outputs are invariant to resize - pass - - def _crop(self, S, I, U, V, bbox_old_xywh, bbox_new_xywh): - 
""" - Resample S, I, U, V from bbox_old to the cropped bbox_new - """ - x0old, y0old, wold, hold = bbox_old_xywh - x0new, y0new, wnew, hnew = bbox_new_xywh - tr_coords = normalized_coords_transform(x0old, y0old, wold, hold) - topleft = (x0new, y0new) - bottomright = (x0new + wnew, y0new + hnew) - topleft_norm = tr_coords(topleft) - bottomright_norm = tr_coords(bottomright) - hsize = S.size(1) - wsize = S.size(2) - grid = torch.meshgrid( - torch.arange( - topleft_norm[1], - bottomright_norm[1], - (bottomright_norm[1] - topleft_norm[1]) / hsize, - )[:hsize], - torch.arange( - topleft_norm[0], - bottomright_norm[0], - (bottomright_norm[0] - topleft_norm[0]) / wsize, - )[:wsize], - ) - grid = torch.stack(grid, dim=2).to(S.device) - assert ( - grid.size(0) == hsize - ), "Resampled grid expected " "height={}, actual height={}".format(hsize, grid.size(0)) - assert grid.size(1) == wsize, "Resampled grid expected " "width={}, actual width={}".format( - wsize, grid.size(1) - ) - S_new = F.grid_sample( - S.unsqueeze(0), - torch.unsqueeze(grid, 0), - mode="bilinear", - padding_mode="border", - align_corners=True, - ).squeeze(0) - I_new = F.grid_sample( - I.unsqueeze(0), - torch.unsqueeze(grid, 0), - mode="bilinear", - padding_mode="border", - align_corners=True, - ).squeeze(0) - U_new = F.grid_sample( - U.unsqueeze(0), - torch.unsqueeze(grid, 0), - mode="bilinear", - padding_mode="border", - align_corners=True, - ).squeeze(0) - V_new = F.grid_sample( - V.unsqueeze(0), - torch.unsqueeze(grid, 0), - mode="bilinear", - padding_mode="border", - align_corners=True, - ).squeeze(0) - return S_new, I_new, U_new, V_new - - def crop(self, indices_cropped, bboxes_old, bboxes_new): - """ - Crop outputs for selected bounding boxes to the new bounding boxes. - """ - # VK: cropping is ignored for now - # for i, ic in enumerate(indices_cropped): - # self.S[ic], self.I[ic], self.U[ic], self.V[ic] = \ - # self._crop(self.S[ic], self.I[ic], self.U[ic], self.V[ic], - # bboxes_old[i], bboxes_new[i]) - pass - - def hflip(self, transform_data: DensePoseTransformData) -> None: - """ - Change S, I, U and V to take into account a Horizontal flip. - """ - if self.I.shape[0] > 0: - for el in "SIUV": - self.__dict__[el] = torch.flip(self.__dict__[el], [3]) - self._flip_iuv_semantics_tensor(transform_data) - self._flip_segm_semantics_tensor(transform_data) - - def _flip_iuv_semantics_tensor(self, dp_transform_data: DensePoseTransformData) -> None: - point_label_symmetries = dp_transform_data.point_label_symmetries - uv_symmetries = dp_transform_data.uv_symmetries - - N, C, H, W = self.U.shape - u_loc = (self.U[:, 1:, :, :].clamp(0, 1) * 255).long() - v_loc = (self.V[:, 1:, :, :].clamp(0, 1) * 255).long() - Iindex = torch.arange(C - 1, device=self.U.device)[None, :, None, None].expand( - N, C - 1, H, W - ) - self.U[:, 1:, :, :] = uv_symmetries["U_transforms"][Iindex, v_loc, u_loc].to( - device=self.U.device - ) - self.V[:, 1:, :, :] = uv_symmetries["V_transforms"][Iindex, v_loc, u_loc].to( - device=self.V.device - ) - - for el in "IUV": - self.__dict__[el] = self.__dict__[el][:, point_label_symmetries, :, :] - - def _flip_segm_semantics_tensor(self, dp_transform_data): - if self.S.shape[1] == DensePoseDataRelative.N_BODY_PARTS + 1: - self.S = self.S[:, dp_transform_data.mask_label_symmetries, :, :] - - def to_result(self, boxes_xywh): - """ - Convert DensePose outputs to results format. 
Results are more compact, - but cannot be resampled any more - """ - result = DensePoseResult(boxes_xywh, self.S, self.I, self.U, self.V) - return result - - def __getitem__(self, item): - if isinstance(item, int): - S_selected = self.S[item].unsqueeze(0) - I_selected = self.I[item].unsqueeze(0) - U_selected = self.U[item].unsqueeze(0) - V_selected = self.V[item].unsqueeze(0) - conf_selected = {} - for key in self.confidences: - conf_selected[key] = self.confidences[key][item].unsqueeze(0) - else: - S_selected = self.S[item] - I_selected = self.I[item] - U_selected = self.U[item] - V_selected = self.V[item] - conf_selected = {} - for key in self.confidences: - conf_selected[key] = self.confidences[key][item] - return DensePoseOutput(S_selected, I_selected, U_selected, V_selected, conf_selected) - - def __str__(self): - s = "DensePoseOutput S {}, I {}, U {}, V {}".format( - list(self.S.size()), list(self.I.size()), list(self.U.size()), list(self.V.size()) - ) - s_conf = "confidences: [{}]".format( - ", ".join([f"{key} {list(self.confidences[key].size())}" for key in self.confidences]) - ) - return ", ".join([s, s_conf]) - - def __len__(self): - return self.S.size(0) - - -class DensePoseResult(object): - def __init__(self, boxes_xywh, S, I, U, V): - self.results = [] - self.boxes_xywh = boxes_xywh.cpu().tolist() - assert len(boxes_xywh.size()) == 2 - assert boxes_xywh.size(1) == 4 - for i, box_xywh in enumerate(boxes_xywh): - result_i = self._output_to_result(box_xywh, S[[i]], I[[i]], U[[i]], V[[i]]) - result_numpy_i = result_i.cpu().numpy() - result_encoded_i = DensePoseResult.encode_png_data(result_numpy_i) - result_encoded_with_shape_i = (result_numpy_i.shape, result_encoded_i) - self.results.append(result_encoded_with_shape_i) - - def __str__(self): - s = "DensePoseResult: N={} [{}]".format( - len(self.results), ", ".join([str(list(r[0])) for r in self.results]) - ) - return s - - def _output_to_result(self, box_xywh, S, I, U, V): - x, y, w, h = box_xywh - w = max(int(w), 1) - h = max(int(h), 1) - result = torch.zeros([3, h, w], dtype=torch.uint8, device=U.device) - assert ( - len(S.size()) == 4 - ), "AnnIndex tensor size should have {} " "dimensions but has {}".format(4, len(S.size())) - s_bbox = F.interpolate(S, (h, w), mode="bilinear", align_corners=False).argmax(dim=1) - assert ( - len(I.size()) == 4 - ), "IndexUV tensor size should have {} " "dimensions but has {}".format(4, len(S.size())) - i_bbox = ( - F.interpolate(I, (h, w), mode="bilinear", align_corners=False).argmax(dim=1) - * (s_bbox > 0).long() - ).squeeze(0) - assert len(U.size()) == 4, "U tensor size should have {} " "dimensions but has {}".format( - 4, len(U.size()) - ) - u_bbox = F.interpolate(U, (h, w), mode="bilinear", align_corners=False) - assert len(V.size()) == 4, "V tensor size should have {} " "dimensions but has {}".format( - 4, len(V.size()) - ) - v_bbox = F.interpolate(V, (h, w), mode="bilinear", align_corners=False) - result[0] = i_bbox - for part_id in range(1, u_bbox.size(1)): - result[1][i_bbox == part_id] = ( - (u_bbox[0, part_id][i_bbox == part_id] * 255).clamp(0, 255).to(torch.uint8) - ) - result[2][i_bbox == part_id] = ( - (v_bbox[0, part_id][i_bbox == part_id] * 255).clamp(0, 255).to(torch.uint8) - ) - assert ( - result.size(1) == h - ), "Results height {} should be equal" "to bounding box height {}".format(result.size(1), h) - assert ( - result.size(2) == w - ), "Results width {} should be equal" "to bounding box width {}".format(result.size(2), w) - return result - - @staticmethod - def 
encode_png_data(arr): - """ - Encode array data as a PNG image using the highest compression rate - @param arr [in] Data stored in an array of size (3, M, N) of type uint8 - @return Base64-encoded string containing PNG-compressed data - """ - assert len(arr.shape) == 3, "Expected a 3D array as an input," " got a {0}D array".format( - len(arr.shape) - ) - assert arr.shape[0] == 3, "Expected first array dimension of size 3," " got {0}".format( - arr.shape[0] - ) - assert arr.dtype == np.uint8, "Expected an array of type np.uint8, " " got {0}".format( - arr.dtype - ) - data = np.moveaxis(arr, 0, -1) - im = Image.fromarray(data) - fstream = BytesIO() - im.save(fstream, format="png", optimize=True) - s = base64.encodebytes(fstream.getvalue()).decode() - return s - - @staticmethod - def decode_png_data(shape, s): - """ - Decode array data from a string that contains PNG-compressed data - @param Base64-encoded string containing PNG-compressed data - @return Data stored in an array of size (3, M, N) of type uint8 - """ - fstream = BytesIO(base64.decodebytes(s.encode())) - im = Image.open(fstream) - data = np.moveaxis(np.array(im.getdata(), dtype=np.uint8), -1, 0) - return data.reshape(shape) - - def __len__(self): - return len(self.results) - - def __getitem__(self, item): - result_encoded = self.results[item] - bbox_xywh = self.boxes_xywh[item] - return result_encoded, bbox_xywh - - -class DensePoseList(object): - - _TORCH_DEVICE_CPU = torch.device("cpu") - - def __init__(self, densepose_datas, boxes_xyxy_abs, image_size_hw, device=_TORCH_DEVICE_CPU): - assert len(densepose_datas) == len(boxes_xyxy_abs), ( - "Attempt to initialize DensePoseList with {} DensePose datas " - "and {} boxes".format(len(densepose_datas), len(boxes_xyxy_abs)) - ) - self.densepose_datas = [] - for densepose_data in densepose_datas: - assert isinstance(densepose_data, DensePoseDataRelative) or densepose_data is None, ( - "Attempt to initialize DensePoseList with DensePose datas " - "of type {}, expected DensePoseDataRelative".format(type(densepose_data)) - ) - densepose_data_ondevice = ( - densepose_data.to(device) if densepose_data is not None else None - ) - self.densepose_datas.append(densepose_data_ondevice) - self.boxes_xyxy_abs = boxes_xyxy_abs.to(device) - self.image_size_hw = image_size_hw - self.device = device - - def to(self, device): - if self.device == device: - return self - return DensePoseList(self.densepose_datas, self.boxes_xyxy_abs, self.image_size_hw, device) - - def __iter__(self): - return iter(self.densepose_datas) - - def __len__(self): - return len(self.densepose_datas) - - def __repr__(self): - s = self.__class__.__name__ + "(" - s += "num_instances={}, ".format(len(self.densepose_datas)) - s += "image_width={}, ".format(self.image_size_hw[1]) - s += "image_height={})".format(self.image_size_hw[0]) - return s - - def __getitem__(self, item): - if isinstance(item, int): - densepose_data_rel = self.densepose_datas[item] - return densepose_data_rel - elif isinstance(item, slice): - densepose_datas_rel = self.densepose_datas[item] - boxes_xyxy_abs = self.boxes_xyxy_abs[item] - return DensePoseList( - densepose_datas_rel, boxes_xyxy_abs, self.image_size_hw, self.device - ) - elif isinstance(item, torch.Tensor) and (item.dtype == torch.bool): - densepose_datas_rel = [self.densepose_datas[i] for i, x in enumerate(item) if x > 0] - boxes_xyxy_abs = self.boxes_xyxy_abs[item] - return DensePoseList( - densepose_datas_rel, boxes_xyxy_abs, self.image_size_hw, self.device - ) - else: - densepose_datas_rel = 
[self.densepose_datas[i] for i in item] - boxes_xyxy_abs = self.boxes_xyxy_abs[item] - return DensePoseList( - densepose_datas_rel, boxes_xyxy_abs, self.image_size_hw, self.device - ) diff --git a/spaces/CVPR/LIVE/thrust/thrust/mr/detail/config.h b/spaces/CVPR/LIVE/thrust/thrust/mr/detail/config.h deleted file mode 100644 index 4cfc50d3e4290bbe63ae2a1d028b14b87c6a1665..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/mr/detail/config.h +++ /dev/null @@ -1,36 +0,0 @@ -/* - * Copyright 2018 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -#include -#include -#include - -#define THRUST_MR_DEFAULT_ALIGNMENT THRUST_ALIGNOF(::thrust::detail::max_align_t) - -#if THRUST_CPP_DIALECT >= 2017 -# if __has_include() -# define THRUST_MR_STD_MR_HEADER -# define THRUST_MR_STD_MR_NS std::pmr -# elif __has_include() -# define THRUST_MR_STD_MR_HEADER -# define THRUST_MR_STD_MR_NS std::experimental::pmr -# endif -#endif - diff --git a/spaces/CVPR/ml-talking-face/translator/v3.py b/spaces/CVPR/ml-talking-face/translator/v3.py deleted file mode 100644 index 71d1c3f32eb810349449a60202ab9201267f04b9..0000000000000000000000000000000000000000 --- a/spaces/CVPR/ml-talking-face/translator/v3.py +++ /dev/null @@ -1,58 +0,0 @@ -from google.cloud import translate -import yaml - - -class GoogleAuthTranslation: - def __init__(self, project_id, yaml_path='lang.yaml'): - self.translator = translate.TranslationServiceClient() - self.location = "global" - self.parent = f"projects/{project_id}/locations/{self.location}" - - with open(yaml_path) as f: - self.supporting_languages = yaml.load(f, Loader=yaml.FullLoader) - - def _detect(self, query): - response = self.translator.detect_language( - request={ - "parent": self.parent, - "content": query, - "mime_type": "text/plain", # mime types: text/plain, text/html - } - ) - - for language in response.languages: - # First language is the most confident one - return language.language_code - - def _get_dest_from_lang(self, lang): - try: - return self.supporting_languages[lang]['google_dest'] - - except KeyError as e: - raise e - - def _get_lang_from_dest(self, dest): - for key in self.supporting_languages: - if self.supporting_languages[key]['google_dest'] == dest: - return key - - raise RuntimeError(f"Detected langauge is not supported in our multilingual TTS. 
|\n Code: {dest} | See https://cloud.google.com/translate/docs/languages") - - def translate(self, query, lang): - - dest = self._get_dest_from_lang(lang) - - response = self.translator.translate_text( - request={ - "parent": self.parent, - "contents": [query], - "mime_type": "text/plain", # mime types: text/plain, text/html - "target_language_code": dest, - } - ) - - return " ".join([translation.translated_text for translation in response.translations]) - - def detect(self, query): - dest = self._detect(query) - return self._get_lang_from_dest(dest) diff --git a/spaces/CVPR/transfiner/demo/demo.py b/spaces/CVPR/transfiner/demo/demo.py deleted file mode 100644 index a14dfb94c998bd3bfb650004a6fe1a23bf17eda3..0000000000000000000000000000000000000000 --- a/spaces/CVPR/transfiner/demo/demo.py +++ /dev/null @@ -1,190 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import argparse -import glob -import multiprocessing as mp -import numpy as np -import os -import tempfile -import time -import warnings -import cv2 -import tqdm - -from detectron2.config import get_cfg -from detectron2.data.detection_utils import read_image -from detectron2.utils.logger import setup_logger - -from predictor import VisualizationDemo - -# constants -WINDOW_NAME = "COCO detections" - - -def setup_cfg(args): - # load config from file and command-line arguments - cfg = get_cfg() - # To use demo for Panoptic-DeepLab, please uncomment the following two lines. - # from detectron2.projects.panoptic_deeplab import add_panoptic_deeplab_config # noqa - # add_panoptic_deeplab_config(cfg) - cfg.merge_from_file(args.config_file) - cfg.merge_from_list(args.opts) - # Set score_threshold for builtin models - cfg.MODEL.RETINANET.SCORE_THRESH_TEST = args.confidence_threshold - cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = args.confidence_threshold - cfg.MODEL.PANOPTIC_FPN.COMBINE.INSTANCES_CONFIDENCE_THRESH = args.confidence_threshold - cfg.freeze() - return cfg - - -def get_parser(): - parser = argparse.ArgumentParser(description="Detectron2 demo for builtin configs") - parser.add_argument( - "--config-file", - default="configs/quick_schedules/mask_rcnn_R_50_FPN_inference_acc_test.yaml", - metavar="FILE", - help="path to config file", - ) - parser.add_argument("--webcam", action="store_true", help="Take inputs from webcam.") - parser.add_argument("--video-input", help="Path to video file.") - parser.add_argument( - "--input", - nargs="+", - help="A list of space separated input images; " - "or a single glob pattern such as 'directory/*.jpg'", - ) - parser.add_argument( - "--output", - help="A file or directory to save output visualizations. 
" - "If not given, will show output in an OpenCV window.", - ) - - parser.add_argument( - "--confidence-threshold", - type=float, - default=0.5, - help="Minimum score for instance predictions to be shown", - ) - parser.add_argument( - "--opts", - help="Modify config options using the command-line 'KEY VALUE' pairs", - default=[], - nargs=argparse.REMAINDER, - ) - return parser - - -def test_opencv_video_format(codec, file_ext): - with tempfile.TemporaryDirectory(prefix="video_format_test") as dir: - filename = os.path.join(dir, "test_file" + file_ext) - writer = cv2.VideoWriter( - filename=filename, - fourcc=cv2.VideoWriter_fourcc(*codec), - fps=float(30), - frameSize=(10, 10), - isColor=True, - ) - [writer.write(np.zeros((10, 10, 3), np.uint8)) for _ in range(30)] - writer.release() - if os.path.isfile(filename): - return True - return False - - -if __name__ == "__main__": - mp.set_start_method("spawn", force=True) - args = get_parser().parse_args() - setup_logger(name="fvcore") - logger = setup_logger() - logger.info("Arguments: " + str(args)) - - cfg = setup_cfg(args) - - demo = VisualizationDemo(cfg) - - if args.input: - if len(args.input) == 1: - args.input = glob.glob(os.path.expanduser(args.input[0])) - assert args.input, "The input path(s) was not found" - for path in tqdm.tqdm(args.input, disable=not args.output): - # use PIL, to be consistent with evaluation - img = read_image(path, format="BGR") - start_time = time.time() - predictions, visualized_output = demo.run_on_image(img) - logger.info( - "{}: {} in {:.2f}s".format( - path, - "detected {} instances".format(len(predictions["instances"])) - if "instances" in predictions - else "finished", - time.time() - start_time, - ) - ) - - if args.output: - if os.path.isdir(args.output): - assert os.path.isdir(args.output), args.output - out_filename = os.path.join(args.output, os.path.basename(path)) - else: - #assert len(args.input) == 1, "Please specify a directory with args.output" - os.makedirs(args.output) - out_filename = os.path.join(args.output, os.path.basename(path)) - #out_filename = args.output - visualized_output.save(out_filename) - else: - cv2.namedWindow(WINDOW_NAME, cv2.WINDOW_NORMAL) - cv2.imshow(WINDOW_NAME, visualized_output.get_image()[:, :, ::-1]) - if cv2.waitKey(0) == 27: - break # esc to quit - elif args.webcam: - assert args.input is None, "Cannot have both --input and --webcam!" - assert args.output is None, "output not yet supported with --webcam!" 
- cam = cv2.VideoCapture(0) - for vis in tqdm.tqdm(demo.run_on_video(cam)): - cv2.namedWindow(WINDOW_NAME, cv2.WINDOW_NORMAL) - cv2.imshow(WINDOW_NAME, vis) - if cv2.waitKey(1) == 27: - break # esc to quit - cam.release() - cv2.destroyAllWindows() - elif args.video_input: - video = cv2.VideoCapture(args.video_input) - width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH)) - height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT)) - frames_per_second = video.get(cv2.CAP_PROP_FPS) - num_frames = int(video.get(cv2.CAP_PROP_FRAME_COUNT)) - basename = os.path.basename(args.video_input) - codec, file_ext = ( - ("x264", ".mkv") if test_opencv_video_format("x264", ".mkv") else ("mp4v", ".mp4") - ) - if codec == ".mp4v": - warnings.warn("x264 codec not available, switching to mp4v") - if args.output: - if os.path.isdir(args.output): - output_fname = os.path.join(args.output, basename) - output_fname = os.path.splitext(output_fname)[0] + file_ext - else: - output_fname = args.output - assert not os.path.isfile(output_fname), output_fname - output_file = cv2.VideoWriter( - filename=output_fname, - # some installation of opencv may not support x264 (due to its license), - # you can try other format (e.g. MPEG) - fourcc=cv2.VideoWriter_fourcc(*codec), - fps=float(frames_per_second), - frameSize=(width, height), - isColor=True, - ) - assert os.path.isfile(args.video_input) - for vis_frame in tqdm.tqdm(demo.run_on_video(video), total=num_frames): - if args.output: - output_file.write(vis_frame) - else: - cv2.namedWindow(basename, cv2.WINDOW_NORMAL) - cv2.imshow(basename, vis_frame) - if cv2.waitKey(1) == 27: - break # esc to quit - video.release() - if args.output: - output_file.release() - else: - cv2.destroyAllWindows() diff --git a/spaces/CVPR/v-doc_abstractive_mac/mac_cell.py b/spaces/CVPR/v-doc_abstractive_mac/mac_cell.py deleted file mode 100644 index 2fc78a1ff980bc291047cf199cc45b8ff631ac39..0000000000000000000000000000000000000000 --- a/spaces/CVPR/v-doc_abstractive_mac/mac_cell.py +++ /dev/null @@ -1,592 +0,0 @@ -import collections -import numpy as np -import tensorflow as tf - -import ops -from config import config - -MACCellTuple = collections.namedtuple("MACCellTuple", ("control", "memory")) - -''' -The MAC cell. - -Recurrent cell for multi-step reasoning. Presented in https://arxiv.org/abs/1803.03067. -The cell has recurrent control and memory states that interact with the question -and knowledge base (image) respectively. - -The hidden state structure is MACCellTuple(control, memory) - -At each step the cell performs by calling to three subunits: control, read and write. - -1. The Control Unit computes the control state by computing attention over the question words. -The control state represents the current reasoning operation the cell performs. - -2. The Read Unit retrieves information from the knowledge base, given the control and previous -memory values, by computing 2-stages attention over the knowledge base. - -3. The Write Unit integrates the retrieved information to the previous hidden memory state, -given the value of the control state, to perform the current reasoning operation. -''' -class MACCell(tf.nn.rnn_cell.RNNCell): - - '''Initialize the MAC cell. - (Note that in the current version the cell is stateful -- - updating its own internals when being called) - - Args: - vecQuestions: the vector representation of the questions. - [batchSize, ctrlDim] - - questionWords: the question words embeddings. 
- [batchSize, questionLength, ctrlDim] - - questionCntxWords: the encoder outputs -- the "contextual" question words. - [batchSize, questionLength, ctrlDim] - - questionLengths: the length of each question. - [batchSize] - - memoryDropout: dropout on the memory state (Tensor scalar). - readDropout: dropout inside the read unit (Tensor scalar). - writeDropout: dropout on the new information that gets into the write unit (Tensor scalar). - - batchSize: batch size (Tensor scalar). - train: train or test mod (Tensor boolean). - reuse: reuse cell - - knowledgeBase: - ''' - def __init__(self, vecQuestions, questionWords, questionCntxWords, questionLengths, - knowledgeBase, memoryDropout, readDropout, writeDropout, - batchSize, train, reuse = None): - - self.vecQuestions = vecQuestions - self.questionWords = questionWords - self.questionCntxWords = questionCntxWords - self.questionLengths = questionLengths - - self.knowledgeBase = knowledgeBase - - self.dropouts = {} - self.dropouts["memory"] = memoryDropout - self.dropouts["read"] = readDropout - self.dropouts["write"] = writeDropout - - self.none = tf.zeros((batchSize, 1), dtype = tf.float32) - - self.batchSize = batchSize - self.train = train - self.reuse = reuse - - ''' - Cell state size. - ''' - @property - def state_size(self): - return MACCellTuple(config.ctrlDim, config.memDim) - - ''' - Cell output size. Currently it doesn't have any outputs. - ''' - @property - def output_size(self): - return 1 - - # pass encoder hidden states to control? - ''' - The Control Unit: computes the new control state -- the reasoning operation, - by summing up the word embeddings according to a computed attention distribution. - - The unit is recurrent: it receives the whole question and the previous control state, - merge them together (resulting in the "continuous control"), and then uses that - to compute attentions over the question words. Finally, it combines the words - together according to the attention distribution to get the new control state. - - Args: - controlInput: external inputs to control unit (the question vector). - [batchSize, ctrlDim] - - inWords: the representation of the words used to compute the attention. - [batchSize, questionLength, ctrlDim] - - outWords: the representation of the words that are summed up. - (by default inWords == outWords) - [batchSize, questionLength, ctrlDim] - - questionLengths: the length of each question. - [batchSize] - - control: the previous control hidden state value. - [batchSize, ctrlDim] - - contControl: optional corresponding continuous control state - (before casting the attention over the words). - [batchSize, ctrlDim] - - Returns: - the new control state - [batchSize, ctrlDim] - - the continuous (pre-attention) control - [batchSize, ctrlDim] - ''' - def control(self, controlInput, inWords, outWords, questionLengths, - control, contControl = None, name = "", reuse = None): - - with tf.variable_scope("control" + name, reuse = reuse): - dim = config.ctrlDim - - ## Step 1: compute "continuous" control state given previous control and question. 
- # control inputs: question and previous control - newContControl = controlInput - if config.controlFeedPrev: - newContControl = control if config.controlFeedPrevAtt else contControl - if config.controlFeedInputs: - newContControl = tf.concat([newContControl, controlInput], axis = -1) - dim += config.ctrlDim - - # merge inputs together - newContControl = ops.linear(newContControl, dim, config.ctrlDim, - act = config.controlContAct, name = "contControl") - dim = config.ctrlDim - - ## Step 2: compute attention distribution over words and sum them up accordingly. - # compute interactions with question words - interactions = tf.expand_dims(newContControl, axis = 1) * inWords - - # optionally concatenate words - if config.controlConcatWords: - interactions = tf.concat([interactions, inWords], axis = -1) - dim += config.ctrlDim - - # optional projection - if config.controlProj: - interactions = ops.linear(interactions, dim, config.ctrlDim, - act = config.controlProjAct) - dim = config.ctrlDim - - # compute attention distribution over words and summarize them accordingly - logits = ops.inter2logits(interactions, dim) - # self.interL = (interW, interb) - - # if config.controlCoverage: - # logits += coverageBias * coverage - - attention = tf.nn.softmax(ops.expMask(logits, questionLengths)) - self.attentions["question"].append(attention) - - # if config.controlCoverage: - # coverage += attention # Add logits instead? - - newControl = ops.att2Smry(attention, outWords) - - # ablation: use continuous control (pre-attention) instead - if config.controlContinuous: - newControl = newContControl - - return newControl, newContControl - - ''' - The read unit extracts relevant information from the knowledge base given the - cell's memory and control states. It computes attention distribution over - the knowledge base by comparing it first to the memory and then to the control. - Finally, it uses the attention distribution to sum up the knowledge base accordingly, - resulting in an extraction of relevant information. - - Args: - knowledge base: representation of the knowledge base (image). - [batchSize, kbSize (Height * Width), memDim] - - memory: the cell's memory state - [batchSize, memDim] - - control: the cell's control state - [batchSize, ctrlDim] - - Returns the information extracted. 
- [batchSize, memDim] - ''' - def read(self, knowledgeBase, memory, control, name = "", reuse = None): - with tf.variable_scope("read" + name, reuse = reuse): - dim = config.memDim - - ## memory dropout - if config.memoryVariationalDropout: - memory = ops.applyVarDpMask(memory, self.memDpMask, self.dropouts["memory"]) - else: - memory = tf.nn.dropout(memory, self.dropouts["memory"]) - - ## Step 1: knowledge base / memory interactions - # parameters for knowledge base and memory projection - proj = None - if config.readProjInputs: - proj = {"dim": config.attDim, "shared": config.readProjShared, "dropout": self.dropouts["read"] } - dim = config.attDim - - # parameters for concatenating knowledge base elements - concat = {"x": config.readMemConcatKB, "proj": config.readMemConcatProj} - - # compute interactions between knowledge base and memory - interactions, interDim = ops.mul(x = knowledgeBase, y = memory, dim = config.memDim, - proj = proj, concat = concat, interMod = config.readMemAttType, name = "memInter") - - projectedKB = proj.get("x") if proj else None - - # project memory interactions back to hidden dimension - if config.readMemProj: - interactions = ops.linear(interactions, interDim, dim, act = config.readMemAct, - name = "memKbProj") - else: - dim = interDim - - ## Step 2: compute interactions with control - if config.readCtrl: - # compute interactions with control - if config.ctrlDim != dim: - control = ops.linear(control, ctrlDim, dim, name = "ctrlProj") - - interactions, interDim = ops.mul(interactions, control, dim, - interMod = config.readCtrlAttType, concat = {"x": config.readCtrlConcatInter}, - name = "ctrlInter") - - # optionally concatenate knowledge base elements - if config.readCtrlConcatKB: - if config.readCtrlConcatProj: - addedInp, addedDim = projectedKB, config.attDim - else: - addedInp, addedDim = knowledgeBase, config.memDim - interactions = tf.concat([interactions, addedInp], axis = -1) - dim += addedDim - - # optional nonlinearity - interactions = ops.activations[config.readCtrlAct](interactions) - - ## Step 3: sum attentions up over the knowledge base - # transform vectors to attention distribution - attention = ops.inter2att(interactions, dim, dropout = self.dropouts["read"]) - - self.attentions["kb"].append(attention) - - # optionally use projected knowledge base instead of original - if config.readSmryKBProj: - knowledgeBase = projectedKB - - # sum up the knowledge base according to the distribution - information = ops.att2Smry(attention, knowledgeBase) - - return information - - ''' - The write unit integrates newly retrieved information (from the read unit), - with the cell's previous memory hidden state, resulting in a new memory value. - The unit optionally supports: - 1. Self-attention to previous control / memory states, in order to consider previous steps - in the reasoning process. - 2. Gating between the new memory and previous memory states, to allow dynamic adjustment - of the reasoning process length. - - Args: - memory: the cell's memory state - [batchSize, memDim] - - info: the information to integrate with the memory - [batchSize, memDim] - - control: the cell's control state - [batchSize, ctrlDim] - - contControl: optional corresponding continuous control state - (before casting the attention over the words). 
- [batchSize, ctrlDim] - - Return the new memory - [batchSize, memDim] - ''' - def write(self, memory, info, control, contControl = None, name = "", reuse = None): - with tf.variable_scope("write" + name, reuse = reuse): - - # optionally project info - if config.writeInfoProj: - info = ops.linear(info, config.memDim, config.memDim, name = "info") - - # optional info nonlinearity - info = ops.activations[config.writeInfoAct](info) - - # compute self-attention vector based on previous controls and memories - if config.writeSelfAtt: - selfControl = control - if config.writeSelfAttMod == "CONT": - selfControl = contControl - # elif config.writeSelfAttMod == "POST": - # selfControl = postControl - selfControl = ops.linear(selfControl, config.ctrlDim, config.ctrlDim, name = "ctrlProj") - - interactions = self.controls * tf.expand_dims(selfControl, axis = 1) - - # if config.selfAttShareInter: - # selfAttlogits = self.linearP(selfAttInter, config.encDim, 1, self.interL[0], self.interL[1], name = "modSelfAttInter") - attention = ops.inter2att(interactions, config.ctrlDim, name = "selfAttention") - self.attentions["self"].append(attention) - selfSmry = ops.att2Smry(attention, self.memories) - - # get write unit inputs: previous memory, the new info, optionally self-attention / control - newMemory, dim = memory, config.memDim - if config.writeInputs == "INFO": - newMemory = info - elif config.writeInputs == "SUM": - newMemory += info - elif config.writeInputs == "BOTH": - newMemory, dim = ops.concat(newMemory, info, dim, mul = config.writeConcatMul) - # else: MEM - - if config.writeSelfAtt: - newMemory = tf.concat([newMemory, selfSmry], axis = -1) - dim += config.memDim - - if config.writeMergeCtrl: - newMemory = tf.concat([newMemory, control], axis = -1) - dim += config.memDim - - # project memory back to memory dimension - if config.writeMemProj or (dim != config.memDim): - newMemory = ops.linear(newMemory, dim, config.memDim, name = "newMemory") - - # optional memory nonlinearity - newMemory = ops.activations[config.writeMemAct](newMemory) - - # write unit gate - if config.writeGate: - gateDim = config.memDim - if config.writeGateShared: - gateDim = 1 - - z = tf.sigmoid(ops.linear(control, config.ctrlDim, gateDim, name = "gate", bias = config.writeGateBias)) - - self.attentions["gate"].append(z) - - newMemory = newMemory * z + memory * (1 - z) - - # optional batch normalization - if config.memoryBN: - newMemory = tf.contrib.layers.batch_norm(newMemory, decay = config.bnDecay, - center = config.bnCenter, scale = config.bnScale, - is_training = self.train, updates_collections = None) - - return newMemory - - def memAutoEnc(newMemory, info, control, name = "", reuse = None): - with tf.variable_scope("memAutoEnc" + name, reuse = reuse): - # inputs to auto encoder - features = info if config.autoEncMemInputs == "INFO" else newMemory - features = ops.linear(features, config.memDim, config.ctrlDim, - act = config.autoEncMemAct, name = "aeMem") - - # reconstruct control - if config.autoEncMemLoss == "CONT": - loss = tf.reduce_mean(tf.squared_difference(control, features)) - else: - interactions, dim = ops.mul(self.questionCntxWords, features, config.ctrlDim, - concat = {"x": config.autoEncMemCnct}, mulBias = config.mulBias, name = "aeMem") - - logits = ops.inter2logits(interactions, dim) - logits = self.expMask(logits, self.questionLengths) - - # reconstruct word attentions - if config.autoEncMemLoss == "PROB": - loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits( - labels = 
self.attentions["question"][-1], logits = logits)) - - # reconstruct control through words attentions - else: - attention = tf.nn.softmax(logits) - summary = ops.att2Smry(attention, self.questionCntxWords) - loss = tf.reduce_mean(tf.squared_difference(control, summary)) - - return loss - - ''' - Call the cell to get new control and memory states. - - Args: - inputs: in the current implementation the cell don't get recurrent inputs - every iteration (argument for comparability with rnn interface). - - state: the cell current state (control, memory) - MACCellTuple([batchSize, ctrlDim],[batchSize, memDim]) - - Returns the new state -- the new memory and control values. - MACCellTuple([batchSize, ctrlDim],[batchSize, memDim]) - ''' - def __call__(self, inputs, state, scope = None): - scope = scope or type(self).__name__ - with tf.variable_scope(scope, reuse = self.reuse): # as tfscope - control = state.control - memory = state.memory - - # cell sharing - inputName = "qInput" - inputNameU = "qInputU" - inputReuseU = inputReuse = (self.iteration > 0) - if config.controlInputUnshared: - inputNameU = "qInput%d" % self.iteration - inputReuseU = None - - cellName = "" - cellReuse = (self.iteration > 0) - if config.unsharedCells: - cellName = str(self.iteration) - cellReuse = None - - ## control unit - # prepare question input to control - controlInput = ops.linear(self.vecQuestions, config.ctrlDim, config.ctrlDim, - name = inputName, reuse = inputReuse) - - controlInput = ops.activations[config.controlInputAct](controlInput) - - controlInput = ops.linear(controlInput, config.ctrlDim, config.ctrlDim, - name = inputNameU, reuse = inputReuseU) - - newControl, self.contControl = self.control(controlInput, self.inWords, self.outWords, - self.questionLengths, control, self.contControl, name = cellName, reuse = cellReuse) - - # read unit - # ablation: use whole question as control - if config.controlWholeQ: - newControl = self.vecQuestions - # ops.linear(self.vecQuestions, config.ctrlDim, projDim, name = "qMod") - - info = self.read(self.knowledgeBase, memory, newControl, name = cellName, reuse = cellReuse) - - if config.writeDropout < 1.0: - # write unit - info = tf.nn.dropout(info, self.dropouts["write"]) - - newMemory = self.write(memory, info, newControl, self.contControl, name = cellName, reuse = cellReuse) - - # add auto encoder loss for memory - # if config.autoEncMem: - # self.autoEncLosses["memory"] += memAutoEnc(newMemory, info, newControl) - - # append as standard list? - self.controls = tf.concat([self.controls, tf.expand_dims(newControl, axis = 1)], axis = 1) - self.memories = tf.concat([self.memories, tf.expand_dims(newMemory, axis = 1)], axis = 1) - self.infos = tf.concat([self.infos, tf.expand_dims(info, axis = 1)], axis = 1) - - # self.contControls = tf.concat([self.contControls, tf.expand_dims(contControl, axis = 1)], axis = 1) - # self.postControls = tf.concat([self.controls, tf.expand_dims(postControls, axis = 1)], axis = 1) - - newState = MACCellTuple(newControl, newMemory) - return self.none, newState - - ''' - Initializes the a hidden state to based on the value of the initType: - "PRM" for parametric initialization - "ZERO" for zero initialization - "Q" to initialize to question vectors. - - Args: - name: the state variable name. - dim: the dimension of the state. - initType: the type of the initialization - batchSize: the batch size - - Returns the initialized hidden state. 
- ''' - def initState(self, name, dim, initType, batchSize): - if initType == "PRM": - prm = tf.get_variable(name, shape = (dim, ), - initializer = tf.random_normal_initializer()) - initState = tf.tile(tf.expand_dims(prm, axis = 0), [batchSize, 1]) - elif initType == "ZERO": - initState = tf.zeros((batchSize, dim), dtype = tf.float32) - else: # "Q" - initState = self.vecQuestions - return initState - - ''' - Add a parametric null word to the questions. - - Args: - words: the words to add a null word to. - [batchSize, questionLentgth] - - lengths: question lengths. - [batchSize] - - Returns the updated word sequence and lengths. - ''' - def addNullWord(words, lengths): - nullWord = tf.get_variable("zeroWord", shape = (1 , config.ctrlDim), initializer = tf.random_normal_initializer()) - nullWord = tf.tile(tf.expand_dims(nullWord, axis = 0), [self.batchSize, 1, 1]) - words = tf.concat([nullWord, words], axis = 1) - lengths += 1 - return words, lengths - - ''' - Initializes the cell internal state (currently it's stateful). In particular, - 1. Data-structures (lists of attention maps and accumulated losses). - 2. The memory and control states. - 3. The knowledge base (optionally merging it with the question vectors) - 4. The question words used by the cell (either the original word embeddings, or the - encoder outputs, with optional projection). - - Args: - batchSize: the batch size - - Returns the initial cell state. - ''' - def zero_state(self, batchSize, dtype = tf.float32): - ## initialize data-structures - self.attentions = {"kb": [], "question": [], "self": [], "gate": []} - self.autoEncLosses = {"control": tf.constant(0.0), "memory": tf.constant(0.0)} - - - ## initialize state - initialControl = self.initState("initCtrl", config.ctrlDim, config.initCtrl, batchSize) - initialMemory = self.initState("initMem", config.memDim, config.initMem, batchSize) - - self.controls = tf.expand_dims(initialControl, axis = 1) - self.memories = tf.expand_dims(initialMemory, axis = 1) - self.infos = tf.expand_dims(initialMemory, axis = 1) - - self.contControl = initialControl - # self.contControls = tf.expand_dims(initialControl, axis = 1) - # self.postControls = tf.expand_dims(initialControl, axis = 1) - - - ## initialize knowledge base - # optionally merge question into knowledge base representation - if config.initKBwithQ != "NON": - iVecQuestions = ops.linear(self.vecQuestions, config.ctrlDim, config.memDim, name = "questions") - - concatMul = (config.initKBwithQ == "MUL") - cnct, dim = ops.concat(self.knowledgeBase, iVecQuestions, config.memDim, mul = concatMul, expandY = True) - self.knowledgeBase = ops.linear(cnct, dim, config.memDim, name = "initKB") - - - ## initialize question words - # choose question words to work with (original embeddings or encoder outputs) - words = self.questionCntxWords if config.controlContextual else self.questionWords - - # optionally add parametric "null" word in the to all questions - if config.addNullWord: - words, questionLengths = self.addNullWord(words, questionLengths) - - # project words - self.inWords = self.outWords = words - if config.controlInWordsProj or config.controlOutWordsProj: - pWords = ops.linear(words, config.ctrlDim, config.ctrlDim, name = "wordsProj") - self.inWords = pWords if config.controlInWordsProj else words - self.outWords = pWords if config.controlOutWordsProj else words - - # if config.controlCoverage: - # self.coverage = tf.zeros((batchSize, tf.shape(words)[1]), dtype = tf.float32) - # self.coverageBias = tf.get_variable("coverageBias", 
shape = (), - # initializer = config.controlCoverageBias) - - ## initialize memory variational dropout mask - if config.memoryVariationalDropout: - self.memDpMask = ops.generateVarDpMask((batchSize, config.memDim), self.dropouts["memory"]) - - return MACCellTuple(initialControl, initialMemory) diff --git a/spaces/CofAI/chat.b4/g4f/Provider/Providers/Zeabur.py b/spaces/CofAI/chat.b4/g4f/Provider/Providers/Zeabur.py deleted file mode 100644 index e412720bd9a0c88860f6ea8a657cb0a24bcce63f..0000000000000000000000000000000000000000 --- a/spaces/CofAI/chat.b4/g4f/Provider/Providers/Zeabur.py +++ /dev/null @@ -1,50 +0,0 @@ -import os -import requests -from ...typing import sha256, Dict, get_type_hints - -url = "https://gptleg.zeabur.app" -model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-0301', - 'gpt-3.5-turbo-16k', 'gpt-4', 'gpt-4-0613'] -supports_stream = True -needs_auth = False - - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - headers = { - 'Authority': 'chat.dfehub.com', - 'Content-Type': 'application/json', - 'Method': 'POST', - 'Path': '/api/openai/v1/chat/completions', - 'Scheme': 'https', - 'Accept': 'text/event-stream', - 'Accept-Language': 'pt-BR,pt;q=0.9,en-US;q=0.8,en;q=0.7,zh-CN;q=0.6,zh;q=0.5', - 'Content-Type': 'application/json', - 'Origin': 'https://gptleg.zeabur.app', - 'Referer': 'https://gptleg.zeabur.app/', - 'Sec-Ch-Ua': '"Not.A/Brand";v="8", "Chromium";v="114", "Google Chrome";v="114"', - 'Sec-Ch-Ua-Mobile': '?0', - 'Sec-Ch-Ua-Platform': '"Windows"', - 'Sec-Fetch-Dest': 'empty', - 'Sec-Fetch-Mode': 'cors', - 'Sec-Fetch-Site': 'same-origin', - 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36', - 'X-Requested-With': 'XMLHttpRequest', - } - - data = { - 'model': model, - 'temperature': 0.7, - 'max_tokens': '16000', - 'presence_penalty': 0, - 'messages': messages, - } - - response = requests.post(url + '/api/openai/v1/chat/completions', - headers=headers, json=data, stream=stream) - - yield response.json()['choices'][0]['message']['content'] - - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join( - [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) diff --git a/spaces/Cong723/gpt-academic-public/config.py b/spaces/Cong723/gpt-academic-public/config.py deleted file mode 100644 index 89e5af085fcfed0aa8e040514bf9e0238d4d54d9..0000000000000000000000000000000000000000 --- a/spaces/Cong723/gpt-academic-public/config.py +++ /dev/null @@ -1,77 +0,0 @@ -# [step 1]>> 例如: API_KEY = "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" (此key无效) -API_KEY = "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" # 可同时填写多个API-KEY,用英文逗号分割,例如API_KEY = "sk-openaikey1,sk-openaikey2,fkxxxx-api2dkey1,fkxxxx-api2dkey2" - -# [step 2]>> 改为True应用代理,如果直接在海外服务器部署,此处不修改 -USE_PROXY = False -if USE_PROXY: - # 填写格式是 [协议]:// [地址] :[端口],填写之前不要忘记把USE_PROXY改成True,如果直接在海外服务器部署,此处不修改 - # 例如 "socks5h://localhost:11284" - # [协议] 常见协议无非socks5h/http; 例如 v2**y 和 ss* 的默认本地协议是socks5h; 而cl**h 的默认本地协议是http - # [地址] 懂的都懂,不懂就填localhost或者127.0.0.1肯定错不了(localhost意思是代理软件安装在本机上) - # [端口] 在代理软件的设置里找。虽然不同的代理软件界面不一样,但端口号都应该在最显眼的位置上 - - # 代理网络的地址,打开你的*学*网软件查看代理的协议(socks5/http)、地址(localhost)和端口(11284) - proxies = { - # [协议]:// [地址] :[端口] - "http": "socks5h://localhost:11284", # 再例如 "http": "http://127.0.0.1:7890", - "https": "socks5h://localhost:11284", # 再例如 "https": 
"http://127.0.0.1:7890", - } -else: - proxies = None - -# [step 3]>> 多线程函数插件中,默认允许多少路线程同时访问OpenAI。Free trial users的限制是每分钟3次,Pay-as-you-go users的限制是每分钟3500次 -# 一言以蔽之:免费用户填3,OpenAI绑了信用卡的用户可以填 16 或者更高。提高限制请查询:https://platform.openai.com/docs/guides/rate-limits/overview -DEFAULT_WORKER_NUM = 3 - - -# [step 4]>> 以下配置可以优化体验,但大部分场合下并不需要修改 -# 对话窗的高度 -CHATBOT_HEIGHT = 1115 - -# 代码高亮 -CODE_HIGHLIGHT = True - -# 窗口布局 -LAYOUT = "LEFT-RIGHT" # "LEFT-RIGHT"(左右布局) # "TOP-DOWN"(上下布局) -DARK_MODE = True # "LEFT-RIGHT"(左右布局) # "TOP-DOWN"(上下布局) - -# 发送请求到OpenAI后,等待多久判定为超时 -TIMEOUT_SECONDS = 30 - -# 网页的端口, -1代表随机端口 -WEB_PORT = -1 - -# 如果OpenAI不响应(网络卡顿、代理失败、KEY失效),重试的次数限制 -MAX_RETRY = 2 - -# OpenAI模型选择是(gpt4现在只对申请成功的人开放) -LLM_MODEL = "gpt-3.5-turbo" # 可选 "chatglm" -AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "api2d-gpt-3.5-turbo"] - -# 本地LLM模型如ChatGLM的执行方式 CPU/GPU -LOCAL_MODEL_DEVICE = "cpu" # 可选 "cuda" - -# 设置gradio的并行线程数(不需要修改) -CONCURRENT_COUNT = 100 - -# 加一个看板娘装饰 -ADD_WAIFU = False - -# 设置用户名和密码(不需要修改)(相关功能不稳定,与gradio版本和网络都相关,如果本地使用不建议加这个) -# [("username", "password"), ("username2", "password2"), ...] -AUTHENTICATION = [] - -# 重新URL重新定向,实现更换API_URL的作用(常规情况下,不要修改!!) -# (高危设置!通过修改此设置,您将把您的API-KEY和对话隐私完全暴露给您设定的中间人!) -# 格式 {"https://api.openai.com/v1/chat/completions": "在这里填写重定向的api.openai.com的URL"} -# 例如 API_URL_REDIRECT = {"https://api.openai.com/v1/chat/completions": "https://ai.open.com/api/conversation"} -API_URL_REDIRECT = {} - -# 如果需要在二级路径下运行(常规情况下,不要修改!!)(需要配合修改main.py才能生效!) -CUSTOM_PATH = "/" - -# 如果需要使用newbing,把newbing的长长的cookie放到这里 -NEWBING_STYLE = "creative" # ["creative", "balanced", "precise"] -NEWBING_COOKIES = """ -your bing cookies here -""" diff --git a/spaces/Cropinky/hana_hanak_houses/realesrgan/archs/srvgg_arch.py b/spaces/Cropinky/hana_hanak_houses/realesrgan/archs/srvgg_arch.py deleted file mode 100644 index 39460965c9c5ee9cd6eb41c50d33574cb8ba6e50..0000000000000000000000000000000000000000 --- a/spaces/Cropinky/hana_hanak_houses/realesrgan/archs/srvgg_arch.py +++ /dev/null @@ -1,69 +0,0 @@ -from basicsr.utils.registry import ARCH_REGISTRY -from torch import nn as nn -from torch.nn import functional as F - - -@ARCH_REGISTRY.register() -class SRVGGNetCompact(nn.Module): - """A compact VGG-style network structure for super-resolution. - - It is a compact network structure, which performs upsampling in the last layer and no convolution is - conducted on the HR feature space. - - Args: - num_in_ch (int): Channel number of inputs. Default: 3. - num_out_ch (int): Channel number of outputs. Default: 3. - num_feat (int): Channel number of intermediate features. Default: 64. - num_conv (int): Number of convolution layers in the body network. Default: 16. - upscale (int): Upsampling factor. Default: 4. - act_type (str): Activation type, options: 'relu', 'prelu', 'leakyrelu'. Default: prelu. 
- """ - - def __init__(self, num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16, upscale=4, act_type='prelu'): - super(SRVGGNetCompact, self).__init__() - self.num_in_ch = num_in_ch - self.num_out_ch = num_out_ch - self.num_feat = num_feat - self.num_conv = num_conv - self.upscale = upscale - self.act_type = act_type - - self.body = nn.ModuleList() - # the first conv - self.body.append(nn.Conv2d(num_in_ch, num_feat, 3, 1, 1)) - # the first activation - if act_type == 'relu': - activation = nn.ReLU(inplace=True) - elif act_type == 'prelu': - activation = nn.PReLU(num_parameters=num_feat) - elif act_type == 'leakyrelu': - activation = nn.LeakyReLU(negative_slope=0.1, inplace=True) - self.body.append(activation) - - # the body structure - for _ in range(num_conv): - self.body.append(nn.Conv2d(num_feat, num_feat, 3, 1, 1)) - # activation - if act_type == 'relu': - activation = nn.ReLU(inplace=True) - elif act_type == 'prelu': - activation = nn.PReLU(num_parameters=num_feat) - elif act_type == 'leakyrelu': - activation = nn.LeakyReLU(negative_slope=0.1, inplace=True) - self.body.append(activation) - - # the last conv - self.body.append(nn.Conv2d(num_feat, num_out_ch * upscale * upscale, 3, 1, 1)) - # upsample - self.upsampler = nn.PixelShuffle(upscale) - - def forward(self, x): - out = x - for i in range(0, len(self.body)): - out = self.body[i](out) - - out = self.upsampler(out) - # add the nearest upsampled image, so that the network learns the residual - base = F.interpolate(x, scale_factor=self.upscale, mode='nearest') - out += base - return out diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/tracing.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/tracing.py deleted file mode 100644 index d5596a4ceab79aff362203376952edc3122bf811..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/tracing.py +++ /dev/null @@ -1,472 +0,0 @@ -from types import SimpleNamespace -from typing import TYPE_CHECKING, Awaitable, Optional, Type, TypeVar - -import attr -from aiosignal import Signal -from multidict import CIMultiDict -from yarl import URL - -from .client_reqrep import ClientResponse - -if TYPE_CHECKING: # pragma: no cover - from .client import ClientSession - from .typedefs import Protocol - - _ParamT_contra = TypeVar("_ParamT_contra", contravariant=True) - - class _SignalCallback(Protocol[_ParamT_contra]): - def __call__( - self, - __client_session: ClientSession, - __trace_config_ctx: SimpleNamespace, - __params: _ParamT_contra, - ) -> Awaitable[None]: - ... 
- - -__all__ = ( - "TraceConfig", - "TraceRequestStartParams", - "TraceRequestEndParams", - "TraceRequestExceptionParams", - "TraceConnectionQueuedStartParams", - "TraceConnectionQueuedEndParams", - "TraceConnectionCreateStartParams", - "TraceConnectionCreateEndParams", - "TraceConnectionReuseconnParams", - "TraceDnsResolveHostStartParams", - "TraceDnsResolveHostEndParams", - "TraceDnsCacheHitParams", - "TraceDnsCacheMissParams", - "TraceRequestRedirectParams", - "TraceRequestChunkSentParams", - "TraceResponseChunkReceivedParams", - "TraceRequestHeadersSentParams", -) - - -class TraceConfig: - """First-class used to trace requests launched via ClientSession objects.""" - - def __init__( - self, trace_config_ctx_factory: Type[SimpleNamespace] = SimpleNamespace - ) -> None: - self._on_request_start: Signal[ - _SignalCallback[TraceRequestStartParams] - ] = Signal(self) - self._on_request_chunk_sent: Signal[ - _SignalCallback[TraceRequestChunkSentParams] - ] = Signal(self) - self._on_response_chunk_received: Signal[ - _SignalCallback[TraceResponseChunkReceivedParams] - ] = Signal(self) - self._on_request_end: Signal[_SignalCallback[TraceRequestEndParams]] = Signal( - self - ) - self._on_request_exception: Signal[ - _SignalCallback[TraceRequestExceptionParams] - ] = Signal(self) - self._on_request_redirect: Signal[ - _SignalCallback[TraceRequestRedirectParams] - ] = Signal(self) - self._on_connection_queued_start: Signal[ - _SignalCallback[TraceConnectionQueuedStartParams] - ] = Signal(self) - self._on_connection_queued_end: Signal[ - _SignalCallback[TraceConnectionQueuedEndParams] - ] = Signal(self) - self._on_connection_create_start: Signal[ - _SignalCallback[TraceConnectionCreateStartParams] - ] = Signal(self) - self._on_connection_create_end: Signal[ - _SignalCallback[TraceConnectionCreateEndParams] - ] = Signal(self) - self._on_connection_reuseconn: Signal[ - _SignalCallback[TraceConnectionReuseconnParams] - ] = Signal(self) - self._on_dns_resolvehost_start: Signal[ - _SignalCallback[TraceDnsResolveHostStartParams] - ] = Signal(self) - self._on_dns_resolvehost_end: Signal[ - _SignalCallback[TraceDnsResolveHostEndParams] - ] = Signal(self) - self._on_dns_cache_hit: Signal[ - _SignalCallback[TraceDnsCacheHitParams] - ] = Signal(self) - self._on_dns_cache_miss: Signal[ - _SignalCallback[TraceDnsCacheMissParams] - ] = Signal(self) - self._on_request_headers_sent: Signal[ - _SignalCallback[TraceRequestHeadersSentParams] - ] = Signal(self) - - self._trace_config_ctx_factory = trace_config_ctx_factory - - def trace_config_ctx( - self, trace_request_ctx: Optional[SimpleNamespace] = None - ) -> SimpleNamespace: - """Return a new trace_config_ctx instance""" - return self._trace_config_ctx_factory(trace_request_ctx=trace_request_ctx) - - def freeze(self) -> None: - self._on_request_start.freeze() - self._on_request_chunk_sent.freeze() - self._on_response_chunk_received.freeze() - self._on_request_end.freeze() - self._on_request_exception.freeze() - self._on_request_redirect.freeze() - self._on_connection_queued_start.freeze() - self._on_connection_queued_end.freeze() - self._on_connection_create_start.freeze() - self._on_connection_create_end.freeze() - self._on_connection_reuseconn.freeze() - self._on_dns_resolvehost_start.freeze() - self._on_dns_resolvehost_end.freeze() - self._on_dns_cache_hit.freeze() - self._on_dns_cache_miss.freeze() - self._on_request_headers_sent.freeze() - - @property - def on_request_start(self) -> "Signal[_SignalCallback[TraceRequestStartParams]]": - return 
self._on_request_start - - @property - def on_request_chunk_sent( - self, - ) -> "Signal[_SignalCallback[TraceRequestChunkSentParams]]": - return self._on_request_chunk_sent - - @property - def on_response_chunk_received( - self, - ) -> "Signal[_SignalCallback[TraceResponseChunkReceivedParams]]": - return self._on_response_chunk_received - - @property - def on_request_end(self) -> "Signal[_SignalCallback[TraceRequestEndParams]]": - return self._on_request_end - - @property - def on_request_exception( - self, - ) -> "Signal[_SignalCallback[TraceRequestExceptionParams]]": - return self._on_request_exception - - @property - def on_request_redirect( - self, - ) -> "Signal[_SignalCallback[TraceRequestRedirectParams]]": - return self._on_request_redirect - - @property - def on_connection_queued_start( - self, - ) -> "Signal[_SignalCallback[TraceConnectionQueuedStartParams]]": - return self._on_connection_queued_start - - @property - def on_connection_queued_end( - self, - ) -> "Signal[_SignalCallback[TraceConnectionQueuedEndParams]]": - return self._on_connection_queued_end - - @property - def on_connection_create_start( - self, - ) -> "Signal[_SignalCallback[TraceConnectionCreateStartParams]]": - return self._on_connection_create_start - - @property - def on_connection_create_end( - self, - ) -> "Signal[_SignalCallback[TraceConnectionCreateEndParams]]": - return self._on_connection_create_end - - @property - def on_connection_reuseconn( - self, - ) -> "Signal[_SignalCallback[TraceConnectionReuseconnParams]]": - return self._on_connection_reuseconn - - @property - def on_dns_resolvehost_start( - self, - ) -> "Signal[_SignalCallback[TraceDnsResolveHostStartParams]]": - return self._on_dns_resolvehost_start - - @property - def on_dns_resolvehost_end( - self, - ) -> "Signal[_SignalCallback[TraceDnsResolveHostEndParams]]": - return self._on_dns_resolvehost_end - - @property - def on_dns_cache_hit(self) -> "Signal[_SignalCallback[TraceDnsCacheHitParams]]": - return self._on_dns_cache_hit - - @property - def on_dns_cache_miss(self) -> "Signal[_SignalCallback[TraceDnsCacheMissParams]]": - return self._on_dns_cache_miss - - @property - def on_request_headers_sent( - self, - ) -> "Signal[_SignalCallback[TraceRequestHeadersSentParams]]": - return self._on_request_headers_sent - - -@attr.s(auto_attribs=True, frozen=True, slots=True) -class TraceRequestStartParams: - """Parameters sent by the `on_request_start` signal""" - - method: str - url: URL - headers: "CIMultiDict[str]" - - -@attr.s(auto_attribs=True, frozen=True, slots=True) -class TraceRequestChunkSentParams: - """Parameters sent by the `on_request_chunk_sent` signal""" - - method: str - url: URL - chunk: bytes - - -@attr.s(auto_attribs=True, frozen=True, slots=True) -class TraceResponseChunkReceivedParams: - """Parameters sent by the `on_response_chunk_received` signal""" - - method: str - url: URL - chunk: bytes - - -@attr.s(auto_attribs=True, frozen=True, slots=True) -class TraceRequestEndParams: - """Parameters sent by the `on_request_end` signal""" - - method: str - url: URL - headers: "CIMultiDict[str]" - response: ClientResponse - - -@attr.s(auto_attribs=True, frozen=True, slots=True) -class TraceRequestExceptionParams: - """Parameters sent by the `on_request_exception` signal""" - - method: str - url: URL - headers: "CIMultiDict[str]" - exception: BaseException - - -@attr.s(auto_attribs=True, frozen=True, slots=True) -class TraceRequestRedirectParams: - """Parameters sent by the `on_request_redirect` signal""" - - method: str - url: URL - 
headers: "CIMultiDict[str]" - response: ClientResponse - - -@attr.s(auto_attribs=True, frozen=True, slots=True) -class TraceConnectionQueuedStartParams: - """Parameters sent by the `on_connection_queued_start` signal""" - - -@attr.s(auto_attribs=True, frozen=True, slots=True) -class TraceConnectionQueuedEndParams: - """Parameters sent by the `on_connection_queued_end` signal""" - - -@attr.s(auto_attribs=True, frozen=True, slots=True) -class TraceConnectionCreateStartParams: - """Parameters sent by the `on_connection_create_start` signal""" - - -@attr.s(auto_attribs=True, frozen=True, slots=True) -class TraceConnectionCreateEndParams: - """Parameters sent by the `on_connection_create_end` signal""" - - -@attr.s(auto_attribs=True, frozen=True, slots=True) -class TraceConnectionReuseconnParams: - """Parameters sent by the `on_connection_reuseconn` signal""" - - -@attr.s(auto_attribs=True, frozen=True, slots=True) -class TraceDnsResolveHostStartParams: - """Parameters sent by the `on_dns_resolvehost_start` signal""" - - host: str - - -@attr.s(auto_attribs=True, frozen=True, slots=True) -class TraceDnsResolveHostEndParams: - """Parameters sent by the `on_dns_resolvehost_end` signal""" - - host: str - - -@attr.s(auto_attribs=True, frozen=True, slots=True) -class TraceDnsCacheHitParams: - """Parameters sent by the `on_dns_cache_hit` signal""" - - host: str - - -@attr.s(auto_attribs=True, frozen=True, slots=True) -class TraceDnsCacheMissParams: - """Parameters sent by the `on_dns_cache_miss` signal""" - - host: str - - -@attr.s(auto_attribs=True, frozen=True, slots=True) -class TraceRequestHeadersSentParams: - """Parameters sent by the `on_request_headers_sent` signal""" - - method: str - url: URL - headers: "CIMultiDict[str]" - - -class Trace: - """Internal dependency holder class. - - Used to keep together the main dependencies used - at the moment of send a signal. 
- """ - - def __init__( - self, - session: "ClientSession", - trace_config: TraceConfig, - trace_config_ctx: SimpleNamespace, - ) -> None: - self._trace_config = trace_config - self._trace_config_ctx = trace_config_ctx - self._session = session - - async def send_request_start( - self, method: str, url: URL, headers: "CIMultiDict[str]" - ) -> None: - return await self._trace_config.on_request_start.send( - self._session, - self._trace_config_ctx, - TraceRequestStartParams(method, url, headers), - ) - - async def send_request_chunk_sent( - self, method: str, url: URL, chunk: bytes - ) -> None: - return await self._trace_config.on_request_chunk_sent.send( - self._session, - self._trace_config_ctx, - TraceRequestChunkSentParams(method, url, chunk), - ) - - async def send_response_chunk_received( - self, method: str, url: URL, chunk: bytes - ) -> None: - return await self._trace_config.on_response_chunk_received.send( - self._session, - self._trace_config_ctx, - TraceResponseChunkReceivedParams(method, url, chunk), - ) - - async def send_request_end( - self, - method: str, - url: URL, - headers: "CIMultiDict[str]", - response: ClientResponse, - ) -> None: - return await self._trace_config.on_request_end.send( - self._session, - self._trace_config_ctx, - TraceRequestEndParams(method, url, headers, response), - ) - - async def send_request_exception( - self, - method: str, - url: URL, - headers: "CIMultiDict[str]", - exception: BaseException, - ) -> None: - return await self._trace_config.on_request_exception.send( - self._session, - self._trace_config_ctx, - TraceRequestExceptionParams(method, url, headers, exception), - ) - - async def send_request_redirect( - self, - method: str, - url: URL, - headers: "CIMultiDict[str]", - response: ClientResponse, - ) -> None: - return await self._trace_config._on_request_redirect.send( - self._session, - self._trace_config_ctx, - TraceRequestRedirectParams(method, url, headers, response), - ) - - async def send_connection_queued_start(self) -> None: - return await self._trace_config.on_connection_queued_start.send( - self._session, self._trace_config_ctx, TraceConnectionQueuedStartParams() - ) - - async def send_connection_queued_end(self) -> None: - return await self._trace_config.on_connection_queued_end.send( - self._session, self._trace_config_ctx, TraceConnectionQueuedEndParams() - ) - - async def send_connection_create_start(self) -> None: - return await self._trace_config.on_connection_create_start.send( - self._session, self._trace_config_ctx, TraceConnectionCreateStartParams() - ) - - async def send_connection_create_end(self) -> None: - return await self._trace_config.on_connection_create_end.send( - self._session, self._trace_config_ctx, TraceConnectionCreateEndParams() - ) - - async def send_connection_reuseconn(self) -> None: - return await self._trace_config.on_connection_reuseconn.send( - self._session, self._trace_config_ctx, TraceConnectionReuseconnParams() - ) - - async def send_dns_resolvehost_start(self, host: str) -> None: - return await self._trace_config.on_dns_resolvehost_start.send( - self._session, self._trace_config_ctx, TraceDnsResolveHostStartParams(host) - ) - - async def send_dns_resolvehost_end(self, host: str) -> None: - return await self._trace_config.on_dns_resolvehost_end.send( - self._session, self._trace_config_ctx, TraceDnsResolveHostEndParams(host) - ) - - async def send_dns_cache_hit(self, host: str) -> None: - return await self._trace_config.on_dns_cache_hit.send( - self._session, self._trace_config_ctx, 
TraceDnsCacheHitParams(host) - ) - - async def send_dns_cache_miss(self, host: str) -> None: - return await self._trace_config.on_dns_cache_miss.send( - self._session, self._trace_config_ctx, TraceDnsCacheMissParams(host) - ) - - async def send_request_headers( - self, method: str, url: URL, headers: "CIMultiDict[str]" - ) -> None: - return await self._trace_config._on_request_headers_sent.send( - self._session, - self._trace_config_ctx, - TraceRequestHeadersSentParams(method, url, headers), - ) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/context.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/context.py deleted file mode 100644 index 393e563a42af1557927a2eb8a51e6e231d48a29a..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/context.py +++ /dev/null @@ -1,20 +0,0 @@ -# Defines the Context class, which is used to store the state of all Blocks that are being rendered. - -from __future__ import annotations - -import threading -from typing import TYPE_CHECKING - -if TYPE_CHECKING: # Only import for type checking (is False at runtime). - from gradio.blocks import BlockContext, Blocks - - -class Context: - root_block: Blocks | None = None # The current root block that holds all blocks. - block: BlockContext | None = None # The current block that children are added to. - id: int = 0 # Running id to uniquely refer to any block that gets defined - ip_address: str | None = None # The IP address of the user. - hf_token: str | None = None # The token provided when loading private HF repos - - -thread_data = threading.local() diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-28365bae.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-28365bae.js deleted file mode 100644 index 4dd6c93ad39b36179fd36625dea0079c452def9e..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-28365bae.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as o,e as s,s as a}from"./index-3370be2a.js";class n extends o{constructor(e){super(),s(this,e,null,null,a,{})}}const c=n,p=["static"],d=t=>({type:{payload:"Any"},description:{payload:"stored state value"},example_data:""});export{c as Component,d as document,p as modes}; -//# sourceMappingURL=index-28365bae.js.map diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/h11/tests/test_helpers.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/h11/tests/test_helpers.py deleted file mode 100644 index c329c767834f73b1dd8991a4ac12d4972a41e98a..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/h11/tests/test_helpers.py +++ /dev/null @@ -1,32 +0,0 @@ -from .._events import ( - ConnectionClosed, - Data, - EndOfMessage, - Event, - InformationalResponse, - Request, - Response, -) -from .helpers import normalize_data_events - - -def test_normalize_data_events() -> None: - assert normalize_data_events( - [ - Data(data=bytearray(b"1")), - Data(data=b"2"), - Response(status_code=200, headers=[]), # type: ignore[arg-type] - Data(data=b"3"), - Data(data=b"4"), - EndOfMessage(), - Data(data=b"5"), - Data(data=b"6"), - Data(data=b"7"), - ] - ) == [ - Data(data=b"12"), - Response(status_code=200, headers=[]), # type: ignore[arg-type] - Data(data=b"34"), - EndOfMessage(), - Data(data=b"567"), - ] diff --git 
a/spaces/Datasculptor/DescriptionGPT/detic/modeling/meta_arch/custom_rcnn.py b/spaces/Datasculptor/DescriptionGPT/detic/modeling/meta_arch/custom_rcnn.py deleted file mode 100644 index 9a5ac721d42e40a8b4f28508b10a932cef827fcf..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/DescriptionGPT/detic/modeling/meta_arch/custom_rcnn.py +++ /dev/null @@ -1,232 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import copy -import logging -import numpy as np -from typing import Dict, List, Optional, Tuple -import torch -from torch import nn -import json -from detectron2.utils.events import get_event_storage -from detectron2.config import configurable -from detectron2.structures import ImageList, Instances, Boxes -import detectron2.utils.comm as comm - -from detectron2.modeling.meta_arch.build import META_ARCH_REGISTRY -from detectron2.modeling.meta_arch.rcnn import GeneralizedRCNN -from detectron2.modeling.postprocessing import detector_postprocess -from detectron2.utils.visualizer import Visualizer, _create_text_labels -from detectron2.data.detection_utils import convert_image_to_rgb - -from torch.cuda.amp import autocast -from ..text.text_encoder import build_text_encoder -from ..utils import load_class_freq, get_fed_loss_inds - -@META_ARCH_REGISTRY.register() -class CustomRCNN(GeneralizedRCNN): - ''' - Add image labels - ''' - @configurable - def __init__( - self, - with_image_labels = False, - dataset_loss_weight = [], - fp16 = False, - sync_caption_batch = False, - roi_head_name = '', - cap_batch_ratio = 4, - with_caption = False, - dynamic_classifier = False, - **kwargs): - """ - """ - self.with_image_labels = with_image_labels - self.dataset_loss_weight = dataset_loss_weight - self.fp16 = fp16 - self.with_caption = with_caption - self.sync_caption_batch = sync_caption_batch - self.roi_head_name = roi_head_name - self.cap_batch_ratio = cap_batch_ratio - self.dynamic_classifier = dynamic_classifier - self.return_proposal = False - if self.dynamic_classifier: - self.freq_weight = kwargs.pop('freq_weight') - self.num_classes = kwargs.pop('num_classes') - self.num_sample_cats = kwargs.pop('num_sample_cats') - super().__init__(**kwargs) - assert self.proposal_generator is not None - if self.with_caption: - assert not self.dynamic_classifier - self.text_encoder = build_text_encoder(pretrain=True) - for v in self.text_encoder.parameters(): - v.requires_grad = False - - - @classmethod - def from_config(cls, cfg): - ret = super().from_config(cfg) - ret.update({ - 'with_image_labels': cfg.WITH_IMAGE_LABELS, - 'dataset_loss_weight': cfg.MODEL.DATASET_LOSS_WEIGHT, - 'fp16': cfg.FP16, - 'with_caption': cfg.MODEL.WITH_CAPTION, - 'sync_caption_batch': cfg.MODEL.SYNC_CAPTION_BATCH, - 'dynamic_classifier': cfg.MODEL.DYNAMIC_CLASSIFIER, - 'roi_head_name': cfg.MODEL.ROI_HEADS.NAME, - 'cap_batch_ratio': cfg.MODEL.CAP_BATCH_RATIO, - }) - if ret['dynamic_classifier']: - ret['freq_weight'] = load_class_freq( - cfg.MODEL.ROI_BOX_HEAD.CAT_FREQ_PATH, - cfg.MODEL.ROI_BOX_HEAD.FED_LOSS_FREQ_WEIGHT) - ret['num_classes'] = cfg.MODEL.ROI_HEADS.NUM_CLASSES - ret['num_sample_cats'] = cfg.MODEL.NUM_SAMPLE_CATS - return ret - - - def inference( - self, - batched_inputs: Tuple[Dict[str, torch.Tensor]], - detected_instances: Optional[List[Instances]] = None, - do_postprocess: bool = True, - ): - assert not self.training - assert detected_instances is None - - images = self.preprocess_image(batched_inputs) - features = self.backbone(images.tensor) - proposals, _ = self.proposal_generator(images, 
features, None) - results, _ = self.roi_heads(images, features, proposals) - if do_postprocess: - assert not torch.jit.is_scripting(), \ - "Scripting is not supported for postprocess." - return CustomRCNN._postprocess( - results, batched_inputs, images.image_sizes) - else: - return results - - - def forward(self, batched_inputs: List[Dict[str, torch.Tensor]]): - """ - Add ann_type - Ignore proposal loss when training with image labels - """ - if not self.training: - return self.inference(batched_inputs) - - images = self.preprocess_image(batched_inputs) - - ann_type = 'box' - gt_instances = [x["instances"].to(self.device) for x in batched_inputs] - if self.with_image_labels: - for inst, x in zip(gt_instances, batched_inputs): - inst._ann_type = x['ann_type'] - inst._pos_category_ids = x['pos_category_ids'] - ann_types = [x['ann_type'] for x in batched_inputs] - assert len(set(ann_types)) == 1 - ann_type = ann_types[0] - if ann_type in ['prop', 'proptag']: - for t in gt_instances: - t.gt_classes *= 0 - - if self.fp16: # TODO (zhouxy): improve - with autocast(): - features = self.backbone(images.tensor.half()) - features = {k: v.float() for k, v in features.items()} - else: - features = self.backbone(images.tensor) - - cls_features, cls_inds, caption_features = None, None, None - - if self.with_caption and 'caption' in ann_type: - inds = [torch.randint(len(x['captions']), (1,))[0].item() \ - for x in batched_inputs] - caps = [x['captions'][ind] for ind, x in zip(inds, batched_inputs)] - caption_features = self.text_encoder(caps).float() - if self.sync_caption_batch: - caption_features = self._sync_caption_features( - caption_features, ann_type, len(batched_inputs)) - - if self.dynamic_classifier and ann_type != 'caption': - cls_inds = self._sample_cls_inds(gt_instances, ann_type) # inds, inv_inds - ind_with_bg = cls_inds[0].tolist() + [-1] - cls_features = self.roi_heads.box_predictor[ - 0].cls_score.zs_weight[:, ind_with_bg].permute(1, 0).contiguous() - - classifier_info = cls_features, cls_inds, caption_features - proposals, proposal_losses = self.proposal_generator( - images, features, gt_instances) - - if self.roi_head_name in ['StandardROIHeads', 'CascadeROIHeads']: - proposals, detector_losses = self.roi_heads( - images, features, proposals, gt_instances) - else: - proposals, detector_losses = self.roi_heads( - images, features, proposals, gt_instances, - ann_type=ann_type, classifier_info=classifier_info) - - if self.vis_period > 0: - storage = get_event_storage() - if storage.iter % self.vis_period == 0: - self.visualize_training(batched_inputs, proposals) - - losses = {} - losses.update(detector_losses) - if self.with_image_labels: - if ann_type in ['box', 'prop', 'proptag']: - losses.update(proposal_losses) - else: # ignore proposal loss for non-bbox data - losses.update({k: v * 0 for k, v in proposal_losses.items()}) - else: - losses.update(proposal_losses) - if len(self.dataset_loss_weight) > 0: - dataset_sources = [x['dataset_source'] for x in batched_inputs] - assert len(set(dataset_sources)) == 1 - dataset_source = dataset_sources[0] - for k in losses: - losses[k] *= self.dataset_loss_weight[dataset_source] - - if self.return_proposal: - return proposals, losses - else: - return losses - - - def _sync_caption_features(self, caption_features, ann_type, BS): - has_caption_feature = (caption_features is not None) - BS = (BS * self.cap_batch_ratio) if (ann_type == 'box') else BS - rank = torch.full( - (BS, 1), comm.get_rank(), dtype=torch.float32, - device=self.device) - if not 
has_caption_feature: - caption_features = rank.new_zeros((BS, 512)) - caption_features = torch.cat([caption_features, rank], dim=1) - global_caption_features = comm.all_gather(caption_features) - caption_features = torch.cat( - [x.to(self.device) for x in global_caption_features], dim=0) \ - if has_caption_feature else None # (NB) x (D + 1) - return caption_features - - - def _sample_cls_inds(self, gt_instances, ann_type='box'): - if ann_type == 'box': - gt_classes = torch.cat( - [x.gt_classes for x in gt_instances]) - C = len(self.freq_weight) - freq_weight = self.freq_weight - else: - gt_classes = torch.cat( - [torch.tensor( - x._pos_category_ids, - dtype=torch.long, device=x.gt_classes.device) \ - for x in gt_instances]) - C = self.num_classes - freq_weight = None - assert gt_classes.max() < C, '{} {}'.format(gt_classes.max(), C) - inds = get_fed_loss_inds( - gt_classes, self.num_sample_cats, C, - weight=freq_weight) - cls_id_map = gt_classes.new_full( - (self.num_classes + 1,), len(inds)) - cls_id_map[inds] = torch.arange(len(inds), device=cls_id_map.device) - return inds, cls_id_map \ No newline at end of file diff --git a/spaces/Detomo/ai-comic-generation/src/components/ui/alert.tsx b/spaces/Detomo/ai-comic-generation/src/components/ui/alert.tsx deleted file mode 100644 index f589783193a6cfe14032a77b89055cb3e920fe8c..0000000000000000000000000000000000000000 --- a/spaces/Detomo/ai-comic-generation/src/components/ui/alert.tsx +++ /dev/null @@ -1,59 +0,0 @@ -import * as React from "react" -import { cva, type VariantProps } from "class-variance-authority" - -import { cn } from "@/lib/utils" - -const alertVariants = cva( - "relative w-full rounded-lg border border-stone-200 p-4 [&:has(svg)]:pl-11 [&>svg+div]:translate-y-[-3px] [&>svg]:absolute [&>svg]:left-4 [&>svg]:top-4 [&>svg]:text-stone-950 dark:border-stone-800 dark:[&>svg]:text-stone-50", - { - variants: { - variant: { - default: "bg-white text-stone-950 dark:bg-stone-950 dark:text-stone-50", - destructive: - "border-red-500/50 text-red-500 dark:border-red-500 [&>svg]:text-red-500 dark:border-red-900/50 dark:text-red-900 dark:dark:border-red-900 dark:[&>svg]:text-red-900", - }, - }, - defaultVariants: { - variant: "default", - }, - } -) - -const Alert = React.forwardRef< - HTMLDivElement, - React.HTMLAttributes & VariantProps ->(({ className, variant, ...props }, ref) => ( -
      -)) -Alert.displayName = "Alert" - -const AlertTitle = React.forwardRef< - HTMLParagraphElement, - React.HTMLAttributes ->(({ className, ...props }, ref) => ( -
      -)) -AlertTitle.displayName = "AlertTitle" - -const AlertDescription = React.forwardRef< - HTMLParagraphElement, - React.HTMLAttributes ->(({ className, ...props }, ref) => ( -
      -)) -AlertDescription.displayName = "AlertDescription" - -export { Alert, AlertTitle, AlertDescription } diff --git a/spaces/ECCV2022/bytetrack/yolox/utils/boxes.py b/spaces/ECCV2022/bytetrack/yolox/utils/boxes.py deleted file mode 100644 index ac262b9608f85151e4bbeac3c7b02779dc63de75..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/bytetrack/yolox/utils/boxes.py +++ /dev/null @@ -1,133 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding:utf-8 -*- -# Copyright (c) 2014-2021 Megvii Inc. All rights reserved. - -import numpy as np - -import torch -import torchvision -import torch.nn.functional as F - -__all__ = [ - "filter_box", - "postprocess", - "bboxes_iou", - "matrix_iou", - "adjust_box_anns", - "xyxy2xywh", - "xyxy2cxcywh", -] - - -def filter_box(output, scale_range): - """ - output: (N, 5+class) shape - """ - min_scale, max_scale = scale_range - w = output[:, 2] - output[:, 0] - h = output[:, 3] - output[:, 1] - keep = (w * h > min_scale * min_scale) & (w * h < max_scale * max_scale) - return output[keep] - - -def postprocess(prediction, num_classes, conf_thre=0.7, nms_thre=0.45): - box_corner = prediction.new(prediction.shape) - box_corner[:, :, 0] = prediction[:, :, 0] - prediction[:, :, 2] / 2 - box_corner[:, :, 1] = prediction[:, :, 1] - prediction[:, :, 3] / 2 - box_corner[:, :, 2] = prediction[:, :, 0] + prediction[:, :, 2] / 2 - box_corner[:, :, 3] = prediction[:, :, 1] + prediction[:, :, 3] / 2 - prediction[:, :, :4] = box_corner[:, :, :4] - - output = [None for _ in range(len(prediction))] - for i, image_pred in enumerate(prediction): - - # If none are remaining => process next image - if not image_pred.size(0): - continue - # Get score and class with highest confidence - class_conf, class_pred = torch.max( - image_pred[:, 5 : 5 + num_classes], 1, keepdim=True - ) - - conf_mask = (image_pred[:, 4] * class_conf.squeeze() >= conf_thre).squeeze() - # _, conf_mask = torch.topk((image_pred[:, 4] * class_conf.squeeze()), 1000) - # Detections ordered as (x1, y1, x2, y2, obj_conf, class_conf, class_pred) - detections = torch.cat((image_pred[:, :5], class_conf, class_pred.float()), 1) - detections = detections[conf_mask] - if not detections.size(0): - continue - - nms_out_index = torchvision.ops.batched_nms( - detections[:, :4], - detections[:, 4] * detections[:, 5], - detections[:, 6], - nms_thre, - ) - detections = detections[nms_out_index] - if output[i] is None: - output[i] = detections - else: - output[i] = torch.cat((output[i], detections)) - - return output - - -def bboxes_iou(bboxes_a, bboxes_b, xyxy=True): - if bboxes_a.shape[1] != 4 or bboxes_b.shape[1] != 4: - raise IndexError - - if xyxy: - tl = torch.max(bboxes_a[:, None, :2], bboxes_b[:, :2]) - br = torch.min(bboxes_a[:, None, 2:], bboxes_b[:, 2:]) - area_a = torch.prod(bboxes_a[:, 2:] - bboxes_a[:, :2], 1) - area_b = torch.prod(bboxes_b[:, 2:] - bboxes_b[:, :2], 1) - else: - tl = torch.max( - (bboxes_a[:, None, :2] - bboxes_a[:, None, 2:] / 2), - (bboxes_b[:, :2] - bboxes_b[:, 2:] / 2), - ) - br = torch.min( - (bboxes_a[:, None, :2] + bboxes_a[:, None, 2:] / 2), - (bboxes_b[:, :2] + bboxes_b[:, 2:] / 2), - ) - - area_a = torch.prod(bboxes_a[:, 2:], 1) - area_b = torch.prod(bboxes_b[:, 2:], 1) - en = (tl < br).type(tl.type()).prod(dim=2) - area_i = torch.prod(br - tl, 2) * en # * ((tl < br).all()) - return area_i / (area_a[:, None] + area_b - area_i) - - -def matrix_iou(a, b): - """ - return iou of a and b, numpy version for data augenmentation - """ - lt = np.maximum(a[:, np.newaxis, :2], b[:, :2]) - rb = 
np.minimum(a[:, np.newaxis, 2:], b[:, 2:]) - - area_i = np.prod(rb - lt, axis=2) * (lt < rb).all(axis=2) - area_a = np.prod(a[:, 2:] - a[:, :2], axis=1) - area_b = np.prod(b[:, 2:] - b[:, :2], axis=1) - return area_i / (area_a[:, np.newaxis] + area_b - area_i + 1e-12) - - -def adjust_box_anns(bbox, scale_ratio, padw, padh, w_max, h_max): - #bbox[:, 0::2] = np.clip(bbox[:, 0::2] * scale_ratio + padw, 0, w_max) - #bbox[:, 1::2] = np.clip(bbox[:, 1::2] * scale_ratio + padh, 0, h_max) - bbox[:, 0::2] = bbox[:, 0::2] * scale_ratio + padw - bbox[:, 1::2] = bbox[:, 1::2] * scale_ratio + padh - return bbox - - -def xyxy2xywh(bboxes): - bboxes[:, 2] = bboxes[:, 2] - bboxes[:, 0] - bboxes[:, 3] = bboxes[:, 3] - bboxes[:, 1] - return bboxes - - -def xyxy2cxcywh(bboxes): - bboxes[:, 2] = bboxes[:, 2] - bboxes[:, 0] - bboxes[:, 3] = bboxes[:, 3] - bboxes[:, 1] - bboxes[:, 0] = bboxes[:, 0] + bboxes[:, 2] * 0.5 - bboxes[:, 1] = bboxes[:, 1] + bboxes[:, 3] * 0.5 - return bboxes diff --git a/spaces/Eitan177/mutation_profiler/README.md b/spaces/Eitan177/mutation_profiler/README.md deleted file mode 100644 index 0da62c7ee67cd1795c5147c57dbb850f3feffad3..0000000000000000000000000000000000000000 --- a/spaces/Eitan177/mutation_profiler/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Expressionprofile -emoji: 🌖 -colorFrom: purple -colorTo: gray -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: gpl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/EronSamez/RVC_HFmeu/tools/infer/train-index-v2.py b/spaces/EronSamez/RVC_HFmeu/tools/infer/train-index-v2.py deleted file mode 100644 index cbeed5d4fbf65fcb9a697a99d5f7b41c844e95d6..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/tools/infer/train-index-v2.py +++ /dev/null @@ -1,79 +0,0 @@ -""" -格式:直接cid为自带的index位;aid放不下了,通过字典来查,反正就5w个 -""" -import os -import traceback -import logging - -logger = logging.getLogger(__name__) - -from multiprocessing import cpu_count - -import faiss -import numpy as np -from sklearn.cluster import MiniBatchKMeans - -# ###########如果是原始特征要先写save -n_cpu = 0 -if n_cpu == 0: - n_cpu = cpu_count() -inp_root = r"./logs/anz/3_feature768" -npys = [] -listdir_res = list(os.listdir(inp_root)) -for name in sorted(listdir_res): - phone = np.load("%s/%s" % (inp_root, name)) - npys.append(phone) -big_npy = np.concatenate(npys, 0) -big_npy_idx = np.arange(big_npy.shape[0]) -np.random.shuffle(big_npy_idx) -big_npy = big_npy[big_npy_idx] -logger.debug(big_npy.shape) # (6196072, 192)#fp32#4.43G -if big_npy.shape[0] > 2e5: - # if(1): - info = "Trying doing kmeans %s shape to 10k centers." 
% big_npy.shape[0] - logger.info(info) - try: - big_npy = ( - MiniBatchKMeans( - n_clusters=10000, - verbose=True, - batch_size=256 * n_cpu, - compute_labels=False, - init="random", - ) - .fit(big_npy) - .cluster_centers_ - ) - except: - info = traceback.format_exc() - logger.warn(info) - -np.save("tools/infer/big_src_feature_mi.npy", big_npy) - -##################train+add -# big_npy=np.load("/bili-coeus/jupyter/jupyterhub-liujing04/vits_ch/inference_f0/big_src_feature_mi.npy") -n_ivf = min(int(16 * np.sqrt(big_npy.shape[0])), big_npy.shape[0] // 39) -index = faiss.index_factory(768, "IVF%s,Flat" % n_ivf) # mi -logger.info("Training...") -index_ivf = faiss.extract_index_ivf(index) # -index_ivf.nprobe = 1 -index.train(big_npy) -faiss.write_index( - index, "tools/infer/trained_IVF%s_Flat_baseline_src_feat_v2.index" % (n_ivf) -) -logger.info("Adding...") -batch_size_add = 8192 -for i in range(0, big_npy.shape[0], batch_size_add): - index.add(big_npy[i : i + batch_size_add]) -faiss.write_index( - index, "tools/infer/added_IVF%s_Flat_mi_baseline_src_feat.index" % (n_ivf) -) -""" -大小(都是FP32) -big_src_feature 2.95G - (3098036, 256) -big_emb 4.43G - (6196072, 192) -big_emb双倍是因为求特征要repeat后再加pitch - -""" diff --git a/spaces/EyanAn/vits-uma-genshin-honkai/Docker/Dockerfile b/spaces/EyanAn/vits-uma-genshin-honkai/Docker/Dockerfile deleted file mode 100644 index 4d39cdf02a2ec151686cc1d61234bf723068fed8..0000000000000000000000000000000000000000 --- a/spaces/EyanAn/vits-uma-genshin-honkai/Docker/Dockerfile +++ /dev/null @@ -1,12 +0,0 @@ -FROM python:3.9-bullseye -VOLUME ["/app"] -WORKDIR /app -# Set apt to Chinese mirror -RUN sed -i 's/deb.debian.org/mirrors.ustc.edu.cn/g' /etc/apt/sources.list -RUN apt-get update && apt-get -y install cmake git -RUN git clone https://huggingface.co/spaces/ikechan8370/vits-uma-genshin-honkai -WORKDIR /app/vits-uma-genshin-honkai -RUN sed -i "s/\.launch()/\.launch(server_name=\"0.0.0.0\")/" /app/vits-uma-genshin-honkai/app.py -ADD vits.sh /app/vits.sh -EXPOSE 7860 -ENTRYPOINT [ "/app/vits.sh" ] \ No newline at end of file diff --git a/spaces/FYP-23-S1-21/Refineverse_Plugin/TranslationFeature.py b/spaces/FYP-23-S1-21/Refineverse_Plugin/TranslationFeature.py deleted file mode 100644 index eac52af91031267987204dc3fb82ec75205ebf22..0000000000000000000000000000000000000000 --- a/spaces/FYP-23-S1-21/Refineverse_Plugin/TranslationFeature.py +++ /dev/null @@ -1,91 +0,0 @@ -import re -import sqlite3 -from flask import g -from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer - -model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_1.2B") # Setting the model to use -tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_1.2B") # Setting the tokenizer to use - -# Main function of the translation feature. Performs translation! 
-def translate_text(input_text, source_language, target_language): - - # Grabs the source language to be used in the tokenizer - tokenizer.src_lang = source_language - - # Check if the input is empty - if not input_text.strip(): - raise ValueError("Empty input!") - - # Validate that the input is in the correct format - if not validate_input(input_text): - raise ValueError("Incorrect format!") - - # Creates encoded text - encoded_text = tokenizer(input_text, return_tensors="pt") - - # Generates new tokens using encoded text from source language - generated_tokens = model.generate(**encoded_text, forced_bos_token_id=tokenizer.get_lang_id(target_language), max_new_tokens=512) - - # Decode generated tokens to display translated text - translated_text = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0] - - return translated_text - -# Helper function for displaying appropriate language names in flash messages -# Note: Python does not have a built-in switch function, so this is just a rough implementation of the logic -def switch(lang): - if lang == "en": - return "English" - elif lang == "zh": - return "Chinese" - elif lang == "ms": - return "Malay" - elif lang == "ta": - return "Tamil" - elif lang == "th": - return "Thai" - -# User Input Format Validation Function for all 4 languages -def validate_input(input_text): - - # Pattern for English language - pattern_en = r'As a (?P[^,.]+), I want to (?P[^,.]+)(,|.)+so that (?P.+)' - - # Pattern for Chinese language - pattern_zh = r'作为(?P[^,.]+),我想要(?P[^,.]+)(,|。)+以便(?P.+)' - - # Pattern for Malay language - pattern_ms = r'Sebagai(?P[^,.]+), saya mahu(?P[^,.]+)(,|.)+supaya(?P.+)' - - # Pattern for Tamil language - pattern_ta = r'என(?P[^,.]+) எனக்கு வேண்டும்(?P[^,.]+)(,|.)+அதனால்(?P.+) பயன்படுத்தி வைக்கும்' - - # Pattern for Thai language - pattern_th = r'ในฐานะ(?P[^,.]+) ฉันต้องการ(?P[^,.]+)(,|.)+เพื่อที่ฉัน(?P.+)' - - # Try each pattern to see if there is a match - match_en = re.search(pattern_en, input_text, flags=re.DOTALL) - match_zh = re.search(pattern_zh, input_text, flags=re.DOTALL) - match_ms = re.search(pattern_ms, input_text, flags=re.DOTALL) - match_ta = re.search(pattern_ta, input_text, flags=re.DOTALL) - match_th = re.search(pattern_th, input_text, flags=re.DOTALL) - - # Return True if at least one pattern matches, otherwise False - return bool(match_en or match_zh or match_ms or match_ta or match_th) - -# Function to grab all contents in the "Translation" table (except for unique ids) -def getTranslatedContents(): - db = getattr(g, '_database', None) # Gets the _database attribute from the 'g' object. 
If it does not exist, returns 'None' - if db is None: - db = g._database = sqlite3.connect('Refineverse.db') # If db is None, create a new connection for db and g._database - cursor = db.cursor() # Creates a cursor object to handle data - cursor.execute("SELECT input_text, translated_text FROM Translation") # The cursor executes the query - rows = cursor.fetchall() # Stores the results of fetchall() into a variable - return rows - -# Function to insert a new row into the "Translation" table -def insertTranslationRow(input_text, translated_text): - with sqlite3.connect('Refineverse.db') as conn: # 'With' will automatically take care of closing and opening the connection - cursor = conn.cursor() - cursor.execute("INSERT INTO Translation (input_text, translated_text) VALUES (?, ?)", (input_text, translated_text)) - conn.commit() diff --git a/spaces/FangLee/Generate-Music-in-Time-Series/README.md b/spaces/FangLee/Generate-Music-in-Time-Series/README.md deleted file mode 100644 index e2676562cb079ff1b6f5a0ccd4fceebd4878794e..0000000000000000000000000000000000000000 --- a/spaces/FangLee/Generate-Music-in-Time-Series/README.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -title: Generate-Music-in-Time-Series -app_file: Deploy_gradio.py -sdk: gradio -sdk_version: 3.34.0 ---- -# Generate Music in Time Series -#### Try it here: https://huggingface.co/spaces/FangLee/Generate-Music-in-Time-Series -![Preview](https://github.com/FangLee2003/Generate-Music-in-Time-Series/assets/75077747/f902eda0-2c5f-450c-af20-ec9fca46cac0) diff --git a/spaces/Farazquraishi/pendora/retinaface/models.py b/spaces/Farazquraishi/pendora/retinaface/models.py deleted file mode 100644 index 60a1a9aeb84ac4631a7a810c9a68612a6fb80556..0000000000000000000000000000000000000000 --- a/spaces/Farazquraishi/pendora/retinaface/models.py +++ /dev/null @@ -1,301 +0,0 @@ -import tensorflow as tf -from tensorflow.keras import Model -from tensorflow.keras.applications import MobileNetV2, ResNet50 -from tensorflow.keras.layers import Input, Conv2D, ReLU, LeakyReLU -from retinaface.anchor import decode_tf, prior_box_tf - - -def _regularizer(weights_decay): - """l2 regularizer""" - return tf.keras.regularizers.l2(weights_decay) - - -def _kernel_init(scale=1.0, seed=None): - """He normal initializer""" - return tf.keras.initializers.he_normal() - - -class BatchNormalization(tf.keras.layers.BatchNormalization): - """Make trainable=False freeze BN for real (the og version is sad). 
- ref: https://github.com/zzh8829/yolov3-tf2 - """ - def __init__(self, axis=-1, momentum=0.9, epsilon=1e-5, center=True, - scale=True, name=None, **kwargs): - super(BatchNormalization, self).__init__( - axis=axis, momentum=momentum, epsilon=epsilon, center=center, - scale=scale, name=name, **kwargs) - - def call(self, x, training=False): - if training is None: - training = tf.constant(False) - training = tf.logical_and(training, self.trainable) - - return super().call(x, training) - - -def Backbone(backbone_type='ResNet50', use_pretrain=True): - """Backbone Model""" - weights = None - if use_pretrain: - weights = 'imagenet' - - def backbone(x): - if backbone_type == 'ResNet50': - extractor = ResNet50( - input_shape=x.shape[1:], include_top=False, weights=weights) - pick_layer1 = 80 # [80, 80, 512] - pick_layer2 = 142 # [40, 40, 1024] - pick_layer3 = 174 # [20, 20, 2048] - preprocess = tf.keras.applications.resnet.preprocess_input - elif backbone_type == 'MobileNetV2': - extractor = MobileNetV2( - input_shape=x.shape[1:], include_top=False, weights=weights) - pick_layer1 = 54 # [80, 80, 32] - pick_layer2 = 116 # [40, 40, 96] - pick_layer3 = 143 # [20, 20, 160] - preprocess = tf.keras.applications.mobilenet_v2.preprocess_input - else: - raise NotImplementedError( - 'Backbone type {} is not recognized.'.format(backbone_type)) - - return Model(extractor.input, - (extractor.layers[pick_layer1].output, - extractor.layers[pick_layer2].output, - extractor.layers[pick_layer3].output), - name=backbone_type + '_extrator')(preprocess(x)) - - return backbone - - -class ConvUnit(tf.keras.layers.Layer): - """Conv + BN + Act""" - def __init__(self, f, k, s, wd, act=None, **kwargs): - super(ConvUnit, self).__init__(**kwargs) - self.conv = Conv2D(filters=f, kernel_size=k, strides=s, padding='same', - kernel_initializer=_kernel_init(), - kernel_regularizer=_regularizer(wd), - use_bias=False) - self.bn = BatchNormalization() - - if act is None: - self.act_fn = tf.identity - elif act == 'relu': - self.act_fn = ReLU() - elif act == 'lrelu': - self.act_fn = LeakyReLU(0.1) - else: - raise NotImplementedError( - 'Activation function type {} is not recognized.'.format(act)) - - def call(self, x): - return self.act_fn(self.bn(self.conv(x))) - - -class FPN(tf.keras.layers.Layer): - """Feature Pyramid Network""" - def __init__(self, out_ch, wd, **kwargs): - super(FPN, self).__init__(**kwargs) - act = 'relu' - self.out_ch = out_ch - self.wd = wd - if (out_ch <= 64): - act = 'lrelu' - - self.output1 = ConvUnit(f=out_ch, k=1, s=1, wd=wd, act=act) - self.output2 = ConvUnit(f=out_ch, k=1, s=1, wd=wd, act=act) - self.output3 = ConvUnit(f=out_ch, k=1, s=1, wd=wd, act=act) - self.merge1 = ConvUnit(f=out_ch, k=3, s=1, wd=wd, act=act) - self.merge2 = ConvUnit(f=out_ch, k=3, s=1, wd=wd, act=act) - - def call(self, x): - output1 = self.output1(x[0]) # [80, 80, out_ch] - output2 = self.output2(x[1]) # [40, 40, out_ch] - output3 = self.output3(x[2]) # [20, 20, out_ch] - - up_h, up_w = tf.shape(output2)[1], tf.shape(output2)[2] - up3 = tf.image.resize(output3, [up_h, up_w], method='nearest') - output2 = output2 + up3 - output2 = self.merge2(output2) - - up_h, up_w = tf.shape(output1)[1], tf.shape(output1)[2] - up2 = tf.image.resize(output2, [up_h, up_w], method='nearest') - output1 = output1 + up2 - output1 = self.merge1(output1) - - return output1, output2, output3 - - def get_config(self): - config = { - 'out_ch': self.out_ch, - 'wd': self.wd, - } - base_config = super(FPN, self).get_config() - return 
dict(list(base_config.items()) + list(config.items())) - - -class SSH(tf.keras.layers.Layer): - """Single Stage Headless Layer""" - def __init__(self, out_ch, wd, **kwargs): - super(SSH, self).__init__(**kwargs) - assert out_ch % 4 == 0 - self.out_ch = out_ch - self.wd = wd - act = 'relu' - if (out_ch <= 64): - act = 'lrelu' - - self.conv_3x3 = ConvUnit(f=out_ch // 2, k=3, s=1, wd=wd, act=None) - - self.conv_5x5_1 = ConvUnit(f=out_ch // 4, k=3, s=1, wd=wd, act=act) - self.conv_5x5_2 = ConvUnit(f=out_ch // 4, k=3, s=1, wd=wd, act=None) - - self.conv_7x7_2 = ConvUnit(f=out_ch // 4, k=3, s=1, wd=wd, act=act) - self.conv_7x7_3 = ConvUnit(f=out_ch // 4, k=3, s=1, wd=wd, act=None) - - self.relu = ReLU() - - def call(self, x): - conv_3x3 = self.conv_3x3(x) - - conv_5x5_1 = self.conv_5x5_1(x) - conv_5x5 = self.conv_5x5_2(conv_5x5_1) - - conv_7x7_2 = self.conv_7x7_2(conv_5x5_1) - conv_7x7 = self.conv_7x7_3(conv_7x7_2) - - output = tf.concat([conv_3x3, conv_5x5, conv_7x7], axis=3) - output = self.relu(output) - - return output - - def get_config(self): - config = { - 'out_ch': self.out_ch, - 'wd': self.wd, - } - base_config = super(SSH, self).get_config() - return dict(list(base_config.items()) + list(config.items())) - - -class BboxHead(tf.keras.layers.Layer): - """Bbox Head Layer""" - def __init__(self, num_anchor, wd, **kwargs): - super(BboxHead, self).__init__(**kwargs) - self.num_anchor = num_anchor - self.wd = wd - self.conv = Conv2D(filters=num_anchor * 4, kernel_size=1, strides=1) - - def call(self, x): - h, w = tf.shape(x)[1], tf.shape(x)[2] - x = self.conv(x) - - return tf.reshape(x, [-1, h * w * self.num_anchor, 4]) - - def get_config(self): - config = { - 'num_anchor': self.num_anchor, - 'wd': self.wd, - } - base_config = super(BboxHead, self).get_config() - return dict(list(base_config.items()) + list(config.items())) - - -class LandmarkHead(tf.keras.layers.Layer): - """Landmark Head Layer""" - def __init__(self, num_anchor, wd, name='LandmarkHead', **kwargs): - super(LandmarkHead, self).__init__(name=name, **kwargs) - self.num_anchor = num_anchor - self.wd = wd - self.conv = Conv2D(filters=num_anchor * 10, kernel_size=1, strides=1) - - def call(self, x): - h, w = tf.shape(x)[1], tf.shape(x)[2] - x = self.conv(x) - - return tf.reshape(x, [-1, h * w * self.num_anchor, 10]) - - def get_config(self): - config = { - 'num_anchor': self.num_anchor, - 'wd': self.wd, - } - base_config = super(LandmarkHead, self).get_config() - return dict(list(base_config.items()) + list(config.items())) - - -class ClassHead(tf.keras.layers.Layer): - """Class Head Layer""" - def __init__(self, num_anchor, wd, name='ClassHead', **kwargs): - super(ClassHead, self).__init__(name=name, **kwargs) - self.num_anchor = num_anchor - self.wd = wd - self.conv = Conv2D(filters=num_anchor * 2, kernel_size=1, strides=1) - - def call(self, x): - h, w = tf.shape(x)[1], tf.shape(x)[2] - x = self.conv(x) - - return tf.reshape(x, [-1, h * w * self.num_anchor, 2]) - - def get_config(self): - config = { - 'num_anchor': self.num_anchor, - 'wd': self.wd, - } - base_config = super(ClassHead, self).get_config() - return dict(list(base_config.items()) + list(config.items())) - - -def RetinaFaceModel(cfg, training=False, iou_th=0.4, score_th=0.02, - name='RetinaFaceModel'): - """Retina Face Model""" - input_size = cfg['input_size'] if training else None - wd = cfg['weights_decay'] - out_ch = cfg['out_channel'] - num_anchor = len(cfg['min_sizes'][0]) - backbone_type = cfg['backbone_type'] - - # define model - x = inputs = 
Input([input_size, input_size, 3], name='input_image') - - x = Backbone(backbone_type=backbone_type)(x) - - fpn = FPN(out_ch=out_ch, wd=wd)(x) - - features = [SSH(out_ch=out_ch, wd=wd)(f) - for i, f in enumerate(fpn)] - - bbox_regressions = tf.concat( - [BboxHead(num_anchor, wd=wd)(f) - for i, f in enumerate(features)], axis=1) - landm_regressions = tf.concat( - [LandmarkHead(num_anchor, wd=wd, name=f'LandmarkHead_{i}')(f) - for i, f in enumerate(features)], axis=1) - classifications = tf.concat( - [ClassHead(num_anchor, wd=wd, name=f'ClassHead_{i}')(f) - for i, f in enumerate(features)], axis=1) - - classifications = tf.keras.layers.Softmax(axis=-1)(classifications) - - if training: - out = (bbox_regressions, landm_regressions, classifications) - else: - # only for batch size 1 - preds = tf.concat( # [bboxes, landms, landms_valid, conf] - [bbox_regressions[0], - landm_regressions[0], - tf.ones_like(classifications[0, :, 0][..., tf.newaxis]), - classifications[0, :, 1][..., tf.newaxis]], 1) - priors = prior_box_tf((tf.shape(inputs)[1], tf.shape(inputs)[2]), cfg['min_sizes'], cfg['steps'], cfg['clip']) - decode_preds = decode_tf(preds, priors, cfg['variances']) - - selected_indices = tf.image.non_max_suppression( - boxes=decode_preds[:, :4], - scores=decode_preds[:, -1], - max_output_size=tf.shape(decode_preds)[0], - iou_threshold=iou_th, - score_threshold=score_th) - - out = tf.gather(decode_preds, selected_indices) - - return Model(inputs, out, name=name), Model(inputs, [bbox_regressions, landm_regressions, classifications], name=name + '_bb_only') \ No newline at end of file diff --git a/spaces/FathomNet/MBARI_Monterey_Bay_Benthic/README.md b/spaces/FathomNet/MBARI_Monterey_Bay_Benthic/README.md deleted file mode 100644 index 23fb2a5a88aeffe3985ef4a6db54ee0c26793a28..0000000000000000000000000000000000000000 --- a/spaces/FathomNet/MBARI_Monterey_Bay_Benthic/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: MBARI Monterey Bay Benthic -emoji: 💻 -colorFrom: blue -colorTo: indigo -sdk: gradio -python_version: 3.10 -sdk_version: 3.47.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Felix123456/bingo/postcss.config.js b/spaces/Felix123456/bingo/postcss.config.js deleted file mode 100644 index 33ad091d26d8a9dc95ebdf616e217d985ec215b8..0000000000000000000000000000000000000000 --- a/spaces/Felix123456/bingo/postcss.config.js +++ /dev/null @@ -1,6 +0,0 @@ -module.exports = { - plugins: { - tailwindcss: {}, - autoprefixer: {}, - }, -} diff --git a/spaces/Fia/StableDiffusionCPU/app.py b/spaces/Fia/StableDiffusionCPU/app.py deleted file mode 100644 index 28c46cda8c07fc0aa07c1c139499de044ab80129..0000000000000000000000000000000000000000 --- a/spaces/Fia/StableDiffusionCPU/app.py +++ /dev/null @@ -1,174 +0,0 @@ -import gradio as gr - -import torch -from torch import autocast -from diffusers import StableDiffusionPipeline -from datasets import load_dataset -from PIL import Image -import re -import streamlit as st - -model_id = "CompVis/stable-diffusion-v1-4" -device = "cpu" - -#If you are running this code locally, you need to either do a 'huggingface-cli login` or paste your User Access Token from here https://huggingface.co/settings/tokens into the use_auth_token field below. 
-pipe = StableDiffusionPipeline.from_pretrained(model_id, use_auth_token=st.secrets["AUTH_KEY"], torch_dtype=torch.float32) -def dummy(images, **kwargs): return images, False -pipe.safety_checker = dummy - -def infer(prompt, width, height, steps, scale, seed): - if seed == -1: - images_list = pipe( - [prompt], - height=height, - width=width, - num_inference_steps=steps, - guidance_scale=scale, - generator=torch.Generator(device=device).manual_seed(seed)) - else: - images_list = pipe( - [prompt], - height=height, - width=width, - num_inference_steps=steps, - guidance_scale=scale) - - return images_list["sample"] - -css = """ - .gradio-container { - font-family: 'IBM Plex Sans', sans-serif; - } - .gr-button { - color: white; - border-color: black; - background: black; - } - input[type='range'] { - accent-color: black; - } - .dark input[type='range'] { - accent-color: #dfdfdf; - } - .container { - max-width: 730px; - margin: auto; - padding-top: 1.5rem; - } - #gallery { - min-height: 22rem; - margin-bottom: 15px; - margin-left: auto; - margin-right: auto; - border-bottom-right-radius: .5rem !important; - border-bottom-left-radius: .5rem !important; - } - #gallery>div>.h-full { - min-height: 20rem; - } - .details:hover { - text-decoration: underline; - } - .gr-button { - white-space: nowrap; - } - .gr-button:focus { - border-color: rgb(147 197 253 / var(--tw-border-opacity)); - outline: none; - box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000); - --tw-border-opacity: 1; - --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color); - --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color); - --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity)); - --tw-ring-opacity: .5; - } - .footer { - margin-bottom: 45px; - margin-top: 35px; - text-align: center; - border-bottom: 1px solid #e5e5e5; - } - .footer>p { - font-size: .8rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; - } - .dark .footer { - border-color: #303030; - } - .dark .footer>p { - background: #0b0f19; - } - .acknowledgments h4{ - margin: 1.25em 0 .25em 0; - font-weight: bold; - font-size: 115%; - } -""" - -block = gr.Blocks(css=css) - -with block: - gr.HTML( - """ -
-              Stable Diffusion CPU
      - """ - ) - with gr.Group(): - with gr.Box(): - with gr.Row().style(mobile_collapse=False, equal_height=True): - text = gr.Textbox( - label="Enter your prompt", - show_label=False, - max_lines=1, - placeholder="Enter your prompt", - ).style( - border=(True, False, True, True), - rounded=(True, False, False, True), - container=False, - ) - btn = gr.Button("Generate image").style( - margin=False, - rounded=(False, True, True, False), - ) - - gallery = gr.Gallery( - label="Generated images", show_label=False, elem_id="gallery" - ).style(grid=[2], height="auto") - - with gr.Row().style(mobile_collapse=False, equal_height=True): - width = gr.Slider(label="Width", minimum=32, maximum=1024, value=512, step=8) - height = gr.Slider(label="Height", minimum=32, maximum=1024, value=512, step=8) - - with gr.Row(): - steps = gr.Slider(label="Steps", minimum=1, maximum=50, value=30, step=1) - scale = gr.Slider( - label="Guidance Scale", minimum=0, maximum=50, value=7.5, step=0.1 - ) - seed = gr.Slider( - label="Seed", - minimum=-1, - maximum=2147483647, - step=1, - value=-1, - ) - - text.submit(infer, inputs=[text, width, height, steps, scale, seed], outputs=gallery) - btn.click(infer, inputs=[text, width, height, steps, scale, seed], outputs=gallery) - -block.queue(max_size=10).launch() \ No newline at end of file diff --git a/spaces/FoxMeo/fire-detector/utils/datasets.py b/spaces/FoxMeo/fire-detector/utils/datasets.py deleted file mode 100644 index 5fe4f7bcc28a91e83313c5372029928d0b8c0fd5..0000000000000000000000000000000000000000 --- a/spaces/FoxMeo/fire-detector/utils/datasets.py +++ /dev/null @@ -1,1320 +0,0 @@ -# Dataset utils and dataloaders - -import glob -import logging -import math -import os -import random -import shutil -import time -from itertools import repeat -from multiprocessing.pool import ThreadPool -from pathlib import Path -from threading import Thread - -import cv2 -import numpy as np -import torch -import torch.nn.functional as F -from PIL import Image, ExifTags -from torch.utils.data import Dataset -from tqdm import tqdm - -import pickle -from copy import deepcopy -#from pycocotools import mask as maskUtils -from torchvision.utils import save_image -from torchvision.ops import roi_pool, roi_align, ps_roi_pool, ps_roi_align - -from utils.general import check_requirements, xyxy2xywh, xywh2xyxy, xywhn2xyxy, xyn2xy, segment2box, segments2boxes, \ - resample_segments, clean_str -from utils.torch_utils import torch_distributed_zero_first - -# Parameters -help_url = 'https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data' -img_formats = ['bmp', 'jpg', 'jpeg', 'png', 'tif', 'tiff', 'dng', 'webp', 'mpo'] # acceptable image suffixes -vid_formats = ['mov', 'avi', 'mp4', 'mpg', 'mpeg', 'm4v', 'wmv', 'mkv'] # acceptable video suffixes -logger = logging.getLogger(__name__) - -# Get orientation exif tag -for orientation in ExifTags.TAGS.keys(): - if ExifTags.TAGS[orientation] == 'Orientation': - break - - -def get_hash(files): - # Returns a single hash value of a list of files - return sum(os.path.getsize(f) for f in files if os.path.isfile(f)) - - -def exif_size(img): - # Returns exif-corrected PIL size - s = img.size # (width, height) - try: - rotation = dict(img._getexif().items())[orientation] - if rotation == 6: # rotation 270 - s = (s[1], s[0]) - elif rotation == 8: # rotation 90 - s = (s[1], s[0]) - except: - pass - - return s - - -def create_dataloader(path, imgsz, batch_size, stride, opt, hyp=None, augment=False, cache=False, pad=0.0, rect=False, - rank=-1, world_size=1, 
workers=8, image_weights=False, quad=False, prefix=''): - # Make sure only the first process in DDP process the dataset first, and the following others can use the cache - with torch_distributed_zero_first(rank): - dataset = LoadImagesAndLabels(path, imgsz, batch_size, - augment=augment, # augment images - hyp=hyp, # augmentation hyperparameters - rect=rect, # rectangular training - cache_images=cache, - single_cls=opt.single_cls, - stride=int(stride), - pad=pad, - image_weights=image_weights, - prefix=prefix) - - batch_size = min(batch_size, len(dataset)) - nw = min([os.cpu_count() // world_size, batch_size if batch_size > 1 else 0, workers]) # number of workers - sampler = torch.utils.data.distributed.DistributedSampler(dataset) if rank != -1 else None - loader = torch.utils.data.DataLoader if image_weights else InfiniteDataLoader - # Use torch.utils.data.DataLoader() if dataset.properties will update during training else InfiniteDataLoader() - dataloader = loader(dataset, - batch_size=batch_size, - num_workers=nw, - sampler=sampler, - pin_memory=True, - collate_fn=LoadImagesAndLabels.collate_fn4 if quad else LoadImagesAndLabels.collate_fn) - return dataloader, dataset - - -class InfiniteDataLoader(torch.utils.data.dataloader.DataLoader): - """ Dataloader that reuses workers - - Uses same syntax as vanilla DataLoader - """ - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - object.__setattr__(self, 'batch_sampler', _RepeatSampler(self.batch_sampler)) - self.iterator = super().__iter__() - - def __len__(self): - return len(self.batch_sampler.sampler) - - def __iter__(self): - for i in range(len(self)): - yield next(self.iterator) - - -class _RepeatSampler(object): - """ Sampler that repeats forever - - Args: - sampler (Sampler) - """ - - def __init__(self, sampler): - self.sampler = sampler - - def __iter__(self): - while True: - yield from iter(self.sampler) - - -class LoadImages: # for inference - def __init__(self, path, img_size=640, stride=32): - p = str(Path(path).absolute()) # os-agnostic absolute path - if '*' in p: - files = sorted(glob.glob(p, recursive=True)) # glob - elif os.path.isdir(p): - files = sorted(glob.glob(os.path.join(p, '*.*'))) # dir - elif os.path.isfile(p): - files = [p] # files - else: - raise Exception(f'ERROR: {p} does not exist') - - images = [x for x in files if x.split('.')[-1].lower() in img_formats] - videos = [x for x in files if x.split('.')[-1].lower() in vid_formats] - ni, nv = len(images), len(videos) - - self.img_size = img_size - self.stride = stride - self.files = images + videos - self.nf = ni + nv # number of files - self.video_flag = [False] * ni + [True] * nv - self.mode = 'image' - if any(videos): - self.new_video(videos[0]) # new video - else: - self.cap = None - assert self.nf > 0, f'No images or videos found in {p}. 
' \ - f'Supported formats are:\nimages: {img_formats}\nvideos: {vid_formats}' - - def __iter__(self): - self.count = 0 - return self - - def __next__(self): - if self.count == self.nf: - raise StopIteration - path = self.files[self.count] - - if self.video_flag[self.count]: - # Read video - self.mode = 'video' - ret_val, img0 = self.cap.read() - if not ret_val: - self.count += 1 - self.cap.release() - if self.count == self.nf: # last video - raise StopIteration - else: - path = self.files[self.count] - self.new_video(path) - ret_val, img0 = self.cap.read() - - self.frame += 1 - print(f'video {self.count + 1}/{self.nf} ({self.frame}/{self.nframes}) {path}: ', end='') - - else: - # Read image - self.count += 1 - img0 = cv2.imread(path) # BGR - assert img0 is not None, 'Image Not Found ' + path - #print(f'image {self.count}/{self.nf} {path}: ', end='') - - # Padded resize - img = letterbox(img0, self.img_size, stride=self.stride)[0] - - # Convert - img = img[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416 - img = np.ascontiguousarray(img) - - return path, img, img0, self.cap - - def new_video(self, path): - self.frame = 0 - self.cap = cv2.VideoCapture(path) - self.nframes = int(self.cap.get(cv2.CAP_PROP_FRAME_COUNT)) - - def __len__(self): - return self.nf # number of files - - -class LoadWebcam: # for inference - def __init__(self, pipe='0', img_size=640, stride=32): - self.img_size = img_size - self.stride = stride - - if pipe.isnumeric(): - pipe = eval(pipe) # local camera - # pipe = 'rtsp://192.168.1.64/1' # IP camera - # pipe = 'rtsp://username:password@192.168.1.64/1' # IP camera with login - # pipe = 'http://wmccpinetop.axiscam.net/mjpg/video.mjpg' # IP golf camera - - self.pipe = pipe - self.cap = cv2.VideoCapture(pipe) # video capture object - self.cap.set(cv2.CAP_PROP_BUFFERSIZE, 3) # set buffer size - - def __iter__(self): - self.count = -1 - return self - - def __next__(self): - self.count += 1 - if cv2.waitKey(1) == ord('q'): # q to quit - self.cap.release() - cv2.destroyAllWindows() - raise StopIteration - - # Read frame - if self.pipe == 0: # local camera - ret_val, img0 = self.cap.read() - img0 = cv2.flip(img0, 1) # flip left-right - else: # IP camera - n = 0 - while True: - n += 1 - self.cap.grab() - if n % 30 == 0: # skip frames - ret_val, img0 = self.cap.retrieve() - if ret_val: - break - - # Print - assert ret_val, f'Camera Error {self.pipe}' - img_path = 'webcam.jpg' - print(f'webcam {self.count}: ', end='') - - # Padded resize - img = letterbox(img0, self.img_size, stride=self.stride)[0] - - # Convert - img = img[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416 - img = np.ascontiguousarray(img) - - return img_path, img, img0, None - - def __len__(self): - return 0 - - -class LoadStreams: # multiple IP or RTSP cameras - def __init__(self, sources='streams.txt', img_size=640, stride=32): - self.mode = 'stream' - self.img_size = img_size - self.stride = stride - - if os.path.isfile(sources): - with open(sources, 'r') as f: - sources = [x.strip() for x in f.read().strip().splitlines() if len(x.strip())] - else: - sources = [sources] - - n = len(sources) - self.imgs = [None] * n - self.sources = [clean_str(x) for x in sources] # clean source names for later - for i, s in enumerate(sources): - # Start the thread to read frames from the video stream - print(f'{i + 1}/{n}: {s}... 
', end='') - url = eval(s) if s.isnumeric() else s - if 'youtube.com/' in str(url) or 'youtu.be/' in str(url): # if source is YouTube video - check_requirements(('pafy', 'youtube_dl')) - import pafy - url = pafy.new(url).getbest(preftype="mp4").url - cap = cv2.VideoCapture(url) - assert cap.isOpened(), f'Failed to open {s}' - w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) - h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) - self.fps = cap.get(cv2.CAP_PROP_FPS) % 100 - - _, self.imgs[i] = cap.read() # guarantee first frame - thread = Thread(target=self.update, args=([i, cap]), daemon=True) - print(f' success ({w}x{h} at {self.fps:.2f} FPS).') - thread.start() - print('') # newline - - # check for common shapes - s = np.stack([letterbox(x, self.img_size, stride=self.stride)[0].shape for x in self.imgs], 0) # shapes - self.rect = np.unique(s, axis=0).shape[0] == 1 # rect inference if all shapes equal - if not self.rect: - print('WARNING: Different stream shapes detected. For optimal performance supply similarly-shaped streams.') - - def update(self, index, cap): - # Read next stream frame in a daemon thread - n = 0 - while cap.isOpened(): - n += 1 - # _, self.imgs[index] = cap.read() - cap.grab() - if n == 4: # read every 4th frame - success, im = cap.retrieve() - self.imgs[index] = im if success else self.imgs[index] * 0 - n = 0 - time.sleep(1 / self.fps) # wait time - - def __iter__(self): - self.count = -1 - return self - - def __next__(self): - self.count += 1 - img0 = self.imgs.copy() - if cv2.waitKey(1) == ord('q'): # q to quit - cv2.destroyAllWindows() - raise StopIteration - - # Letterbox - img = [letterbox(x, self.img_size, auto=self.rect, stride=self.stride)[0] for x in img0] - - # Stack - img = np.stack(img, 0) - - # Convert - img = img[:, :, :, ::-1].transpose(0, 3, 1, 2) # BGR to RGB, to bsx3x416x416 - img = np.ascontiguousarray(img) - - return self.sources, img, img0, None - - def __len__(self): - return 0 # 1E12 frames = 32 streams at 30 FPS for 30 years - - -def img2label_paths(img_paths): - # Define label paths as a function of image paths - sa, sb = os.sep + 'images' + os.sep, os.sep + 'labels' + os.sep # /images/, /labels/ substrings - return ['txt'.join(x.replace(sa, sb, 1).rsplit(x.split('.')[-1], 1)) for x in img_paths] - - -class LoadImagesAndLabels(Dataset): # for training/testing - def __init__(self, path, img_size=640, batch_size=16, augment=False, hyp=None, rect=False, image_weights=False, - cache_images=False, single_cls=False, stride=32, pad=0.0, prefix=''): - self.img_size = img_size - self.augment = augment - self.hyp = hyp - self.image_weights = image_weights - self.rect = False if image_weights else rect - self.mosaic = self.augment and not self.rect # load 4 images at a time into a mosaic (only during training) - self.mosaic_border = [-img_size // 2, -img_size // 2] - self.stride = stride - self.path = path - #self.albumentations = Albumentations() if augment else None - - try: - f = [] # image files - for p in path if isinstance(path, list) else [path]: - p = Path(p) # os-agnostic - if p.is_dir(): # dir - f += glob.glob(str(p / '**' / '*.*'), recursive=True) - # f = list(p.rglob('**/*.*')) # pathlib - elif p.is_file(): # file - with open(p, 'r') as t: - t = t.read().strip().splitlines() - parent = str(p.parent) + os.sep - f += [x.replace('./', parent) if x.startswith('./') else x for x in t] # local to global path - # f += [p.parent / x.lstrip(os.sep) for x in t] # local to global path (pathlib) - else: - raise Exception(f'{prefix}{p} does not exist') - 
self.img_files = sorted([x.replace('/', os.sep) for x in f if x.split('.')[-1].lower() in img_formats]) - # self.img_files = sorted([x for x in f if x.suffix[1:].lower() in img_formats]) # pathlib - assert self.img_files, f'{prefix}No images found' - except Exception as e: - raise Exception(f'{prefix}Error loading data from {path}: {e}\nSee {help_url}') - - # Check cache - self.label_files = img2label_paths(self.img_files) # labels - cache_path = (p if p.is_file() else Path(self.label_files[0]).parent).with_suffix('.cache') # cached labels - if cache_path.is_file(): - cache, exists = torch.load(cache_path), True # load - #if cache['hash'] != get_hash(self.label_files + self.img_files) or 'version' not in cache: # changed - # cache, exists = self.cache_labels(cache_path, prefix), False # re-cache - else: - cache, exists = self.cache_labels(cache_path, prefix), False # cache - - # Display cache - nf, nm, ne, nc, n = cache.pop('results') # found, missing, empty, corrupted, total - if exists: - d = f"Scanning '{cache_path}' images and labels... {nf} found, {nm} missing, {ne} empty, {nc} corrupted" - tqdm(None, desc=prefix + d, total=n, initial=n) # display cache results - assert nf > 0 or not augment, f'{prefix}No labels in {cache_path}. Can not train without labels. See {help_url}' - - # Read cache - cache.pop('hash') # remove hash - cache.pop('version') # remove version - labels, shapes, self.segments = zip(*cache.values()) - self.labels = list(labels) - self.shapes = np.array(shapes, dtype=np.float64) - self.img_files = list(cache.keys()) # update - self.label_files = img2label_paths(cache.keys()) # update - if single_cls: - for x in self.labels: - x[:, 0] = 0 - - n = len(shapes) # number of images - bi = np.floor(np.arange(n) / batch_size).astype(int) # batch index - nb = bi[-1] + 1 # number of batches - self.batch = bi # batch index of image - self.n = n - self.indices = range(n) - - # Rectangular Training - if self.rect: - # Sort by aspect ratio - s = self.shapes # wh - ar = s[:, 1] / s[:, 0] # aspect ratio - irect = ar.argsort() - self.img_files = [self.img_files[i] for i in irect] - self.label_files = [self.label_files[i] for i in irect] - self.labels = [self.labels[i] for i in irect] - self.shapes = s[irect] # wh - ar = ar[irect] - - # Set training image shapes - shapes = [[1, 1]] * nb - for i in range(nb): - ari = ar[bi == i] - mini, maxi = ari.min(), ari.max() - if maxi < 1: - shapes[i] = [maxi, 1] - elif mini > 1: - shapes[i] = [1, 1 / mini] - - self.batch_shapes = np.ceil(np.array(shapes) * img_size / stride + pad).astype(int) * stride - - # Cache images into memory for faster training (WARNING: large datasets may exceed system RAM) - self.imgs = [None] * n - if cache_images: - if cache_images == 'disk': - self.im_cache_dir = Path(Path(self.img_files[0]).parent.as_posix() + '_npy') - self.img_npy = [self.im_cache_dir / Path(f).with_suffix('.npy').name for f in self.img_files] - self.im_cache_dir.mkdir(parents=True, exist_ok=True) - gb = 0 # Gigabytes of cached images - self.img_hw0, self.img_hw = [None] * n, [None] * n - results = ThreadPool(8).imap(lambda x: load_image(*x), zip(repeat(self), range(n))) - pbar = tqdm(enumerate(results), total=n) - for i, x in pbar: - if cache_images == 'disk': - if not self.img_npy[i].exists(): - np.save(self.img_npy[i].as_posix(), x[0]) - gb += self.img_npy[i].stat().st_size - else: - self.imgs[i], self.img_hw0[i], self.img_hw[i] = x - gb += self.imgs[i].nbytes - pbar.desc = f'{prefix}Caching images ({gb / 1E9:.1f}GB)' - pbar.close() - - def 
cache_labels(self, path=Path('./labels.cache'), prefix=''): - # Cache dataset labels, check images and read shapes - x = {} # dict - nm, nf, ne, nc = 0, 0, 0, 0 # number missing, found, empty, duplicate - pbar = tqdm(zip(self.img_files, self.label_files), desc='Scanning images', total=len(self.img_files)) - for i, (im_file, lb_file) in enumerate(pbar): - try: - # verify images - im = Image.open(im_file) - im.verify() # PIL verify - shape = exif_size(im) # image size - segments = [] # instance segments - assert (shape[0] > 9) & (shape[1] > 9), f'image size {shape} <10 pixels' - assert im.format.lower() in img_formats, f'invalid image format {im.format}' - - # verify labels - if os.path.isfile(lb_file): - nf += 1 # label found - with open(lb_file, 'r') as f: - l = [x.split() for x in f.read().strip().splitlines()] - if any([len(x) > 8 for x in l]): # is segment - classes = np.array([x[0] for x in l], dtype=np.float32) - segments = [np.array(x[1:], dtype=np.float32).reshape(-1, 2) for x in l] # (cls, xy1...) - l = np.concatenate((classes.reshape(-1, 1), segments2boxes(segments)), 1) # (cls, xywh) - l = np.array(l, dtype=np.float32) - if len(l): - assert l.shape[1] == 5, 'labels require 5 columns each' - assert (l >= 0).all(), 'negative labels' - assert (l[:, 1:] <= 1).all(), 'non-normalized or out of bounds coordinate labels' - assert np.unique(l, axis=0).shape[0] == l.shape[0], 'duplicate labels' - else: - ne += 1 # label empty - l = np.zeros((0, 5), dtype=np.float32) - else: - nm += 1 # label missing - l = np.zeros((0, 5), dtype=np.float32) - x[im_file] = [l, shape, segments] - except Exception as e: - nc += 1 - print(f'{prefix}WARNING: Ignoring corrupted image and/or label {im_file}: {e}') - - pbar.desc = f"{prefix}Scanning '{path.parent / path.stem}' images and labels... " \ - f"{nf} found, {nm} missing, {ne} empty, {nc} corrupted" - pbar.close() - - if nf == 0: - print(f'{prefix}WARNING: No labels found in {path}. 
See {help_url}') - - x['hash'] = get_hash(self.label_files + self.img_files) - x['results'] = nf, nm, ne, nc, i + 1 - x['version'] = 0.1 # cache version - torch.save(x, path) # save for next time - logging.info(f'{prefix}New cache created: {path}') - return x - - def __len__(self): - return len(self.img_files) - - # def __iter__(self): - # self.count = -1 - # print('ran dataset iter') - # #self.shuffled_vector = np.random.permutation(self.nF) if self.augment else np.arange(self.nF) - # return self - - def __getitem__(self, index): - index = self.indices[index] # linear, shuffled, or image_weights - - hyp = self.hyp - mosaic = self.mosaic and random.random() < hyp['mosaic'] - if mosaic: - # Load mosaic - if random.random() < 0.8: - img, labels = load_mosaic(self, index) - else: - img, labels = load_mosaic9(self, index) - shapes = None - - # MixUp https://arxiv.org/pdf/1710.09412.pdf - if random.random() < hyp['mixup']: - if random.random() < 0.8: - img2, labels2 = load_mosaic(self, random.randint(0, len(self.labels) - 1)) - else: - img2, labels2 = load_mosaic9(self, random.randint(0, len(self.labels) - 1)) - r = np.random.beta(8.0, 8.0) # mixup ratio, alpha=beta=8.0 - img = (img * r + img2 * (1 - r)).astype(np.uint8) - labels = np.concatenate((labels, labels2), 0) - - else: - # Load image - img, (h0, w0), (h, w) = load_image(self, index) - - # Letterbox - shape = self.batch_shapes[self.batch[index]] if self.rect else self.img_size # final letterboxed shape - img, ratio, pad = letterbox(img, shape, auto=False, scaleup=self.augment) - shapes = (h0, w0), ((h / h0, w / w0), pad) # for COCO mAP rescaling - - labels = self.labels[index].copy() - if labels.size: # normalized xywh to pixel xyxy format - labels[:, 1:] = xywhn2xyxy(labels[:, 1:], ratio[0] * w, ratio[1] * h, padw=pad[0], padh=pad[1]) - - if self.augment: - # Augment imagespace - if not mosaic: - img, labels = random_perspective(img, labels, - degrees=hyp['degrees'], - translate=hyp['translate'], - scale=hyp['scale'], - shear=hyp['shear'], - perspective=hyp['perspective']) - - - #img, labels = self.albumentations(img, labels) - - # Augment colorspace - augment_hsv(img, hgain=hyp['hsv_h'], sgain=hyp['hsv_s'], vgain=hyp['hsv_v']) - - # Apply cutouts - # if random.random() < 0.9: - # labels = cutout(img, labels) - - if random.random() < hyp['paste_in']: - sample_labels, sample_images, sample_masks = [], [], [] - while len(sample_labels) < 30: - sample_labels_, sample_images_, sample_masks_ = load_samples(self, random.randint(0, len(self.labels) - 1)) - sample_labels += sample_labels_ - sample_images += sample_images_ - sample_masks += sample_masks_ - #print(len(sample_labels)) - if len(sample_labels) == 0: - break - labels = pastein(img, labels, sample_labels, sample_images, sample_masks) - - nL = len(labels) # number of labels - if nL: - labels[:, 1:5] = xyxy2xywh(labels[:, 1:5]) # convert xyxy to xywh - labels[:, [2, 4]] /= img.shape[0] # normalized height 0-1 - labels[:, [1, 3]] /= img.shape[1] # normalized width 0-1 - - if self.augment: - # flip up-down - if random.random() < hyp['flipud']: - img = np.flipud(img) - if nL: - labels[:, 2] = 1 - labels[:, 2] - - # flip left-right - if random.random() < hyp['fliplr']: - img = np.fliplr(img) - if nL: - labels[:, 1] = 1 - labels[:, 1] - - labels_out = torch.zeros((nL, 6)) - if nL: - labels_out[:, 1:] = torch.from_numpy(labels) - - # Convert - img = img[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416 - img = np.ascontiguousarray(img) - - return torch.from_numpy(img), labels_out, 
self.img_files[index], shapes - - @staticmethod - def collate_fn(batch): - img, label, path, shapes = zip(*batch) # transposed - for i, l in enumerate(label): - l[:, 0] = i # add target image index for build_targets() - return torch.stack(img, 0), torch.cat(label, 0), path, shapes - - @staticmethod - def collate_fn4(batch): - img, label, path, shapes = zip(*batch) # transposed - n = len(shapes) // 4 - img4, label4, path4, shapes4 = [], [], path[:n], shapes[:n] - - ho = torch.tensor([[0., 0, 0, 1, 0, 0]]) - wo = torch.tensor([[0., 0, 1, 0, 0, 0]]) - s = torch.tensor([[1, 1, .5, .5, .5, .5]]) # scale - for i in range(n): # zidane torch.zeros(16,3,720,1280) # BCHW - i *= 4 - if random.random() < 0.5: - im = F.interpolate(img[i].unsqueeze(0).float(), scale_factor=2., mode='bilinear', align_corners=False)[ - 0].type(img[i].type()) - l = label[i] - else: - im = torch.cat((torch.cat((img[i], img[i + 1]), 1), torch.cat((img[i + 2], img[i + 3]), 1)), 2) - l = torch.cat((label[i], label[i + 1] + ho, label[i + 2] + wo, label[i + 3] + ho + wo), 0) * s - img4.append(im) - label4.append(l) - - for i, l in enumerate(label4): - l[:, 0] = i # add target image index for build_targets() - - return torch.stack(img4, 0), torch.cat(label4, 0), path4, shapes4 - - -# Ancillary functions -------------------------------------------------------------------------------------------------- -def load_image(self, index): - # loads 1 image from dataset, returns img, original hw, resized hw - img = self.imgs[index] - if img is None: # not cached - path = self.img_files[index] - img = cv2.imread(path) # BGR - assert img is not None, 'Image Not Found ' + path - h0, w0 = img.shape[:2] # orig hw - r = self.img_size / max(h0, w0) # resize image to img_size - if r != 1: # always resize down, only resize up if training with augmentation - interp = cv2.INTER_AREA if r < 1 and not self.augment else cv2.INTER_LINEAR - img = cv2.resize(img, (int(w0 * r), int(h0 * r)), interpolation=interp) - return img, (h0, w0), img.shape[:2] # img, hw_original, hw_resized - else: - return self.imgs[index], self.img_hw0[index], self.img_hw[index] # img, hw_original, hw_resized - - -def augment_hsv(img, hgain=0.5, sgain=0.5, vgain=0.5): - r = np.random.uniform(-1, 1, 3) * [hgain, sgain, vgain] + 1 # random gains - hue, sat, val = cv2.split(cv2.cvtColor(img, cv2.COLOR_BGR2HSV)) - dtype = img.dtype # uint8 - - x = np.arange(0, 256, dtype=np.int16) - lut_hue = ((x * r[0]) % 180).astype(dtype) - lut_sat = np.clip(x * r[1], 0, 255).astype(dtype) - lut_val = np.clip(x * r[2], 0, 255).astype(dtype) - - img_hsv = cv2.merge((cv2.LUT(hue, lut_hue), cv2.LUT(sat, lut_sat), cv2.LUT(val, lut_val))).astype(dtype) - cv2.cvtColor(img_hsv, cv2.COLOR_HSV2BGR, dst=img) # no return needed - - -def hist_equalize(img, clahe=True, bgr=False): - # Equalize histogram on BGR image 'img' with img.shape(n,m,3) and range 0-255 - yuv = cv2.cvtColor(img, cv2.COLOR_BGR2YUV if bgr else cv2.COLOR_RGB2YUV) - if clahe: - c = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)) - yuv[:, :, 0] = c.apply(yuv[:, :, 0]) - else: - yuv[:, :, 0] = cv2.equalizeHist(yuv[:, :, 0]) # equalize Y channel histogram - return cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR if bgr else cv2.COLOR_YUV2RGB) # convert YUV image to RGB - - -def load_mosaic(self, index): - # loads images in a 4-mosaic - - labels4, segments4 = [], [] - s = self.img_size - yc, xc = [int(random.uniform(-x, 2 * s + x)) for x in self.mosaic_border] # mosaic center x, y - indices = [index] + random.choices(self.indices, k=3) # 3 additional image 
indices - for i, index in enumerate(indices): - # Load image - img, _, (h, w) = load_image(self, index) - - # place img in img4 - if i == 0: # top left - img4 = np.full((s * 2, s * 2, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles - x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc # xmin, ymin, xmax, ymax (large image) - x1b, y1b, x2b, y2b = w - (x2a - x1a), h - (y2a - y1a), w, h # xmin, ymin, xmax, ymax (small image) - elif i == 1: # top right - x1a, y1a, x2a, y2a = xc, max(yc - h, 0), min(xc + w, s * 2), yc - x1b, y1b, x2b, y2b = 0, h - (y2a - y1a), min(w, x2a - x1a), h - elif i == 2: # bottom left - x1a, y1a, x2a, y2a = max(xc - w, 0), yc, xc, min(s * 2, yc + h) - x1b, y1b, x2b, y2b = w - (x2a - x1a), 0, w, min(y2a - y1a, h) - elif i == 3: # bottom right - x1a, y1a, x2a, y2a = xc, yc, min(xc + w, s * 2), min(s * 2, yc + h) - x1b, y1b, x2b, y2b = 0, 0, min(w, x2a - x1a), min(y2a - y1a, h) - - img4[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax] - padw = x1a - x1b - padh = y1a - y1b - - # Labels - labels, segments = self.labels[index].copy(), self.segments[index].copy() - if labels.size: - labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padw, padh) # normalized xywh to pixel xyxy format - segments = [xyn2xy(x, w, h, padw, padh) for x in segments] - labels4.append(labels) - segments4.extend(segments) - - # Concat/clip labels - labels4 = np.concatenate(labels4, 0) - for x in (labels4[:, 1:], *segments4): - np.clip(x, 0, 2 * s, out=x) # clip when using random_perspective() - # img4, labels4 = replicate(img4, labels4) # replicate - - # Augment - #img4, labels4, segments4 = remove_background(img4, labels4, segments4) - #sample_segments(img4, labels4, segments4, probability=self.hyp['copy_paste']) - img4, labels4, segments4 = copy_paste(img4, labels4, segments4, probability=self.hyp['copy_paste']) - img4, labels4 = random_perspective(img4, labels4, segments4, - degrees=self.hyp['degrees'], - translate=self.hyp['translate'], - scale=self.hyp['scale'], - shear=self.hyp['shear'], - perspective=self.hyp['perspective'], - border=self.mosaic_border) # border to remove - - return img4, labels4 - - -def load_mosaic9(self, index): - # loads images in a 9-mosaic - - labels9, segments9 = [], [] - s = self.img_size - indices = [index] + random.choices(self.indices, k=8) # 8 additional image indices - for i, index in enumerate(indices): - # Load image - img, _, (h, w) = load_image(self, index) - - # place img in img9 - if i == 0: # center - img9 = np.full((s * 3, s * 3, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles - h0, w0 = h, w - c = s, s, s + w, s + h # xmin, ymin, xmax, ymax (base) coordinates - elif i == 1: # top - c = s, s - h, s + w, s - elif i == 2: # top right - c = s + wp, s - h, s + wp + w, s - elif i == 3: # right - c = s + w0, s, s + w0 + w, s + h - elif i == 4: # bottom right - c = s + w0, s + hp, s + w0 + w, s + hp + h - elif i == 5: # bottom - c = s + w0 - w, s + h0, s + w0, s + h0 + h - elif i == 6: # bottom left - c = s + w0 - wp - w, s + h0, s + w0 - wp, s + h0 + h - elif i == 7: # left - c = s - w, s + h0 - h, s, s + h0 - elif i == 8: # top left - c = s - w, s + h0 - hp - h, s, s + h0 - hp - - padx, pady = c[:2] - x1, y1, x2, y2 = [max(x, 0) for x in c] # allocate coords - - # Labels - labels, segments = self.labels[index].copy(), self.segments[index].copy() - if labels.size: - labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padx, pady) # normalized xywh to pixel xyxy format - segments = [xyn2xy(x, w, h, padx, pady) for x 
in segments] - labels9.append(labels) - segments9.extend(segments) - - # Image - img9[y1:y2, x1:x2] = img[y1 - pady:, x1 - padx:] # img9[ymin:ymax, xmin:xmax] - hp, wp = h, w # height, width previous - - # Offset - yc, xc = [int(random.uniform(0, s)) for _ in self.mosaic_border] # mosaic center x, y - img9 = img9[yc:yc + 2 * s, xc:xc + 2 * s] - - # Concat/clip labels - labels9 = np.concatenate(labels9, 0) - labels9[:, [1, 3]] -= xc - labels9[:, [2, 4]] -= yc - c = np.array([xc, yc]) # centers - segments9 = [x - c for x in segments9] - - for x in (labels9[:, 1:], *segments9): - np.clip(x, 0, 2 * s, out=x) # clip when using random_perspective() - # img9, labels9 = replicate(img9, labels9) # replicate - - # Augment - #img9, labels9, segments9 = remove_background(img9, labels9, segments9) - img9, labels9, segments9 = copy_paste(img9, labels9, segments9, probability=self.hyp['copy_paste']) - img9, labels9 = random_perspective(img9, labels9, segments9, - degrees=self.hyp['degrees'], - translate=self.hyp['translate'], - scale=self.hyp['scale'], - shear=self.hyp['shear'], - perspective=self.hyp['perspective'], - border=self.mosaic_border) # border to remove - - return img9, labels9 - - -def load_samples(self, index): - # loads images in a 4-mosaic - - labels4, segments4 = [], [] - s = self.img_size - yc, xc = [int(random.uniform(-x, 2 * s + x)) for x in self.mosaic_border] # mosaic center x, y - indices = [index] + random.choices(self.indices, k=3) # 3 additional image indices - for i, index in enumerate(indices): - # Load image - img, _, (h, w) = load_image(self, index) - - # place img in img4 - if i == 0: # top left - img4 = np.full((s * 2, s * 2, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles - x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc # xmin, ymin, xmax, ymax (large image) - x1b, y1b, x2b, y2b = w - (x2a - x1a), h - (y2a - y1a), w, h # xmin, ymin, xmax, ymax (small image) - elif i == 1: # top right - x1a, y1a, x2a, y2a = xc, max(yc - h, 0), min(xc + w, s * 2), yc - x1b, y1b, x2b, y2b = 0, h - (y2a - y1a), min(w, x2a - x1a), h - elif i == 2: # bottom left - x1a, y1a, x2a, y2a = max(xc - w, 0), yc, xc, min(s * 2, yc + h) - x1b, y1b, x2b, y2b = w - (x2a - x1a), 0, w, min(y2a - y1a, h) - elif i == 3: # bottom right - x1a, y1a, x2a, y2a = xc, yc, min(xc + w, s * 2), min(s * 2, yc + h) - x1b, y1b, x2b, y2b = 0, 0, min(w, x2a - x1a), min(y2a - y1a, h) - - img4[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax] - padw = x1a - x1b - padh = y1a - y1b - - # Labels - labels, segments = self.labels[index].copy(), self.segments[index].copy() - if labels.size: - labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padw, padh) # normalized xywh to pixel xyxy format - segments = [xyn2xy(x, w, h, padw, padh) for x in segments] - labels4.append(labels) - segments4.extend(segments) - - # Concat/clip labels - labels4 = np.concatenate(labels4, 0) - for x in (labels4[:, 1:], *segments4): - np.clip(x, 0, 2 * s, out=x) # clip when using random_perspective() - # img4, labels4 = replicate(img4, labels4) # replicate - - # Augment - #img4, labels4, segments4 = remove_background(img4, labels4, segments4) - sample_labels, sample_images, sample_masks = sample_segments(img4, labels4, segments4, probability=0.5) - - return sample_labels, sample_images, sample_masks - - -def copy_paste(img, labels, segments, probability=0.5): - # Implement Copy-Paste augmentation https://arxiv.org/abs/2012.07177, labels as nx5 np.array(cls, xyxy) - n = len(segments) - if probability and n: - h, w, 
c = img.shape # height, width, channels - im_new = np.zeros(img.shape, np.uint8) - for j in random.sample(range(n), k=round(probability * n)): - l, s = labels[j], segments[j] - box = w - l[3], l[2], w - l[1], l[4] - ioa = bbox_ioa(box, labels[:, 1:5]) # intersection over area - if (ioa < 0.30).all(): # allow 30% obscuration of existing labels - labels = np.concatenate((labels, [[l[0], *box]]), 0) - segments.append(np.concatenate((w - s[:, 0:1], s[:, 1:2]), 1)) - cv2.drawContours(im_new, [segments[j].astype(np.int32)], -1, (255, 255, 255), cv2.FILLED) - - result = cv2.bitwise_and(src1=img, src2=im_new) - result = cv2.flip(result, 1) # augment segments (flip left-right) - i = result > 0 # pixels to replace - # i[:, :] = result.max(2).reshape(h, w, 1) # act over ch - img[i] = result[i] # cv2.imwrite('debug.jpg', img) # debug - - return img, labels, segments - - -def remove_background(img, labels, segments): - # Implement Copy-Paste augmentation https://arxiv.org/abs/2012.07177, labels as nx5 np.array(cls, xyxy) - n = len(segments) - h, w, c = img.shape # height, width, channels - im_new = np.zeros(img.shape, np.uint8) - img_new = np.ones(img.shape, np.uint8) * 114 - for j in range(n): - cv2.drawContours(im_new, [segments[j].astype(np.int32)], -1, (255, 255, 255), cv2.FILLED) - - result = cv2.bitwise_and(src1=img, src2=im_new) - - i = result > 0 # pixels to replace - img_new[i] = result[i] # cv2.imwrite('debug.jpg', img) # debug - - return img_new, labels, segments - - -def sample_segments(img, labels, segments, probability=0.5): - # Implement Copy-Paste augmentation https://arxiv.org/abs/2012.07177, labels as nx5 np.array(cls, xyxy) - n = len(segments) - sample_labels = [] - sample_images = [] - sample_masks = [] - if probability and n: - h, w, c = img.shape # height, width, channels - for j in random.sample(range(n), k=round(probability * n)): - l, s = labels[j], segments[j] - box = l[1].astype(int).clip(0,w-1), l[2].astype(int).clip(0,h-1), l[3].astype(int).clip(0,w-1), l[4].astype(int).clip(0,h-1) - - #print(box) - if (box[2] <= box[0]) or (box[3] <= box[1]): - continue - - sample_labels.append(l[0]) - - mask = np.zeros(img.shape, np.uint8) - - cv2.drawContours(mask, [segments[j].astype(np.int32)], -1, (255, 255, 255), cv2.FILLED) - sample_masks.append(mask[box[1]:box[3],box[0]:box[2],:]) - - result = cv2.bitwise_and(src1=img, src2=mask) - i = result > 0 # pixels to replace - mask[i] = result[i] # cv2.imwrite('debug.jpg', img) # debug - #print(box) - sample_images.append(mask[box[1]:box[3],box[0]:box[2],:]) - - return sample_labels, sample_images, sample_masks - - -def replicate(img, labels): - # Replicate labels - h, w = img.shape[:2] - boxes = labels[:, 1:].astype(int) - x1, y1, x2, y2 = boxes.T - s = ((x2 - x1) + (y2 - y1)) / 2 # side length (pixels) - for i in s.argsort()[:round(s.size * 0.5)]: # smallest indices - x1b, y1b, x2b, y2b = boxes[i] - bh, bw = y2b - y1b, x2b - x1b - yc, xc = int(random.uniform(0, h - bh)), int(random.uniform(0, w - bw)) # offset x, y - x1a, y1a, x2a, y2a = [xc, yc, xc + bw, yc + bh] - img[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax] - labels = np.append(labels, [[labels[i, 0], x1a, y1a, x2a, y2a]], axis=0) - - return img, labels - - -def letterbox(img, new_shape=(640, 640), color=(114, 114, 114), auto=True, scaleFill=False, scaleup=True, stride=32): - # Resize and pad image while meeting stride-multiple constraints - shape = img.shape[:2] # current shape [height, width] - if isinstance(new_shape, int): - new_shape = (new_shape, 
new_shape) - - # Scale ratio (new / old) - r = min(new_shape[0] / shape[0], new_shape[1] / shape[1]) - if not scaleup: # only scale down, do not scale up (for better test mAP) - r = min(r, 1.0) - - # Compute padding - ratio = r, r # width, height ratios - new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r)) - dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1] # wh padding - if auto: # minimum rectangle - dw, dh = np.mod(dw, stride), np.mod(dh, stride) # wh padding - elif scaleFill: # stretch - dw, dh = 0.0, 0.0 - new_unpad = (new_shape[1], new_shape[0]) - ratio = new_shape[1] / shape[1], new_shape[0] / shape[0] # width, height ratios - - dw /= 2 # divide padding into 2 sides - dh /= 2 - - if shape[::-1] != new_unpad: # resize - img = cv2.resize(img, new_unpad, interpolation=cv2.INTER_LINEAR) - top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1)) - left, right = int(round(dw - 0.1)), int(round(dw + 0.1)) - img = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color) # add border - return img, ratio, (dw, dh) - - -def random_perspective(img, targets=(), segments=(), degrees=10, translate=.1, scale=.1, shear=10, perspective=0.0, - border=(0, 0)): - # torchvision.transforms.RandomAffine(degrees=(-10, 10), translate=(.1, .1), scale=(.9, 1.1), shear=(-10, 10)) - # targets = [cls, xyxy] - - height = img.shape[0] + border[0] * 2 # shape(h,w,c) - width = img.shape[1] + border[1] * 2 - - # Center - C = np.eye(3) - C[0, 2] = -img.shape[1] / 2 # x translation (pixels) - C[1, 2] = -img.shape[0] / 2 # y translation (pixels) - - # Perspective - P = np.eye(3) - P[2, 0] = random.uniform(-perspective, perspective) # x perspective (about y) - P[2, 1] = random.uniform(-perspective, perspective) # y perspective (about x) - - # Rotation and Scale - R = np.eye(3) - a = random.uniform(-degrees, degrees) - # a += random.choice([-180, -90, 0, 90]) # add 90deg rotations to small rotations - s = random.uniform(1 - scale, 1.1 + scale) - # s = 2 ** random.uniform(-scale, scale) - R[:2] = cv2.getRotationMatrix2D(angle=a, center=(0, 0), scale=s) - - # Shear - S = np.eye(3) - S[0, 1] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # x shear (deg) - S[1, 0] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # y shear (deg) - - # Translation - T = np.eye(3) - T[0, 2] = random.uniform(0.5 - translate, 0.5 + translate) * width # x translation (pixels) - T[1, 2] = random.uniform(0.5 - translate, 0.5 + translate) * height # y translation (pixels) - - # Combined rotation matrix - M = T @ S @ R @ P @ C # order of operations (right to left) is IMPORTANT - if (border[0] != 0) or (border[1] != 0) or (M != np.eye(3)).any(): # image changed - if perspective: - img = cv2.warpPerspective(img, M, dsize=(width, height), borderValue=(114, 114, 114)) - else: # affine - img = cv2.warpAffine(img, M[:2], dsize=(width, height), borderValue=(114, 114, 114)) - - # Visualize - # import matplotlib.pyplot as plt - # ax = plt.subplots(1, 2, figsize=(12, 6))[1].ravel() - # ax[0].imshow(img[:, :, ::-1]) # base - # ax[1].imshow(img2[:, :, ::-1]) # warped - - # Transform label coordinates - n = len(targets) - if n: - use_segments = any(x.any() for x in segments) - new = np.zeros((n, 4)) - if use_segments: # warp segments - segments = resample_segments(segments) # upsample - for i, segment in enumerate(segments): - xy = np.ones((len(segment), 3)) - xy[:, :2] = segment - xy = xy @ M.T # transform - xy = xy[:, :2] / xy[:, 2:3] if perspective else xy[:, :2] # perspective rescale or 
affine - - # clip - new[i] = segment2box(xy, width, height) - - else: # warp boxes - xy = np.ones((n * 4, 3)) - xy[:, :2] = targets[:, [1, 2, 3, 4, 1, 4, 3, 2]].reshape(n * 4, 2) # x1y1, x2y2, x1y2, x2y1 - xy = xy @ M.T # transform - xy = (xy[:, :2] / xy[:, 2:3] if perspective else xy[:, :2]).reshape(n, 8) # perspective rescale or affine - - # create new boxes - x = xy[:, [0, 2, 4, 6]] - y = xy[:, [1, 3, 5, 7]] - new = np.concatenate((x.min(1), y.min(1), x.max(1), y.max(1))).reshape(4, n).T - - # clip - new[:, [0, 2]] = new[:, [0, 2]].clip(0, width) - new[:, [1, 3]] = new[:, [1, 3]].clip(0, height) - - # filter candidates - i = box_candidates(box1=targets[:, 1:5].T * s, box2=new.T, area_thr=0.01 if use_segments else 0.10) - targets = targets[i] - targets[:, 1:5] = new[i] - - return img, targets - - -def box_candidates(box1, box2, wh_thr=2, ar_thr=20, area_thr=0.1, eps=1e-16): # box1(4,n), box2(4,n) - # Compute candidate boxes: box1 before augment, box2 after augment, wh_thr (pixels), aspect_ratio_thr, area_ratio - w1, h1 = box1[2] - box1[0], box1[3] - box1[1] - w2, h2 = box2[2] - box2[0], box2[3] - box2[1] - ar = np.maximum(w2 / (h2 + eps), h2 / (w2 + eps)) # aspect ratio - return (w2 > wh_thr) & (h2 > wh_thr) & (w2 * h2 / (w1 * h1 + eps) > area_thr) & (ar < ar_thr) # candidates - - -def bbox_ioa(box1, box2): - # Returns the intersection over box2 area given box1, box2. box1 is 4, box2 is nx4. boxes are x1y1x2y2 - box2 = box2.transpose() - - # Get the coordinates of bounding boxes - b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3] - b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3] - - # Intersection area - inter_area = (np.minimum(b1_x2, b2_x2) - np.maximum(b1_x1, b2_x1)).clip(0) * \ - (np.minimum(b1_y2, b2_y2) - np.maximum(b1_y1, b2_y1)).clip(0) - - # box2 area - box2_area = (b2_x2 - b2_x1) * (b2_y2 - b2_y1) + 1e-16 - - # Intersection over box2 area - return inter_area / box2_area - - -def cutout(image, labels): - # Applies image cutout augmentation https://arxiv.org/abs/1708.04552 - h, w = image.shape[:2] - - # create random masks - scales = [0.5] * 1 + [0.25] * 2 + [0.125] * 4 + [0.0625] * 8 + [0.03125] * 16 # image size fraction - for s in scales: - mask_h = random.randint(1, int(h * s)) - mask_w = random.randint(1, int(w * s)) - - # box - xmin = max(0, random.randint(0, w) - mask_w // 2) - ymin = max(0, random.randint(0, h) - mask_h // 2) - xmax = min(w, xmin + mask_w) - ymax = min(h, ymin + mask_h) - - # apply random color mask - image[ymin:ymax, xmin:xmax] = [random.randint(64, 191) for _ in range(3)] - - # return unobscured labels - if len(labels) and s > 0.03: - box = np.array([xmin, ymin, xmax, ymax], dtype=np.float32) - ioa = bbox_ioa(box, labels[:, 1:5]) # intersection over area - labels = labels[ioa < 0.60] # remove >60% obscured labels - - return labels - - -def pastein(image, labels, sample_labels, sample_images, sample_masks): - # Applies image cutout augmentation https://arxiv.org/abs/1708.04552 - h, w = image.shape[:2] - - # create random masks - scales = [0.75] * 2 + [0.5] * 4 + [0.25] * 4 + [0.125] * 4 + [0.0625] * 6 # image size fraction - for s in scales: - if random.random() < 0.2: - continue - mask_h = random.randint(1, int(h * s)) - mask_w = random.randint(1, int(w * s)) - - # box - xmin = max(0, random.randint(0, w) - mask_w // 2) - ymin = max(0, random.randint(0, h) - mask_h // 2) - xmax = min(w, xmin + mask_w) - ymax = min(h, ymin + mask_h) - - box = np.array([xmin, ymin, xmax, ymax], dtype=np.float32) - if len(labels): - ioa = 
bbox_ioa(box, labels[:, 1:5]) # intersection over area - else: - ioa = np.zeros(1) - - if (ioa < 0.30).all() and len(sample_labels) and (xmax > xmin+20) and (ymax > ymin+20): # allow 30% obscuration of existing labels - sel_ind = random.randint(0, len(sample_labels)-1) - #print(len(sample_labels)) - #print(sel_ind) - #print((xmax-xmin, ymax-ymin)) - #print(image[ymin:ymax, xmin:xmax].shape) - #print([[sample_labels[sel_ind], *box]]) - #print(labels.shape) - hs, ws, cs = sample_images[sel_ind].shape - r_scale = min((ymax-ymin)/hs, (xmax-xmin)/ws) - r_w = int(ws*r_scale) - r_h = int(hs*r_scale) - - if (r_w > 10) and (r_h > 10): - r_mask = cv2.resize(sample_masks[sel_ind], (r_w, r_h)) - r_image = cv2.resize(sample_images[sel_ind], (r_w, r_h)) - temp_crop = image[ymin:ymin+r_h, xmin:xmin+r_w] - m_ind = r_mask > 0 - if m_ind.astype(np.int32).sum() > 60: - temp_crop[m_ind] = r_image[m_ind] - #print(sample_labels[sel_ind]) - #print(sample_images[sel_ind].shape) - #print(temp_crop.shape) - box = np.array([xmin, ymin, xmin+r_w, ymin+r_h], dtype=np.float32) - if len(labels): - labels = np.concatenate((labels, [[sample_labels[sel_ind], *box]]), 0) - else: - labels = np.array([[sample_labels[sel_ind], *box]]) - - image[ymin:ymin+r_h, xmin:xmin+r_w] = temp_crop - - return labels - -class Albumentations: - # YOLOv5 Albumentations class (optional, only used if package is installed) - def __init__(self): - self.transform = None - import albumentations as A - - self.transform = A.Compose([ - A.CLAHE(p=0.01), - A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.2, p=0.01), - A.RandomGamma(gamma_limit=[80, 120], p=0.01), - A.Blur(p=0.01), - A.MedianBlur(p=0.01), - A.ToGray(p=0.01), - A.ImageCompression(quality_lower=75, p=0.01),], - bbox_params=A.BboxParams(format='pascal_voc', label_fields=['class_labels'])) - - #logging.info(colorstr('albumentations: ') + ', '.join(f'{x}' for x in self.transform.transforms if x.p)) - - def __call__(self, im, labels, p=1.0): - if self.transform and random.random() < p: - new = self.transform(image=im, bboxes=labels[:, 1:], class_labels=labels[:, 0]) # transformed - im, labels = new['image'], np.array([[c, *b] for c, b in zip(new['class_labels'], new['bboxes'])]) - return im, labels - - -def create_folder(path='./new'): - # Create folder - if os.path.exists(path): - shutil.rmtree(path) # delete output folder - os.makedirs(path) # make new output folder - - -def flatten_recursive(path='../coco'): - # Flatten a recursive directory by bringing all files to top level - new_path = Path(path + '_flat') - create_folder(new_path) - for file in tqdm(glob.glob(str(Path(path)) + '/**/*.*', recursive=True)): - shutil.copyfile(file, new_path / Path(file).name) - - -def extract_boxes(path='../coco/'): # from utils.datasets import *; extract_boxes('../coco128') - # Convert detection dataset into classification dataset, with one directory per class - - path = Path(path) # images dir - shutil.rmtree(path / 'classifier') if (path / 'classifier').is_dir() else None # remove existing - files = list(path.rglob('*.*')) - n = len(files) # number of files - for im_file in tqdm(files, total=n): - if im_file.suffix[1:] in img_formats: - # image - im = cv2.imread(str(im_file))[..., ::-1] # BGR to RGB - h, w = im.shape[:2] - - # labels - lb_file = Path(img2label_paths([str(im_file)])[0]) - if Path(lb_file).exists(): - with open(lb_file, 'r') as f: - lb = np.array([x.split() for x in f.read().strip().splitlines()], dtype=np.float32) # labels - - for j, x in enumerate(lb): - c = int(x[0]) 
# class - f = (path / 'classifier') / f'{c}' / f'{path.stem}_{im_file.stem}_{j}.jpg' # new filename - if not f.parent.is_dir(): - f.parent.mkdir(parents=True) - - b = x[1:] * [w, h, w, h] # box - # b[2:] = b[2:].max() # rectangle to square - b[2:] = b[2:] * 1.2 + 3 # pad - b = xywh2xyxy(b.reshape(-1, 4)).ravel().astype(np.int) - - b[[0, 2]] = np.clip(b[[0, 2]], 0, w) # clip boxes outside of image - b[[1, 3]] = np.clip(b[[1, 3]], 0, h) - assert cv2.imwrite(str(f), im[b[1]:b[3], b[0]:b[2]]), f'box failure in {f}' - - -def autosplit(path='../coco', weights=(0.9, 0.1, 0.0), annotated_only=False): - """ Autosplit a dataset into train/val/test splits and save path/autosplit_*.txt files - Usage: from utils.datasets import *; autosplit('../coco') - Arguments - path: Path to images directory - weights: Train, val, test weights (list) - annotated_only: Only use images with an annotated txt file - """ - path = Path(path) # images dir - files = sum([list(path.rglob(f"*.{img_ext}")) for img_ext in img_formats], []) # image files only - n = len(files) # number of files - indices = random.choices([0, 1, 2], weights=weights, k=n) # assign each image to a split - - txt = ['autosplit_train.txt', 'autosplit_val.txt', 'autosplit_test.txt'] # 3 txt files - [(path / x).unlink() for x in txt if (path / x).exists()] # remove existing - - print(f'Autosplitting images from {path}' + ', using *.txt labeled images only' * annotated_only) - for i, img in tqdm(zip(indices, files), total=n): - if not annotated_only or Path(img2label_paths([str(img)])[0]).exists(): # check label - with open(path / txt[i], 'a') as f: - f.write(str(img) + '\n') # add image to txt file - - -def load_segmentations(self, index): - key = '/work/handsomejw66/coco17/' + self.img_files[index] - #print(key) - # /work/handsomejw66/coco17/ - return self.segs[key] diff --git a/spaces/Francesco/FairytaleDJ/names.py b/spaces/Francesco/FairytaleDJ/names.py deleted file mode 100644 index 7c4f4fb37b4623449c281d32a8225dd0732fb4a5..0000000000000000000000000000000000000000 --- a/spaces/Francesco/FairytaleDJ/names.py +++ /dev/null @@ -1,3 +0,0 @@ -MODEL_ID = "text-embedding-ada-002" -# DATASET_ID = "disney-lyrics" -DATASET_ID = "disney-lyrics-emotions" diff --git a/spaces/FrankZxShen/vits-fast-finetuning-pcr/monotonic_align/__init__.py b/spaces/FrankZxShen/vits-fast-finetuning-pcr/monotonic_align/__init__.py deleted file mode 100644 index e97eecc595dd3bd97d0104ec62799e2e5efea57c..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/vits-fast-finetuning-pcr/monotonic_align/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -from numpy import zeros, int32, float32 -from torch import from_numpy - -from .core import maximum_path_jit - - -def maximum_path(neg_cent, mask): - """ numba optimized version. 
- neg_cent: [b, t_t, t_s] - mask: [b, t_t, t_s] - """ - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(float32) - path = zeros(neg_cent.shape, dtype=int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32) - maximum_path_jit(path, neg_cent, t_t_max, t_s_max) - return from_numpy(path).to(device=device, dtype=dtype) diff --git a/spaces/FrankZxShen/vits-fast-finetuning-umamusume/mel_processing.py b/spaces/FrankZxShen/vits-fast-finetuning-umamusume/mel_processing.py deleted file mode 100644 index 3614150259809983e776d3fed83021decca06a9c..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/vits-fast-finetuning-umamusume/mel_processing.py +++ /dev/null @@ -1,112 +0,0 @@ -import math -import os -import random -import torch -from torch import nn -import torch.nn.functional as F -import torch.utils.data -import numpy as np -import librosa -import librosa.util as librosa_util -from librosa.util import normalize, pad_center, tiny -from scipy.signal import get_window -from scipy.io.wavfile import read -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + 
'_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y.float(), n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/FritsLyneborg/kunstnerfrits/setup.py b/spaces/FritsLyneborg/kunstnerfrits/setup.py deleted file mode 100644 index 7f1a1763ca9cebc7bc16576d353d3284ee5d3c7d..0000000000000000000000000000000000000000 --- a/spaces/FritsLyneborg/kunstnerfrits/setup.py +++ /dev/null @@ -1,4 +0,0 @@ -from setuptools import setup - -if __name__ == "__main__": - setup() diff --git a/spaces/GIZ/vulnerability_analysis/README.md b/spaces/GIZ/vulnerability_analysis/README.md deleted file mode 100644 index fe78c398680247f18a88a088d1687ab66e4d018d..0000000000000000000000000000000000000000 --- a/spaces/GIZ/vulnerability_analysis/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Vulnerable Groups -emoji: 🦀 -colorFrom: blue -colorTo: pink -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/GipAdonimus/Real-Time-Voice-Cloning/vocoder/display.py b/spaces/GipAdonimus/Real-Time-Voice-Cloning/vocoder/display.py deleted file mode 100644 index 956880722a3f05613ebd06f5686b3d8a59642e92..0000000000000000000000000000000000000000 --- a/spaces/GipAdonimus/Real-Time-Voice-Cloning/vocoder/display.py +++ /dev/null @@ -1,120 +0,0 @@ -import matplotlib.pyplot as plt -import time -import numpy as np -import sys - - -def progbar(i, n, size=16): - done = (i * size) // n - bar = '' - for i in range(size): - bar += '█' if i <= done else '░' - return bar - - -def stream(message) : - try: - sys.stdout.write("\r{%s}" % message) - except: - #Remove non-ASCII characters from message - message = ''.join(i for i in message if ord(i)<128) - sys.stdout.write("\r{%s}" % message) - - -def simple_table(item_tuples) : - - border_pattern = '+---------------------------------------' - whitespace = ' ' - - headings, cells, = [], [] - - for item in item_tuples : - - heading, cell = str(item[0]), str(item[1]) - - pad_head = True if len(heading) < len(cell) else False - - pad = abs(len(heading) - len(cell)) - pad = whitespace[:pad] - - pad_left = pad[:len(pad)//2] - pad_right = pad[len(pad)//2:] - - if pad_head : - heading = pad_left + heading + pad_right - else : - cell = pad_left + cell + pad_right - - headings += [heading] - cells += [cell] - - border, head, body = '', '', '' - - for i in range(len(item_tuples)) : - - temp_head = f'| {headings[i]} ' - temp_body = f'| {cells[i]} ' - - border += border_pattern[:len(temp_head)] - head += temp_head - body += temp_body - - if i == len(item_tuples) - 1 : - head += 
'|' - body += '|' - border += '+' - - print(border) - print(head) - print(border) - print(body) - print(border) - print(' ') - - -def time_since(started) : - elapsed = time.time() - started - m = int(elapsed // 60) - s = int(elapsed % 60) - if m >= 60 : - h = int(m // 60) - m = m % 60 - return f'{h}h {m}m {s}s' - else : - return f'{m}m {s}s' - - -def save_attention(attn, path) : - fig = plt.figure(figsize=(12, 6)) - plt.imshow(attn.T, interpolation='nearest', aspect='auto') - fig.savefig(f'{path}.png', bbox_inches='tight') - plt.close(fig) - - -def save_spectrogram(M, path, length=None) : - M = np.flip(M, axis=0) - if length : M = M[:, :length] - fig = plt.figure(figsize=(12, 6)) - plt.imshow(M, interpolation='nearest', aspect='auto') - fig.savefig(f'{path}.png', bbox_inches='tight') - plt.close(fig) - - -def plot(array) : - fig = plt.figure(figsize=(30, 5)) - ax = fig.add_subplot(111) - ax.xaxis.label.set_color('grey') - ax.yaxis.label.set_color('grey') - ax.xaxis.label.set_fontsize(23) - ax.yaxis.label.set_fontsize(23) - ax.tick_params(axis='x', colors='grey', labelsize=23) - ax.tick_params(axis='y', colors='grey', labelsize=23) - plt.plot(array) - - -def plot_spec(M) : - M = np.flip(M, axis=0) - plt.figure(figsize=(18,4)) - plt.imshow(M, interpolation='nearest', aspect='auto') - plt.show() - diff --git a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/README_HG.md b/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/README_HG.md deleted file mode 100644 index 99a0776d1a4669fa8387cc77e162c60084100a92..0000000000000000000000000000000000000000 --- a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/README_HG.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Anything V3.0 -emoji: 🏃 -colorFrom: gray -colorTo: yellow -sdk: gradio -sdk_version: 3.10.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Gradio-Blocks/StyleGAN-NADA/app.py b/spaces/Gradio-Blocks/StyleGAN-NADA/app.py deleted file mode 100644 index 0c3986cf66e5b73f66b55846ad2798b9617b378b..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/StyleGAN-NADA/app.py +++ /dev/null @@ -1,387 +0,0 @@ -import os -import random - -import torch -import gradio as gr - -from e4e.models.psp import pSp -from util import * -from huggingface_hub import hf_hub_download - -import tempfile -from argparse import Namespace -import shutil - -import dlib -import numpy as np -import torchvision.transforms as transforms -from torchvision import utils - -from model.sg2_model import Generator -from generate_videos import generate_frames, video_from_interpolations, project_code_by_edit_name -from styleclip.styleclip_global import project_code_with_styleclip, style_tensor_to_style_dict - -import clip - -model_dir = "models" -os.makedirs(model_dir, exist_ok=True) - -model_repos = {"e4e": ("akhaliq/JoJoGAN_e4e_ffhq_encode", "e4e_ffhq_encode.pt"), - "dlib": ("akhaliq/jojogan_dlib", "shape_predictor_68_face_landmarks.dat"), - "sc_fs3": ("rinong/stylegan-nada-models", "fs3.npy"), - "base": ("akhaliq/jojogan-stylegan2-ffhq-config-f", "stylegan2-ffhq-config-f.pt"), - "sketch": ("rinong/stylegan-nada-models", "sketch.pt"), - "joker": ("rinong/stylegan-nada-models", "joker.pt"), - "pixar": ("rinong/stylegan-nada-models", "pixar.pt"), - "botero": ("rinong/stylegan-nada-models", "botero.pt"), - "white_walker": ("rinong/stylegan-nada-models", "white_walker.pt"), - "zuckerberg": ("rinong/stylegan-nada-models", "zuckerberg.pt"), - "simpson": 
("rinong/stylegan-nada-models", "simpson.pt"), - "ssj": ("rinong/stylegan-nada-models", "ssj.pt"), - "cubism": ("rinong/stylegan-nada-models", "cubism.pt"), - "disney_princess": ("rinong/stylegan-nada-models", "disney_princess.pt"), - "edvard_munch": ("rinong/stylegan-nada-models", "edvard_munch.pt"), - "van_gogh": ("rinong/stylegan-nada-models", "van_gogh.pt"), - "oil": ("rinong/stylegan-nada-models", "oil.pt"), - "rick_morty": ("rinong/stylegan-nada-models", "rick_morty.pt"), - "anime": ("rinong/stylegan-nada-models", "anime.pt"), - "shrek": ("rinong/stylegan-nada-models", "shrek.pt"), - "thanos": ("rinong/stylegan-nada-models", "thanos.pt"), - "ukiyoe": ("rinong/stylegan-nada-models", "ukiyoe.pt"), - "groot": ("rinong/stylegan-nada-models", "groot.pt"), - "witcher": ("rinong/stylegan-nada-models", "witcher.pt"), - "grafitti_on_wall": ("rinong/stylegan-nada-models", "grafitti_on_wall.pt"), - "modernism": ("rinong/stylegan-nada-models", "modernism.pt"), - "marble": ("rinong/stylegan-nada-models", "marble.pt"), - "vintage_comics": ("rinong/stylegan-nada-models", "vintage_comics.pt"), - "crochet": ("rinong/stylegan-nada-models", "crochet.pt"), - "modigliani": ("rinong/stylegan-nada-models", "modigliani.pt"), - "ghibli": ("rinong/stylegan-nada-models", "ghibli.pt"), - "elf": ("rinong/stylegan-nada-models", "elf.pt"), - "zombie": ("rinong/stylegan-nada-models", "zombie.pt"), - "werewolf": ("rinong/stylegan-nada-models", "werewolf.pt"), - "plastic_puppet": ("rinong/stylegan-nada-models", "plastic_puppet.pt"), - "mona_lisa": ("rinong/stylegan-nada-models", "mona_lisa.pt"), - } - -def get_models(): - os.makedirs(model_dir, exist_ok=True) - - model_paths = {} - - for model_name, repo_details in model_repos.items(): - download_path = hf_hub_download(repo_id=repo_details[0], filename=repo_details[1]) - model_paths[model_name] = download_path - - return model_paths - -model_paths = get_models() - -class ImageEditor(object): - def __init__(self): - self.device = "cuda" if torch.cuda.is_available() else "cpu" - - latent_size = 512 - n_mlp = 8 - channel_mult = 2 - model_size = 1024 - - self.generators = {} - - self.model_list = [name for name in model_paths.keys() if name not in ["e4e", "dlib", "sc_fs3"]] - - for model in self.model_list: - g_ema = Generator( - model_size, latent_size, n_mlp, channel_multiplier=channel_mult - ).to(self.device) - - checkpoint = torch.load(model_paths[model], map_location=self.device) - - g_ema.load_state_dict(checkpoint['g_ema']) - - self.generators[model] = g_ema - - self.experiment_args = {"model_path": model_paths["e4e"]} - self.experiment_args["transform"] = transforms.Compose( - [ - transforms.Resize((256, 256)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]), - ] - ) - self.resize_dims = (256, 256) - - model_path = self.experiment_args["model_path"] - - ckpt = torch.load(model_path, map_location="cpu") - opts = ckpt["opts"] - - opts["checkpoint_path"] = model_path - opts = Namespace(**opts) - - self.e4e_net = pSp(opts, self.device) - self.e4e_net.eval() - - self.shape_predictor = dlib.shape_predictor( - model_paths["dlib"] - ) - - self.styleclip_fs3 = torch.from_numpy(np.load(model_paths["sc_fs3"])).to(self.device) - - self.clip_model, _ = clip.load("ViT-B/32", device=self.device) - - print("setup complete") - - def get_style_list(self): - style_list = [] - - for key in self.generators: - style_list.append(key) - - return style_list - - def invert_image(self, input_image): - input_image = self.run_alignment(str(input_image)) - - 
input_image = input_image.resize(self.resize_dims) - - img_transforms = self.experiment_args["transform"] - transformed_image = img_transforms(input_image) - - with torch.no_grad(): - images, latents = self.run_on_batch(transformed_image.unsqueeze(0)) - result_image, latent = images[0], latents[0] - - inverted_latent = latent.unsqueeze(0).unsqueeze(1) - - return inverted_latent - - def get_generators_for_styles(self, output_styles, loop_styles=False): - - if "base" in output_styles: # always start with base if chosen - output_styles.insert(0, output_styles.pop(output_styles.index("base"))) - if loop_styles: - output_styles.append(output_styles[0]) - - return [self.generators[style] for style in output_styles] - - def _pack_edits(func): - def inner(self, - edit_type_choice, - pose_slider, - smile_slider, - gender_slider, - age_slider, - hair_slider, - src_text_styleclip, - tar_text_styleclip, - alpha_styleclip, - beta_styleclip, - *args): - - edit_choices = {"edit_type": edit_type_choice, - "pose": pose_slider, - "smile": smile_slider, - "gender": gender_slider, - "age": age_slider, - "hair_length": hair_slider, - "src_text": src_text_styleclip, - "tar_text": tar_text_styleclip, - "alpha": alpha_styleclip, - "beta": beta_styleclip} - - - return func(self, *args, edit_choices) - - return inner - - def get_target_latents(self, source_latent, edit_choices, generators): - - target_latents = [] - - if edit_choices["edit_type"] == "InterFaceGAN": - np_source_latent = source_latent.squeeze(0).cpu().detach().numpy() - - for attribute_name in ["pose", "smile", "gender", "age", "hair_length"]: - strength = edit_choices[attribute_name] - if strength != 0.0: - projected_code_np = project_code_by_edit_name(np_source_latent, attribute_name, strength) - target_latents.append(torch.from_numpy(projected_code_np).float().to(self.device)) - - elif edit_choices["edit_type"] == "StyleCLIP": - if edit_choices["alpha"] != 0.0: - source_s_dict = generators[0].get_s_code(source_latent, input_is_latent=True)[0] - target_latents.append(project_code_with_styleclip(source_s_dict, - edit_choices["src_text"], - edit_choices["tar_text"], - edit_choices["alpha"], - edit_choices["beta"], - generators[0], - self.styleclip_fs3, - self.clip_model)) - - # if edit type is none or if all sliders were set to 0 - if not target_latents: - target_latents = [source_latent.squeeze(0), ] * max((len(generators) - 1), 1) - - return target_latents - - @_pack_edits - def edit_image(self, input, output_styles, edit_choices): - return self.predict(input, output_styles, edit_choices=edit_choices) - - @_pack_edits - def edit_video(self, input, output_styles, loop_styles, edit_choices): - return self.predict(input, output_styles, generate_video=True, loop_styles=loop_styles, edit_choices=edit_choices) - - def predict( - self, - input, # Input image path - output_styles, # Style checkbox options. 
- generate_video = False, # Generate a video instead of an output image - loop_styles = False, # Loop back to the initial style - edit_choices = None, # Optional dictionary with edit choice arguments - ): - - if edit_choices is None: - edit_choices = {"edit_type": "None"} - - # @title Align image - out_dir = tempfile.mkdtemp() - - inverted_latent = self.invert_image(input) - generators = self.get_generators_for_styles(output_styles, loop_styles) - - target_latents = self.get_target_latents(inverted_latent, edit_choices, generators) - - if not generate_video: - output_paths = [] - - with torch.no_grad(): - for g_ema in generators: - latent_for_gen = random.choice(target_latents) - - if edit_choices["edit_type"] == "StyleCLIP": - latent_for_gen = style_tensor_to_style_dict(latent_for_gen, g_ema) - img, _ = g_ema(latent_for_gen, input_is_s_code=True, input_is_latent=True, truncation=1, randomize_noise=False) - else: - img, _ = g_ema([latent_for_gen], input_is_latent=True, truncation=1, randomize_noise=False) - - output_path = os.path.join(out_dir, f"out_{len(output_paths)}.jpg") - utils.save_image(img, output_path, nrow=1, normalize=True, range=(-1, 1)) - - output_paths.append(output_path) - - return output_paths - - return self.generate_vid(generators, inverted_latent, target_latents, out_dir) - - def generate_vid(self, generators, source_latent, target_latents, out_dir): - - fps = 24 - - with tempfile.TemporaryDirectory() as dirpath: - generate_frames(source_latent, target_latents, generators, dirpath) - video_from_interpolations(fps, dirpath) - - gen_path = os.path.join(dirpath, "out.mp4") - out_path = os.path.join(out_dir, "out.mp4") - - shutil.copy2(gen_path, out_path) - - return out_path - - def run_alignment(self, image_path): - aligned_image = align_face(filepath=image_path, predictor=self.shape_predictor) - print("Aligned image has shape: {}".format(aligned_image.size)) - return aligned_image - - def run_on_batch(self, inputs): - images, latents = self.e4e_net( - inputs.to(self.device).float(), randomize_noise=False, return_latents=True - ) - return images, latents - -editor = ImageEditor() - -blocks = gr.Blocks() - -with blocks: - gr.Markdown("

      StyleGAN-NADA

      ") - gr.Markdown( - "

      Inference demo for StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators (SIGGRAPH 2022).

      " - ) - gr.Markdown( - "

      Usage

      Upload an image of your face, pick your desired output styles, and apply StyleGAN-based editing.
      " - "
      Choose the edit image tab to create static images in all chosen styles. Choose the video tab to interpolate between all chosen styles.
      (To make it easier on the servers, we've limited video length. If you add too many styles (we recommend no more than 3!), they'll pass in the blink of an eye! 🤗)
      " - ) - gr.Markdown( - "For more information about the paper and code for training your own models (with text or images), please visit our project page or the official repository." - ) - - gr.Markdown("

      A note on social impact

      This model relies on StyleGAN and CLIP, both of which are prone to biases inherited from their training data and their architecture. These may include (but are not limited to) poor representation of minorities or the perpetuation of societal biases, such as gender norms. In particular, StyleGAN editing may induce undesired changes in skin tones. Moreover, generative models can be, and have been, used to create deep fake imagery, which may assist in the spread of propaganda. However, tools are available for identifying StyleGAN-generated imagery, and any 'realistic' results produced by this model should be easily identifiable through such tools.
      ") - - with gr.Row(): - with gr.Column(): - input_img = gr.inputs.Image(type="filepath", label="Input image") - - with gr.Column(): - style_choice = gr.inputs.CheckboxGroup(choices=editor.get_style_list(), type="value", label="Choose your styles!") - - editing_type_choice = gr.Radio(choices=["None", "InterFaceGAN", "StyleCLIP"], label="Choose latent space editing option. For InterFaceGAN and StyleCLIP, set the options below:") - - with gr.Row(): - with gr.Column(): - with gr.Tabs(): - with gr.TabItem("Edit Images"): - img_button = gr.Button("Edit Image") - img_output = gr.Gallery(label="Output Images") - - with gr.TabItem("Create Video"): - with gr.Row(): - vid_button = gr.Button("Generate Video") - loop_styles = gr.inputs.Checkbox(default=True, label="Loop video back to the initial style?") - with gr.Row(): - with gr.Column(): - gr.Markdown("Warning: Videos generation requires the synthesis of hundreds of frames and is expected to take several minutes.") - gr.Markdown("To reduce queue times, we significantly reduced the number of video frames. Using more than 3 styles will further reduce the frames per style, leading to quicker transitions. For better control, we recommend cloning the gradio app, adjusting num_alphas in generate_videos.py, and running the code locally.") - vid_output = gr.outputs.Video(label="Output Video") - - with gr.Column(): - with gr.Tabs(): - with gr.TabItem("InterFaceGAN Editing Options"): - gr.Markdown("Move the sliders to make the chosen attribute stronger (e.g. the person older) or leave at 0 to disable editing.") - gr.Markdown("If multiple options are provided, they will be used randomly between images (or sequentially for a video), not together.") - gr.Markdown("Please note that some directions may be entangled. For example, hair length adjustments are likely to also modify the perceived gender.") - - gr.Markdown("For more information about InterFaceGAN, please visit the official repository") - - pose_slider = gr.Slider(label="Pose", minimum=-1, maximum=1, value=0, step=0.05) - smile_slider = gr.Slider(label="Smile", minimum=-1, maximum=1, value=0, step=0.05) - gender_slider = gr.Slider(label="Perceived Gender", minimum=-1, maximum=1, value=0, step=0.05) - age_slider = gr.Slider(label="Age", minimum=-1, maximum=1, value=0, step=0.05) - hair_slider = gr.Slider(label="Hair Length", minimum=-1, maximum=1, value=0, step=0.05) - - ig_edit_choices = [pose_slider, smile_slider, gender_slider, age_slider, hair_slider] - - with gr.TabItem("StyleCLIP Editing Options"): - gr.Markdown("Choose source and target descriptors, such as 'face with hair' to 'face with curly hair'") - gr.Markdown("Editing strength controls the magnitude of change. Disentanglement thresholds limits the number of channels the network can modify, reducing possible leak into other attributes. Setting the threshold too high may lead to no available channels. 
If you see an error, lower the threshold and try again.") - gr.Markdown("For more information about StyleCLIP, please visit the official repository") - - src_text_styleclip = gr.Textbox(label="Source text") - tar_text_styleclip = gr.Textbox(label="Target text") - - alpha_styleclip = gr.Slider(label="Edit strength", minimum=-10, maximum=10, value=1, step=0.1) - beta_styleclip = gr.Slider(label="Disentanglement Threshold", minimum=0.08, maximum=0.3, value=0.14, step=0.01) - - sc_edit_choices = [src_text_styleclip, tar_text_styleclip, alpha_styleclip, beta_styleclip] - - edit_inputs = [editing_type_choice] + ig_edit_choices + sc_edit_choices - img_button.click(fn=editor.edit_image, inputs=edit_inputs + [input_img, style_choice], outputs=img_output) - vid_button.click(fn=editor.edit_video, inputs=edit_inputs + [input_img, style_choice, loop_styles], outputs=vid_output) - - article = "

      StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators | Project Page | Code

      visitor badge
      " - gr.Markdown(article) - -blocks.launch(enable_queue=True) \ No newline at end of file diff --git a/spaces/Gradio-Blocks/pokemon-move-generator-app/app.py b/spaces/Gradio-Blocks/pokemon-move-generator-app/app.py deleted file mode 100644 index 0b2807e302806e6969f9779cea60408b44932bae..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/pokemon-move-generator-app/app.py +++ /dev/null @@ -1,241 +0,0 @@ -import gradio as gr -from transformers import AutoTokenizer -from transformers import pipeline -from utils import format_moves -import pandas as pd -import tensorflow as tf -import json - -model_checkpoint = "distilgpt2" - -tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) - -generate = pipeline("text-generation", - model="arjunpatel/distilgpt2-finetuned-pokemon-moves", - tokenizer=tokenizer) -# load in the model -seed_text = "This move is called " - -tf.random.set_seed(0) - - -def update_history(df, move_name, move_desc, generation, parameters): - # get rid of first seed phrase - - move_desc = move_desc.split("\n")[1:] - move_desc = "\n".join(move_desc) - new_row = [{"Move Name": move_name, - "Move Description": move_desc, - "Generation Type": generation, - "Parameters": json.dumps(parameters)}] - return pd.concat([df, pd.DataFrame(new_row)]) - - -def create_move(move, history): - generated_move = format_moves(generate(seed_text + move, num_return_sequences=1)) - return generated_move, update_history(history, move, generated_move, - "baseline", "None") - - -def create_greedy_search_move(move, history): - generated_move = format_moves(generate(seed_text + move, do_sample=False)) - return generated_move, update_history(history, move, generated_move, - "greedy", "None") - - -def create_beam_search_move(move, num_beams, history): - generated_move = format_moves(generate(seed_text + move, num_beams=num_beams, - num_return_sequences=1, - do_sample=False, early_stopping=True)) - return generated_move, update_history(history, move, generated_move, - "beam", {"num_beams": 2}) - - -def create_sampling_search_move(move, do_sample, temperature, history): - generated_move = format_moves(generate(seed_text + move, do_sample=do_sample, temperature=float(temperature), - num_return_sequences=1, topk=0)) - return generated_move, update_history(history, move, generated_move, - "temperature", {"do_sample": do_sample, - "temperature": temperature}) - - -def create_top_search_move(move, topk, topp, history): - generated_move = format_moves(generate( - seed_text + move, - do_sample=True, - num_return_sequences=1, - top_k=topk, - top_p=topp, - force_word_ids=tokenizer.encode("The user", return_tensors='tf'))) - return generated_move, update_history(history, move, generated_move, - "top", {"top k": topk, - "top p": topp}) - - -demo = gr.Blocks() - -with demo: - gr.Markdown("

      What's that Pokemon Move?

      ") - gr.Markdown( - """This Gradio demo allows you to generate Pokemon Move descriptions given a name, and learn more about text - decoding methods in the process! Each tab aims to explain each generation methodology available for the - model. The dataframe below allows you to keep track of each move generated, to compare!""") - gr.Markdown("

      How does text generation work?

      ") - gr.Markdown("""Roughly, text generation models accept an input sequence of words (or parts of words, - known as tokens). - These models then output a corresponding set of words or tokens. Given the input, the model - estimates the probability of another possible word or token appearing right after the given sequence. In - other words, the model estimates conditional probabilities and ranks them in order to generate sequences - . """) - gr.Markdown("Enter a two to three word Pokemon Move name of your imagination below, with each word capitalized!") - gr.Markdown("

      Move Generation

      ") - with gr.Tabs(): - with gr.TabItem("Standard Generation"): - gr.Markdown( - """The default parameters for distilgpt2 work well to generate moves. Use this tab as - a baseline for your experiments.""") - with gr.Row(): - text_input_baseline = gr.Textbox(label="Move", - placeholder="Type a two or three word move name here! Try \"Wonder " - "Shield\"!") - text_output_baseline = gr.Textbox(label="Move Description", - placeholder="Leave this blank!") - text_button_baseline = gr.Button("Create my move!") - with gr.TabItem("Greedy Search Decoding"): - gr.Markdown(""" - - Greedy search is a decoding method that relies on finding words that has the highest estimated - probability of following the sequence thus far. - - Therefore, the model \"greedily\" grabs the highest - probability word and continues generating the sentence. - - This has the side effect of finding sequences that are reasonable, but avoids sequences that are - less probable but way more interesting. - Try the other decoding methods to get sentences with more variety! - """) - with gr.Row(): - text_input_greedy = gr.Textbox(label="Move") - text_output_greedy = gr.Textbox(label="Move Description") - text_button_greedy = gr.Button("Create my move!") - with gr.TabItem("Beam Search"): - gr.Markdown("""Beam search is an improvement on Greedy Search. Instead of directly grabbing the word that - maximizes probability, we conduct a search with B number of candidates. We then try to find the next word - that would most likely follow each beam, and we grab the top B candidates of that search. This may - eliminate one of the original beams we started with, and that's okay! That is how the algorithm decides - on an optimal candidate. Eventually, the beam sequence terminate or are eliminated due to being too - improbable. - - Increasing the number of beams will increase model generation time, but also result in a more thorough - search. Decreasing the number of beams will decrease decoding time, but it may not find an optimal - sentence. - - Play around with the num_beams parameter to experiment! """ - ) - with gr.Row(): - num_beams = gr.Slider(minimum=2, maximum=10, value=2, step=1, - label="Number of Beams") - text_input_beam = gr.Textbox(label="Move") - text_output_beam = gr.Textbox(label="Move Description") - text_button_beam = gr.Button("Create my move!") - with gr.TabItem("Sampling and Temperature Search"): - gr.Markdown( - """Greedy Search and Beam Search were both good at finding sequences that are likely to follow our - input text, but when generating cool move descriptions, we want some more variety! - - Instead of choosing the word or token that is most likely to follow a given sequence, we can instead - ask the model to sample across the probability distribution of likely words. - - It's kind of like walking into the tall grass and finding a Pokemon encounter. - There are different encounter rates, which allow - for the most common mons to appear (looking at you, Zubat), but also account for surprise, like shinys! - - We might even want to go further, though. We can rescale the probability distributions directly - instead, allowing for rare words to temporarily become more frequently. We do this using the - temperature parameter. - - Turn the temperature up, and rare tokens become very likely! Cool down, and we approach more sensible - output. - - Experiment with turning sampling on and off, and by varying temperature below!. 
- """) - with gr.Row(): - temperature = gr.Slider(minimum=0.3, maximum=4.0, value=1.0, step=0.1, - label="Temperature") - text_input_temp = gr.Textbox(label="Move") - with gr.Row(): - sample_boolean = gr.Checkbox(label="Enable Sampling?") - text_output_temp = gr.Textbox(label="Move Description") - text_button_temp = gr.Button("Create my move!") - with gr.TabItem("Top K and Top P Sampling"): - gr.Markdown( - """When we want more control over the words we get to sample from, we turn to Top K and Top P - decoding methods! - - - The Top K sampling method selects the K most probable words given a sequence, and then samples from - that subset, rather than the whole vocabulary. This effectively cuts out low probability words. - - - Top P also reduces the available vocabulary to sample from, but instead of choosing the number of - words or tokens in advance, we sort the vocabulary from most to least likely word, and we - grab the smallest set of words that sum to P. This allows for the number of words we look at to - change while sampling, instead of being fixed. - - We can even use both methods at the same time! To disable Top K, set it to 0 using the slider. - To disable Top P, set it to 1""") - - with gr.Row(): - topk = gr.Slider(minimum=0, maximum=200, value=0, step=5, - label="Top K") - - text_input_top = gr.Textbox(label="Move") - with gr.Row(): - topp = gr.Slider(minimum=0.10, maximum=1, value=1, step=0.05, - label="Top P") - text_output_top = gr.Textbox(label="Move Description") - text_button_top = gr.Button("Create my move!") - with gr.Box(): - gr.Markdown("

      Generation History

      ") - # Displays a dataframe with the history of moves generated, with parameters - history = gr.Dataframe(headers=["Move Name", "Move Description", "Generation Type", "Parameters"]) - with gr.Box(): - gr.Markdown("

      How did you make this?

      ") - gr.Markdown(""" - - Hi! My name is Arjun Patel and I'm Lead Data Scientist - over at Speeko. - Nice to meet you! - - - I collected the dataset from Serebii, a news source and aggregator of - Pokemon info. - - - I then added a seed phrase "This move is called" just before each move in order to assist the model in - generation. - - - I then followed HuggingFace's handy language_modeling.ipynb for fine-tuning distillgpt2 on this tiny dataset, - and it surprisingly worked! - - - I learned all about text generation using the book Natural Language Processing - with Transformers by Lewis Tunstall, Leandro von Werra and Thomas Wolf, as well as this fantastic article by Patrick von Platen. Thanks to all - of these folks for creating these learning materials, and thanks to the Hugging Face team for developing this - product! """) - - text_button_baseline.click(create_move, inputs=[text_input_baseline, history], - outputs=[text_output_baseline, history]) - text_button_greedy.click(create_greedy_search_move, inputs=[text_input_greedy, history], - outputs=[text_output_greedy, history]) - text_button_temp.click(create_sampling_search_move, inputs=[text_input_temp, sample_boolean, temperature, history], - outputs=[text_output_temp, history]) - text_button_beam.click(create_beam_search_move, inputs=[text_input_beam, num_beams, history], - outputs=[text_output_beam, history]) - text_button_top.click(create_top_search_move, inputs=[text_input_top, topk, topp, history], - outputs=[text_output_top, history]) - -demo.launch(share=True) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/guided_anchoring/ga_retinanet_x101_64x4d_fpn_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/guided_anchoring/ga_retinanet_x101_64x4d_fpn_1x_coco.py deleted file mode 100644 index 1b18c2ba41d1493380bab3515be8e29547988ebf..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/guided_anchoring/ga_retinanet_x101_64x4d_fpn_1x_coco.py +++ /dev/null @@ -1,13 +0,0 @@ -_base_ = './ga_retinanet_r50_fpn_1x_coco.py' -model = dict( - pretrained='open-mmlab://resnext101_64x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=64, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - style='pytorch')) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d16-mg124_512x1024_40k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d16-mg124_512x1024_40k_cityscapes.py deleted file mode 100644 index bf39d2f12b719b1c91e38bef71f0f5232543b0dc..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d16-mg124_512x1024_40k_cityscapes.py +++ /dev/null @@ -1,11 +0,0 @@ -_base_ = './deeplabv3plus_r50-d8_512x1024_40k_cityscapes.py' -model = dict( - pretrained='open-mmlab://resnet101_v1c', - backbone=dict( - depth=101, - dilations=(1, 1, 1, 2), - strides=(1, 2, 2, 1), - multi_grid=(1, 2, 4)), - decode_head=dict( - dilations=(1, 6, 12, 18), - sampler=dict(type='OHEMPixelSampler', min_kept=100000))) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/dnlnet/dnl_r101-d8_512x1024_40k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/dnlnet/dnl_r101-d8_512x1024_40k_cityscapes.py deleted file mode 100644 index 
1a36e3c80a13f91e37e4d90b7ae47c7e0d204144..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/dnlnet/dnl_r101-d8_512x1024_40k_cityscapes.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './dnl_r50-d8_512x1024_40k_cityscapes.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/encnet/encnet_r101-d8_512x1024_40k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/encnet/encnet_r101-d8_512x1024_40k_cityscapes.py deleted file mode 100644 index f34373d9ebab5ef6f4c01e3eab8a97c288495be0..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/encnet/encnet_r101-d8_512x1024_40k_cityscapes.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './encnet_r50-d8_512x1024_40k_cityscapes.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_512x512_20k_voc12aug.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_512x512_20k_voc12aug.py deleted file mode 100644 index 17206a5171dcc357c589a1711afa52d87faeece0..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_512x512_20k_voc12aug.py +++ /dev/null @@ -1,6 +0,0 @@ -_base_ = [ - '../_base_/models/fcn_r50-d8.py', '../_base_/datasets/pascal_voc12_aug.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_20k.py' -] -model = dict( - decode_head=dict(num_classes=21), auxiliary_head=dict(num_classes=21)) diff --git a/spaces/HaloMaster/chinesesummary/fengshen/data/megatron_dataloader/bert_dataset.py b/spaces/HaloMaster/chinesesummary/fengshen/data/megatron_dataloader/bert_dataset.py deleted file mode 100644 index 2c007f060fd07fc9c6302b7f88e191469d599222..0000000000000000000000000000000000000000 --- a/spaces/HaloMaster/chinesesummary/fengshen/data/megatron_dataloader/bert_dataset.py +++ /dev/null @@ -1,196 +0,0 @@ -# coding=utf-8 -# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""BERT Style dataset.""" - - -import numpy as np -import torch - -from fengshen.data.megatron_dataloader.dataset_utils import ( - get_samples_mapping, - get_a_and_b_segments, - create_masked_lm_predictions, - create_tokens_and_tokentypes, -) - - -class BertDataset(torch.utils.data.Dataset): - - def __init__(self, name, indexed_dataset, data_prefix, - num_epochs, max_num_samples, masked_lm_prob, - max_seq_length, short_seq_prob, seed, binary_head, tokenizer, masking_style): - # Params to store. - self.name = name - self.seed = seed - self.masked_lm_prob = masked_lm_prob - self.max_seq_length = max_seq_length - self.short_seq_prob = short_seq_prob - self.binary_head = binary_head - self.masking_style = masking_style - - # Dataset. - self.indexed_dataset = indexed_dataset - - # Build the samples mapping. 
- self.samples_mapping = get_samples_mapping(self.indexed_dataset, - data_prefix, - num_epochs, - max_num_samples, - # account for added tokens - self.max_seq_length - 3, - short_seq_prob, - self.seed, - self.name, - self.binary_head) - inv_vocab = {v: k for k, v in tokenizer.vocab.items()} - self.vocab_id_list = list(inv_vocab.keys()) - self.vocab_id_to_token_dict = inv_vocab - self.cls_id = tokenizer.cls_token_id - self.sep_id = tokenizer.sep_token_id - self.mask_id = tokenizer.mask_token_id - self.pad_id = tokenizer.pad_token_id - self.tokenizer = tokenizer - - def __len__(self): - return self.samples_mapping.shape[0] - - def __getitem__(self, idx): - start_idx, end_idx, seq_length = self.samples_mapping[idx] - sample = [self.indexed_dataset[i] for i in range(start_idx, end_idx)] - # Note that this rng state should be numpy and not python since - # python randint is inclusive whereas the numpy one is exclusive. - # We % 2**32 since numpy requres the seed to be between 0 and 2**32 - 1 - np_rng = np.random.RandomState(seed=((self.seed + idx) % 2**32)) - return build_training_sample(sample, seq_length, - self.max_seq_length, # needed for padding - self.vocab_id_list, - self.vocab_id_to_token_dict, - self.cls_id, self.sep_id, - self.mask_id, self.pad_id, - self.masked_lm_prob, np_rng, - self.binary_head, - tokenizer=self.tokenizer, - masking_style=self.masking_style) - - -def build_training_sample(sample, - target_seq_length, max_seq_length, - vocab_id_list, vocab_id_to_token_dict, - cls_id, sep_id, mask_id, pad_id, - masked_lm_prob, np_rng, binary_head, - tokenizer, - masking_style='bert'): - """Biuld training sample. - - Arguments: - sample: A list of sentences in which each sentence is a list token ids. - target_seq_length: Desired sequence length. - max_seq_length: Maximum length of the sequence. All values are padded to - this length. - vocab_id_list: List of vocabulary ids. Used to pick a random id. - vocab_id_to_token_dict: A dictionary from vocab ids to text tokens. - cls_id: Start of example id. - sep_id: Separator id. - mask_id: Mask token id. - pad_id: Padding token id. - masked_lm_prob: Probability to mask tokens. - np_rng: Random number genenrator. Note that this rng state should be - numpy and not python since python randint is inclusive for - the opper bound whereas the numpy one is exclusive. - """ - - if binary_head: - # We assume that we have at least two sentences in the sample - assert len(sample) > 1 - assert target_seq_length <= max_seq_length - - # Divide sample into two segments (A and B). - if binary_head: - tokens_a, tokens_b, is_next_random = get_a_and_b_segments(sample, - np_rng) - else: - tokens_a = [] - for j in range(len(sample)): - tokens_a.extend(sample[j]) - tokens_b = [] - is_next_random = False - - if len(tokens_a) >= max_seq_length-3: - tokens_a = tokens_a[:max_seq_length-3] - - # Truncate to `target_sequence_length`. - max_num_tokens = target_seq_length - '''' - truncated = truncate_segments(tokens_a, tokens_b, len(tokens_a), - len(tokens_b), max_num_tokens, np_rng) - ''' - - # Build tokens and toketypes. - tokens, tokentypes = create_tokens_and_tokentypes(tokens_a, tokens_b, - cls_id, sep_id) - # Masking. - max_predictions_per_seq = masked_lm_prob * max_num_tokens - (tokens, masked_positions, masked_labels, _, _) = create_masked_lm_predictions( - tokens, vocab_id_list, vocab_id_to_token_dict, masked_lm_prob, - cls_id, sep_id, mask_id, max_predictions_per_seq, np_rng, - tokenizer=tokenizer, - masking_style=masking_style) - - # Padding. 
- tokens_np, tokentypes_np, labels_np, padding_mask_np, loss_mask_np \ - = pad_and_convert_to_numpy(tokens, tokentypes, masked_positions, - masked_labels, pad_id, max_seq_length) - - train_sample = { - 'input_ids': tokens_np, - 'token_type_ids': tokentypes_np, - 'labels': labels_np, - 'next_sentence_label': int(is_next_random), - 'attention_mask': padding_mask_np} - return train_sample - - -def pad_and_convert_to_numpy(tokens, tokentypes, masked_positions, - masked_labels, pad_id, max_seq_length): - """Pad sequences and convert them to numpy.""" - - # Some checks. - num_tokens = len(tokens) - padding_length = max_seq_length - num_tokens - assert padding_length >= 0 - assert len(tokentypes) == num_tokens - assert len(masked_positions) == len(masked_labels) - - # Tokens and token types. - filler = [pad_id] * padding_length - tokens_np = np.array(tokens + filler, dtype=np.int64) - tokentypes_np = np.array(tokentypes + filler, dtype=np.int64) - - # Padding mask. - padding_mask_np = np.array([1] * num_tokens + [0] * padding_length, - dtype=np.int64) - - # Lables and loss mask. - labels = [-100] * max_seq_length - loss_mask = [0] * max_seq_length - for i in range(len(masked_positions)): - assert masked_positions[i] < num_tokens - labels[masked_positions[i]] = masked_labels[i] - loss_mask[masked_positions[i]] = 1 - labels_np = np.array(labels, dtype=np.int64) - loss_mask_np = np.array(loss_mask, dtype=np.int64) - - return tokens_np, tokentypes_np, labels_np, padding_mask_np, loss_mask_np diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/huffman/huffman_coder.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/huffman/huffman_coder.py deleted file mode 100644 index 6531f1547cbd7250aa03e0ef8c2efbac49bb1aff..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/huffman/huffman_coder.py +++ /dev/null @@ -1,265 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import re -import typing as tp -from collections import Counter, deque -from dataclasses import dataclass - -from bitarray import bitarray, util -from fairseq.data import Dictionary - -# basically we have to write to addressable bytes for the memory mapped -# dataset loader. Sentences that get encoded to a length that is not a -# multiple of BLOCKSIZE (a byte) will be padded to fit. (see _pad in the coder) -BLOCKSIZE = 8 - - -class HuffmanCoder: - def __init__( - self, root: "HuffmanNode", bos="", pad="", eos="", unk="" - ): - self.root = root - self.table = root.code_table() - self.bos_word, self.unk_word, self.pad_word, self.eos_word = bos, unk, pad, eos - - def _pad(self, a: bitarray) -> bitarray: - """ - bitpadding, 1 then 0. - - If the array is already a multiple of blocksize, we add a full block. - """ - pad_len = BLOCKSIZE - (len(a) % BLOCKSIZE) - 1 - padding = bitarray("1" + "0" * pad_len) - return a + padding - - def _unpad(self, a: bitarray) -> bitarray: - """ - remove the bitpadding. - - There will be a set of 0s preceded by a 1 at the end of the bitarray, we remove that - """ - # count the 0 padding at the end until we find the first 1 - # we want to remove the one too - remove_cnt = util.rindex(a, 1) - return a[:remove_cnt] - - def encode(self, iter: tp.List[str]) -> bytes: - """ - encode a list of tokens a return bytes. We use bitpadding to make sure the encoded bits fit in bytes. 
- """ - a = bitarray() - for token in iter: - code = self.get_code(token) - if code is None: - if self.unk_word is None: - raise Exception(f"unknown token {token} cannot be encoded.") - else: - token = self.unk_word - a = a + self.get_code(token) - return self._pad(a).tobytes() - - def decode(self, bits: bytes) -> tp.Iterator["HuffmanNode"]: - """ - take bitpadded bytes and decode it to a set of leaves. You can then use each node to find the symbol/id - """ - a = bitarray() - a.frombytes(bits) - return self.root.decode(self._unpad(a)) - - def get_code(self, symbol: str) -> tp.Optional[bitarray]: - node = self.get_node(symbol) - return None if node is None else node.code - - def get_node(self, symbol: str) -> "HuffmanNode": - return self.table.get(symbol) - - @classmethod - def from_file( - cls, - filename: str, - bos="", - pad="", - eos="", - unk="", - ) -> "HuffmanCoder": - builder = HuffmanCodeBuilder.from_file(filename) - return builder.build_code(bos=bos, pad=pad, eos=eos, unk=unk) - - def to_file(self, filename, sep="\t"): - nodes = list(self.table.values()) - nodes.sort(key=lambda n: n.id) - with open(filename, "w", encoding="utf-8") as output: - for n in nodes: - output.write(f"{n.symbol}{sep}{n.count}\n") - - def __iter__(self): - for n in self.table.values(): - yield n - - def merge(self, other_coder: "HuffmanCoder") -> "HuffmanCoder": - builder = HuffmanCodeBuilder() - for n in self: - builder.increment(n.symbol, n.count) - for n in other_coder: - builder.increment(n.symbol, n.count) - return builder.build_code() - - def __eq__(self, other: "HuffmanCoder") -> bool: - return self.table == other.table - - def __len__(self) -> int: - return len(self.table) - - def __contains__(self, sym: str) -> bool: - return sym in self.table - - def to_dictionary(self) -> Dictionary: - dictionary = Dictionary(bos=self.bos, unk=self.unk, pad=self.pad, eos=self.eos) - for n in self: - dictionary.add_symbol(n.symbol, n=n.count) - dictionary.finalize() - return dictionary - - -@dataclass -class HuffmanNode: - """ - a node in a Huffman tree - """ - - id: int - count: int - symbol: tp.Optional[str] = None - left: tp.Optional["HuffmanNode"] = None - right: tp.Optional["HuffmanNode"] = None - code: tp.Optional[bitarray] = None - - def is_leaf(self) -> bool: - return self.left is None and self.right is None - - def code_table(self, prefix: tp.Optional[bitarray] = None) -> tp.Dict[str, "HuffmanNode"]: - defaulted_prefix = prefix if prefix is not None else bitarray() - if self.is_leaf(): - self.code = ( - defaulted_prefix if len(defaulted_prefix) > 0 else bitarray("0") - ) # leaf could be the root if there is only one symbol - return {self.symbol: self} - - codes_right = self.right.code_table(defaulted_prefix + bitarray([0])) - codes_left = self.left.code_table(defaulted_prefix + bitarray([1])) - return {**codes_left, **codes_right} - - def decode(self, bits: bitarray) -> tp.Iterator["HuffmanNode"]: - current_node = self - for bit in bits: - if bit == 0: # go right - current_node = current_node.right - else: # go left - current_node = current_node.left - if current_node is None: - # we shouldn't be on a leaf here - raise Exception("fell off a leaf") - if current_node.is_leaf(): - yield current_node - current_node = self - if current_node != self: - raise Exception("couldn't decode all the bits") - - -class HuffmanCodeBuilder: - """ - build a dictionary with occurence count and then build the Huffman code for it. 
- """ - - def __init__(self): - self.symbols = Counter() - - def add_symbols(self, *syms) -> None: - self.symbols.update(syms) - - def increment(self, symbol: str, cnt: int) -> None: - self.symbols[symbol] += cnt - - @classmethod - def from_file(cls, filename): - c = cls() - with open(filename, "r", encoding="utf-8") as input: - for line in input: - split = re.split(r"[\s]+", line) - c.increment(split[0], int(split[1])) - return c - - def to_file(self, filename, sep="\t"): - with open(filename, "w", encoding="utf-8") as output: - for (tok, cnt) in self.symbols.most_common(): - output.write(f"{tok}{sep}{cnt}\n") - - def _smallest(self, q1: deque, q2: deque) -> HuffmanNode: - if len(q1) == 0: - return q2.pop() - - if len(q2) == 0: - return q1.pop() - - if q1[-1].count < q2[-1].count: - return q1.pop() - - return q2.pop() - - def __add__(self, c: "HuffmanCodeBuilder") -> "HuffmanCodeBuilder": - new_c = self.symbols + c.symbols - new_b = HuffmanCodeBuilder() - new_b.symbols = new_c - return new_b - - def build_code( - self, - bos="", - pad="", - eos="", - unk="", - ) -> HuffmanCoder: - assert len(self.symbols) > 0, "cannot build code from empty list of symbols" - - if self.symbols[bos] == 0: - self.add_symbols(bos) - if self.symbols[pad] == 0: - self.add_symbols(pad) - if self.symbols[eos] == 0: - self.add_symbols(eos) - if self.symbols[unk] == 0: - self.add_symbols(unk) - - node_id = 0 - leaves_queue = deque( - [ - HuffmanNode(symbol=symbol, count=count, id=idx) - for idx, (symbol, count) in enumerate(self.symbols.most_common()) - ] - ) # left are the most common, right are the least common - - if len(leaves_queue) == 1: - root = leaves_queue.pop() - root.id = 0 - return HuffmanCoder(root) - - nodes_queue = deque() - - while len(leaves_queue) > 0 or len(nodes_queue) != 1: - # get the lowest two nodes at the head of each queue - node1 = self._smallest(leaves_queue, nodes_queue) - node2 = self._smallest(leaves_queue, nodes_queue) - - # add new node - nodes_queue.appendleft( - HuffmanNode( - count=node1.count + node2.count, left=node1, right=node2, id=node_id - ) - ) - node_id += 1 - - # we are left with the root - return HuffmanCoder(nodes_queue.pop(), bos=bos, pad=pad, eos=eos, unk=unk) diff --git a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/glow_tts/monotonic_align/monotonic_align/__init__.py b/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/glow_tts/monotonic_align/monotonic_align/__init__.py deleted file mode 100644 index 47a4dbf3177302af6b8e7d08b0b78343b1329efa..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/glow_tts/monotonic_align/monotonic_align/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -import pkg_resources - -__version__ = pkg_resources.get_distribution("monotonic_align").version - -from monotonic_align.mas import * diff --git a/spaces/Heisenberg08/Text2SQL/README.md b/spaces/Heisenberg08/Text2SQL/README.md deleted file mode 100644 index 33700586ef76ea7cf8f1d77da925b9623c511bb1..0000000000000000000000000000000000000000 --- a/spaces/Heisenberg08/Text2SQL/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Text2SQL -emoji: 🐨 -colorFrom: purple -colorTo: yellow -sdk: streamlit -sdk_version: 1.9.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/data_measurements/text_duplicates/text_duplicates.py 
b/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/data_measurements/text_duplicates/text_duplicates.py deleted file mode 100644 index 323461f7b3d569e1e9e99daae6f41e3767da6d84..0000000000000000000000000000000000000000 --- a/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/data_measurements/text_duplicates/text_duplicates.py +++ /dev/null @@ -1,93 +0,0 @@ -import evaluate -import logging -import os -import pandas as pd -import plotly.express as px -import utils -import utils.dataset_utils as ds_utils -from collections import Counter -from os.path import exists, isdir -from os.path import join as pjoin - -TEXT = "text" -# These are string constants defined in the evaluate library. -# They may need to be updated if the evaluate library changes these strings -DUPS_FRAC = "duplicate_fraction" -# Evaluate calls the dictionary a "list" -DUPS_DICT = "duplicates_dict" -# This isn't in the evaluate measurement, but TODO to add that... -# DUPS_SUM = "duplicate_sum" - -logs = utils.prepare_logging(__file__) - -class DMTHelper: - """Helper class for the Data Measurements Tool. - This allows us to keep all variables and functions related to labels - in one file. - Does caching and using the evaluate library for computation. - """ - - def __init__(self, dstats, load_only, save): - # Input HuggingFace Dataset. - self.dset = dstats.text_dset[TEXT] - if self.dset is None: - dstats.load_or_prepare_text_dset() - self.dset = dstats.text_dset - self.use_cache = dstats.use_cache - # Note: This is None as it can be called different times with different - # settings, and so we want fresh results each time. With the evaluate - # integration, results are different depending on whether - # list_duplicates is set. - self.duplicates_results = None - self.cache_dir = dstats.dataset_cache_dir - self.save = save - self.load_only = load_only - # Filenames - self.dups_dir = "text_duplicates" - dups_json = "text_duplicates.json" - dups_html = "text_duplicates.html" - self.dups_result_json_fid = pjoin(self.cache_dir, self.dups_dir, dups_json) - self.dups_result_html_fid = pjoin(self.cache_dir, self.dups_dir, dups_html) - - def run_DMT_processing(self, list_duplicates=True): - """Calls functions to do the main work. - DMT uses the full duplicates list in a widget, - so it is set to default True. - """ - # First look to see what we can load from cache. 
- if self.use_cache: - self.duplicates_results = self._load_duplicates_cache() - if self.duplicates_results: - logs.info("Loaded cached text duplicate results.") - if not self.duplicates_results and not self.load_only: - self.duplicates_results = self._prepare_duplicates(list_duplicates=list_duplicates) - logs.info("Prepared duplicates.") - if self.save: - self._write_duplicates_cache() - - def _prepare_duplicates(self, list_duplicates=True): - """Wraps the evaluate library.""" - duplicates = evaluate.load("text_duplicates") - results = duplicates.compute(data=self.dset, list_duplicates=list_duplicates) - return results - - def _load_duplicates_cache(self): - """Loads previously computed results from cache.""" - results = {} - if exists(self.dups_result_json_fid): - results = ds_utils.read_json(self.dups_result_json_fid) - return results - - def _write_duplicates_cache(self): - """Writes newly computed results to cache.""" - ds_utils.make_path(pjoin(self.cache_dir, self.dups_dir)) - if self.duplicates_results: - ds_utils.write_json(self.duplicates_results, self.dups_result_json_fid) - # TODO: Use df_to_html rather than write_json_as_html; - # this will make it possible to order the results. - # But they must first be turned into a dataframe. - ds_utils.write_json_as_html(self.duplicates_results, self.dups_result_html_fid) - - def get_duplicates_filenames(self): - dups_fid_dict = {"statistics": self.dups_result_json_fid, "html":self.dups_result_html_fid} - return dups_fid_dict diff --git a/spaces/ICML2022/OFA/fairseq/examples/speech_synthesis/docs/vctk_example.md b/spaces/ICML2022/OFA/fairseq/examples/speech_synthesis/docs/vctk_example.md deleted file mode 100644 index 2ba78f3f73d6ea30f9de89150fbbc9dd5923b6fa..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/speech_synthesis/docs/vctk_example.md +++ /dev/null @@ -1,51 +0,0 @@ -[[Back]](..) - -# VCTK - -[VCTK](https://datashare.ed.ac.uk/handle/10283/3443) is an open English speech corpus. We provide examples -for building [Transformer](https://arxiv.org/abs/1809.08895) models on this dataset. - - -## Data preparation -Download data, create splits and generate audio manifests with -```bash -python -m examples.speech_synthesis.preprocessing.get_vctk_audio_manifest \ - --output-data-root ${AUDIO_DATA_ROOT} \ - --output-manifest-root ${AUDIO_MANIFEST_ROOT} -``` - -Then, extract log-Mel spectrograms, generate feature manifest and create data configuration YAML with -```bash -python -m examples.speech_synthesis.preprocessing.get_feature_manifest \ - --audio-manifest-root ${AUDIO_MANIFEST_ROOT} \ - --output-root ${FEATURE_MANIFEST_ROOT} \ - --ipa-vocab --use-g2p -``` -where we use phoneme inputs (`--ipa-vocab --use-g2p`) as example. - -To denoise audio and trim leading/trailing silence using signal processing based VAD, run -```bash -for SPLIT in dev test train; do - python -m examples.speech_synthesis.preprocessing.denoise_and_vad_audio \ - --audio-manifest ${AUDIO_MANIFEST_ROOT}/${SPLIT}.audio.tsv \ - --output-dir ${PROCESSED_DATA_ROOT} \ - --denoise --vad --vad-agg-level 3 -done -``` - -## Training -(Please refer to [the LJSpeech example](../docs/ljspeech_example.md#transformer).) - -## Inference -(Please refer to [the LJSpeech example](../docs/ljspeech_example.md#inference).) - -## Automatic Evaluation -(Please refer to [the LJSpeech example](../docs/ljspeech_example.md#automatic-evaluation).) 
- -## Results - -| --arch | Params | Test MCD | Model | -|---|---|---|---| -| tts_transformer | 54M | 3.4 | [Download](https://dl.fbaipublicfiles.com/fairseq/s2/vctk_transformer_phn.tar) | - -[[Back]](..) diff --git a/spaces/Ikaros521/so-vits-svc-4.0-ikaros/onnx/onnx_export.py b/spaces/Ikaros521/so-vits-svc-4.0-ikaros/onnx/onnx_export.py deleted file mode 100644 index 976bfe97a213d1390bdc044b5d86cab84d10e63b..0000000000000000000000000000000000000000 --- a/spaces/Ikaros521/so-vits-svc-4.0-ikaros/onnx/onnx_export.py +++ /dev/null @@ -1,73 +0,0 @@ -import argparse -import time -import numpy as np -import onnx -from onnxsim import simplify -import onnxruntime as ort -import onnxoptimizer -import torch -from model_onnx import SynthesizerTrn -import utils -from hubert import hubert_model_onnx - -def main(HubertExport,NetExport): - - path = "NyaruTaffy" - - if(HubertExport): - device = torch.device("cuda") - hubert_soft = utils.get_hubert_model() - test_input = torch.rand(1, 1, 16000) - input_names = ["source"] - output_names = ["embed"] - torch.onnx.export(hubert_soft.to(device), - test_input.to(device), - "hubert3.0.onnx", - dynamic_axes={ - "source": { - 2: "sample_length" - } - }, - verbose=False, - opset_version=13, - input_names=input_names, - output_names=output_names) - if(NetExport): - device = torch.device("cuda") - hps = utils.get_hparams_from_file(f"checkpoints/{path}/config.json") - SVCVITS = SynthesizerTrn( - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - **hps.model) - _ = utils.load_checkpoint(f"checkpoints/{path}/model.pth", SVCVITS, None) - _ = SVCVITS.eval().to(device) - for i in SVCVITS.parameters(): - i.requires_grad = False - test_hidden_unit = torch.rand(1, 50, 256) - test_lengths = torch.LongTensor([50]) - test_pitch = torch.rand(1, 50) - test_sid = torch.LongTensor([0]) - input_names = ["hidden_unit", "lengths", "pitch", "sid"] - output_names = ["audio", ] - SVCVITS.eval() - torch.onnx.export(SVCVITS, - ( - test_hidden_unit.to(device), - test_lengths.to(device), - test_pitch.to(device), - test_sid.to(device) - ), - f"checkpoints/{path}/model.onnx", - dynamic_axes={ - "hidden_unit": [0, 1], - "pitch": [1] - }, - do_constant_folding=False, - opset_version=16, - verbose=False, - input_names=input_names, - output_names=output_names) - - -if __name__ == '__main__': - main(False,True) diff --git a/spaces/Intel/NeuralChat-ICX-INT4/fastchat/data/pretty_json.py b/spaces/Intel/NeuralChat-ICX-INT4/fastchat/data/pretty_json.py deleted file mode 100644 index 426fadc2dd83675840488d85c64093ed4983ecf6..0000000000000000000000000000000000000000 --- a/spaces/Intel/NeuralChat-ICX-INT4/fastchat/data/pretty_json.py +++ /dev/null @@ -1,20 +0,0 @@ -""" -Usage: -python3 pretty_json.py --in in.json --out out.json -""" - -import argparse -import json - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--in-file", type=str, required=True) - parser.add_argument("--out-file", type=str, required=True) - args = parser.parse_args() - - with open(args.in_file, "r") as fin: - data = json.load(fin) - - with open(args.out_file, "w") as fout: - json.dump(data, fout, indent=2) diff --git a/spaces/JUNGU/VToonify/vtoonify/model/stylegan/readme.md b/spaces/JUNGU/VToonify/vtoonify/model/stylegan/readme.md deleted file mode 100644 index c0f2bce780fe2d7a9239c944b165eee7bcdeb9cb..0000000000000000000000000000000000000000 --- a/spaces/JUNGU/VToonify/vtoonify/model/stylegan/readme.md +++ /dev/null @@ -1,7 +0,0 @@ -# StyleGAN 2 in PyTorch - 
-Implementation of Analyzing and Improving the Image Quality of StyleGAN (https://arxiv.org/abs/1912.04958) in PyTorch - -Fork from [https://github.com/rosinality/stylegan2-pytorch](https://github.com/rosinality/stylegan2-pytorch) - -In VToonify, we modify it to accept z+ latent codes. diff --git a/spaces/JerryYou/ChatGPT-prompt-generator/app.py b/spaces/JerryYou/ChatGPT-prompt-generator/app.py deleted file mode 100644 index 5da2e5088053267553b6f5af9760a0a7d58c2a1f..0000000000000000000000000000000000000000 --- a/spaces/JerryYou/ChatGPT-prompt-generator/app.py +++ /dev/null @@ -1,18 +0,0 @@ -from transformers import AutoTokenizer, AutoModelForSeq2SeqLM -import gradio as gr - -tokenizer = AutoTokenizer.from_pretrained("merve/chatgpt-prompts-bart-long") -model = AutoModelForSeq2SeqLM.from_pretrained("merve/chatgpt-prompts-bart-long", from_tf=True) - -def generate(prompt): - - batch = tokenizer(prompt, return_tensors="pt") - generated_ids = model.generate(batch["input_ids"], max_new_tokens=150) - output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) - return output[0] - -input_component = gr.Textbox(label = "Input a persona, e.g. photographer", value = "photographer") -output_component = gr.Textbox(label = "Prompt") -examples = [["photographer"], ["developer"]] -description = "This app generates ChatGPT prompts, it's based on a BART model trained on [this dataset](https://huggingface.co/datasets/fka/awesome-chatgpt-prompts). 📓 Simply enter a persona that you want the prompt to be generated based on. 🧙🏻🧑🏻‍🚀🧑🏻‍🎨🧑🏻‍🔬🧑🏻‍💻🧑🏼‍🏫🧑🏽‍🌾" -gr.Interface(generate, inputs = input_component, outputs=output_component, examples=examples, title = "👨🏻‍🎤 ChatGPT Prompt Generator 👨🏻‍🎤", description=description).launch() diff --git a/spaces/KNDLR/trash-ai/README.md b/spaces/KNDLR/trash-ai/README.md deleted file mode 100644 index 9bf24e389d1a79dbe1be7736a6519662623a6a34..0000000000000000000000000000000000000000 --- a/spaces/KNDLR/trash-ai/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Trash AI -emoji: 🌿 -colorFrom: green -colorTo: blue -sdk: gradio -sdk_version: 3.1.4 -app_file: app.py -pinned: false -license: gpl-3.0 ---- diff --git a/spaces/Kangarroar/ApplioRVC-Inference/Fixes/local_fixes.py b/spaces/Kangarroar/ApplioRVC-Inference/Fixes/local_fixes.py deleted file mode 100644 index 8a418076eee6f65fe06eb0f607061796b839c1ee..0000000000000000000000000000000000000000 --- a/spaces/Kangarroar/ApplioRVC-Inference/Fixes/local_fixes.py +++ /dev/null @@ -1,136 +0,0 @@ -import os -import sys -import time -import shutil -import requests -import zipfile - -def insert_new_line(file_name, line_to_find, text_to_insert): - lines = [] - with open(file_name, 'r', encoding='utf-8') as read_obj: - lines = read_obj.readlines() - already_exists = False - with open(file_name + '.tmp', 'w', encoding='utf-8') as write_obj: - for i in range(len(lines)): - write_obj.write(lines[i]) - if lines[i].strip() == line_to_find: - # If next line exists and starts with sys.path.append, skip - if i+1 < len(lines) and lines[i+1].strip().startswith("sys.path.append"): - print('It was already fixed! 
Skip adding a line...') - already_exists = True - break - else: - write_obj.write(text_to_insert + '\n') - # If no existing sys.path.append line was found, replace the original file - if not already_exists: - os.replace(file_name + '.tmp', file_name) - return True - else: - # If existing line was found, delete temporary file - os.remove(file_name + '.tmp') - return False - -def replace_in_file(file_name, old_text, new_text): - with open(file_name, 'r', encoding='utf-8') as file: - file_contents = file.read() - - if old_text in file_contents: - file_contents = file_contents.replace(old_text, new_text) - with open(file_name, 'w', encoding='utf-8') as file: - file.write(file_contents) - return True - - return False - -if __name__ == "__main__": - current_path = os.getcwd() - file_name = os.path.join(current_path, "infer", "modules", "train", "extract", "extract_f0_print.py") - line_to_find = 'import numpy as np, logging' - text_to_insert = "sys.path.append(r'" + current_path + "')" - - - success_1 = insert_new_line(file_name, line_to_find, text_to_insert) - if success_1: - print('The first operation was successful!') - else: - print('He skipped the first operation because it was already fixed!') - - file_name = 'infer-web.py' - old_text = 'with gr.Blocks(theme=gr.themes.Soft()) as app:' - new_text = 'with gr.Blocks() as app:' - - success_2 = replace_in_file(file_name, old_text, new_text) - if success_2: - print('The second operation was successful!') - else: - print('The second operation was omitted because it was already fixed!') - - print('Local corrections successful! You should now be able to infer and train locally in Applio RVC Fork.') - - time.sleep(5) - -def find_torchcrepe_directory(directory): - """ - Recursively searches for the topmost folder named 'torchcrepe' within a directory. - Returns the path of the directory found or None if none is found. - """ - for root, dirs, files in os.walk(directory): - if 'torchcrepe' in dirs: - return os.path.join(root, 'torchcrepe') - return None - -def download_and_extract_torchcrepe(): - url = 'https://github.com/maxrmorrison/torchcrepe/archive/refs/heads/master.zip' - temp_dir = 'temp_torchcrepe' - destination_dir = os.getcwd() - - try: - torchcrepe_dir_path = os.path.join(destination_dir, 'torchcrepe') - - if os.path.exists(torchcrepe_dir_path): - print("Skipping the torchcrepe download. 
The folder already exists.") - return - - # Download the file - print("Starting torchcrepe download...") - response = requests.get(url) - - # Raise an error if the GET request was unsuccessful - response.raise_for_status() - print("Download completed.") - - # Save the downloaded file - zip_file_path = os.path.join(temp_dir, 'master.zip') - os.makedirs(temp_dir, exist_ok=True) - with open(zip_file_path, 'wb') as file: - file.write(response.content) - print(f"Zip file saved to {zip_file_path}") - - # Extract the zip file - print("Extracting content...") - with zipfile.ZipFile(zip_file_path, 'r') as zip_file: - zip_file.extractall(temp_dir) - print("Extraction completed.") - - # Locate the torchcrepe folder and move it to the destination directory - torchcrepe_dir = find_torchcrepe_directory(temp_dir) - if torchcrepe_dir: - shutil.move(torchcrepe_dir, destination_dir) - print(f"Moved the torchcrepe directory to {destination_dir}!") - else: - print("The torchcrepe directory could not be located.") - - except Exception as e: - print("Torchcrepe not successfully downloaded", e) - - # Clean up temporary directory - if os.path.exists(temp_dir): - shutil.rmtree(temp_dir) - -# Run the function -download_and_extract_torchcrepe() - -temp_dir = 'temp_torchcrepe' - -if os.path.exists(temp_dir): - shutil.rmtree(temp_dir) diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/synthesizer/models/sublayer/global_style_token.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/synthesizer/models/sublayer/global_style_token.py deleted file mode 100644 index 21ce07e7056ee575ee37e3855e1489d6cea7ccae..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/synthesizer/models/sublayer/global_style_token.py +++ /dev/null @@ -1,145 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.init as init -import torch.nn.functional as tFunctional -from synthesizer.gst_hyperparameters import GSTHyperparameters as hp -from synthesizer.hparams import hparams - - -class GlobalStyleToken(nn.Module): - """ - inputs: style mel spectrograms [batch_size, num_spec_frames, num_mel] - speaker_embedding: speaker mel spectrograms [batch_size, num_spec_frames, num_mel] - outputs: [batch_size, embedding_dim] - """ - def __init__(self, speaker_embedding_dim=None): - - super().__init__() - self.encoder = ReferenceEncoder() - self.stl = STL(speaker_embedding_dim) - - def forward(self, inputs, speaker_embedding=None): - enc_out = self.encoder(inputs) - # concat speaker_embedding according to https://github.com/mozilla/TTS/blob/master/TTS/tts/layers/gst_layers.py - if hparams.use_ser_for_gst and speaker_embedding is not None: - enc_out = torch.cat([enc_out, speaker_embedding], dim=-1) - style_embed = self.stl(enc_out) - - return style_embed - - -class ReferenceEncoder(nn.Module): - ''' - inputs --- [N, Ty/r, n_mels*r] mels - outputs --- [N, ref_enc_gru_size] - ''' - - def __init__(self): - - super().__init__() - K = len(hp.ref_enc_filters) - filters = [1] + hp.ref_enc_filters - convs = [nn.Conv2d(in_channels=filters[i], - out_channels=filters[i + 1], - kernel_size=(3, 3), - stride=(2, 2), - padding=(1, 1)) for i in range(K)] - self.convs = nn.ModuleList(convs) - self.bns = nn.ModuleList([nn.BatchNorm2d(num_features=hp.ref_enc_filters[i]) for i in range(K)]) - - out_channels = self.calculate_channels(hp.n_mels, 3, 2, 1, K) - self.gru = nn.GRU(input_size=hp.ref_enc_filters[-1] * out_channels, - hidden_size=hp.E // 2, - batch_first=True) - - def forward(self, inputs): - N = 
inputs.size(0) - out = inputs.view(N, 1, -1, hp.n_mels) # [N, 1, Ty, n_mels] - for conv, bn in zip(self.convs, self.bns): - out = conv(out) - out = bn(out) - out = tFunctional.relu(out) # [N, 128, Ty//2^K, n_mels//2^K] - - out = out.transpose(1, 2) # [N, Ty//2^K, 128, n_mels//2^K] - T = out.size(1) - N = out.size(0) - out = out.contiguous().view(N, T, -1) # [N, Ty//2^K, 128*n_mels//2^K] - - self.gru.flatten_parameters() - memory, out = self.gru(out) # out --- [1, N, E//2] - - return out.squeeze(0) - - def calculate_channels(self, L, kernel_size, stride, pad, n_convs): - for i in range(n_convs): - L = (L - kernel_size + 2 * pad) // stride + 1 - return L - - -class STL(nn.Module): - ''' - inputs --- [N, E//2] - ''' - - def __init__(self, speaker_embedding_dim=None): - - super().__init__() - self.embed = nn.Parameter(torch.FloatTensor(hp.token_num, hp.E // hp.num_heads)) - d_q = hp.E // 2 - d_k = hp.E // hp.num_heads - # self.attention = MultiHeadAttention(hp.num_heads, d_model, d_q, d_v) - if hparams.use_ser_for_gst and speaker_embedding_dim is not None: - d_q += speaker_embedding_dim - self.attention = MultiHeadAttention(query_dim=d_q, key_dim=d_k, num_units=hp.E, num_heads=hp.num_heads) - - init.normal_(self.embed, mean=0, std=0.5) - - def forward(self, inputs): - N = inputs.size(0) - query = inputs.unsqueeze(1) # [N, 1, E//2] - keys = torch.tanh(self.embed).unsqueeze(0).expand(N, -1, -1) # [N, token_num, E // num_heads] - style_embed = self.attention(query, keys) - - return style_embed - - -class MultiHeadAttention(nn.Module): - ''' - input: - query --- [N, T_q, query_dim] - key --- [N, T_k, key_dim] - output: - out --- [N, T_q, num_units] - ''' - - def __init__(self, query_dim, key_dim, num_units, num_heads): - - super().__init__() - self.num_units = num_units - self.num_heads = num_heads - self.key_dim = key_dim - - self.W_query = nn.Linear(in_features=query_dim, out_features=num_units, bias=False) - self.W_key = nn.Linear(in_features=key_dim, out_features=num_units, bias=False) - self.W_value = nn.Linear(in_features=key_dim, out_features=num_units, bias=False) - - def forward(self, query, key): - querys = self.W_query(query) # [N, T_q, num_units] - keys = self.W_key(key) # [N, T_k, num_units] - values = self.W_value(key) - - split_size = self.num_units // self.num_heads - querys = torch.stack(torch.split(querys, split_size, dim=2), dim=0) # [h, N, T_q, num_units/h] - keys = torch.stack(torch.split(keys, split_size, dim=2), dim=0) # [h, N, T_k, num_units/h] - values = torch.stack(torch.split(values, split_size, dim=2), dim=0) # [h, N, T_k, num_units/h] - - # score = softmax(QK^T / (d_k ** 0.5)) - scores = torch.matmul(querys, keys.transpose(2, 3)) # [h, N, T_q, T_k] - scores = scores / (self.key_dim ** 0.5) - scores = tFunctional.softmax(scores, dim=3) - - # out = score * V - out = torch.matmul(scores, values) # [h, N, T_q, num_units/h] - out = torch.cat(torch.split(out, 1, dim=0), dim=3).squeeze(0) # [N, T_q, num_units] - - return out diff --git a/spaces/KyanChen/RSPrompter/mmpretrain/datasets/multi_task.py b/spaces/KyanChen/RSPrompter/mmpretrain/datasets/multi_task.py deleted file mode 100644 index 443df0e7d7de11962d472d33b25b4bbff562524f..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmpretrain/datasets/multi_task.py +++ /dev/null @@ -1,337 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import copy -import os.path as osp -from os import PathLike -from typing import Optional, Sequence - -import mmengine -from mmcv.transforms import Compose -from mmengine.fileio import get_file_backend - -from .builder import DATASETS - - -def expanduser(path): - if isinstance(path, (str, PathLike)): - return osp.expanduser(path) - else: - return path - - -def isabs(uri): - return osp.isabs(uri) or ('://' in uri) - - -@DATASETS.register_module() -class MultiTaskDataset: - """Custom dataset for multi-task dataset. - - To use the dataset, please generate and provide an annotation file in the - below format: - - .. code-block:: json - - { - "metainfo": { - "tasks": - [ - 'gender' - 'wear' - ] - }, - "data_list": [ - { - "img_path": "a.jpg", - gt_label:{ - "gender": 0, - "wear": [1, 0, 1, 0] - } - }, - { - "img_path": "b.jpg", - gt_label:{ - "gender": 1, - "wear": [1, 0, 1, 0] - } - } - ] - } - - Assume we put our dataset in the ``data/mydataset`` folder in the - repository and organize it as the below format: :: - - mmpretrain/ - └── data - └── mydataset - ├── annotation - │   ├── train.json - │   ├── test.json - │   └── val.json - ├── train - │   ├── a.jpg - │   └── ... - ├── test - │   ├── b.jpg - │   └── ... - └── val - ├── c.jpg - └── ... - - We can use the below config to build datasets: - - .. code:: python - - >>> from mmpretrain.datasets import build_dataset - >>> train_cfg = dict( - ... type="MultiTaskDataset", - ... ann_file="annotation/train.json", - ... data_root="data/mydataset", - ... # The `img_path` field in the train annotation file is relative - ... # to the `train` folder. - ... data_prefix='train', - ... ) - >>> train_dataset = build_dataset(train_cfg) - - Or we can put all files in the same folder: :: - - mmpretrain/ - └── data - └── mydataset - ├── train.json - ├── test.json - ├── val.json - ├── a.jpg - ├── b.jpg - ├── c.jpg - └── ... - - And we can use the below config to build datasets: - - .. code:: python - - >>> from mmpretrain.datasets import build_dataset - >>> train_cfg = dict( - ... type="MultiTaskDataset", - ... ann_file="train.json", - ... data_root="data/mydataset", - ... # the `data_prefix` is not required since all paths are - ... # relative to the `data_root`. - ... ) - >>> train_dataset = build_dataset(train_cfg) - - - Args: - ann_file (str): The annotation file path. It can be either absolute - path or relative path to the ``data_root``. - metainfo (dict, optional): The extra meta information. It should be - a dict with the same format as the ``"metainfo"`` field in the - annotation file. Defaults to None. - data_root (str, optional): The root path of the data directory. It's - the prefix of the ``data_prefix`` and the ``ann_file``. And it can - be a remote path like "s3://openmmlab/xxx/". Defaults to None. - data_prefix (str, optional): The base folder relative to the - ``data_root`` for the ``"img_path"`` field in the annotation file. - Defaults to None. - pipeline (Sequence[dict]): A list of dict, where each element - represents a operation defined in - :mod:`mmpretrain.datasets.pipelines`. Defaults to an empty tuple. - test_mode (bool): in train mode or test mode. Defaults to False. 
- """ - METAINFO = dict() - - def __init__(self, - ann_file: str, - metainfo: Optional[dict] = None, - data_root: Optional[str] = None, - data_prefix: Optional[str] = None, - pipeline: Sequence = (), - test_mode: bool = False): - - self.data_root = expanduser(data_root) - - # Inference the file client - if self.data_root is not None: - self.file_backend = get_file_backend(uri=self.data_root) - else: - self.file_backend = None - - self.ann_file = self._join_root(expanduser(ann_file)) - self.data_prefix = self._join_root(data_prefix) - - self.test_mode = test_mode - self.pipeline = Compose(pipeline) - self.data_list = self.load_data_list(self.ann_file, metainfo) - - def _join_root(self, path): - """Join ``self.data_root`` with the specified path. - - If the path is an absolute path, just return the path. And if the - path is None, return ``self.data_root``. - - Examples: - >>> self.data_root = 'a/b/c' - >>> self._join_root('d/e/') - 'a/b/c/d/e' - >>> self._join_root('https://openmmlab.com') - 'https://openmmlab.com' - >>> self._join_root(None) - 'a/b/c' - """ - if path is None: - return self.data_root - if isabs(path): - return path - - joined_path = self.file_backend.join_path(self.data_root, path) - return joined_path - - @classmethod - def _get_meta_info(cls, in_metainfo: dict = None) -> dict: - """Collect meta information from the dictionary of meta. - - Args: - in_metainfo (dict): Meta information dict. - - Returns: - dict: Parsed meta information. - """ - # `cls.METAINFO` will be overwritten by in_meta - metainfo = copy.deepcopy(cls.METAINFO) - if in_metainfo is None: - return metainfo - - metainfo.update(in_metainfo) - - return metainfo - - def load_data_list(self, ann_file, metainfo_override=None): - """Load annotations from an annotation file. - - Args: - ann_file (str): Absolute annotation file path if ``self.root=None`` - or relative path if ``self.root=/path/to/data/``. - - Returns: - list[dict]: A list of annotation. - """ - annotations = mmengine.load(ann_file) - if not isinstance(annotations, dict): - raise TypeError(f'The annotations loaded from annotation file ' - f'should be a dict, but got {type(annotations)}!') - if 'data_list' not in annotations: - raise ValueError('The annotation file must have the `data_list` ' - 'field.') - metainfo = annotations.get('metainfo', {}) - raw_data_list = annotations['data_list'] - - # Set meta information. - assert isinstance(metainfo, dict), 'The `metainfo` field in the '\ - f'annotation file should be a dict, but got {type(metainfo)}' - if metainfo_override is not None: - assert isinstance(metainfo_override, dict), 'The `metainfo` ' \ - f'argument should be a dict, but got {type(metainfo_override)}' - metainfo.update(metainfo_override) - self._metainfo = self._get_meta_info(metainfo) - - data_list = [] - for i, raw_data in enumerate(raw_data_list): - try: - data_list.append(self.parse_data_info(raw_data)) - except AssertionError as e: - raise RuntimeError( - f'The format check fails during parse the item {i} of ' - f'the annotation file with error: {e}') - return data_list - - def parse_data_info(self, raw_data): - """Parse raw annotation to target format. - - This method will return a dict which contains the data information of a - sample. - - Args: - raw_data (dict): Raw data information load from ``ann_file`` - - Returns: - dict: Parsed annotation. - """ - assert isinstance(raw_data, dict), \ - f'The item should be a dict, but got {type(raw_data)}' - assert 'img_path' in raw_data, \ - "The item doesn't have `img_path` field." 
- data = dict( - img_path=self._join_root(raw_data['img_path']), - gt_label=raw_data['gt_label'], - ) - return data - - @property - def metainfo(self) -> dict: - """Get meta information of dataset. - - Returns: - dict: meta information collected from ``cls.METAINFO``, - annotation file and metainfo argument during instantiation. - """ - return copy.deepcopy(self._metainfo) - - def prepare_data(self, idx): - """Get data processed by ``self.pipeline``. - - Args: - idx (int): The index of ``data_info``. - - Returns: - Any: Depends on ``self.pipeline``. - """ - results = copy.deepcopy(self.data_list[idx]) - return self.pipeline(results) - - def __len__(self): - """Get the length of the whole dataset. - - Returns: - int: The length of filtered dataset. - """ - return len(self.data_list) - - def __getitem__(self, idx): - """Get the idx-th image and data information of dataset after - ``self.pipeline``. - - Args: - idx (int): The index of of the data. - - Returns: - dict: The idx-th image and data information after - ``self.pipeline``. - """ - return self.prepare_data(idx) - - def __repr__(self): - """Print the basic information of the dataset. - - Returns: - str: Formatted string. - """ - head = 'Dataset ' + self.__class__.__name__ - body = [f'Number of samples: \t{self.__len__()}'] - if self.data_root is not None: - body.append(f'Root location: \t{self.data_root}') - body.append(f'Annotation file: \t{self.ann_file}') - if self.data_prefix is not None: - body.append(f'Prefix of images: \t{self.data_prefix}') - # -------------------- extra repr -------------------- - tasks = self.metainfo['tasks'] - body.append(f'For {len(tasks)} tasks') - for task in tasks: - body.append(f' {task} ') - # ---------------------------------------------------- - - if len(self.pipeline.transforms) > 0: - body.append('With transforms:') - for t in self.pipeline.transforms: - body.append(f' {t}') - - lines = [head] + [' ' * 4 + line for line in body] - return '\n'.join(lines) diff --git "a/spaces/Liu-LAB/GPT-academic/crazy_functions/\350\201\224\347\275\221\347\232\204ChatGPT.py" "b/spaces/Liu-LAB/GPT-academic/crazy_functions/\350\201\224\347\275\221\347\232\204ChatGPT.py" deleted file mode 100644 index 4ed9aebf97834418d145ab1dd5b22ca7f4f9b214..0000000000000000000000000000000000000000 --- "a/spaces/Liu-LAB/GPT-academic/crazy_functions/\350\201\224\347\275\221\347\232\204ChatGPT.py" +++ /dev/null @@ -1,106 +0,0 @@ -from toolbox import CatchException, update_ui -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive, input_clipping -import requests -from bs4 import BeautifulSoup -from request_llm.bridge_all import model_info - -def google(query, proxies): - query = query # 在此处替换您要搜索的关键词 - url = f"https://www.google.com/search?q={query}" - headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36'} - response = requests.get(url, headers=headers, proxies=proxies) - soup = BeautifulSoup(response.content, 'html.parser') - results = [] - for g in soup.find_all('div', class_='g'): - anchors = g.find_all('a') - if anchors: - link = anchors[0]['href'] - if link.startswith('/url?q='): - link = link[7:] - if not link.startswith('http'): - continue - title = g.find('h3').text - item = {'title': title, 'link': link} - results.append(item) - - for r in results: - print(r['link']) - return results - -def scrape_text(url, proxies) -> str: - """Scrape text from a webpage - - Args: - url (str): The URL to scrape text from - - Returns: 
- str: The scraped text - """ - headers = { - 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36', - 'Content-Type': 'text/plain', - } - try: - response = requests.get(url, headers=headers, proxies=proxies, timeout=8) - if response.encoding == "ISO-8859-1": response.encoding = response.apparent_encoding - except: - return "无法连接到该网页" - soup = BeautifulSoup(response.text, "html.parser") - for script in soup(["script", "style"]): - script.extract() - text = soup.get_text() - lines = (line.strip() for line in text.splitlines()) - chunks = (phrase.strip() for line in lines for phrase in line.split(" ")) - text = "\n".join(chunk for chunk in chunks if chunk) - return text - -@CatchException -def 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - """ - txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径 - llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行 - plugin_kwargs 插件模型的参数,暂时没有用武之地 - chatbot 聊天显示框的句柄,用于显示给用户 - history 聊天历史,前情提要 - system_prompt 给gpt的静默提醒 - web_port 当前软件运行的端口号 - """ - history = [] # 清空历史,以免输入溢出 - chatbot.append((f"请结合互联网信息回答以下问题:{txt}", - "[Local Message] 请注意,您正在调用一个[函数插件]的模板,该模板可以实现ChatGPT联网信息综合。该函数面向希望实现更多有趣功能的开发者,它可以作为创建新功能函数的模板。您若希望分享新的功能模组,请不吝PR!")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新 - - # ------------- < 第1步:爬取搜索引擎的结果 > ------------- - from toolbox import get_conf - proxies, = get_conf('proxies') - urls = google(txt, proxies) - history = [] - if len(urls) == 0: - chatbot.append((f"结论:{txt}", - "[Local Message] 受到google限制,无法从google获取信息!")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新 - return - # ------------- < 第2步:依次访问网页 > ------------- - max_search_result = 5 # 最多收纳多少个网页的结果 - for index, url in enumerate(urls[:max_search_result]): - res = scrape_text(url['link'], proxies) - history.extend([f"第{index}份搜索结果:", res]) - chatbot.append([f"第{index}份搜索结果:", res[:500]+"......"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新 - - # ------------- < 第3步:ChatGPT综合 > ------------- - i_say = f"从以上搜索结果中抽取信息,然后回答问题:{txt}" - i_say, history = input_clipping( # 裁剪输入,从最长的条目开始裁剪,防止爆token - inputs=i_say, - history=history, - max_token_limit=model_info[llm_kwargs['llm_model']]['max_token']*3//4 - ) - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, inputs_show_user=i_say, - llm_kwargs=llm_kwargs, chatbot=chatbot, history=history, - sys_prompt="请从给定的若干条搜索结果中抽取信息,对最相关的两个搜索结果进行总结,然后回答问题。" - ) - chatbot[-1] = (i_say, gpt_say) - history.append(i_say);history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新 - diff --git a/spaces/MLVKU/Human_Object_Interaction/hotr/data/evaluators/vcoco_eval.py b/spaces/MLVKU/Human_Object_Interaction/hotr/data/evaluators/vcoco_eval.py deleted file mode 100644 index e922f4f55e796f2f5a02992845bd5b55da674a13..0000000000000000000000000000000000000000 --- a/spaces/MLVKU/Human_Object_Interaction/hotr/data/evaluators/vcoco_eval.py +++ /dev/null @@ -1,57 +0,0 @@ -# Copyright (c) KakaoBrain, Inc. and its affiliates. All Rights Reserved -""" -V-COCO evaluator that works in distributed mode. 
-""" -import os -import numpy as np -import torch - -from hotr.util.misc import all_gather -from hotr.metrics.vcoco.ap_role import APRole -from functools import partial - -def init_vcoco_evaluators(human_act_name, object_act_name): - role_eval1 = APRole(act_name=object_act_name, scenario_flag=True, iou_threshold=0.5) - role_eval2 = APRole(act_name=object_act_name, scenario_flag=False, iou_threshold=0.5) - - return role_eval1, role_eval2 - -class VCocoEvaluator(object): - def __init__(self, args): - self.img_ids = [] - self.eval_imgs = [] - self.role_eval1, self.role_eval2 = init_vcoco_evaluators(args.human_actions, args.object_actions) - self.num_human_act = args.num_human_act - self.action_idx = args.valid_ids - - def update(self, outputs): - img_ids = list(np.unique(list(outputs.keys()))) - for img_num, img_id in enumerate(img_ids): - print(f"Evaluating Score Matrix... : [{(img_num+1):>4}/{len(img_ids):<4}]" ,flush=True, end="\r") - prediction = outputs[img_id]['prediction'] - target = outputs[img_id]['target'] - - # score with prediction - hbox, hcat, obox, ocat = list(map(lambda x: prediction[x], \ - ['h_box', 'h_cat', 'o_box', 'o_cat'])) - - assert 'pair_score' in prediction - score = prediction['pair_score'] - - hbox, hcat, obox, ocat, score =\ - list(map(lambda x: x.cpu().numpy(), [hbox, hcat, obox, ocat, score])) - - # ground-truth - gt_h_inds = (target['labels'] == 1) - gt_h_box = target['boxes'][gt_h_inds, :4].cpu().numpy() - gt_h_act = target['inst_actions'][gt_h_inds, :self.num_human_act].cpu().numpy() - - gt_p_box = target['pair_boxes'].cpu().numpy() - gt_p_act = target['pair_actions'].cpu().numpy() - - score = score[self.action_idx, :, :] - gt_p_act = gt_p_act[:, self.action_idx] - - self.role_eval1.add_data(hbox, obox, score, gt_h_box, gt_h_act, gt_p_box, gt_p_act) - self.role_eval2.add_data(hbox, obox, score, gt_h_box, gt_h_act, gt_p_box, gt_p_act) - self.img_ids.append(img_id) \ No newline at end of file diff --git a/spaces/Making/goofyai-Leonardo_Ai_Style_Illustration/app.py b/spaces/Making/goofyai-Leonardo_Ai_Style_Illustration/app.py deleted file mode 100644 index 27bc4e3218de6591df27bb2a2201ef650e09ae93..0000000000000000000000000000000000000000 --- a/spaces/Making/goofyai-Leonardo_Ai_Style_Illustration/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/goofyai/Leonardo_Ai_Style_Illustration").launch() \ No newline at end of file diff --git a/spaces/MingGatsby/Grounding_DINO_demo/groundingdino/config/GroundingDINO_SwinT_OGC.py b/spaces/MingGatsby/Grounding_DINO_demo/groundingdino/config/GroundingDINO_SwinT_OGC.py deleted file mode 100644 index 9158d5f6260ec74bded95377d382387430d7cd70..0000000000000000000000000000000000000000 --- a/spaces/MingGatsby/Grounding_DINO_demo/groundingdino/config/GroundingDINO_SwinT_OGC.py +++ /dev/null @@ -1,43 +0,0 @@ -batch_size = 1 -modelname = "groundingdino" -backbone = "swin_T_224_1k" -position_embedding = "sine" -pe_temperatureH = 20 -pe_temperatureW = 20 -return_interm_indices = [1, 2, 3] -backbone_freeze_keywords = None -enc_layers = 6 -dec_layers = 6 -pre_norm = False -dim_feedforward = 2048 -hidden_dim = 256 -dropout = 0.0 -nheads = 8 -num_queries = 900 -query_dim = 4 -num_patterns = 0 -num_feature_levels = 4 -enc_n_points = 4 -dec_n_points = 4 -two_stage_type = "standard" -two_stage_bbox_embed_share = False -two_stage_class_embed_share = False -transformer_activation = "relu" -dec_pred_bbox_embed_share = True -dn_box_noise_scale = 1.0 -dn_label_noise_ratio = 0.5 -dn_label_coef = 1.0 -dn_bbox_coef 
= 1.0 -embed_init_tgt = True -dn_labelbook_size = 2000 -max_text_len = 256 -text_encoder_type = "bert-base-uncased" -use_text_enhancer = True -use_fusion_layer = True -use_checkpoint = True -use_transformer_ckpt = True -use_text_cross_attention = True -text_dropout = 0.0 -fusion_dropout = 0.0 -fusion_droppath = 0.1 -sub_sentence_present = True diff --git a/spaces/MirageML/lowpoly-cyberpunk/README.md b/spaces/MirageML/lowpoly-cyberpunk/README.md deleted file mode 100644 index a69a778e413bdfb1adff7030b23c2cf243efd329..0000000000000000000000000000000000000000 --- a/spaces/MirageML/lowpoly-cyberpunk/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Lowpoly Cyberpunk -emoji: 🐢 -colorFrom: indigo -colorTo: purple -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Miuzarte/SUI-svc-4.0/modules/mel_processing.py b/spaces/Miuzarte/SUI-svc-4.0/modules/mel_processing.py deleted file mode 100644 index 99c5b35beb83f3b288af0fac5b49ebf2c69f062c..0000000000000000000000000000000000000000 --- a/spaces/Miuzarte/SUI-svc-4.0/modules/mel_processing.py +++ /dev/null @@ -1,112 +0,0 @@ -import math -import os -import random -import torch -from torch import nn -import torch.nn.functional as F -import torch.utils.data -import numpy as np -import librosa -import librosa.util as librosa_util -from librosa.util import normalize, pad_center, tiny -from scipy.signal import get_window -from scipy.io.wavfile import read -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, 
device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/recognizers/__init__.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/recognizers/__init__.py deleted file mode 100644 index f590e60dc695b21da4ed859e25a5dbecc0551601..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/recognizers/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .abinet import ABINet -from .aster import ASTER -from .base import BaseRecognizer -from .crnn import CRNN -from .encoder_decoder_recognizer import EncoderDecoderRecognizer -from .encoder_decoder_recognizer_tta import EncoderDecoderRecognizerTTAModel -from .master import MASTER -from .nrtr import NRTR -from .robust_scanner import RobustScanner -from .sar import SARNet -from .satrn import SATRN -from .svtr import SVTR -from .maerec import MAERec - -__all__ = [ - 'BaseRecognizer', 'EncoderDecoderRecognizer', 'CRNN', 'SARNet', 'NRTR', - 'RobustScanner', 'SATRN', 'ABINet', 'MASTER', 'SVTR', 'ASTER', - 'EncoderDecoderRecognizerTTAModel', 'MAERec' -] diff --git a/spaces/MrBodean/VoiceClone/encoder/data_objects/__init__.py b/spaces/MrBodean/VoiceClone/encoder/data_objects/__init__.py deleted file mode 100644 index ef04ade68544d0477a7f6deb4e7d51e97f592910..0000000000000000000000000000000000000000 --- a/spaces/MrBodean/VoiceClone/encoder/data_objects/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from encoder.data_objects.speaker_verification_dataset import SpeakerVerificationDataset -from encoder.data_objects.speaker_verification_dataset import SpeakerVerificationDataLoader diff --git a/spaces/Msp/docVQA_donut/README.md b/spaces/Msp/docVQA_donut/README.md deleted file mode 100644 index 4c811fb912ce8a61ae5559eeee649e72e200c295..0000000000000000000000000000000000000000 --- a/spaces/Msp/docVQA_donut/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: DocVQA Donut -emoji: 😻 -colorFrom: blue -colorTo: pink -sdk: gradio -sdk_version: 3.1.7 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/NAACL2022/GlobEnc/README.md b/spaces/NAACL2022/GlobEnc/README.md deleted file mode 100644 index 42fbedb3e5232e5ad2ea35bf2b3ef45b1e4ee277..0000000000000000000000000000000000000000 --- a/spaces/NAACL2022/GlobEnc/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: GlobEnc -emoji: 🔎 -colorFrom: yellow -colorTo: pink -sdk: gradio -sdk_version: 3.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/NCTCMumbai/NCTC/models/official/utils/flags/README.md b/spaces/NCTCMumbai/NCTC/models/official/utils/flags/README.md deleted file mode 100644 index 18160f780a0928a2f28ab9a8e66433938179d581..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/utils/flags/README.md +++ /dev/null @@ -1,97 +0,0 @@ -# Adding Abseil (absl) flags quickstart -## Defining a flag -absl flag definitions are similar to argparse, although they are defined on a global namespace. - -For instance defining a string flag looks like: -```$xslt -from absl import flags -flags.DEFINE_string( - name="my_flag", - default="a_sensible_default", - help="Here is what this flag does." -) -``` - -All three arguments are required, but default may be `None`. A common optional argument is -short_name for defining abreviations. Certain `DEFINE_*` methods will have other required arguments. -For instance `DEFINE_enum` requires the `enum_values` argument to be specified. - -## Key Flags -absl has the concept of a key flag. Any flag defined in `__main__` is considered a key flag by -default. Key flags are displayed in `--help`, others only appear in `--helpfull`. In order to -handle key flags that are defined outside the module in question, absl provides the -`flags.adopt_module_key_flags()` method. 
This adds the key flags of a different module to one's own -key flags. For example: -```$xslt -File: flag_source.py ---------------------------------------- - -from absl import flags -flags.DEFINE_string(name="my_flag", default="abc", help="a flag.") -``` - -```$xslt -File: my_module.py ---------------------------------------- - -from absl import app as absl_app -from absl import flags - -import flag_source - -flags.adopt_module_key_flags(flag_source) - -def main(_): - pass - -absl_app.run(main, [__file__, "-h"] -``` - -when `my_module.py` is run it will show the help text for `my_flag`. Because not all flags defined -in a file are equally important, `official/utils/flags/core.py` (generally imported as flags_core) -provides an abstraction for handling key flag declaration in an easy way through the -`register_key_flags_in_core()` function, which allows a module to make a single -`adopt_key_flags(flags_core)` call when using the util flag declaration functions. - -## Validators -Often the constraints on a flag are complicated. absl provides the validator decorator to allow -one to mark a function as a flag validation function. Suppose we want users to provide a flag -which is a palindrome. - -```$xslt -from absl import flags - -flags.DEFINE_string(name="pal_flag", short_name="pf", default="", help="Give me a palindrome") - -@flags.validator("pal_flag") -def _check_pal(provided_pal_flag): - return provided_pal_flag == provided_pal_flag[::-1] - -``` - -Validators take the form that returning True (truthy) passes, and all others -(False, None, exception) fail. - -## Testing -To test using absl, simply declare flags in the setupClass method of TensorFlow's TestCase. - -```$xslt -from absl import flags -import tensorflow as tf - -def define_flags(): - flags.DEFINE_string(name="test_flag", default="abc", help="an example flag") - - -class BaseTester(unittest.TestCase): - - @classmethod - def setUpClass(cls): - super(BaseTester, cls).setUpClass() - define_flags() - - def test_trivial(self): - flags_core.parse_flags([__file__, "test_flag", "def"]) - self.AssertEqual(flags.FLAGS.test_flag, "def") - -``` diff --git a/spaces/NCTCMumbai/NCTC/models/official/vision/detection/evaluation/coco_evaluator.py b/spaces/NCTCMumbai/NCTC/models/official/vision/detection/evaluation/coco_evaluator.py deleted file mode 100644 index dc56a9332784dd66d5393bbf0d4c996fe5141c6d..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/vision/detection/evaluation/coco_evaluator.py +++ /dev/null @@ -1,343 +0,0 @@ -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""The COCO-style evaluator. - -The following snippet demonstrates the use of interfaces: - - evaluator = COCOEvaluator(...) - for _ in range(num_evals): - for _ in range(num_batches_per_eval): - predictions, groundtruth = predictor.predict(...) # pop a batch. 
- evaluator.update(predictions, groundtruths) # aggregate internal stats. - evaluator.evaluate() # finish one full eval. - -See also: https://github.com/cocodataset/cocoapi/ -""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import atexit -import tempfile -import numpy as np -from absl import logging -from pycocotools import cocoeval -import six -import tensorflow as tf - -from official.vision.detection.evaluation import coco_utils -from official.vision.detection.utils import class_utils - - -class MetricWrapper(object): - # This is only a wrapper for COCO metric and works on for numpy array. So it - # doesn't inherit from tf.keras.layers.Layer or tf.keras.metrics.Metric. - - def __init__(self, evaluator): - self._evaluator = evaluator - - def update_state(self, y_true, y_pred): - labels = tf.nest.map_structure(lambda x: x.numpy(), y_true) - outputs = tf.nest.map_structure(lambda x: x.numpy(), y_pred) - groundtruths = {} - predictions = {} - for key, val in outputs.items(): - if isinstance(val, tuple): - val = np.concatenate(val) - predictions[key] = val - for key, val in labels.items(): - if isinstance(val, tuple): - val = np.concatenate(val) - groundtruths[key] = val - self._evaluator.update(predictions, groundtruths) - - def result(self): - return self._evaluator.evaluate() - - def reset_states(self): - return self._evaluator.reset() - - -class COCOEvaluator(object): - """COCO evaluation metric class.""" - - def __init__(self, annotation_file, include_mask, need_rescale_bboxes=True): - """Constructs COCO evaluation class. - - The class provides the interface to metrics_fn in TPUEstimator. The - _update_op() takes detections from each image and push them to - self.detections. The _evaluate() loads a JSON file in COCO annotation format - as the groundtruths and runs COCO evaluation. - - Args: - annotation_file: a JSON file that stores annotations of the eval dataset. - If `annotation_file` is None, groundtruth annotations will be loaded - from the dataloader. - include_mask: a boolean to indicate whether or not to include the mask - eval. - need_rescale_bboxes: If true bboxes in `predictions` will be rescaled back - to absolute values (`image_info` is needed in this case). 
- """ - if annotation_file: - if annotation_file.startswith('gs://'): - _, local_val_json = tempfile.mkstemp(suffix='.json') - tf.io.gfile.remove(local_val_json) - - tf.io.gfile.copy(annotation_file, local_val_json) - atexit.register(tf.io.gfile.remove, local_val_json) - else: - local_val_json = annotation_file - self._coco_gt = coco_utils.COCOWrapper( - eval_type=('mask' if include_mask else 'box'), - annotation_file=local_val_json) - self._annotation_file = annotation_file - self._include_mask = include_mask - self._metric_names = [ - 'AP', 'AP50', 'AP75', 'APs', 'APm', 'APl', 'ARmax1', 'ARmax10', - 'ARmax100', 'ARs', 'ARm', 'ARl' - ] - self._required_prediction_fields = [ - 'source_id', 'num_detections', 'detection_classes', 'detection_scores', - 'detection_boxes' - ] - self._need_rescale_bboxes = need_rescale_bboxes - if self._need_rescale_bboxes: - self._required_prediction_fields.append('image_info') - self._required_groundtruth_fields = [ - 'source_id', 'height', 'width', 'classes', 'boxes' - ] - if self._include_mask: - mask_metric_names = ['mask_' + x for x in self._metric_names] - self._metric_names.extend(mask_metric_names) - self._required_prediction_fields.extend(['detection_masks']) - self._required_groundtruth_fields.extend(['masks']) - - self.reset() - - def reset(self): - """Resets internal states for a fresh run.""" - self._predictions = {} - if not self._annotation_file: - self._groundtruths = {} - - def evaluate(self): - """Evaluates with detections from all images with COCO API. - - Returns: - coco_metric: float numpy array with shape [24] representing the - coco-style evaluation metrics (box and mask). - """ - if not self._annotation_file: - logging.info('Thre is no annotation_file in COCOEvaluator.') - gt_dataset = coco_utils.convert_groundtruths_to_coco_dataset( - self._groundtruths) - coco_gt = coco_utils.COCOWrapper( - eval_type=('mask' if self._include_mask else 'box'), - gt_dataset=gt_dataset) - else: - logging.info('Using annotation file: %s', self._annotation_file) - coco_gt = self._coco_gt - coco_predictions = coco_utils.convert_predictions_to_coco_annotations( - self._predictions) - coco_dt = coco_gt.loadRes(predictions=coco_predictions) - image_ids = [ann['image_id'] for ann in coco_predictions] - - coco_eval = cocoeval.COCOeval(coco_gt, coco_dt, iouType='bbox') - coco_eval.params.imgIds = image_ids - coco_eval.evaluate() - coco_eval.accumulate() - coco_eval.summarize() - coco_metrics = coco_eval.stats - - if self._include_mask: - mcoco_eval = cocoeval.COCOeval(coco_gt, coco_dt, iouType='segm') - mcoco_eval.params.imgIds = image_ids - mcoco_eval.evaluate() - mcoco_eval.accumulate() - mcoco_eval.summarize() - mask_coco_metrics = mcoco_eval.stats - - if self._include_mask: - metrics = np.hstack((coco_metrics, mask_coco_metrics)) - else: - metrics = coco_metrics - - # Cleans up the internal variables in order for a fresh eval next time. 
- self.reset() - - metrics_dict = {} - for i, name in enumerate(self._metric_names): - metrics_dict[name] = metrics[i].astype(np.float32) - return metrics_dict - - def _process_predictions(self, predictions): - image_scale = np.tile(predictions['image_info'][:, 2:3, :], (1, 1, 2)) - predictions['detection_boxes'] = ( - predictions['detection_boxes'].astype(np.float32)) - predictions['detection_boxes'] /= image_scale - if 'detection_outer_boxes' in predictions: - predictions['detection_outer_boxes'] = ( - predictions['detection_outer_boxes'].astype(np.float32)) - predictions['detection_outer_boxes'] /= image_scale - - def update(self, predictions, groundtruths=None): - """Update and aggregate detection results and groundtruth data. - - Args: - predictions: a dictionary of numpy arrays including the fields below. - See different parsers under `../dataloader` for more details. - Required fields: - - source_id: a numpy array of int or string of shape [batch_size]. - - image_info [if `need_rescale_bboxes` is True]: a numpy array of - float of shape [batch_size, 4, 2]. - - num_detections: a numpy array of - int of shape [batch_size]. - - detection_boxes: a numpy array of float of shape [batch_size, K, 4]. - - detection_classes: a numpy array of int of shape [batch_size, K]. - - detection_scores: a numpy array of float of shape [batch_size, K]. - Optional fields: - - detection_masks: a numpy array of float of shape - [batch_size, K, mask_height, mask_width]. - groundtruths: a dictionary of numpy arrays including the fields below. - See also different parsers under `../dataloader` for more details. - Required fields: - - source_id: a numpy array of int or string of shape [batch_size]. - - height: a numpy array of int of shape [batch_size]. - - width: a numpy array of int of shape [batch_size]. - - num_detections: a numpy array of int of shape [batch_size]. - - boxes: a numpy array of float of shape [batch_size, K, 4]. - - classes: a numpy array of int of shape [batch_size, K]. - Optional fields: - - is_crowds: a numpy array of int of shape [batch_size, K]. If the - field is absent, it is assumed that this instance is not crowd. - - areas: a numy array of float of shape [batch_size, K]. If the - field is absent, the area is calculated using either boxes or - masks depending on which one is available. - - masks: a numpy array of float of shape - [batch_size, K, mask_height, mask_width], - - Raises: - ValueError: if the required prediction or groundtruth fields are not - present in the incoming `predictions` or `groundtruths`. - """ - for k in self._required_prediction_fields: - if k not in predictions: - raise ValueError( - 'Missing the required key `{}` in predictions!'.format(k)) - if self._need_rescale_bboxes: - self._process_predictions(predictions) - for k, v in six.iteritems(predictions): - if k not in self._predictions: - self._predictions[k] = [v] - else: - self._predictions[k].append(v) - - if not self._annotation_file: - assert groundtruths - for k in self._required_groundtruth_fields: - if k not in groundtruths: - raise ValueError( - 'Missing the required key `{}` in groundtruths!'.format(k)) - for k, v in six.iteritems(groundtruths): - if k not in self._groundtruths: - self._groundtruths[k] = [v] - else: - self._groundtruths[k].append(v) - - -class ShapeMaskCOCOEvaluator(COCOEvaluator): - """COCO evaluation metric class for ShapeMask.""" - - def __init__(self, mask_eval_class, **kwargs): - """Constructs COCO evaluation class. 
- - The class provides the interface to metrics_fn in TPUEstimator. The - _update_op() takes detections from each image and push them to - self.detections. The _evaluate() loads a JSON file in COCO annotation format - as the groundtruths and runs COCO evaluation. - - Args: - mask_eval_class: the set of classes for mask evaluation. - **kwargs: other keyword arguments passed to the parent class initializer. - """ - super(ShapeMaskCOCOEvaluator, self).__init__(**kwargs) - self._mask_eval_class = mask_eval_class - self._eval_categories = class_utils.coco_split_class_ids(mask_eval_class) - if mask_eval_class != 'all': - self._metric_names = [ - x.replace('mask', 'novel_mask') for x in self._metric_names - ] - - def evaluate(self): - """Evaluates with detections from all images with COCO API. - - Returns: - coco_metric: float numpy array with shape [24] representing the - coco-style evaluation metrics (box and mask). - """ - if not self._annotation_file: - gt_dataset = coco_utils.convert_groundtruths_to_coco_dataset( - self._groundtruths) - coco_gt = coco_utils.COCOWrapper( - eval_type=('mask' if self._include_mask else 'box'), - gt_dataset=gt_dataset) - else: - coco_gt = self._coco_gt - coco_predictions = coco_utils.convert_predictions_to_coco_annotations( - self._predictions) - coco_dt = coco_gt.loadRes(predictions=coco_predictions) - image_ids = [ann['image_id'] for ann in coco_predictions] - - coco_eval = cocoeval.COCOeval(coco_gt, coco_dt, iouType='bbox') - coco_eval.params.imgIds = image_ids - coco_eval.evaluate() - coco_eval.accumulate() - coco_eval.summarize() - coco_metrics = coco_eval.stats - - if self._include_mask: - mcoco_eval = cocoeval.COCOeval(coco_gt, coco_dt, iouType='segm') - mcoco_eval.params.imgIds = image_ids - mcoco_eval.evaluate() - mcoco_eval.accumulate() - mcoco_eval.summarize() - if self._mask_eval_class == 'all': - metrics = np.hstack((coco_metrics, mcoco_eval.stats)) - else: - mask_coco_metrics = mcoco_eval.category_stats - val_catg_idx = np.isin(mcoco_eval.params.catIds, - self._eval_categories) - # Gather the valid evaluation of the eval categories. - if np.any(val_catg_idx): - mean_val_metrics = [] - for mid in range(len(self._metric_names) // 2): - mean_val_metrics.append( - np.nanmean(mask_coco_metrics[mid][val_catg_idx])) - - mean_val_metrics = np.array(mean_val_metrics) - else: - mean_val_metrics = np.zeros(len(self._metric_names) // 2) - metrics = np.hstack((coco_metrics, mean_val_metrics)) - else: - metrics = coco_metrics - - # Cleans up the internal variables in order for a fresh eval next time. - self.reset() - - metrics_dict = {} - for i, name in enumerate(self._metric_names): - metrics_dict[name] = metrics[i].astype(np.float32) - return metrics_dict diff --git a/spaces/NCTCMumbai/NCTC/models/research/compression/entropy_coder/core/entropy_coder_single.py b/spaces/NCTCMumbai/NCTC/models/research/compression/entropy_coder/core/entropy_coder_single.py deleted file mode 100644 index 8a61b488b6bdd11e1cff4a2da672129240eb7240..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/research/compression/entropy_coder/core/entropy_coder_single.py +++ /dev/null @@ -1,116 +0,0 @@ -# Copyright 2017 The TensorFlow Authors All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== - -"""Compute the additional compression ratio after entropy coding.""" - -import io -import os - -import numpy as np -import tensorflow as tf - -import config_helper - -# pylint: disable=unused-import -from entropy_coder.all_models import all_models -# pylint: enable=unused-import -from entropy_coder.model import model_factory - - -# Checkpoint used to restore the model parameters. -tf.app.flags.DEFINE_string('checkpoint', None, - """Model checkpoint.""") - -# Model selection and configuration. -tf.app.flags.DEFINE_string('model', None, """Underlying encoder model.""") -tf.app.flags.DEFINE_string('model_config', None, - """Model config protobuf given as text file.""") - -# File holding the binary codes. -tf.flags.DEFINE_string('input_codes', None, 'Location of binary code file.') - -FLAGS = tf.flags.FLAGS - - -def main(_): - if (FLAGS.input_codes is None or FLAGS.model is None): - print ('\nUsage: python entropy_coder_single.py --model=progressive ' - '--model_config=model_config.json' - '--iteration=15\n\n') - return - - #if FLAGS.iteration < -1 or FLAGS.iteration > 15: - # print ('\n--iteration must be between 0 and 15 inclusive, or -1 to infer ' - # 'from file.\n') - # return - #iteration = FLAGS.iteration - - if not tf.gfile.Exists(FLAGS.input_codes): - print('\nInput codes not found.\n') - return - - with tf.gfile.FastGFile(FLAGS.input_codes, 'rb') as code_file: - contents = code_file.read() - loaded_codes = np.load(io.BytesIO(contents)) - assert ['codes', 'shape'] not in loaded_codes.files - loaded_shape = loaded_codes['shape'] - loaded_array = loaded_codes['codes'] - - # Unpack and recover code shapes. - unpacked_codes = np.reshape(np.unpackbits(loaded_array) - [:np.prod(loaded_shape)], - loaded_shape) - - numpy_int_codes = unpacked_codes.transpose([1, 2, 3, 0, 4]) - numpy_int_codes = numpy_int_codes.reshape([numpy_int_codes.shape[0], - numpy_int_codes.shape[1], - numpy_int_codes.shape[2], - -1]) - numpy_codes = numpy_int_codes.astype(np.float32) * 2.0 - 1.0 - - with tf.Graph().as_default() as graph: - # TF tensor to hold the binary codes to losslessly compress. - batch_size = 1 - codes = tf.placeholder(tf.float32, shape=numpy_codes.shape) - - # Create the entropy coder model. - global_step = None - optimizer = None - model = model_factory.GetModelRegistry().CreateModel(FLAGS.model) - model_config_string = config_helper.GetConfigString(FLAGS.model_config) - model.Initialize(global_step, optimizer, model_config_string) - model.BuildGraph(codes) - - saver = tf.train.Saver(sharded=True, keep_checkpoint_every_n_hours=12.0) - - with tf.Session(graph=graph) as sess: - # Initialize local variables. - sess.run(tf.local_variables_initializer()) - - # Restore model variables. 
- saver.restore(sess, FLAGS.checkpoint) - - tf_tensors = { - 'code_length': model.average_code_length - } - feed_dict = {codes: numpy_codes} - np_tensors = sess.run(tf_tensors, feed_dict=feed_dict) - - print('Additional compression ratio: {}'.format( - np_tensors['code_length'])) - - -if __name__ == '__main__': - tf.app.run() diff --git a/spaces/NeonLion92/Chat-and-Battle-with-Open-LLMs-Neon92/style.css b/spaces/NeonLion92/Chat-and-Battle-with-Open-LLMs-Neon92/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/NeonLion92/Chat-and-Battle-with-Open-LLMs-Neon92/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/NeuralInternet/Text-to-Video_Playground/style.css b/spaces/NeuralInternet/Text-to-Video_Playground/style.css deleted file mode 100644 index c4739b4ea5fc35e774a049e3dacc443f7f0eac19..0000000000000000000000000000000000000000 --- a/spaces/NeuralInternet/Text-to-Video_Playground/style.css +++ /dev/null @@ -1,3 +0,0 @@ -h1 { - text-align: center; -} diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/evaluation/eval_sp.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/evaluation/eval_sp.py deleted file mode 100644 index 702c4980389624f788abc0b42cdf54757a52512f..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/evaluation/eval_sp.py +++ /dev/null @@ -1,131 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- - -""" -Signal processing-based evaluation using waveforms -""" - -import csv -import numpy as np -import os.path as op - -import torch -import tqdm -from tabulate import tabulate -import torchaudio - -from examples.speech_synthesis.utils import batch_mel_spectral_distortion -from fairseq.tasks.text_to_speech import batch_mel_cepstral_distortion - - -def load_eval_spec(path): - with open(path) as f: - reader = csv.DictReader(f, delimiter='\t') - samples = list(reader) - return samples - - -def eval_distortion(samples, distortion_fn, device="cuda"): - nmiss = 0 - results = [] - for sample in tqdm.tqdm(samples): - if not op.isfile(sample["ref"]) or not op.isfile(sample["syn"]): - nmiss += 1 - results.append(None) - continue - # assume single channel - yref, sr = torchaudio.load(sample["ref"]) - ysyn, _sr = torchaudio.load(sample["syn"]) - yref, ysyn = yref[0].to(device), ysyn[0].to(device) - assert sr == _sr, f"{sr} != {_sr}" - - distortion, extra = distortion_fn([yref], [ysyn], sr, None)[0] - _, _, _, _, _, pathmap = extra - nins = torch.sum(pathmap.sum(dim=1) - 1) # extra frames in syn - ndel = torch.sum(pathmap.sum(dim=0) - 1) # missing frames from syn - results.append( - (distortion.item(), # path distortion - pathmap.size(0), # yref num frames - pathmap.size(1), # ysyn num frames - pathmap.sum().item(), # path length - nins.item(), # insertion - ndel.item(), # deletion - ) - ) - return results - - -def eval_mel_cepstral_distortion(samples, device="cuda"): - return eval_distortion(samples, batch_mel_cepstral_distortion, device) - - -def eval_mel_spectral_distortion(samples, device="cuda"): - return eval_distortion(samples, batch_mel_spectral_distortion, device) - - -def print_results(results, show_bin): - results = np.array(list(filter(lambda x: x is not None, results))) - - np.set_printoptions(precision=3) - - def _print_result(results): - dist, dur_ref, dur_syn, dur_ali, nins, ndel = results.sum(axis=0) - res = { - "nutt": len(results), - "dist": dist, - "dur_ref": int(dur_ref), - "dur_syn": int(dur_syn), - "dur_ali": int(dur_ali), - "dist_per_ref_frm": dist/dur_ref, - "dist_per_syn_frm": dist/dur_syn, - "dist_per_ali_frm": dist/dur_ali, - "ins": nins/dur_ref, - "del": ndel/dur_ref, - } - print(tabulate( - [res.values()], - res.keys(), - floatfmt=".4f" - )) - - print(">>>> ALL") - _print_result(results) - - if show_bin: - edges = [0, 200, 400, 600, 800, 1000, 2000, 4000] - for i in range(1, len(edges)): - mask = np.logical_and(results[:, 1] >= edges[i-1], - results[:, 1] < edges[i]) - if not mask.any(): - continue - bin_results = results[mask] - print(f">>>> ({edges[i-1]}, {edges[i]})") - _print_result(bin_results) - - -def main(eval_spec, mcd, msd, show_bin): - samples = load_eval_spec(eval_spec) - device = "cpu" - if mcd: - print("===== Evaluate Mean Cepstral Distortion =====") - results = eval_mel_cepstral_distortion(samples, device) - print_results(results, show_bin) - if msd: - print("===== Evaluate Mean Spectral Distortion =====") - results = eval_mel_spectral_distortion(samples, device) - print_results(results, show_bin) - - -if __name__ == "__main__": - import argparse - parser = argparse.ArgumentParser() - parser.add_argument("eval_spec") - parser.add_argument("--mcd", action="store_true") - parser.add_argument("--msd", action="store_true") - parser.add_argument("--show-bin", action="store_true") - args = parser.parse_args() - - main(args.eval_spec, args.mcd, args.msd, args.show_bin) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/distributed/utils.py 
b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/distributed/utils.py deleted file mode 100644 index dbf318e7035603c1294eb45af7e98097df36289d..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/distributed/utils.py +++ /dev/null @@ -1,826 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import io -import logging -import os -import pickle -import random -import socket -import struct -import subprocess -import warnings -from argparse import Namespace -from collections import OrderedDict -from dataclasses import dataclass -from typing import Any, Dict, List, Mapping, Optional -import sys -import time - -import torch -import torch.distributed as dist -from fairseq.dataclass.configs import DistributedTrainingConfig, FairseqConfig -from omegaconf import open_dict - -try: - import torch_xla.core.xla_model as xm -except ImportError: - xm = None - - -# Flag to indicate if we're using Megatron -# NOTE: this is a temporary hack until we move away from Megatron's model parallel init -_USE_MEGATRON = False - -# Whether to use XLA ops (e.g., on TPUs) instead of CUDA ops. -_USE_XLA = False - - -logger = logging.getLogger(__name__) - - -def is_master(cfg: DistributedTrainingConfig): - return cfg.distributed_rank == 0 - - -def infer_init_method(cfg: DistributedTrainingConfig, force_distributed=False): - if cfg.distributed_init_method is not None or cfg.tpu: - return - - num_pipelines_per_node = None - if cfg.pipeline_model_parallel: - num_pipeline_devices, num_pipelines_per_node = _pipeline_parallel_pre_init(cfg) - - if all( - key in os.environ - for key in ["MASTER_ADDR", "MASTER_PORT", "WORLD_SIZE", "RANK"] - ): - # support torch.distributed.launch - _infer_torch_distributed_launch_init(cfg) - elif cfg.distributed_port > 0: - # we can determine the init method automatically for Slurm - _infer_slurm_init(cfg, num_pipelines_per_node) - elif cfg.distributed_world_size > 1 or force_distributed: - # fallback for single node with multiple GPUs - _infer_single_node_init(cfg) - - if cfg.pipeline_model_parallel: - _pipeline_parallel_post_init(cfg, num_pipeline_devices, num_pipelines_per_node) - elif not cfg.distributed_no_spawn: - with open_dict(cfg): - cfg.distributed_num_procs = min( - torch.cuda.device_count(), cfg.distributed_world_size - ) - - -def _infer_torch_distributed_launch_init(cfg: DistributedTrainingConfig): - cfg.distributed_init_method = "env://" - cfg.distributed_world_size = int(os.environ["WORLD_SIZE"]) - cfg.distributed_rank = int(os.environ["RANK"]) - # processes are created by torch.distributed.launch - cfg.distributed_no_spawn = True - - -def _infer_slurm_init(cfg: DistributedTrainingConfig, num_pipelines_per_node): - node_list = os.environ.get("SLURM_STEP_NODELIST") - if node_list is None: - node_list = os.environ.get("SLURM_JOB_NODELIST") - if node_list is not None: - try: - hostnames = subprocess.check_output( - ["scontrol", "show", "hostnames", node_list] - ) - cfg.distributed_init_method = "tcp://{host}:{port}".format( - host=hostnames.split()[0].decode("utf-8"), - port=cfg.distributed_port, - ) - nnodes = int(os.environ.get("SLURM_NNODES")) - ntasks_per_node = os.environ.get("SLURM_NTASKS_PER_NODE") - if ntasks_per_node is not None: - ntasks_per_node = int(ntasks_per_node) - else: - ntasks = int(os.environ.get("SLURM_NTASKS")) - nnodes = int(os.environ.get("SLURM_NNODES")) - assert ntasks % 
nnodes == 0 - ntasks_per_node = int(ntasks / nnodes) - if ntasks_per_node == 1: - gpus_per_node = torch.cuda.device_count() - node_id = int(os.environ.get("SLURM_NODEID")) - cfg.distributed_rank = node_id * gpus_per_node - cfg.distributed_world_size = nnodes * gpus_per_node - elif cfg.pipeline_model_parallel: - assert ntasks_per_node == num_pipelines_per_node, ( - "SLURM --ntasks-per-node must match number of pipelines per " - "node (={})".format(num_pipelines_per_node) - ) - cfg.distributed_no_spawn = True - # For 4-way MP on nodes with 8 GPUs, ranks will be [0, 1] on - # the first node, [1, 2] on the second node, etc. This - # matches torch.distributed.launch. - node_id = int(os.environ.get("SLURM_NODEID")) - local_id = int(os.environ.get("SLURM_LOCALID")) - cfg.distributed_rank = node_id * num_pipelines_per_node + local_id - # In the above example, device_id will always be in [0, 1], - # which also matches torch.distributed.launch. - cfg.device_id = local_id - # We also want to set distributed_world_size to be the total - # number of pipelines across all nodes. - cfg.distributed_world_size = nnodes * num_pipelines_per_node - else: - assert ntasks_per_node == cfg.distributed_world_size // nnodes - cfg.distributed_no_spawn = True - cfg.distributed_rank = int(os.environ.get("SLURM_PROCID")) - cfg.device_id = int(os.environ.get("SLURM_LOCALID")) - except subprocess.CalledProcessError as e: # scontrol failed - raise e - except FileNotFoundError: # Slurm is not installed - pass - - -def _infer_single_node_init(cfg: DistributedTrainingConfig): - assert ( - cfg.distributed_world_size <= torch.cuda.device_count() - ), f"world size is {cfg.distributed_world_size} but have {torch.cuda.device_count()} available devices" - port = random.randint(10000, 20000) - cfg.distributed_init_method = "tcp://localhost:{port}".format(port=port) - - -def _pipeline_parallel_pre_init(cfg: DistributedTrainingConfig): - from fairseq import utils - - balance_exists = ( - cfg.pipeline_balance is not None - or cfg.pipeline_encoder_balance is not None - or cfg.pipeline_decoder_balance is not None - ) - devices_exist = ( - cfg.pipeline_devices is not None - or cfg.pipeline_encoder_devices is not None - or cfg.pipeline_decoder_devices is not None - ) - if not balance_exists: - raise ValueError( - "--pipeline-balance is currently required for pipeline model parallelism" - ) - if not devices_exist: - raise ValueError( - "--pipeline-devices is currently required for pipeline model parallelism" - ) - - cfg.pipeline_balance = utils.eval_str_list(cfg.pipeline_balance, type=int) - if cfg.pipeline_devices is not None: - cfg.pipeline_devices = utils.eval_str_list(cfg.pipeline_devices, type=int) - num_pipeline_devices = len(set(cfg.pipeline_devices)) - else: - cfg.pipeline_encoder_devices = utils.eval_str_list( - cfg.pipeline_encoder_devices, type=int - ) - cfg.pipeline_decoder_devices = utils.eval_str_list( - cfg.pipeline_decoder_devices, type=int - ) - num_pipeline_devices = len( - set(cfg.pipeline_encoder_devices + cfg.pipeline_decoder_devices) - ) - gpus_per_node = torch.cuda.device_count() - assert ( - gpus_per_node >= num_pipeline_devices - and gpus_per_node % num_pipeline_devices == 0 - ), ( - "the number of unique device IDs in --pipeline-devices must evenly divide " - "the number of GPUs per node (multi-node pipelining is not yet supported)" - ) - num_pipelines_per_node = gpus_per_node // num_pipeline_devices - return num_pipeline_devices, num_pipelines_per_node - - -def _pipeline_parallel_post_init( - cfg: 
DistributedTrainingConfig, num_pipeline_devices, num_pipelines_per_node -): - if not cfg.distributed_no_spawn: - # When distributed_no_spawn is False, we expect distributed_rank and - # distributed_world_size to be based on the total number of GPUs, so - # we need to correct them to be based on the number of pipelines. - assert cfg.distributed_world_size % num_pipeline_devices == 0 - cfg.distributed_world_size = ( - cfg.distributed_world_size // num_pipeline_devices - ) - # In the case of 4-way MP on nodes with 8 GPUs, we want - # distributed_rank to be the starting GPU index for each pipeline - # i.e., 0, 2, ... - gpus_per_node = torch.cuda.device_count() - assert cfg.distributed_rank % gpus_per_node == 0 - assert cfg.distributed_rank % num_pipeline_devices == 0 - - with open_dict(cfg): - cfg.distributed_rank = cfg.distributed_rank // num_pipeline_devices - # launch one process per pipeline - cfg.distributed_num_procs = num_pipelines_per_node - - # if we have 4-way MP on a node with 8 GPUs, we want device_ids to be 0 - # and 4, indicating the starting device IDs for each pipeline - cfg.device_id *= num_pipeline_devices - - if cfg.device_id > 0: - # if there's multiple pipelines on a node (e.g., 4-way MP on an 8 - # GPU node), we need to adjust pipeline_devices accordingly - logger.debug( - "setting CUDA device={} on rank {}".format( - cfg.device_id, cfg.distributed_rank - ) - ) - torch.cuda.set_device(cfg.device_id) - with open_dict(cfg): - cfg.pipeline_devices = [cfg.device_id + d for d in cfg.pipeline_devices] - logger.info( - "setting pipeline_devices={} on rank {}".format( - cfg.pipeline_devices, cfg.distributed_rank - ) - ) - - -def distributed_init(cfg: FairseqConfig): - if isinstance(cfg, Namespace): - from fairseq.dataclass.utils import convert_namespace_to_omegaconf - - cfg = convert_namespace_to_omegaconf(cfg) - - if not cfg.common.tpu: - if torch.distributed.is_available() and torch.distributed.is_initialized(): - warnings.warn( - "Distributed is already initialized, cannot initialize twice!" - ) - else: - logger.info( - "distributed init (rank {}): {}".format( - cfg.distributed_training.distributed_rank, - cfg.distributed_training.distributed_init_method, - ) - ) - logger.info('Start init') - max_time_wait = 600 - for i in range(max_time_wait): - try: - dist.init_process_group( - backend=cfg.distributed_training.distributed_backend, - init_method=cfg.distributed_training.distributed_init_method, - world_size=cfg.distributed_training.distributed_world_size, - rank=cfg.distributed_training.distributed_rank, - ) - logger.info( - "initialized host {} as rank {}".format( - socket.gethostname(), - cfg.distributed_training.distributed_rank, - ) - ) - if torch.distributed.is_initialized(): - print("single-machine distributed training is initialized.") - break - except ValueError: - # This is caused by TCPStore failure. 
- print('Retry: {}, with value error {}'.format( - i + 1, sys.exc_info()[0])) - time.sleep(5) - if i == max_time_wait - 1: - print('k8s resource wait too long time') - exit(-1) - except Exception: - print('Retry: {}, with value error {}'.format( - i + 1, sys.exc_info()[0])) - exit(-1) - # perform a dummy all-reduce to initialize the NCCL communicator - if torch.cuda.is_available(): - dist.all_reduce(torch.zeros(1).cuda()) - - cfg.distributed_training.distributed_rank = torch.distributed.get_rank() - else: - assert xm.xrt_world_size() == cfg.distributed_training.distributed_world_size - global _USE_XLA - _USE_XLA = True - cfg.distributed_training.device_id = xm.get_local_ordinal() - cfg.distributed_training.distributed_rank = xm.get_ordinal() - xm.rendezvous("distributed_init") # wait for all workers - - if is_master(cfg.distributed_training): - logging.getLogger().setLevel(logging.INFO) - else: - logging.getLogger().setLevel(logging.WARNING) - - if cfg.common.model_parallel_size > 1: - try: - from fairseq.model_parallel.megatron.mpu import ( - initialize_model_parallel, - model_parallel_cuda_manual_seed, - ) - except ImportError: - raise ImportError( - "\n\nPlease install the megatron submodule:" - "\n\n git submodule update --init " - "fairseq/model_parallel/megatron" - ) - global _USE_MEGATRON - _USE_MEGATRON = True - initialize_model_parallel(cfg.common.model_parallel_size) - model_parallel_cuda_manual_seed(cfg.common.seed) - model_part_number = get_model_parallel_rank() - cfg.checkpoint.checkpoint_suffix += "-model_part-{0}".format(model_part_number) - - if hasattr(cfg, "model") and getattr(cfg.model, "base_layers", 0) > 0: - cfg.checkpoint.checkpoint_suffix = f"-rank-{cfg.distributed_training.distributed_rank}" - - return cfg.distributed_training.distributed_rank - - -def distributed_main(i, main, cfg: FairseqConfig, kwargs): - cfg.distributed_training.device_id = i - if torch.cuda.is_available() and not cfg.common.cpu and not cfg.common.tpu: - torch.cuda.set_device(cfg.distributed_training.device_id) - if cfg.distributed_training.distributed_rank is None: # torch.multiprocessing.spawn - cfg.distributed_training.distributed_rank = kwargs.pop("start_rank", 0) + i - - cfg.distributed_training.distributed_rank = distributed_init(cfg) - - after_distributed_init_fn = kwargs.pop("after_distributed_init_fn", None) - if after_distributed_init_fn: - cfg = after_distributed_init_fn(cfg) - - main(cfg, **kwargs) - - if torch.distributed.is_initialized(): - torch.distributed.barrier(get_global_group()) - - -def call_main(cfg: FairseqConfig, main, **kwargs): - if cfg.distributed_training.distributed_init_method is None: - infer_init_method(cfg.distributed_training) - - if cfg.distributed_training.distributed_init_method is not None: - # distributed training - if not cfg.distributed_training.distributed_no_spawn: - start_rank = cfg.distributed_training.distributed_rank - cfg.distributed_training.distributed_rank = None # assign automatically - kwargs["start_rank"] = start_rank - torch.multiprocessing.spawn( - fn=distributed_main, - args=(main, cfg, kwargs), - nprocs=min( - torch.cuda.device_count(), - cfg.distributed_training.distributed_world_size, - ), - join=True, - ) - else: - distributed_main(cfg.distributed_training.device_id, main, cfg, kwargs) - elif cfg.common.tpu and cfg.distributed_training.distributed_world_size > 1: - import torch_xla.distributed.xla_multiprocessing as xmp - - torch.multiprocessing.set_sharing_strategy("file_system") - xmp.spawn( - fn=distributed_main, - args=(main, 
cfg, kwargs), - # tpu-comment: - # 8 devices in one TPU VM, is the max processes to be spawned. - # The rest is driven by xm.distributed.xla_dist - nprocs=min(cfg.distributed_training.distributed_world_size, 8), - ) - else: - # single GPU main - main(cfg, **kwargs) - - -def use_xla(): - global _USE_XLA - return _USE_XLA - - -def new_groups(grouped_ranks: List[List[int]]): - if use_xla(): - return ("tpu", grouped_ranks) - else: - groups = [dist.new_group(g) for g in grouped_ranks] - my_group_idx = _find_my_group_index(grouped_ranks) - return groups[my_group_idx] - - -def _find_my_group_index(grouped_ranks): - my_rank = get_global_rank() - for i, group in enumerate(grouped_ranks): - if my_rank in group: - return i - raise RuntimeError - - -def _find_my_group(grouped_ranks): - index = _find_my_group_index(grouped_ranks) - return grouped_ranks[index] - - -def get_rank(group): - if use_xla(): - assert group[0] == "tpu" - my_group = _find_my_group(group[1]) - return my_group.index(get_global_rank()) - else: - return dist.get_rank(group=group) - - -def get_world_size(group): - if use_xla(): - assert group[0] == "tpu" - my_group = _find_my_group(group[1]) - return len(my_group) - elif torch.distributed.is_initialized(): - return dist.get_world_size(group=group) - else: - return 1 - - -def get_global_group(): - if use_xla(): - return new_groups([list(range(get_global_world_size()))]) - elif torch.distributed.is_initialized(): - if not hasattr(get_global_group, "_global_group"): - # ideally we could use torch.distributed.group.WORLD, but it seems - # to cause random NCCL hangs in some cases - get_global_group._global_group = dist.new_group() - return get_global_group._global_group - else: - return None - - -def get_global_rank(): - if use_xla(): - return xm.get_ordinal() - elif torch.distributed.is_initialized(): - return torch.distributed.get_rank() - else: - return 0 - - -def get_global_world_size(): - if use_xla(): - return xm.xrt_world_size() - elif torch.distributed.is_initialized(): - return torch.distributed.get_world_size() - else: - return 1 - - -def get_data_parallel_group(): - """Get the data parallel group the caller rank belongs to.""" - global _USE_MEGATRON - if _USE_MEGATRON: - from fairseq.model_parallel.megatron import mpu - - return mpu.get_data_parallel_group() - else: - return get_global_group() - - -def get_data_parallel_rank(): - """Return my rank for the data parallel group.""" - return get_rank(get_data_parallel_group()) - - -def get_data_parallel_world_size(): - """Return world size for the data parallel group.""" - return get_world_size(get_data_parallel_group()) - - -def get_model_parallel_group(): - global _USE_MEGATRON - if _USE_MEGATRON: - from fairseq.model_parallel.megatron import mpu - - return mpu.get_model_parallel_group() - else: - return None - - -def get_model_parallel_rank(): - """Return my rank for the model parallel group.""" - return get_rank(get_model_parallel_group()) - - -def get_model_parallel_world_size(): - """Return world size for the model parallel group.""" - return get_world_size(get_model_parallel_group()) - - -def all_reduce(tensor, group, op="sum"): - if use_xla(): - assert isinstance(group, tuple) and group[0] == "tpu" - tensor = [tensor] # wrap in a list to make xm.all_reduce in-place - return xm.all_reduce(op, tensor, groups=group[1])[0] - else: - if op == "sum": - op = dist.ReduceOp.SUM - elif op == "max": - op = dist.ReduceOp.MAX - else: - raise NotImplementedError - dist.all_reduce(tensor, op=op, group=group) - return tensor - - -def 
broadcast(tensor, src, group): - if use_xla(): - # XLA doesn't support broadcast, hack it with all_reduce - if get_rank(group) != src: - tensor.zero_() - all_reduce(tensor, group) - else: - dist.broadcast(tensor, src=src, group=group) - - -def all_to_all(tensor, group): - """Perform an all-to-all operation on a 1D Tensor.""" - assert tensor.dim() == 1 - split_count = get_world_size(group=group) - assert tensor.numel() % split_count == 0 - if use_xla(): - assert isinstance(group, tuple) and group[0] == "tpu" - return xm.all_to_all( - tensor, - split_dimension=0, - concat_dimension=0, - split_count=split_count, - groups=group[1], - ) - else: - output = torch.zeros_like(tensor) - dist.all_to_all_single(output, tensor, group=group) - return output - - -def all_gather(tensor, group, return_tensor=False): - """Perform an all-gather operation.""" - if use_xla(): - result = xm.all_gather(tensor, groups=group[1]) - world_size = get_world_size(group=group) - result = result.view(world_size, *tensor.size()) - if return_tensor: - return result - else: - return [result[i] for i in range(world_size)] - else: - world_size = get_world_size(group=group) - rank = get_rank(group=group) - tensor_list = [ - tensor if i == rank else torch.empty_like(tensor) for i in range(world_size) - ] - dist.all_gather(tensor_list, tensor, group=group) - if return_tensor: - return torch.stack(tensor_list, dim=0) - else: - return tensor_list - - -def all_gather_list(data, group=None, max_size=16384): - """Gathers arbitrary data from all nodes into a list. - - Similar to :func:`~torch.distributed.all_gather` but for arbitrary Python - data. Note that *data* must be picklable and any CUDA tensors will be moved - to CPU and returned on CPU as well. - - Args: - data (Any): data from the local worker to be gathered on other workers - group: group of the collective - max_size (int, optional): maximum size of the data to be gathered - across workers - """ - from fairseq import utils - - if group is None: - group = get_global_group() - torch.distributed.barrier(group=group) - rank = get_rank(group=group) - world_size = get_world_size(group=group) - - buffer_size = max_size * world_size - if ( - not hasattr(all_gather_list, "_buffer") - or all_gather_list._buffer.numel() < buffer_size - ): - all_gather_list._buffer = torch.cuda.ByteTensor(buffer_size) - all_gather_list._cpu_buffer = torch.ByteTensor(max_size).pin_memory() - buffer = all_gather_list._buffer - buffer.zero_() - cpu_buffer = all_gather_list._cpu_buffer - - data = utils.move_to_cpu(data) - enc = pickle.dumps(data) - enc_size = len(enc) - header_size = 4 # size of header that contains the length of the encoded data - size = header_size + enc_size - if size > max_size: - raise ValueError( - "encoded data size ({}) exceeds max_size ({})".format(size, max_size) - ) - - header = struct.pack(">I", enc_size) - cpu_buffer[:size] = torch.ByteTensor(list(header + enc)) - start = rank * max_size - buffer[start : start + size].copy_(cpu_buffer[:size]) - - all_reduce(buffer, group=group) - - buffer = buffer.cpu() - try: - result = [] - for i in range(world_size): - out_buffer = buffer[i * max_size : (i + 1) * max_size] - (enc_size,) = struct.unpack(">I", bytes(out_buffer[:header_size].tolist())) - if enc_size > 0: - result.append( - pickle.loads( - bytes(out_buffer[header_size : header_size + enc_size].tolist()) - ) - ) - return result - except pickle.UnpicklingError: - raise Exception( - "Unable to unpickle data from other workers. 
all_gather_list requires all " - "workers to enter the function together, so this error usually indicates " - "that the workers have fallen out of sync somehow. Workers can fall out of " - "sync if one of them runs out of memory, or if there are other conditions " - "in your training script that can cause one worker to finish an epoch " - "while other workers are still iterating over their portions of the data. " - "Try rerunning with --ddp-backend=legacy_ddp and see if that helps." - ) - - -def all_reduce_dict(data: Mapping[str, Any], device, group) -> Dict[str, Any]: - """ - AllReduce a dictionary of values across workers. We separately - reduce items that are already on the device and items on CPU for - better performance. - - Args: - data (Mapping[str, Any]): dictionary of data to all-reduce, but - cannot be a nested dictionary - device (torch.device): device for the reduction - group: group of the collective - """ - data_keys = list(data.keys()) - - # We want to separately reduce items that are already on the - # device and items on CPU for performance reasons. - cpu_data = OrderedDict() - device_data = OrderedDict() - for k in data_keys: - t = data[k] - if not torch.is_tensor(t): - cpu_data[k] = torch.tensor(t, dtype=torch.double) - elif t.device.type != device.type: - cpu_data[k] = t.to(dtype=torch.double) - else: - device_data[k] = t.to(dtype=torch.double) - - def _all_reduce_dict(data: OrderedDict): - if len(data) == 0: - return data - buf = torch.cat([t.view(-1) for t in data.values()]).to(device=device) - all_reduce(buf, group=group) - split_buf = torch.split(buf, [t.numel() for t in data.values()]) - reduced_data = [t.view_as(orig) for t, orig in zip(split_buf, data.values())] - return OrderedDict(zip(data.keys(), reduced_data)) - - cpu_data = _all_reduce_dict(cpu_data) - device_data = _all_reduce_dict(device_data) - - def get_from_stack(key): - if key in cpu_data: - return cpu_data[key] - elif key in device_data: - return device_data[key] - raise KeyError - - return OrderedDict([(key, get_from_stack(key)) for key in data_keys]) - - -def broadcast_tensors( - tensors: Optional[List[torch.Tensor]], - src_rank: int, - group: object, - dist_device: Optional[torch.device] = None, -) -> List[torch.Tensor]: - """ - Broadcasts a list of tensors without other (non-src) ranks needing to know - the dtypes/shapes of the tensors. 
- """ - if dist_device is None: - if torch.distributed.get_backend(group) == "nccl": - dist_device = torch.device("cuda") - else: - dist_device = torch.device("cpu") - - # share metadata first to simplify transfer - is_src_rank = (get_rank(group) == src_rank) - if is_src_rank: - metadata = [ - {"size": t.size(), "dtype": t.dtype, "device": t.device} for t in tensors - ] - metadata = _broadcast_object_slow(metadata, src_rank, group, dist_device) - else: - metadata = _broadcast_object_slow(None, src_rank, group, dist_device) - - out_tensors = [] - for i, meta in enumerate(metadata): - if is_src_rank: - tensor = tensors[i] - broadcast(tensors[i].to(dist_device), src=src_rank, group=group) - else: - tensor = torch.zeros( - [meta["size"].numel()], dtype=meta["dtype"], device=dist_device - ) - broadcast(tensor, src=src_rank, group=group) - tensor = tensor.view(meta["size"]).to(meta["device"]) - out_tensors.append(tensor) - return out_tensors - - -def broadcast_object( - obj: Any, - src_rank: int, - group: object, - dist_device: Optional[torch.device] = None, -) -> Any: - """Broadcast an arbitrary Python object to other workers.""" - if dist_device is None: - if torch.distributed.get_backend(group) == "nccl": - dist_device = torch.device("cuda") - else: - dist_device = torch.device("cpu") - - if get_rank(group) == src_rank: - # split the tensors from the non-tensors so we can broadcast them - # directly, avoiding unnecessary serialization/deserialization - tensors = [] - obj = _split_tensors_from_obj(obj, tensors) - obj = _broadcast_object_slow(obj, src_rank, group, dist_device) - tensors = broadcast_tensors(tensors, src_rank, group, dist_device) - else: - obj = _broadcast_object_slow(None, src_rank, group, dist_device) - tensors = broadcast_tensors(None, src_rank, group, dist_device) - return _put_tensors_in_obj(obj, tensors) - - -def _broadcast_object_slow( - obj: Any, src_rank: int, group: object, dist_device: torch.device, -) -> Any: - if get_rank(group) == src_rank: - # Emit data - buffer = io.BytesIO() - torch.save(obj, buffer) - buffer = torch.ByteTensor(buffer.getbuffer()).to(dist_device) - length = torch.LongTensor([len(buffer)]).to(dist_device) - broadcast(length, src=src_rank, group=group) - broadcast(buffer, src=src_rank, group=group) - else: - # Fetch from the source - length = torch.LongTensor([0]).to(dist_device) - broadcast(length, src=src_rank, group=group) - buffer = torch.ByteTensor(int(length.item())).to(dist_device) - broadcast(buffer, src=src_rank, group=group) - buffer = io.BytesIO(buffer.cpu().numpy()) - obj = torch.load(buffer, map_location="cpu") - return obj - - -@dataclass(frozen=True) -class _TensorPlaceholder: - index: int - - -def _split_tensors_from_obj(obj: Any, tensors: List[torch.Tensor]) -> Any: - if torch.is_tensor(obj): - placeholder = _TensorPlaceholder(index=len(tensors)) - tensors.append(obj) - return placeholder - elif isinstance(obj, dict): - return {k: _split_tensors_from_obj(v, tensors) for k, v in obj.items()} - elif isinstance(obj, list): - return [_split_tensors_from_obj(v, tensors) for v in obj] - elif isinstance(obj, tuple): - return tuple(_split_tensors_from_obj(v, tensors) for v in obj) - elif isinstance(obj, set): - return {_split_tensors_from_obj(v, tensors) for v in obj} - else: - return obj - - -def _put_tensors_in_obj(obj: Any, tensors: List[torch.Tensor]) -> Any: - if isinstance(obj, _TensorPlaceholder): - return tensors[obj.index] - elif isinstance(obj, dict): - return {k: _put_tensors_in_obj(v, tensors) for k, v in obj.items()} - elif 
isinstance(obj, list): - return [_put_tensors_in_obj(v, tensors) for v in obj] - elif isinstance(obj, tuple): - return tuple(_put_tensors_in_obj(v, tensors) for v in obj) - elif isinstance(obj, set): - return {_put_tensors_in_obj(v, tensors) for v in obj} - else: - return obj diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/tasks/__init__.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/tasks/__init__.py deleted file mode 100644 index 9a46b012c573a76e00e489307720fc3fa462c296..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/tasks/__init__.py +++ /dev/null @@ -1,136 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -"""isort:skip_file""" - -import argparse -import importlib -import os - -from fairseq.dataclass import FairseqDataclass -from fairseq.dataclass.utils import merge_with_parent -from hydra.core.config_store import ConfigStore - -from .fairseq_task import FairseqTask, LegacyFairseqTask # noqa - - -# register dataclass -TASK_DATACLASS_REGISTRY = {} -TASK_REGISTRY = {} -TASK_CLASS_NAMES = set() - - -def setup_task(cfg: FairseqDataclass, **kwargs): - task = None - task_name = getattr(cfg, "task", None) - - if isinstance(task_name, str): - # legacy tasks - task = TASK_REGISTRY[task_name] - if task_name in TASK_DATACLASS_REGISTRY: - dc = TASK_DATACLASS_REGISTRY[task_name] - cfg = dc.from_namespace(cfg) - else: - task_name = getattr(cfg, "_name", None) - - if task_name and task_name in TASK_DATACLASS_REGISTRY: - dc = TASK_DATACLASS_REGISTRY[task_name] - cfg = merge_with_parent(dc(), cfg) - task = TASK_REGISTRY[task_name] - - assert ( - task is not None - ), f"Could not infer task type from {cfg}. Available argparse tasks: {TASK_REGISTRY.keys()}. Available hydra tasks: {TASK_DATACLASS_REGISTRY.keys()}" - - return task.setup_task(cfg, **kwargs) - - -def register_task(name, dataclass=None): - """ - New tasks can be added to fairseq with the - :func:`~fairseq.tasks.register_task` function decorator. - - For example:: - - @register_task('classification') - class ClassificationTask(FairseqTask): - (...) - - .. note:: - - All Tasks must implement the :class:`~fairseq.tasks.FairseqTask` - interface. 
- - Args: - name (str): the name of the task - """ - - def register_task_cls(cls): - if name in TASK_REGISTRY: - raise ValueError("Cannot register duplicate task ({})".format(name)) - if not issubclass(cls, FairseqTask): - raise ValueError( - "Task ({}: {}) must extend FairseqTask".format(name, cls.__name__) - ) - if cls.__name__ in TASK_CLASS_NAMES: - raise ValueError( - "Cannot register task with duplicate class name ({})".format( - cls.__name__ - ) - ) - TASK_REGISTRY[name] = cls - TASK_CLASS_NAMES.add(cls.__name__) - - if dataclass is not None and not issubclass(dataclass, FairseqDataclass): - raise ValueError( - "Dataclass {} must extend FairseqDataclass".format(dataclass) - ) - - cls.__dataclass = dataclass - if dataclass is not None: - TASK_DATACLASS_REGISTRY[name] = dataclass - - cs = ConfigStore.instance() - node = dataclass() - node._name = name - cs.store(name=name, group="task", node=node, provider="fairseq") - - return cls - - return register_task_cls - - -def get_task(name): - return TASK_REGISTRY[name] - - -def import_tasks(tasks_dir, namespace): - for file in os.listdir(tasks_dir): - path = os.path.join(tasks_dir, file) - if ( - not file.startswith("_") - and not file.startswith(".") - and (file.endswith(".py") or os.path.isdir(path)) - ): - task_name = file[: file.find(".py")] if file.endswith(".py") else file - importlib.import_module(namespace + "." + task_name) - - # expose `task_parser` for sphinx - if task_name in TASK_REGISTRY: - parser = argparse.ArgumentParser(add_help=False) - group_task = parser.add_argument_group("Task name") - # fmt: off - group_task.add_argument('--task', metavar=task_name, - help='Enable this task with: ``--task=' + task_name + '``') - # fmt: on - group_args = parser.add_argument_group( - "Additional command-line arguments" - ) - TASK_REGISTRY[task_name].add_args(group_args) - globals()[task_name + "_parser"] = parser - - -# automatically import any Python files in the tasks/ directory -tasks_dir = os.path.dirname(__file__) -import_tasks(tasks_dir, "fairseq.tasks") diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/distributed_fairseq_model.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/distributed_fairseq_model.py deleted file mode 100644 index 5eda2276404ca686be124901674ddfe36bd6dfd1..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/distributed_fairseq_model.py +++ /dev/null @@ -1,146 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os -import signal -import threading - -import torch -import torch.nn as nn -from torch.nn.parallel import DistributedDataParallel - -from fairseq.distributed import ( - DistributedTimeoutWrapper, - LegacyDistributedDataParallel, - ModuleProxyWrapper, - TPUDistributedDataParallel, -) - - -logger = logging.getLogger(__name__) - - -_GOSSIP_DISABLED = False -try: - import gossip -except ImportError: - _GOSSIP_DISABLED = True - - -def DistributedFairseqModel(args, model, process_group, device): - """ - Wrap a *model* to support distributed data parallel training. - - This is similar to the built-in DistributedDataParallel, but allows - additional configuration of the DistributedDataParallel class to - use, and also provides easier access to the wrapped model by - forwarding requests for missing attributes to the wrapped model. 
- - Args: - args (argparse.Namespace): fairseq args - model (BaseFairseqModel): model to wrap - process_group: the c10d process group to be used for distributed data - parallel all-reduction. - device: device to move model to - """ - assert isinstance(model, nn.Module) - if args.tpu: - wrapped_model = TPUDistributedDataParallel( - module=model.to(device), - process_group=process_group, - ) - # forward missing getattr and state_dict/load_state_dict to orig model - wrapped_model = ModuleProxyWrapper(wrapped_model) - elif args.ddp_backend in {"c10d", "pytorch_ddp"}: - wrapped_model = DistributedDataParallel( - module=model.to(device), - device_ids=[args.device_id], - output_device=args.device_id, - broadcast_buffers=args.broadcast_buffers, - bucket_cap_mb=args.bucket_cap_mb, - process_group=process_group, - find_unused_parameters=args.find_unused_parameters, - gradient_as_bucket_view=args.gradient_as_bucket_view, - ) - if args.ddp_comm_hook == "fp16": - logger.info("enable fp16 communication hook in DDP") - try: - from torch.distributed.algorithms.ddp_comm_hooks import ( - register_ddp_comm_hook, - DDPCommHookType, - ) - except: - logger.error( - "Could not import from torch.distributed.algorithms.ddp_comm_hooks; you may need to update your pytorch version" - ) - raise - - register_ddp_comm_hook(DDPCommHookType.FP16_COMPRESS, wrapped_model) - # forward missing getattr and state_dict/load_state_dict to orig model - wrapped_model = ModuleProxyWrapper(wrapped_model) - elif args.ddp_backend in {"no_c10d", "legacy_ddp"}: - wrapped_model = LegacyDistributedDataParallel( - module=model.to(device), - buffer_size=2 ** 28, - process_group=process_group, - ) - # forward missing getattr and state_dict/load_state_dict to orig model - wrapped_model = ModuleProxyWrapper(wrapped_model) - elif args.ddp_backend == "slow_mo": - if _GOSSIP_DISABLED: - raise ImportError( - "Cannot find gossip library. Please install from: " - "github.com/facebookresearch/stochastic_gradient_push" - ) - - # The values of slowmo_momentum below were obtained by tuning on the - # En-De 16 dataset by training the transformer_wmt_en_de_large model - if args.slowmo_momentum is None: - if args.distributed_world_size <= 16: - args.slowmo_momentum = 0.0 - elif args.distributed_world_size <= 32: - args.slowmo_momentum = 0.2 - elif args.distributed_world_size <= 64: - args.slowmo_momentum = 0.5 - else: - args.slowmo_momentum = 0.6 - - wrapped_model = gossip.GossipDataParallel( - module=model.to(device), - device_ids=[args.device_id], - output_device=args.device_id, - broadcast_buffers=args.broadcast_buffers, - nprocs_per_node=args.nprocs_per_node, - slowmo_momentum=args.slowmo_momentum, - localsgd=(args.slowmo_algorithm == "LocalSGD"), - localsgd_frequency=args.localsgd_frequency, - ) - # forward missing getattr and state_dict/load_state_dict to orig model - wrapped_model = ModuleProxyWrapper(wrapped_model) - elif args.ddp_backend == "fully_sharded": - try: - from fairscale.nn.data_parallel import FullyShardedDataParallel as FSDP - except ImportError: - raise ImportError( - "Cannot find FullyShardedDataParallel. 
" - "Please install fairscale with: pip install fairscale" - ) - assert isinstance(model, FSDP), "expected model to already be wrapped in FSDP" - wrapped_model = model - if args.memory_efficient_fp16: - wrapped_model = wrapped_model.half() - if not args.cpu_offload: - wrapped_model = wrapped_model.to(device=device) - else: - raise ValueError("Unknown --ddp-backend: " + args.ddp_backend) - - # kill hung distributed jobs after a timeout - if getattr(args, "heartbeat_timeout", -1) > 0: - wrapped_model = DistributedTimeoutWrapper( - wrapped_model, timeout=getattr(args, "heartbeat_timeout", -1) - ) - - return wrapped_model diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/fast_noisy_channel/noisy_channel_translation.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/fast_noisy_channel/noisy_channel_translation.py deleted file mode 100644 index b74bdfd456f9b7c546ce528173c77431b4f57ac1..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/fast_noisy_channel/noisy_channel_translation.py +++ /dev/null @@ -1,127 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from fairseq.tasks.translation import TranslationTask -from fairseq.tasks.language_modeling import LanguageModelingTask -from fairseq import checkpoint_utils -import argparse -from fairseq.tasks import register_task -import torch - - -@register_task("noisy_channel_translation") -class NoisyChannelTranslation(TranslationTask): - """ - Rescore the top k candidates from each beam using noisy channel modeling - """ - - @staticmethod - def add_args(parser): - """Add task-specific arguments to the parser.""" - TranslationTask.add_args(parser) - # fmt: off - parser.add_argument('--channel-model', metavar='FILE', - help='path to P(S|T) model. P(S|T) and P(T|S) must share source and target dictionaries.') - parser.add_argument('--combine-method', default='lm_only', - choices=['lm_only', 'noisy_channel'], - help="""method for combining direct and channel model scores. - lm_only: decode with P(T|S)P(T) - noisy_channel: decode with 1/t P(T|S) + 1/s(P(S|T)P(T))""") - parser.add_argument('--normalize-lm-scores-by-tgt-len', action='store_true', default=False, - help='normalize lm score by target length instead of source length') - parser.add_argument('--channel-scoring-type', default='log_norm', choices=['unnormalized', 'log_norm', 'k2_separate', 'src_vocab', 'src_vocab_batched'], - help="Normalize bw scores with log softmax or return bw scores without log softmax") - parser.add_argument('--top-k-vocab', default=0, type=int, - help='top k vocab IDs to use with `src_vocab` in channel model scoring') - parser.add_argument('--k2', default=50, type=int, - help='the top k2 candidates to rescore with the noisy channel model for each beam') - parser.add_argument('--ch-wt', default=1, type=float, - help='weight for the channel model') - parser.add_argument('--lm-model', metavar='FILE', - help='path to lm model file, to model P(T). 
P(T) must share the same vocab as the direct model on the target side') - parser.add_argument('--lm-data', metavar='FILE', - help='path to lm model training data for target language, used to properly load LM with correct dictionary') - parser.add_argument('--lm-wt', default=1, type=float, - help='the weight of the lm in joint decoding') - # fmt: on - - def build_generator( - self, models, args, seq_gen_cls=None, extra_gen_cls_kwargs=None - ): - if getattr(args, "score_reference", False): - raise NotImplementedError() - else: - from .noisy_channel_sequence_generator import NoisyChannelSequenceGenerator - use_cuda = torch.cuda.is_available() and not self.args.cpu - assert self.args.lm_model is not None, '--lm-model required for noisy channel generation!' - assert self.args.lm_data is not None, '--lm-data required for noisy channel generation to map between LM and bitext vocabs' - if self.args.channel_model is not None: - import copy - ch_args_task = copy.deepcopy(self.args) - tmp = ch_args_task.source_lang - ch_args_task.source_lang = ch_args_task.target_lang - ch_args_task.target_lang = tmp - ch_args_task._name = 'translation' - channel_task = TranslationTask.setup_task(ch_args_task) - - arg_dict = {} - arg_dict['task'] = 'language_modeling' - arg_dict['sample_break_mode'] = 'eos' - arg_dict['data'] = self.args.lm_data - arg_dict['output_dictionary_size'] = -1 - lm_args = argparse.Namespace(**arg_dict) - lm_task = LanguageModelingTask.setup_task(lm_args) - lm_dict = lm_task.output_dictionary - - if self.args.channel_model is not None: - channel_models, _ = checkpoint_utils.load_model_ensemble(self.args.channel_model.split(':'), task=channel_task) - - for model in channel_models: - model.make_generation_fast_( - beamable_mm_beam_size=None if args.no_beamable_mm else args.beam, - need_attn=args.print_alignment, - ) - if self.args.fp16: - model.half() - if use_cuda: - model.cuda() - else: - channel_models = None - - lm_models, _ = checkpoint_utils.load_model_ensemble(self.args.lm_model.split(':'), task=lm_task) - - for model in lm_models: - model.make_generation_fast_( - beamable_mm_beam_size=None if args.no_beamable_mm else args.beam, - need_attn=args.print_alignment, - ) - if self.args.fp16: - model.half() - if use_cuda: - model.cuda() - return NoisyChannelSequenceGenerator( - combine_method=self.args.combine_method, - tgt_dict=self.target_dictionary, - src_dict=self.source_dictionary, - beam_size=getattr(args, 'beam', 5), - max_len_a=getattr(args, 'max_len_a', 0), - max_len_b=getattr(args, 'max_len_b', 200), - min_len=getattr(args, 'min_len', 1), - len_penalty=getattr(args, 'lenpen', 1), - unk_penalty=getattr(args, 'unkpen', 0), - temperature=getattr(args, 'temperature', 1.), - match_source_len=getattr(args, 'match_source_len', False), - no_repeat_ngram_size=getattr(args, 'no_repeat_ngram_size', 0), - normalize_scores=(not getattr(args, 'unnormalized', False)), - channel_models=channel_models, - k2=getattr(self.args, 'k2', 50), - ch_weight=getattr(self.args, 'ch_wt', 1), - channel_scoring_type=self.args.channel_scoring_type, - top_k_vocab=self.args.top_k_vocab, - lm_models=lm_models, - lm_dict=lm_dict, - lm_weight=getattr(self.args, 'lm_wt', 1), - normalize_lm_scores_by_tgt_len=getattr(self.args, 'normalize_lm_scores_by_tgt_len', False), - ) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/model_parallel/models/transformer.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/model_parallel/models/transformer.py deleted file mode 100644 index 
6b330ef1b7f7a506e7e8176f20a0e722b5fd5149..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/model_parallel/models/transformer.py +++ /dev/null @@ -1,121 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging - -import torch.nn as nn -from fairseq.model_parallel.modules import ( - ModelParallelTransformerDecoderLayer, - ModelParallelTransformerEncoderLayer, -) -from fairseq.models import register_model -from fairseq.models.transformer import ( - TransformerDecoder, - TransformerEncoder, - TransformerModel, -) - - -try: - from fairseq.model_parallel.megatron.mpu import ( - copy_to_model_parallel_region, - gather_from_model_parallel_region, - VocabParallelEmbedding, - ) - - has_megatron_submodule = True -except (ImportError, ModuleNotFoundError): - has_megatron_submodule = False - - -logger = logging.getLogger(__name__) - - -@register_model("model_parallel_transformer") -class ModelParallelTransformerModel(TransformerModel): - """ - Model parallel Transformer model. - """ - - @classmethod - def build_embedding(cls, args, dictionary, embed_dim, path=None): - if not has_megatron_submodule: - raise ImportError( - "\n\nPlease install the megatron submodule:" - "\n\n git submodule update --init " - "fairseq/model_parallel/megatron" - ) - dictionary.pad_to_multiple_(args.model_parallel_size * 8) - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - - def _vocab_init(tensor, **kwargs): - nn.init.normal_(tensor, mean=0, std=num_embeddings ** -0.5) - nn.init.constant_(tensor[1], 0) - - emb = VocabParallelEmbedding( - num_embeddings, embed_dim, padding_idx, init_method=_vocab_init - ) - # if provided, load from preloaded dictionaries - if path: - raise NotImplementedError( - "Loading of embedding from path is not supported for model parallel" - ) - return emb - - @classmethod - def build_encoder(cls, args, src_dict, embed_tokens): - return ModelParallelTransformerEncoder(args, src_dict, embed_tokens) - - @classmethod - def build_decoder(cls, args, tgt_dict, embed_tokens): - return ModelParallelTransformerDecoder( - args, - tgt_dict, - embed_tokens, - no_encoder_attn=getattr(args, "no_cross_attention", False), - ) - - -class ModelParallelTransformerEncoder(TransformerEncoder): - """ - Model parallel Transformer encoder consisting of *args.encoder_layers* layers. Each layer - is a :class:`ModelParallelTransformerEncoderLayer`. - """ - - def __init__(self, args, dictionary, embed_tokens): - super().__init__(args, dictionary, embed_tokens) - - if args.no_final_layer_norm: - self.layer_norm = None - - def build_encoder_layer(self, args): - return ModelParallelTransformerEncoderLayer(args) - - -class ModelParallelTransformerDecoder(TransformerDecoder): - """ - Model Parallel Transformer decoder consisting of *args.decoder_layers* layers. Each layer - is a :class:`ModelParallelTransformerDecoderLayer`. 
- """ - - def build_decoder_layer(self, args, no_encoder_attn=False): - return ModelParallelTransformerDecoderLayer(args, no_encoder_attn) - - def output_layer(self, features, **kwargs): - """Project features to the vocabulary size.""" - if not self.share_input_output_embed: - raise NotImplementedError( - "Model parallel training currently requires --share-decoder-input-output-embed" - ) - - features = copy_to_model_parallel_region(features) - - # project back to size of vocabulary - x = self.output_projection(features) - - if getattr(self.args, "criterion") != "vocab_parallel_cross_entropy": - x = gather_from_model_parallel_region(x).contiguous() - return x diff --git a/spaces/ORI-Muchim/ONFIRETTS/mel_processing.py b/spaces/ORI-Muchim/ONFIRETTS/mel_processing.py deleted file mode 100644 index 3e252e76320522a8a4195a60665168f22769aec2..0000000000000000000000000000000000000000 --- a/spaces/ORI-Muchim/ONFIRETTS/mel_processing.py +++ /dev/null @@ -1,101 +0,0 @@ -import torch -import torch.utils.data -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if 
fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/Osborn-bh/ChatGLM3-6B-Osborn/tool_using/openai_api_demo.py b/spaces/Osborn-bh/ChatGLM3-6B-Osborn/tool_using/openai_api_demo.py deleted file mode 100644 index 7502ec9bc7b73e570ffc0ff7c4206f558cf85498..0000000000000000000000000000000000000000 --- a/spaces/Osborn-bh/ChatGLM3-6B-Osborn/tool_using/openai_api_demo.py +++ /dev/null @@ -1,57 +0,0 @@ -import json - -import openai -from loguru import logger - -from tool_register import get_tools, dispatch_tool - -openai.api_base = "http://localhost:8000/v1" -openai.api_key = "xxx" - - -tools = get_tools() -system_info = { - "role": "system", - "content": "Answer the following questions as best as you can. You have access to the following tools:", - "tools": list(tools.values()), -} - - -def main(): - messages = [ - system_info, - { - "role": "user", - "content": "帮我查询北京的天气怎么样", - } - ] - response = openai.ChatCompletion.create( - model="chatglm3", - messages=messages, - temperature=0, - return_function_call=True - ) - function_call = json.loads(response.choices[0].message.content) - logger.info(f"Function Call Response: {function_call}") - - tool_response = dispatch_tool(function_call["name"], function_call["parameters"]) - logger.info(f"Tool Call Response: {tool_response}") - - messages = response.choices[0].history # 获取历史对话信息 - messages.append( - { - "role": "observation", - "content": tool_response, # 调用函数返回结果 - } - ) - - response = openai.ChatCompletion.create( - model="chatglm3", - messages=messages, - temperature=0, - ) - logger.info(response.choices[0].message.content) - - -if __name__ == "__main__": - main() diff --git a/spaces/Oumar199/Fake-Real-Face-Detection/fake_face_detection/utils/smoothest_attention.py b/spaces/Oumar199/Fake-Real-Face-Detection/fake_face_detection/utils/smoothest_attention.py deleted file mode 100644 index 3f2c660aea935226e99a908a91c4bd0f6f706196..0000000000000000000000000000000000000000 --- a/spaces/Oumar199/Fake-Real-Face-Detection/fake_face_detection/utils/smoothest_attention.py +++ /dev/null @@ -1,42 +0,0 @@ -import torch -import copy - -# we want to take 0.2 of the pixel and 0.7 of the mean of the pixels around it 100 times -# we will take a size between the current pixel and the pixels around it -def smooth_attention(attention: torch.Tensor, iters: int = 1000, threshold: float = 0.1, scale: float = 0.2, size: int = 3): - - # squeeze the attention - attention = copy.deepcopy(attention.squeeze()) - - # make 100 iterations - for _ in range(iters): - - # initialize the difference - difference = torch.full(attention.shape, torch.inf) - - # iterate over the pixels of the attention - for i in range(attention.shape[0]): - - for j in range(attention.shape[1]): - - # recuperate the pixel - 
pixel = attention[i, j] - - # recuperate the mean of the pixels around it - mean = attention[max(0, i - size): min(attention.shape[0], i + size), max(0, j - size): min(attention.shape[1], j + size)].mean() - - # update the attention - attention[i, j] = (1 - scale) * pixel + scale * mean - - # recuperate the difference - difference[i, j] = abs(pixel - mean) - - # compare each difference with the threshold - if (difference < threshold).all(): break - - # unsqueeze the attention - attention = attention.unsqueeze(-1) - - # return the attention - return attention - diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/cnn/utils/weight_init.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/cnn/utils/weight_init.py deleted file mode 100644 index 287a1d0bffe26e023029d48634d9b761deda7ba4..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/cnn/utils/weight_init.py +++ /dev/null @@ -1,684 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import math -import warnings - -import numpy as np -import torch -import torch.nn as nn -from torch import Tensor - -from annotator.uniformer.mmcv.utils import Registry, build_from_cfg, get_logger, print_log - -INITIALIZERS = Registry('initializer') - - -def update_init_info(module, init_info): - """Update the `_params_init_info` in the module if the value of parameters - are changed. - - Args: - module (obj:`nn.Module`): The module of PyTorch with a user-defined - attribute `_params_init_info` which records the initialization - information. - init_info (str): The string that describes the initialization. - """ - assert hasattr( - module, - '_params_init_info'), f'Can not find `_params_init_info` in {module}' - for name, param in module.named_parameters(): - - assert param in module._params_init_info, ( - f'Find a new :obj:`Parameter` ' - f'named `{name}` during executing the ' - f'`init_weights` of ' - f'`{module.__class__.__name__}`. ' - f'Please do not add or ' - f'replace parameters during executing ' - f'the `init_weights`. 
') - - # The parameter has been changed during executing the - # `init_weights` of module - mean_value = param.data.mean() - if module._params_init_info[param]['tmp_mean_value'] != mean_value: - module._params_init_info[param]['init_info'] = init_info - module._params_init_info[param]['tmp_mean_value'] = mean_value - - -def constant_init(module, val, bias=0): - if hasattr(module, 'weight') and module.weight is not None: - nn.init.constant_(module.weight, val) - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) - - -def xavier_init(module, gain=1, bias=0, distribution='normal'): - assert distribution in ['uniform', 'normal'] - if hasattr(module, 'weight') and module.weight is not None: - if distribution == 'uniform': - nn.init.xavier_uniform_(module.weight, gain=gain) - else: - nn.init.xavier_normal_(module.weight, gain=gain) - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) - - -def normal_init(module, mean=0, std=1, bias=0): - if hasattr(module, 'weight') and module.weight is not None: - nn.init.normal_(module.weight, mean, std) - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) - - -def trunc_normal_init(module: nn.Module, - mean: float = 0, - std: float = 1, - a: float = -2, - b: float = 2, - bias: float = 0) -> None: - if hasattr(module, 'weight') and module.weight is not None: - trunc_normal_(module.weight, mean, std, a, b) # type: ignore - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) # type: ignore - - -def uniform_init(module, a=0, b=1, bias=0): - if hasattr(module, 'weight') and module.weight is not None: - nn.init.uniform_(module.weight, a, b) - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) - - -def kaiming_init(module, - a=0, - mode='fan_out', - nonlinearity='relu', - bias=0, - distribution='normal'): - assert distribution in ['uniform', 'normal'] - if hasattr(module, 'weight') and module.weight is not None: - if distribution == 'uniform': - nn.init.kaiming_uniform_( - module.weight, a=a, mode=mode, nonlinearity=nonlinearity) - else: - nn.init.kaiming_normal_( - module.weight, a=a, mode=mode, nonlinearity=nonlinearity) - if hasattr(module, 'bias') and module.bias is not None: - nn.init.constant_(module.bias, bias) - - -def caffe2_xavier_init(module, bias=0): - # `XavierFill` in Caffe2 corresponds to `kaiming_uniform_` in PyTorch - # Acknowledgment to FAIR's internal code - kaiming_init( - module, - a=1, - mode='fan_in', - nonlinearity='leaky_relu', - bias=bias, - distribution='uniform') - - -def bias_init_with_prob(prior_prob): - """initialize conv/fc bias value according to a given probability value.""" - bias_init = float(-np.log((1 - prior_prob) / prior_prob)) - return bias_init - - -def _get_bases_name(m): - return [b.__name__ for b in m.__class__.__bases__] - - -class BaseInit(object): - - def __init__(self, *, bias=0, bias_prob=None, layer=None): - self.wholemodule = False - if not isinstance(bias, (int, float)): - raise TypeError(f'bias must be a number, but got a {type(bias)}') - - if bias_prob is not None: - if not isinstance(bias_prob, float): - raise TypeError(f'bias_prob type must be float, \ - but got {type(bias_prob)}') - - if layer is not None: - if not isinstance(layer, (str, list)): - raise TypeError(f'layer must be a str or a list of str, \ - but got a {type(layer)}') - else: - layer = [] - - if bias_prob is not None: - self.bias 
= bias_init_with_prob(bias_prob) - else: - self.bias = bias - self.layer = [layer] if isinstance(layer, str) else layer - - def _get_init_info(self): - info = f'{self.__class__.__name__}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Constant') -class ConstantInit(BaseInit): - """Initialize module parameters with constant values. - - Args: - val (int | float): the value to fill the weights in the module with - bias (int | float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. - """ - - def __init__(self, val, **kwargs): - super().__init__(**kwargs) - self.val = val - - def __call__(self, module): - - def init(m): - if self.wholemodule: - constant_init(m, self.val, self.bias) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - constant_init(m, self.val, self.bias) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: val={self.val}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Xavier') -class XavierInit(BaseInit): - r"""Initialize module parameters with values according to the method - described in `Understanding the difficulty of training deep feedforward - neural networks - Glorot, X. & Bengio, Y. (2010). - `_ - - Args: - gain (int | float): an optional scaling factor. Defaults to 1. - bias (int | float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - distribution (str): distribution either be ``'normal'`` - or ``'uniform'``. Defaults to ``'normal'``. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. - """ - - def __init__(self, gain=1, distribution='normal', **kwargs): - super().__init__(**kwargs) - self.gain = gain - self.distribution = distribution - - def __call__(self, module): - - def init(m): - if self.wholemodule: - xavier_init(m, self.gain, self.bias, self.distribution) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - xavier_init(m, self.gain, self.bias, self.distribution) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: gain={self.gain}, ' \ - f'distribution={self.distribution}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Normal') -class NormalInit(BaseInit): - r"""Initialize module parameters with the values drawn from the normal - distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)`. - - Args: - mean (int | float):the mean of the normal distribution. Defaults to 0. - std (int | float): the standard deviation of the normal distribution. - Defaults to 1. - bias (int | float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. 
- - """ - - def __init__(self, mean=0, std=1, **kwargs): - super().__init__(**kwargs) - self.mean = mean - self.std = std - - def __call__(self, module): - - def init(m): - if self.wholemodule: - normal_init(m, self.mean, self.std, self.bias) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - normal_init(m, self.mean, self.std, self.bias) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: mean={self.mean},' \ - f' std={self.std}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='TruncNormal') -class TruncNormalInit(BaseInit): - r"""Initialize module parameters with the values drawn from the normal - distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)` with values - outside :math:`[a, b]`. - - Args: - mean (float): the mean of the normal distribution. Defaults to 0. - std (float): the standard deviation of the normal distribution. - Defaults to 1. - a (float): The minimum cutoff value. - b ( float): The maximum cutoff value. - bias (float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. - - """ - - def __init__(self, - mean: float = 0, - std: float = 1, - a: float = -2, - b: float = 2, - **kwargs) -> None: - super().__init__(**kwargs) - self.mean = mean - self.std = std - self.a = a - self.b = b - - def __call__(self, module: nn.Module) -> None: - - def init(m): - if self.wholemodule: - trunc_normal_init(m, self.mean, self.std, self.a, self.b, - self.bias) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - trunc_normal_init(m, self.mean, self.std, self.a, self.b, - self.bias) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: a={self.a}, b={self.b},' \ - f' mean={self.mean}, std={self.std}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Uniform') -class UniformInit(BaseInit): - r"""Initialize module parameters with values drawn from the uniform - distribution :math:`\mathcal{U}(a, b)`. - - Args: - a (int | float): the lower bound of the uniform distribution. - Defaults to 0. - b (int | float): the upper bound of the uniform distribution. - Defaults to 1. - bias (int | float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. 
- """ - - def __init__(self, a=0, b=1, **kwargs): - super().__init__(**kwargs) - self.a = a - self.b = b - - def __call__(self, module): - - def init(m): - if self.wholemodule: - uniform_init(m, self.a, self.b, self.bias) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - uniform_init(m, self.a, self.b, self.bias) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: a={self.a},' \ - f' b={self.b}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Kaiming') -class KaimingInit(BaseInit): - r"""Initialize module parameters with the values according to the method - described in `Delving deep into rectifiers: Surpassing human-level - performance on ImageNet classification - He, K. et al. (2015). - `_ - - Args: - a (int | float): the negative slope of the rectifier used after this - layer (only used with ``'leaky_relu'``). Defaults to 0. - mode (str): either ``'fan_in'`` or ``'fan_out'``. Choosing - ``'fan_in'`` preserves the magnitude of the variance of the weights - in the forward pass. Choosing ``'fan_out'`` preserves the - magnitudes in the backwards pass. Defaults to ``'fan_out'``. - nonlinearity (str): the non-linear function (`nn.functional` name), - recommended to use only with ``'relu'`` or ``'leaky_relu'`` . - Defaults to 'relu'. - bias (int | float): the value to fill the bias. Defaults to 0. - bias_prob (float, optional): the probability for bias initialization. - Defaults to None. - distribution (str): distribution either be ``'normal'`` or - ``'uniform'``. Defaults to ``'normal'``. - layer (str | list[str], optional): the layer will be initialized. - Defaults to None. - """ - - def __init__(self, - a=0, - mode='fan_out', - nonlinearity='relu', - distribution='normal', - **kwargs): - super().__init__(**kwargs) - self.a = a - self.mode = mode - self.nonlinearity = nonlinearity - self.distribution = distribution - - def __call__(self, module): - - def init(m): - if self.wholemodule: - kaiming_init(m, self.a, self.mode, self.nonlinearity, - self.bias, self.distribution) - else: - layername = m.__class__.__name__ - basesname = _get_bases_name(m) - if len(set(self.layer) & set([layername] + basesname)): - kaiming_init(m, self.a, self.mode, self.nonlinearity, - self.bias, self.distribution) - - module.apply(init) - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: a={self.a}, mode={self.mode}, ' \ - f'nonlinearity={self.nonlinearity}, ' \ - f'distribution ={self.distribution}, bias={self.bias}' - return info - - -@INITIALIZERS.register_module(name='Caffe2Xavier') -class Caffe2XavierInit(KaimingInit): - # `XavierFill` in Caffe2 corresponds to `kaiming_uniform_` in PyTorch - # Acknowledgment to FAIR's internal code - def __init__(self, **kwargs): - super().__init__( - a=1, - mode='fan_in', - nonlinearity='leaky_relu', - distribution='uniform', - **kwargs) - - def __call__(self, module): - super().__call__(module) - - -@INITIALIZERS.register_module(name='Pretrained') -class PretrainedInit(object): - """Initialize module by loading a pretrained model. - - Args: - checkpoint (str): the checkpoint file of the pretrained model should - be load. - prefix (str, optional): the prefix of a sub-module in the pretrained - model. 
it is for loading a part of the pretrained model to - initialize. For example, if we would like to only load the - backbone of a detector model, we can set ``prefix='backbone.'``. - Defaults to None. - map_location (str): map tensors into proper locations. - """ - - def __init__(self, checkpoint, prefix=None, map_location=None): - self.checkpoint = checkpoint - self.prefix = prefix - self.map_location = map_location - - def __call__(self, module): - from annotator.uniformer.mmcv.runner import (_load_checkpoint_with_prefix, load_checkpoint, - load_state_dict) - logger = get_logger('mmcv') - if self.prefix is None: - print_log(f'load model from: {self.checkpoint}', logger=logger) - load_checkpoint( - module, - self.checkpoint, - map_location=self.map_location, - strict=False, - logger=logger) - else: - print_log( - f'load {self.prefix} in model from: {self.checkpoint}', - logger=logger) - state_dict = _load_checkpoint_with_prefix( - self.prefix, self.checkpoint, map_location=self.map_location) - load_state_dict(module, state_dict, strict=False, logger=logger) - - if hasattr(module, '_params_init_info'): - update_init_info(module, init_info=self._get_init_info()) - - def _get_init_info(self): - info = f'{self.__class__.__name__}: load from {self.checkpoint}' - return info - - -def _initialize(module, cfg, wholemodule=False): - func = build_from_cfg(cfg, INITIALIZERS) - # wholemodule flag is for override mode, there is no layer key in override - # and initializer will give init values for the whole module with the name - # in override. - func.wholemodule = wholemodule - func(module) - - -def _initialize_override(module, override, cfg): - if not isinstance(override, (dict, list)): - raise TypeError(f'override must be a dict or a list of dict, \ - but got {type(override)}') - - override = [override] if isinstance(override, dict) else override - - for override_ in override: - - cp_override = copy.deepcopy(override_) - name = cp_override.pop('name', None) - if name is None: - raise ValueError('`override` must contain the key "name",' - f'but got {cp_override}') - # if override only has name key, it means use args in init_cfg - if not cp_override: - cp_override.update(cfg) - # if override has name key and other args except type key, it will - # raise error - elif 'type' not in cp_override.keys(): - raise ValueError( - f'`override` need "type" key, but got {cp_override}') - - if hasattr(module, name): - _initialize(getattr(module, name), cp_override, wholemodule=True) - else: - raise RuntimeError(f'module did not have attribute {name}, ' - f'but init_cfg is {cp_override}.') - - -def initialize(module, init_cfg): - """Initialize a module. - - Args: - module (``torch.nn.Module``): the module will be initialized. - init_cfg (dict | list[dict]): initialization configuration dict to - define initializer. OpenMMLab has implemented 6 initializers - including ``Constant``, ``Xavier``, ``Normal``, ``Uniform``, - ``Kaiming``, and ``Pretrained``. 
- Example: - >>> module = nn.Linear(2, 3, bias=True) - >>> init_cfg = dict(type='Constant', layer='Linear', val =1 , bias =2) - >>> initialize(module, init_cfg) - - >>> module = nn.Sequential(nn.Conv1d(3, 1, 3), nn.Linear(1,2)) - >>> # define key ``'layer'`` for initializing layer with different - >>> # configuration - >>> init_cfg = [dict(type='Constant', layer='Conv1d', val=1), - dict(type='Constant', layer='Linear', val=2)] - >>> initialize(module, init_cfg) - - >>> # define key``'override'`` to initialize some specific part in - >>> # module - >>> class FooNet(nn.Module): - >>> def __init__(self): - >>> super().__init__() - >>> self.feat = nn.Conv2d(3, 16, 3) - >>> self.reg = nn.Conv2d(16, 10, 3) - >>> self.cls = nn.Conv2d(16, 5, 3) - >>> model = FooNet() - >>> init_cfg = dict(type='Constant', val=1, bias=2, layer='Conv2d', - >>> override=dict(type='Constant', name='reg', val=3, bias=4)) - >>> initialize(model, init_cfg) - - >>> model = ResNet(depth=50) - >>> # Initialize weights with the pretrained model. - >>> init_cfg = dict(type='Pretrained', - checkpoint='torchvision://resnet50') - >>> initialize(model, init_cfg) - - >>> # Initialize weights of a sub-module with the specific part of - >>> # a pretrained model by using "prefix". - >>> url = 'http://download.openmmlab.com/mmdetection/v2.0/retinanet/'\ - >>> 'retinanet_r50_fpn_1x_coco/'\ - >>> 'retinanet_r50_fpn_1x_coco_20200130-c2398f9e.pth' - >>> init_cfg = dict(type='Pretrained', - checkpoint=url, prefix='backbone.') - """ - if not isinstance(init_cfg, (dict, list)): - raise TypeError(f'init_cfg must be a dict or a list of dict, \ - but got {type(init_cfg)}') - - if isinstance(init_cfg, dict): - init_cfg = [init_cfg] - - for cfg in init_cfg: - # should deeply copy the original config because cfg may be used by - # other modules, e.g., one init_cfg shared by multiple bottleneck - # blocks, the expected cfg will be changed after pop and will change - # the initialization behavior of other modules - cp_cfg = copy.deepcopy(cfg) - override = cp_cfg.pop('override', None) - _initialize(module, cp_cfg) - - if override is not None: - cp_cfg.pop('layer', None) - _initialize_override(module, override, cp_cfg) - else: - # All attributes in module have same initialization. - pass - - -def _no_grad_trunc_normal_(tensor: Tensor, mean: float, std: float, a: float, - b: float) -> Tensor: - # Method based on - # https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf - # Modified from - # https://github.com/pytorch/pytorch/blob/master/torch/nn/init.py - def norm_cdf(x): - # Computes standard normal cumulative distribution function - return (1. + math.erf(x / math.sqrt(2.))) / 2. - - if (mean < a - 2 * std) or (mean > b + 2 * std): - warnings.warn( - 'mean is more than 2 std from [a, b] in nn.init.trunc_normal_. ' - 'The distribution of values may be incorrect.', - stacklevel=2) - - with torch.no_grad(): - # Values are generated by using a truncated uniform distribution and - # then using the inverse CDF for the normal distribution. - # Get upper and lower cdf values - lower = norm_cdf((a - mean) / std) - upper = norm_cdf((b - mean) / std) - - # Uniformly fill tensor with values from [lower, upper], then translate - # to [2lower-1, 2upper-1]. 
- tensor.uniform_(2 * lower - 1, 2 * upper - 1) - - # Use inverse cdf transform for normal distribution to get truncated - # standard normal - tensor.erfinv_() - - # Transform to proper mean, std - tensor.mul_(std * math.sqrt(2.)) - tensor.add_(mean) - - # Clamp to ensure it's in the proper range - tensor.clamp_(min=a, max=b) - return tensor - - -def trunc_normal_(tensor: Tensor, - mean: float = 0., - std: float = 1., - a: float = -2., - b: float = 2.) -> Tensor: - r"""Fills the input Tensor with values drawn from a truncated - normal distribution. The values are effectively drawn from the - normal distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)` - with values outside :math:`[a, b]` redrawn until they are within - the bounds. The method used for generating the random values works - best when :math:`a \leq \text{mean} \leq b`. - - Modified from - https://github.com/pytorch/pytorch/blob/master/torch/nn/init.py - - Args: - tensor (``torch.Tensor``): an n-dimensional `torch.Tensor`. - mean (float): the mean of the normal distribution. - std (float): the standard deviation of the normal distribution. - a (float): the minimum cutoff value. - b (float): the maximum cutoff value. - """ - return _no_grad_trunc_normal_(tensor, mean, std, a, b) diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/vcs/versioncontrol.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/vcs/versioncontrol.py deleted file mode 100644 index 02bbf68e7ad3ce14f191af24260312e817e12df7..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/vcs/versioncontrol.py +++ /dev/null @@ -1,705 +0,0 @@ -"""Handles all VCS (version control) support""" - -import logging -import os -import shutil -import sys -import urllib.parse -from typing import ( - TYPE_CHECKING, - Any, - Dict, - Iterable, - Iterator, - List, - Mapping, - Optional, - Tuple, - Type, - Union, -) - -from pip._internal.cli.spinners import SpinnerInterface -from pip._internal.exceptions import BadCommand, InstallationError -from pip._internal.utils.misc import ( - HiddenText, - ask_path_exists, - backup_dir, - display_path, - hide_url, - hide_value, - is_installable_dir, - rmtree, -) -from pip._internal.utils.subprocess import ( - CommandArgs, - call_subprocess, - format_command_args, - make_command, -) -from pip._internal.utils.urls import get_url_scheme - -if TYPE_CHECKING: - # Literal was introduced in Python 3.8. - # - # TODO: Remove `if TYPE_CHECKING` when dropping support for Python 3.7. - from typing import Literal - - -__all__ = ["vcs"] - - -logger = logging.getLogger(__name__) - -AuthInfo = Tuple[Optional[str], Optional[str]] - - -def is_url(name: str) -> bool: - """ - Return true if the name looks like a URL. - """ - scheme = get_url_scheme(name) - if scheme is None: - return False - return scheme in ["http", "https", "file", "ftp"] + vcs.all_schemes - - -def make_vcs_requirement_url( - repo_url: str, rev: str, project_name: str, subdir: Optional[str] = None -) -> str: - """ - Return the URL for a VCS requirement. - - Args: - repo_url: the remote VCS url, with any needed VCS prefix (e.g. "git+"). - project_name: the (unescaped) project name. 
- """ - egg_project_name = project_name.replace("-", "_") - req = f"{repo_url}@{rev}#egg={egg_project_name}" - if subdir: - req += f"&subdirectory={subdir}" - - return req - - -def find_path_to_project_root_from_repo_root( - location: str, repo_root: str -) -> Optional[str]: - """ - Find the the Python project's root by searching up the filesystem from - `location`. Return the path to project root relative to `repo_root`. - Return None if the project root is `repo_root`, or cannot be found. - """ - # find project root. - orig_location = location - while not is_installable_dir(location): - last_location = location - location = os.path.dirname(location) - if location == last_location: - # We've traversed up to the root of the filesystem without - # finding a Python project. - logger.warning( - "Could not find a Python project for directory %s (tried all " - "parent directories)", - orig_location, - ) - return None - - if os.path.samefile(repo_root, location): - return None - - return os.path.relpath(location, repo_root) - - -class RemoteNotFoundError(Exception): - pass - - -class RemoteNotValidError(Exception): - def __init__(self, url: str): - super().__init__(url) - self.url = url - - -class RevOptions: - - """ - Encapsulates a VCS-specific revision to install, along with any VCS - install options. - - Instances of this class should be treated as if immutable. - """ - - def __init__( - self, - vc_class: Type["VersionControl"], - rev: Optional[str] = None, - extra_args: Optional[CommandArgs] = None, - ) -> None: - """ - Args: - vc_class: a VersionControl subclass. - rev: the name of the revision to install. - extra_args: a list of extra options. - """ - if extra_args is None: - extra_args = [] - - self.extra_args = extra_args - self.rev = rev - self.vc_class = vc_class - self.branch_name: Optional[str] = None - - def __repr__(self) -> str: - return f"" - - @property - def arg_rev(self) -> Optional[str]: - if self.rev is None: - return self.vc_class.default_arg_rev - - return self.rev - - def to_args(self) -> CommandArgs: - """ - Return the VCS-specific command arguments. - """ - args: CommandArgs = [] - rev = self.arg_rev - if rev is not None: - args += self.vc_class.get_base_rev_args(rev) - args += self.extra_args - - return args - - def to_display(self) -> str: - if not self.rev: - return "" - - return f" (to revision {self.rev})" - - def make_new(self, rev: str) -> "RevOptions": - """ - Make a copy of the current instance, but with a new rev. - - Args: - rev: the name of the revision for the new object. 
- """ - return self.vc_class.make_rev_options(rev, extra_args=self.extra_args) - - -class VcsSupport: - _registry: Dict[str, "VersionControl"] = {} - schemes = ["ssh", "git", "hg", "bzr", "sftp", "svn"] - - def __init__(self) -> None: - # Register more schemes with urlparse for various version control - # systems - urllib.parse.uses_netloc.extend(self.schemes) - super().__init__() - - def __iter__(self) -> Iterator[str]: - return self._registry.__iter__() - - @property - def backends(self) -> List["VersionControl"]: - return list(self._registry.values()) - - @property - def dirnames(self) -> List[str]: - return [backend.dirname for backend in self.backends] - - @property - def all_schemes(self) -> List[str]: - schemes: List[str] = [] - for backend in self.backends: - schemes.extend(backend.schemes) - return schemes - - def register(self, cls: Type["VersionControl"]) -> None: - if not hasattr(cls, "name"): - logger.warning("Cannot register VCS %s", cls.__name__) - return - if cls.name not in self._registry: - self._registry[cls.name] = cls() - logger.debug("Registered VCS backend: %s", cls.name) - - def unregister(self, name: str) -> None: - if name in self._registry: - del self._registry[name] - - def get_backend_for_dir(self, location: str) -> Optional["VersionControl"]: - """ - Return a VersionControl object if a repository of that type is found - at the given directory. - """ - vcs_backends = {} - for vcs_backend in self._registry.values(): - repo_path = vcs_backend.get_repository_root(location) - if not repo_path: - continue - logger.debug("Determine that %s uses VCS: %s", location, vcs_backend.name) - vcs_backends[repo_path] = vcs_backend - - if not vcs_backends: - return None - - # Choose the VCS in the inner-most directory. Since all repository - # roots found here would be either `location` or one of its - # parents, the longest path should have the most path components, - # i.e. the backend representing the inner-most repository. - inner_most_repo_path = max(vcs_backends, key=len) - return vcs_backends[inner_most_repo_path] - - def get_backend_for_scheme(self, scheme: str) -> Optional["VersionControl"]: - """ - Return a VersionControl object or None. - """ - for vcs_backend in self._registry.values(): - if scheme in vcs_backend.schemes: - return vcs_backend - return None - - def get_backend(self, name: str) -> Optional["VersionControl"]: - """ - Return a VersionControl object or None. - """ - name = name.lower() - return self._registry.get(name) - - -vcs = VcsSupport() - - -class VersionControl: - name = "" - dirname = "" - repo_name = "" - # List of supported schemes for this Version Control - schemes: Tuple[str, ...] = () - # Iterable of environment variable names to pass to call_subprocess(). - unset_environ: Tuple[str, ...] = () - default_arg_rev: Optional[str] = None - - @classmethod - def should_add_vcs_url_prefix(cls, remote_url: str) -> bool: - """ - Return whether the vcs prefix (e.g. "git+") should be added to a - repository's remote url when used in a requirement. - """ - return not remote_url.lower().startswith(f"{cls.name}:") - - @classmethod - def get_subdirectory(cls, location: str) -> Optional[str]: - """ - Return the path to Python project root, relative to the repo root. - Return None if the project root is in the repo root. - """ - return None - - @classmethod - def get_requirement_revision(cls, repo_dir: str) -> str: - """ - Return the revision string that should be used in a requirement. 
- """ - return cls.get_revision(repo_dir) - - @classmethod - def get_src_requirement(cls, repo_dir: str, project_name: str) -> str: - """ - Return the requirement string to use to redownload the files - currently at the given repository directory. - - Args: - project_name: the (unescaped) project name. - - The return value has a form similar to the following: - - {repository_url}@{revision}#egg={project_name} - """ - repo_url = cls.get_remote_url(repo_dir) - - if cls.should_add_vcs_url_prefix(repo_url): - repo_url = f"{cls.name}+{repo_url}" - - revision = cls.get_requirement_revision(repo_dir) - subdir = cls.get_subdirectory(repo_dir) - req = make_vcs_requirement_url(repo_url, revision, project_name, subdir=subdir) - - return req - - @staticmethod - def get_base_rev_args(rev: str) -> List[str]: - """ - Return the base revision arguments for a vcs command. - - Args: - rev: the name of a revision to install. Cannot be None. - """ - raise NotImplementedError - - def is_immutable_rev_checkout(self, url: str, dest: str) -> bool: - """ - Return true if the commit hash checked out at dest matches - the revision in url. - - Always return False, if the VCS does not support immutable commit - hashes. - - This method does not check if there are local uncommitted changes - in dest after checkout, as pip currently has no use case for that. - """ - return False - - @classmethod - def make_rev_options( - cls, rev: Optional[str] = None, extra_args: Optional[CommandArgs] = None - ) -> RevOptions: - """ - Return a RevOptions object. - - Args: - rev: the name of a revision to install. - extra_args: a list of extra options. - """ - return RevOptions(cls, rev, extra_args=extra_args) - - @classmethod - def _is_local_repository(cls, repo: str) -> bool: - """ - posix absolute paths start with os.path.sep, - win32 ones start with drive (like c:\\folder) - """ - drive, tail = os.path.splitdrive(repo) - return repo.startswith(os.path.sep) or bool(drive) - - @classmethod - def get_netloc_and_auth( - cls, netloc: str, scheme: str - ) -> Tuple[str, Tuple[Optional[str], Optional[str]]]: - """ - Parse the repository URL's netloc, and return the new netloc to use - along with auth information. - - Args: - netloc: the original repository URL netloc. - scheme: the repository URL's scheme without the vcs prefix. - - This is mainly for the Subversion class to override, so that auth - information can be provided via the --username and --password options - instead of through the URL. For other subclasses like Git without - such an option, auth information must stay in the URL. - - Returns: (netloc, (username, password)). - """ - return netloc, (None, None) - - @classmethod - def get_url_rev_and_auth(cls, url: str) -> Tuple[str, Optional[str], AuthInfo]: - """ - Parse the repository URL to use, and return the URL, revision, - and auth info to use. - - Returns: (url, rev, (username, password)). - """ - scheme, netloc, path, query, frag = urllib.parse.urlsplit(url) - if "+" not in scheme: - raise ValueError( - "Sorry, {!r} is a malformed VCS url. " - "The format is +://, " - "e.g. svn+http://myrepo/svn/MyApp#egg=MyApp".format(url) - ) - # Remove the vcs prefix. - scheme = scheme.split("+", 1)[1] - netloc, user_pass = cls.get_netloc_and_auth(netloc, scheme) - rev = None - if "@" in path: - path, rev = path.rsplit("@", 1) - if not rev: - raise InstallationError( - "The URL {!r} has an empty revision (after @) " - "which is not supported. 
Include a revision after @ " - "or remove @ from the URL.".format(url) - ) - url = urllib.parse.urlunsplit((scheme, netloc, path, query, "")) - return url, rev, user_pass - - @staticmethod - def make_rev_args( - username: Optional[str], password: Optional[HiddenText] - ) -> CommandArgs: - """ - Return the RevOptions "extra arguments" to use in obtain(). - """ - return [] - - def get_url_rev_options(self, url: HiddenText) -> Tuple[HiddenText, RevOptions]: - """ - Return the URL and RevOptions object to use in obtain(), - as a tuple (url, rev_options). - """ - secret_url, rev, user_pass = self.get_url_rev_and_auth(url.secret) - username, secret_password = user_pass - password: Optional[HiddenText] = None - if secret_password is not None: - password = hide_value(secret_password) - extra_args = self.make_rev_args(username, password) - rev_options = self.make_rev_options(rev, extra_args=extra_args) - - return hide_url(secret_url), rev_options - - @staticmethod - def normalize_url(url: str) -> str: - """ - Normalize a URL for comparison by unquoting it and removing any - trailing slash. - """ - return urllib.parse.unquote(url).rstrip("/") - - @classmethod - def compare_urls(cls, url1: str, url2: str) -> bool: - """ - Compare two repo URLs for identity, ignoring incidental differences. - """ - return cls.normalize_url(url1) == cls.normalize_url(url2) - - def fetch_new( - self, dest: str, url: HiddenText, rev_options: RevOptions, verbosity: int - ) -> None: - """ - Fetch a revision from a repository, in the case that this is the - first fetch from the repository. - - Args: - dest: the directory to fetch the repository to. - rev_options: a RevOptions object. - verbosity: verbosity level. - """ - raise NotImplementedError - - def switch(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None: - """ - Switch the repo at ``dest`` to point to ``URL``. - - Args: - rev_options: a RevOptions object. - """ - raise NotImplementedError - - def update(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None: - """ - Update an already-existing repo to the given ``rev_options``. - - Args: - rev_options: a RevOptions object. - """ - raise NotImplementedError - - @classmethod - def is_commit_id_equal(cls, dest: str, name: Optional[str]) -> bool: - """ - Return whether the id of the current commit equals the given name. - - Args: - dest: the repository directory. - name: a string name. - """ - raise NotImplementedError - - def obtain(self, dest: str, url: HiddenText, verbosity: int) -> None: - """ - Install or update in editable mode the package represented by this - VersionControl object. - - :param dest: the repository directory in which to install or update. - :param url: the repository URL starting with a vcs prefix. - :param verbosity: verbosity level. 
- """ - url, rev_options = self.get_url_rev_options(url) - - if not os.path.exists(dest): - self.fetch_new(dest, url, rev_options, verbosity=verbosity) - return - - rev_display = rev_options.to_display() - if self.is_repository_directory(dest): - existing_url = self.get_remote_url(dest) - if self.compare_urls(existing_url, url.secret): - logger.debug( - "%s in %s exists, and has correct URL (%s)", - self.repo_name.title(), - display_path(dest), - url, - ) - if not self.is_commit_id_equal(dest, rev_options.rev): - logger.info( - "Updating %s %s%s", - display_path(dest), - self.repo_name, - rev_display, - ) - self.update(dest, url, rev_options) - else: - logger.info("Skipping because already up-to-date.") - return - - logger.warning( - "%s %s in %s exists with URL %s", - self.name, - self.repo_name, - display_path(dest), - existing_url, - ) - prompt = ("(s)witch, (i)gnore, (w)ipe, (b)ackup ", ("s", "i", "w", "b")) - else: - logger.warning( - "Directory %s already exists, and is not a %s %s.", - dest, - self.name, - self.repo_name, - ) - # https://github.com/python/mypy/issues/1174 - prompt = ("(i)gnore, (w)ipe, (b)ackup ", ("i", "w", "b")) # type: ignore - - logger.warning( - "The plan is to install the %s repository %s", - self.name, - url, - ) - response = ask_path_exists("What to do? {}".format(prompt[0]), prompt[1]) - - if response == "a": - sys.exit(-1) - - if response == "w": - logger.warning("Deleting %s", display_path(dest)) - rmtree(dest) - self.fetch_new(dest, url, rev_options, verbosity=verbosity) - return - - if response == "b": - dest_dir = backup_dir(dest) - logger.warning("Backing up %s to %s", display_path(dest), dest_dir) - shutil.move(dest, dest_dir) - self.fetch_new(dest, url, rev_options, verbosity=verbosity) - return - - # Do nothing if the response is "i". - if response == "s": - logger.info( - "Switching %s %s to %s%s", - self.repo_name, - display_path(dest), - url, - rev_display, - ) - self.switch(dest, url, rev_options) - - def unpack(self, location: str, url: HiddenText, verbosity: int) -> None: - """ - Clean up current location and download the url repository - (and vcs infos) into location - - :param url: the repository URL starting with a vcs prefix. - :param verbosity: verbosity level. - """ - if os.path.exists(location): - rmtree(location) - self.obtain(location, url=url, verbosity=verbosity) - - @classmethod - def get_remote_url(cls, location: str) -> str: - """ - Return the url used at location - - Raises RemoteNotFoundError if the repository does not have a remote - url configured. - """ - raise NotImplementedError - - @classmethod - def get_revision(cls, location: str) -> str: - """ - Return the current commit id of the files at the given location. 
- """ - raise NotImplementedError - - @classmethod - def run_command( - cls, - cmd: Union[List[str], CommandArgs], - show_stdout: bool = True, - cwd: Optional[str] = None, - on_returncode: 'Literal["raise", "warn", "ignore"]' = "raise", - extra_ok_returncodes: Optional[Iterable[int]] = None, - command_desc: Optional[str] = None, - extra_environ: Optional[Mapping[str, Any]] = None, - spinner: Optional[SpinnerInterface] = None, - log_failed_cmd: bool = True, - stdout_only: bool = False, - ) -> str: - """ - Run a VCS subcommand - This is simply a wrapper around call_subprocess that adds the VCS - command name, and checks that the VCS is available - """ - cmd = make_command(cls.name, *cmd) - if command_desc is None: - command_desc = format_command_args(cmd) - try: - return call_subprocess( - cmd, - show_stdout, - cwd, - on_returncode=on_returncode, - extra_ok_returncodes=extra_ok_returncodes, - command_desc=command_desc, - extra_environ=extra_environ, - unset_environ=cls.unset_environ, - spinner=spinner, - log_failed_cmd=log_failed_cmd, - stdout_only=stdout_only, - ) - except FileNotFoundError: - # errno.ENOENT = no such file or directory - # In other words, the VCS executable isn't available - raise BadCommand( - f"Cannot find command {cls.name!r} - do you have " - f"{cls.name!r} installed and in your PATH?" - ) - except PermissionError: - # errno.EACCES = Permission denied - # This error occurs, for instance, when the command is installed - # only for another user. So, the current user don't have - # permission to call the other user command. - raise BadCommand( - f"No permission to execute {cls.name!r} - install it " - f"locally, globally (ask admin), or check your PATH. " - f"See possible solutions at " - f"https://pip.pypa.io/en/latest/reference/pip_freeze/" - f"#fixing-permission-denied." - ) - - @classmethod - def is_repository_directory(cls, path: str) -> bool: - """ - Return whether a directory path is a repository directory. - """ - logger.debug("Checking in %s for %s (%s)...", path, cls.dirname, cls.name) - return os.path.exists(os.path.join(path, cls.dirname)) - - @classmethod - def get_repository_root(cls, location: str) -> Optional[str]: - """ - Return the "root" (top-level) directory controlled by the vcs, - or `None` if the directory is not in any. - - It is meant to be overridden to implement smarter detection - mechanisms for specific vcs. - - This can do more than is_repository_directory() alone. For - example, the Git override checks that Git is actually available. 
- """ - if cls.is_repository_directory(location): - return location - return None diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/cachecontrol/_cmd.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/cachecontrol/_cmd.py deleted file mode 100644 index 4266b5ee92a24b5e0ef65689a1b94a98bb4a9b56..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/cachecontrol/_cmd.py +++ /dev/null @@ -1,61 +0,0 @@ -# SPDX-FileCopyrightText: 2015 Eric Larson -# -# SPDX-License-Identifier: Apache-2.0 - -import logging - -from pip._vendor import requests - -from pip._vendor.cachecontrol.adapter import CacheControlAdapter -from pip._vendor.cachecontrol.cache import DictCache -from pip._vendor.cachecontrol.controller import logger - -from argparse import ArgumentParser - - -def setup_logging(): - logger.setLevel(logging.DEBUG) - handler = logging.StreamHandler() - logger.addHandler(handler) - - -def get_session(): - adapter = CacheControlAdapter( - DictCache(), cache_etags=True, serializer=None, heuristic=None - ) - sess = requests.Session() - sess.mount("http://", adapter) - sess.mount("https://", adapter) - - sess.cache_controller = adapter.controller - return sess - - -def get_args(): - parser = ArgumentParser() - parser.add_argument("url", help="The URL to try and cache") - return parser.parse_args() - - -def main(args=None): - args = get_args() - sess = get_session() - - # Make a request to get a response - resp = sess.get(args.url) - - # Turn on logging - setup_logging() - - # try setting the cache - sess.cache_controller.cache_response(resp.request, resp.raw) - - # Now try to get it - if sess.cache_controller.cached_request(resp.request): - print("Cached!") - else: - print("Not cached :(") - - -if __name__ == "__main__": - main() diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/cmd.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/cmd.py deleted file mode 100644 index 68a9267c65babd799cec04213c20ad4f3289e109..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/cmd.py +++ /dev/null @@ -1,436 +0,0 @@ -"""distutils.cmd - -Provides the Command class, the base class for the command classes -in the distutils.command package. -""" - -import sys -import os -import re -from distutils.errors import DistutilsOptionError -from distutils import util, dir_util, file_util, archive_util, dep_util -from distutils import log - - -class Command: - """Abstract base class for defining command classes, the "worker bees" - of the Distutils. A useful analogy for command classes is to think of - them as subroutines with local variables called "options". The options - are "declared" in 'initialize_options()' and "defined" (given their - final values, aka "finalized") in 'finalize_options()', both of which - must be defined by every command class. The distinction between the - two is necessary because option values might come from the outside - world (command line, config file, ...), and any options dependent on - other options must be computed *after* these outside influences have - been processed -- hence 'finalize_options()'. The "body" of the - subroutine, where it does all its work based on the values of its - options, is the 'run()' method, which must also be implemented by every - command class. 
- """ - - # 'sub_commands' formalizes the notion of a "family" of commands, - # eg. "install" as the parent with sub-commands "install_lib", - # "install_headers", etc. The parent of a family of commands - # defines 'sub_commands' as a class attribute; it's a list of - # (command_name : string, predicate : unbound_method | string | None) - # tuples, where 'predicate' is a method of the parent command that - # determines whether the corresponding command is applicable in the - # current situation. (Eg. we "install_headers" is only applicable if - # we have any C header files to install.) If 'predicate' is None, - # that command is always applicable. - # - # 'sub_commands' is usually defined at the *end* of a class, because - # predicates can be unbound methods, so they must already have been - # defined. The canonical example is the "install" command. - sub_commands = [] - - # -- Creation/initialization methods ------------------------------- - - def __init__(self, dist): - """Create and initialize a new Command object. Most importantly, - invokes the 'initialize_options()' method, which is the real - initializer and depends on the actual command being - instantiated. - """ - # late import because of mutual dependence between these classes - from distutils.dist import Distribution - - if not isinstance(dist, Distribution): - raise TypeError("dist must be a Distribution instance") - if self.__class__ is Command: - raise RuntimeError("Command is an abstract class") - - self.distribution = dist - self.initialize_options() - - # Per-command versions of the global flags, so that the user can - # customize Distutils' behaviour command-by-command and let some - # commands fall back on the Distribution's behaviour. None means - # "not defined, check self.distribution's copy", while 0 or 1 mean - # false and true (duh). Note that this means figuring out the real - # value of each flag is a touch complicated -- hence "self._dry_run" - # will be handled by __getattr__, below. - # XXX This needs to be fixed. - self._dry_run = None - - # verbose is largely ignored, but needs to be set for - # backwards compatibility (I think)? - self.verbose = dist.verbose - - # Some commands define a 'self.force' option to ignore file - # timestamps, but methods defined *here* assume that - # 'self.force' exists for all commands. So define it here - # just to be safe. - self.force = None - - # The 'help' flag is just used for command-line parsing, so - # none of that complicated bureaucracy is needed. - self.help = 0 - - # 'finalized' records whether or not 'finalize_options()' has been - # called. 'finalize_options()' itself should not pay attention to - # this flag: it is the business of 'ensure_finalized()', which - # always calls 'finalize_options()', to respect/update it. - self.finalized = 0 - - # XXX A more explicit way to customize dry_run would be better. 
- def __getattr__(self, attr): - if attr == 'dry_run': - myval = getattr(self, "_" + attr) - if myval is None: - return getattr(self.distribution, attr) - else: - return myval - else: - raise AttributeError(attr) - - def ensure_finalized(self): - if not self.finalized: - self.finalize_options() - self.finalized = 1 - - # Subclasses must define: - # initialize_options() - # provide default values for all options; may be customized by - # setup script, by options from config file(s), or by command-line - # options - # finalize_options() - # decide on the final values for all options; this is called - # after all possible intervention from the outside world - # (command-line, option file, etc.) has been processed - # run() - # run the command: do whatever it is we're here to do, - # controlled by the command's various option values - - def initialize_options(self): - """Set default values for all the options that this command - supports. Note that these defaults may be overridden by other - commands, by the setup script, by config files, or by the - command-line. Thus, this is not the place to code dependencies - between options; generally, 'initialize_options()' implementations - are just a bunch of "self.foo = None" assignments. - - This method must be implemented by all command classes. - """ - raise RuntimeError( - "abstract method -- subclass %s must override" % self.__class__ - ) - - def finalize_options(self): - """Set final values for all the options that this command supports. - This is always called as late as possible, ie. after any option - assignments from the command-line or from other commands have been - done. Thus, this is the place to code option dependencies: if - 'foo' depends on 'bar', then it is safe to set 'foo' from 'bar' as - long as 'foo' still has the same value it was assigned in - 'initialize_options()'. - - This method must be implemented by all command classes. - """ - raise RuntimeError( - "abstract method -- subclass %s must override" % self.__class__ - ) - - def dump_options(self, header=None, indent=""): - from distutils.fancy_getopt import longopt_xlate - - if header is None: - header = "command options for '%s':" % self.get_command_name() - self.announce(indent + header, level=log.INFO) - indent = indent + " " - for (option, _, _) in self.user_options: - option = option.translate(longopt_xlate) - if option[-1] == "=": - option = option[:-1] - value = getattr(self, option) - self.announce(indent + "{} = {}".format(option, value), level=log.INFO) - - def run(self): - """A command's raison d'etre: carry out the action it exists to - perform, controlled by the options initialized in - 'initialize_options()', customized by other commands, the setup - script, the command-line, and config files, and finalized in - 'finalize_options()'. All terminal output and filesystem - interaction should be done by 'run()'. - - This method must be implemented by all command classes. - """ - raise RuntimeError( - "abstract method -- subclass %s must override" % self.__class__ - ) - - def announce(self, msg, level=1): - """If the current verbosity level is of greater than or equal to - 'level' print 'msg' to stdout. - """ - log.log(level, msg) - - def debug_print(self, msg): - """Print 'msg' to stdout if the global DEBUG (taken from the - DISTUTILS_DEBUG environment variable) flag is true. 
- """ - from distutils.debug import DEBUG - - if DEBUG: - print(msg) - sys.stdout.flush() - - # -- Option validation methods ------------------------------------- - # (these are very handy in writing the 'finalize_options()' method) - # - # NB. the general philosophy here is to ensure that a particular option - # value meets certain type and value constraints. If not, we try to - # force it into conformance (eg. if we expect a list but have a string, - # split the string on comma and/or whitespace). If we can't force the - # option into conformance, raise DistutilsOptionError. Thus, command - # classes need do nothing more than (eg.) - # self.ensure_string_list('foo') - # and they can be guaranteed that thereafter, self.foo will be - # a list of strings. - - def _ensure_stringlike(self, option, what, default=None): - val = getattr(self, option) - if val is None: - setattr(self, option, default) - return default - elif not isinstance(val, str): - raise DistutilsOptionError( - "'{}' must be a {} (got `{}`)".format(option, what, val) - ) - return val - - def ensure_string(self, option, default=None): - """Ensure that 'option' is a string; if not defined, set it to - 'default'. - """ - self._ensure_stringlike(option, "string", default) - - def ensure_string_list(self, option): - r"""Ensure that 'option' is a list of strings. If 'option' is - currently a string, we split it either on /,\s*/ or /\s+/, so - "foo bar baz", "foo,bar,baz", and "foo, bar baz" all become - ["foo", "bar", "baz"]. - """ - val = getattr(self, option) - if val is None: - return - elif isinstance(val, str): - setattr(self, option, re.split(r',\s*|\s+', val)) - else: - if isinstance(val, list): - ok = all(isinstance(v, str) for v in val) - else: - ok = False - if not ok: - raise DistutilsOptionError( - "'{}' must be a list of strings (got {!r})".format(option, val) - ) - - def _ensure_tested_string(self, option, tester, what, error_fmt, default=None): - val = self._ensure_stringlike(option, what, default) - if val is not None and not tester(val): - raise DistutilsOptionError( - ("error in '%s' option: " + error_fmt) % (option, val) - ) - - def ensure_filename(self, option): - """Ensure that 'option' is the name of an existing file.""" - self._ensure_tested_string( - option, os.path.isfile, "filename", "'%s' does not exist or is not a file" - ) - - def ensure_dirname(self, option): - self._ensure_tested_string( - option, - os.path.isdir, - "directory name", - "'%s' does not exist or is not a directory", - ) - - # -- Convenience methods for commands ------------------------------ - - def get_command_name(self): - if hasattr(self, 'command_name'): - return self.command_name - else: - return self.__class__.__name__ - - def set_undefined_options(self, src_cmd, *option_pairs): - """Set the values of any "undefined" options from corresponding - option values in some other command object. "Undefined" here means - "is None", which is the convention used to indicate that an option - has not been changed between 'initialize_options()' and - 'finalize_options()'. Usually called from 'finalize_options()' for - options that depend on some other command rather than another - option of the same command. 'src_cmd' is the other command from - which option values will be taken (a command object will be created - for it if necessary); the remaining arguments are - '(src_option,dst_option)' tuples which mean "take the value of - 'src_option' in the 'src_cmd' command object, and copy it to - 'dst_option' in the current command object". 
- """ - # Option_pairs: list of (src_option, dst_option) tuples - src_cmd_obj = self.distribution.get_command_obj(src_cmd) - src_cmd_obj.ensure_finalized() - for (src_option, dst_option) in option_pairs: - if getattr(self, dst_option) is None: - setattr(self, dst_option, getattr(src_cmd_obj, src_option)) - - def get_finalized_command(self, command, create=1): - """Wrapper around Distribution's 'get_command_obj()' method: find - (create if necessary and 'create' is true) the command object for - 'command', call its 'ensure_finalized()' method, and return the - finalized command object. - """ - cmd_obj = self.distribution.get_command_obj(command, create) - cmd_obj.ensure_finalized() - return cmd_obj - - # XXX rename to 'get_reinitialized_command()'? (should do the - # same in dist.py, if so) - def reinitialize_command(self, command, reinit_subcommands=0): - return self.distribution.reinitialize_command(command, reinit_subcommands) - - def run_command(self, command): - """Run some other command: uses the 'run_command()' method of - Distribution, which creates and finalizes the command object if - necessary and then invokes its 'run()' method. - """ - self.distribution.run_command(command) - - def get_sub_commands(self): - """Determine the sub-commands that are relevant in the current - distribution (ie., that need to be run). This is based on the - 'sub_commands' class attribute: each tuple in that list may include - a method that we call to determine if the subcommand needs to be - run for the current distribution. Return a list of command names. - """ - commands = [] - for (cmd_name, method) in self.sub_commands: - if method is None or method(self): - commands.append(cmd_name) - return commands - - # -- External world manipulation ----------------------------------- - - def warn(self, msg): - log.warn("warning: %s: %s\n", self.get_command_name(), msg) - - def execute(self, func, args, msg=None, level=1): - util.execute(func, args, msg, dry_run=self.dry_run) - - def mkpath(self, name, mode=0o777): - dir_util.mkpath(name, mode, dry_run=self.dry_run) - - def copy_file( - self, infile, outfile, preserve_mode=1, preserve_times=1, link=None, level=1 - ): - """Copy a file respecting verbose, dry-run and force flags. (The - former two default to whatever is in the Distribution object, and - the latter defaults to false for commands that don't define it.)""" - return file_util.copy_file( - infile, - outfile, - preserve_mode, - preserve_times, - not self.force, - link, - dry_run=self.dry_run, - ) - - def copy_tree( - self, - infile, - outfile, - preserve_mode=1, - preserve_times=1, - preserve_symlinks=0, - level=1, - ): - """Copy an entire directory tree respecting verbose, dry-run, - and force flags. 
- """ - return dir_util.copy_tree( - infile, - outfile, - preserve_mode, - preserve_times, - preserve_symlinks, - not self.force, - dry_run=self.dry_run, - ) - - def move_file(self, src, dst, level=1): - """Move a file respecting dry-run flag.""" - return file_util.move_file(src, dst, dry_run=self.dry_run) - - def spawn(self, cmd, search_path=1, level=1): - """Spawn an external command respecting dry-run flag.""" - from distutils.spawn import spawn - - spawn(cmd, search_path, dry_run=self.dry_run) - - def make_archive( - self, base_name, format, root_dir=None, base_dir=None, owner=None, group=None - ): - return archive_util.make_archive( - base_name, - format, - root_dir, - base_dir, - dry_run=self.dry_run, - owner=owner, - group=group, - ) - - def make_file( - self, infiles, outfile, func, args, exec_msg=None, skip_msg=None, level=1 - ): - """Special case of 'execute()' for operations that process one or - more input files and generate one output file. Works just like - 'execute()', except the operation is skipped and a different - message printed if 'outfile' already exists and is newer than all - files listed in 'infiles'. If the command defined 'self.force', - and it is true, then the command is unconditionally run -- does no - timestamp checks. - """ - if skip_msg is None: - skip_msg = "skipping %s (inputs unchanged)" % outfile - - # Allow 'infiles' to be a single string - if isinstance(infiles, str): - infiles = (infiles,) - elif not isinstance(infiles, (list, tuple)): - raise TypeError("'infiles' must be a string, or a list or tuple of strings") - - if exec_msg is None: - exec_msg = "generating {} from {}".format(outfile, ', '.join(infiles)) - - # If 'outfile' must be regenerated (either because it doesn't - # exist, is out-of-date, or the 'force' flag is true) then - # perform the action that presumably regenerates it - if self.force or dep_util.newer_group(infiles, outfile): - self.execute(func, args, exec_msg, level) - # Otherwise, print the "skip" message - else: - log.debug(skip_msg) diff --git a/spaces/Reself/StableVideo/ldm/modules/midas/midas/blocks.py b/spaces/Reself/StableVideo/ldm/modules/midas/midas/blocks.py deleted file mode 100644 index 2145d18fa98060a618536d9a64fe6589e9be4f78..0000000000000000000000000000000000000000 --- a/spaces/Reself/StableVideo/ldm/modules/midas/midas/blocks.py +++ /dev/null @@ -1,342 +0,0 @@ -import torch -import torch.nn as nn - -from .vit import ( - _make_pretrained_vitb_rn50_384, - _make_pretrained_vitl16_384, - _make_pretrained_vitb16_384, - forward_vit, -) - -def _make_encoder(backbone, features, use_pretrained, groups=1, expand=False, exportable=True, hooks=None, use_vit_only=False, use_readout="ignore",): - if backbone == "vitl16_384": - pretrained = _make_pretrained_vitl16_384( - use_pretrained, hooks=hooks, use_readout=use_readout - ) - scratch = _make_scratch( - [256, 512, 1024, 1024], features, groups=groups, expand=expand - ) # ViT-L/16 - 85.0% Top1 (backbone) - elif backbone == "vitb_rn50_384": - pretrained = _make_pretrained_vitb_rn50_384( - use_pretrained, - hooks=hooks, - use_vit_only=use_vit_only, - use_readout=use_readout, - ) - scratch = _make_scratch( - [256, 512, 768, 768], features, groups=groups, expand=expand - ) # ViT-H/16 - 85.0% Top1 (backbone) - elif backbone == "vitb16_384": - pretrained = _make_pretrained_vitb16_384( - use_pretrained, hooks=hooks, use_readout=use_readout - ) - scratch = _make_scratch( - [96, 192, 384, 768], features, groups=groups, expand=expand - ) # ViT-B/16 - 84.6% Top1 (backbone) - elif backbone 
== "resnext101_wsl": - pretrained = _make_pretrained_resnext101_wsl(use_pretrained) - scratch = _make_scratch([256, 512, 1024, 2048], features, groups=groups, expand=expand) # efficientnet_lite3 - elif backbone == "efficientnet_lite3": - pretrained = _make_pretrained_efficientnet_lite3(use_pretrained, exportable=exportable) - scratch = _make_scratch([32, 48, 136, 384], features, groups=groups, expand=expand) # efficientnet_lite3 - else: - print(f"Backbone '{backbone}' not implemented") - assert False - - return pretrained, scratch - - -def _make_scratch(in_shape, out_shape, groups=1, expand=False): - scratch = nn.Module() - - out_shape1 = out_shape - out_shape2 = out_shape - out_shape3 = out_shape - out_shape4 = out_shape - if expand==True: - out_shape1 = out_shape - out_shape2 = out_shape*2 - out_shape3 = out_shape*4 - out_shape4 = out_shape*8 - - scratch.layer1_rn = nn.Conv2d( - in_shape[0], out_shape1, kernel_size=3, stride=1, padding=1, bias=False, groups=groups - ) - scratch.layer2_rn = nn.Conv2d( - in_shape[1], out_shape2, kernel_size=3, stride=1, padding=1, bias=False, groups=groups - ) - scratch.layer3_rn = nn.Conv2d( - in_shape[2], out_shape3, kernel_size=3, stride=1, padding=1, bias=False, groups=groups - ) - scratch.layer4_rn = nn.Conv2d( - in_shape[3], out_shape4, kernel_size=3, stride=1, padding=1, bias=False, groups=groups - ) - - return scratch - - -def _make_pretrained_efficientnet_lite3(use_pretrained, exportable=False): - efficientnet = torch.hub.load( - "rwightman/gen-efficientnet-pytorch", - "tf_efficientnet_lite3", - pretrained=use_pretrained, - exportable=exportable - ) - return _make_efficientnet_backbone(efficientnet) - - -def _make_efficientnet_backbone(effnet): - pretrained = nn.Module() - - pretrained.layer1 = nn.Sequential( - effnet.conv_stem, effnet.bn1, effnet.act1, *effnet.blocks[0:2] - ) - pretrained.layer2 = nn.Sequential(*effnet.blocks[2:3]) - pretrained.layer3 = nn.Sequential(*effnet.blocks[3:5]) - pretrained.layer4 = nn.Sequential(*effnet.blocks[5:9]) - - return pretrained - - -def _make_resnet_backbone(resnet): - pretrained = nn.Module() - pretrained.layer1 = nn.Sequential( - resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool, resnet.layer1 - ) - - pretrained.layer2 = resnet.layer2 - pretrained.layer3 = resnet.layer3 - pretrained.layer4 = resnet.layer4 - - return pretrained - - -def _make_pretrained_resnext101_wsl(use_pretrained): - resnet = torch.hub.load("facebookresearch/WSL-Images", "resnext101_32x8d_wsl") - return _make_resnet_backbone(resnet) - - - -class Interpolate(nn.Module): - """Interpolation module. - """ - - def __init__(self, scale_factor, mode, align_corners=False): - """Init. - - Args: - scale_factor (float): scaling - mode (str): interpolation mode - """ - super(Interpolate, self).__init__() - - self.interp = nn.functional.interpolate - self.scale_factor = scale_factor - self.mode = mode - self.align_corners = align_corners - - def forward(self, x): - """Forward pass. - - Args: - x (tensor): input - - Returns: - tensor: interpolated data - """ - - x = self.interp( - x, scale_factor=self.scale_factor, mode=self.mode, align_corners=self.align_corners - ) - - return x - - -class ResidualConvUnit(nn.Module): - """Residual convolution module. - """ - - def __init__(self, features): - """Init. 
- - Args: - features (int): number of features - """ - super().__init__() - - self.conv1 = nn.Conv2d( - features, features, kernel_size=3, stride=1, padding=1, bias=True - ) - - self.conv2 = nn.Conv2d( - features, features, kernel_size=3, stride=1, padding=1, bias=True - ) - - self.relu = nn.ReLU(inplace=True) - - def forward(self, x): - """Forward pass. - - Args: - x (tensor): input - - Returns: - tensor: output - """ - out = self.relu(x) - out = self.conv1(out) - out = self.relu(out) - out = self.conv2(out) - - return out + x - - -class FeatureFusionBlock(nn.Module): - """Feature fusion block. - """ - - def __init__(self, features): - """Init. - - Args: - features (int): number of features - """ - super(FeatureFusionBlock, self).__init__() - - self.resConfUnit1 = ResidualConvUnit(features) - self.resConfUnit2 = ResidualConvUnit(features) - - def forward(self, *xs): - """Forward pass. - - Returns: - tensor: output - """ - output = xs[0] - - if len(xs) == 2: - output += self.resConfUnit1(xs[1]) - - output = self.resConfUnit2(output) - - output = nn.functional.interpolate( - output, scale_factor=2, mode="bilinear", align_corners=True - ) - - return output - - - - -class ResidualConvUnit_custom(nn.Module): - """Residual convolution module. - """ - - def __init__(self, features, activation, bn): - """Init. - - Args: - features (int): number of features - """ - super().__init__() - - self.bn = bn - - self.groups=1 - - self.conv1 = nn.Conv2d( - features, features, kernel_size=3, stride=1, padding=1, bias=True, groups=self.groups - ) - - self.conv2 = nn.Conv2d( - features, features, kernel_size=3, stride=1, padding=1, bias=True, groups=self.groups - ) - - if self.bn==True: - self.bn1 = nn.BatchNorm2d(features) - self.bn2 = nn.BatchNorm2d(features) - - self.activation = activation - - self.skip_add = nn.quantized.FloatFunctional() - - def forward(self, x): - """Forward pass. - - Args: - x (tensor): input - - Returns: - tensor: output - """ - - out = self.activation(x) - out = self.conv1(out) - if self.bn==True: - out = self.bn1(out) - - out = self.activation(out) - out = self.conv2(out) - if self.bn==True: - out = self.bn2(out) - - if self.groups > 1: - out = self.conv_merge(out) - - return self.skip_add.add(out, x) - - # return out + x - - -class FeatureFusionBlock_custom(nn.Module): - """Feature fusion block. - """ - - def __init__(self, features, activation, deconv=False, bn=False, expand=False, align_corners=True): - """Init. - - Args: - features (int): number of features - """ - super(FeatureFusionBlock_custom, self).__init__() - - self.deconv = deconv - self.align_corners = align_corners - - self.groups=1 - - self.expand = expand - out_features = features - if self.expand==True: - out_features = features//2 - - self.out_conv = nn.Conv2d(features, out_features, kernel_size=1, stride=1, padding=0, bias=True, groups=1) - - self.resConfUnit1 = ResidualConvUnit_custom(features, activation, bn) - self.resConfUnit2 = ResidualConvUnit_custom(features, activation, bn) - - self.skip_add = nn.quantized.FloatFunctional() - - def forward(self, *xs): - """Forward pass. 
- - Returns: - tensor: output - """ - output = xs[0] - - if len(xs) == 2: - res = self.resConfUnit1(xs[1]) - output = self.skip_add.add(output, res) - # output += res - - output = self.resConfUnit2(output) - - output = nn.functional.interpolate( - output, scale_factor=2, mode="bilinear", align_corners=self.align_corners - ) - - output = self.out_conv(output) - - return output - diff --git a/spaces/Ricecake123/RVC-demo/lib/uvr5_pack/lib_v5/dataset.py b/spaces/Ricecake123/RVC-demo/lib/uvr5_pack/lib_v5/dataset.py deleted file mode 100644 index cfd01a174978d97180a897e40cb59ecadec1d12e..0000000000000000000000000000000000000000 --- a/spaces/Ricecake123/RVC-demo/lib/uvr5_pack/lib_v5/dataset.py +++ /dev/null @@ -1,183 +0,0 @@ -import os -import random - -import numpy as np -import torch -import torch.utils.data -from tqdm import tqdm - -from . import spec_utils - - -class VocalRemoverValidationSet(torch.utils.data.Dataset): - def __init__(self, patch_list): - self.patch_list = patch_list - - def __len__(self): - return len(self.patch_list) - - def __getitem__(self, idx): - path = self.patch_list[idx] - data = np.load(path) - - X, y = data["X"], data["y"] - - X_mag = np.abs(X) - y_mag = np.abs(y) - - return X_mag, y_mag - - -def make_pair(mix_dir, inst_dir): - input_exts = [".wav", ".m4a", ".mp3", ".mp4", ".flac"] - - X_list = sorted( - [ - os.path.join(mix_dir, fname) - for fname in os.listdir(mix_dir) - if os.path.splitext(fname)[1] in input_exts - ] - ) - y_list = sorted( - [ - os.path.join(inst_dir, fname) - for fname in os.listdir(inst_dir) - if os.path.splitext(fname)[1] in input_exts - ] - ) - - filelist = list(zip(X_list, y_list)) - - return filelist - - -def train_val_split(dataset_dir, split_mode, val_rate, val_filelist): - if split_mode == "random": - filelist = make_pair( - os.path.join(dataset_dir, "mixtures"), - os.path.join(dataset_dir, "instruments"), - ) - - random.shuffle(filelist) - - if len(val_filelist) == 0: - val_size = int(len(filelist) * val_rate) - train_filelist = filelist[:-val_size] - val_filelist = filelist[-val_size:] - else: - train_filelist = [ - pair for pair in filelist if list(pair) not in val_filelist - ] - elif split_mode == "subdirs": - if len(val_filelist) != 0: - raise ValueError( - "The `val_filelist` option is not available in `subdirs` mode" - ) - - train_filelist = make_pair( - os.path.join(dataset_dir, "training/mixtures"), - os.path.join(dataset_dir, "training/instruments"), - ) - - val_filelist = make_pair( - os.path.join(dataset_dir, "validation/mixtures"), - os.path.join(dataset_dir, "validation/instruments"), - ) - - return train_filelist, val_filelist - - -def augment(X, y, reduction_rate, reduction_mask, mixup_rate, mixup_alpha): - perm = np.random.permutation(len(X)) - for i, idx in enumerate(tqdm(perm)): - if np.random.uniform() < reduction_rate: - y[idx] = spec_utils.reduce_vocal_aggressively( - X[idx], y[idx], reduction_mask - ) - - if np.random.uniform() < 0.5: - # swap channel - X[idx] = X[idx, ::-1] - y[idx] = y[idx, ::-1] - if np.random.uniform() < 0.02: - # mono - X[idx] = X[idx].mean(axis=0, keepdims=True) - y[idx] = y[idx].mean(axis=0, keepdims=True) - if np.random.uniform() < 0.02: - # inst - X[idx] = y[idx] - - if np.random.uniform() < mixup_rate and i < len(perm) - 1: - lam = np.random.beta(mixup_alpha, mixup_alpha) - X[idx] = lam * X[idx] + (1 - lam) * X[perm[i + 1]] - y[idx] = lam * y[idx] + (1 - lam) * y[perm[i + 1]] - - return X, y - - -def make_padding(width, cropsize, offset): - left = offset - roi_size = cropsize - left * 2 - 
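    # roi_size is the stride at which training crops tile the spectrogram once
    # `offset` frames of context are reserved on each side; the right padding
    # computed below makes the padded width fit a whole number of such crops.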
if roi_size == 0: - roi_size = cropsize - right = roi_size - (width % roi_size) + left - - return left, right, roi_size - - -def make_training_set(filelist, cropsize, patches, sr, hop_length, n_fft, offset): - len_dataset = patches * len(filelist) - - X_dataset = np.zeros((len_dataset, 2, n_fft // 2 + 1, cropsize), dtype=np.complex64) - y_dataset = np.zeros((len_dataset, 2, n_fft // 2 + 1, cropsize), dtype=np.complex64) - - for i, (X_path, y_path) in enumerate(tqdm(filelist)): - X, y = spec_utils.cache_or_load(X_path, y_path, sr, hop_length, n_fft) - coef = np.max([np.abs(X).max(), np.abs(y).max()]) - X, y = X / coef, y / coef - - l, r, roi_size = make_padding(X.shape[2], cropsize, offset) - X_pad = np.pad(X, ((0, 0), (0, 0), (l, r)), mode="constant") - y_pad = np.pad(y, ((0, 0), (0, 0), (l, r)), mode="constant") - - starts = np.random.randint(0, X_pad.shape[2] - cropsize, patches) - ends = starts + cropsize - for j in range(patches): - idx = i * patches + j - X_dataset[idx] = X_pad[:, :, starts[j] : ends[j]] - y_dataset[idx] = y_pad[:, :, starts[j] : ends[j]] - - return X_dataset, y_dataset - - -def make_validation_set(filelist, cropsize, sr, hop_length, n_fft, offset): - patch_list = [] - patch_dir = "cs{}_sr{}_hl{}_nf{}_of{}".format( - cropsize, sr, hop_length, n_fft, offset - ) - os.makedirs(patch_dir, exist_ok=True) - - for i, (X_path, y_path) in enumerate(tqdm(filelist)): - basename = os.path.splitext(os.path.basename(X_path))[0] - - X, y = spec_utils.cache_or_load(X_path, y_path, sr, hop_length, n_fft) - coef = np.max([np.abs(X).max(), np.abs(y).max()]) - X, y = X / coef, y / coef - - l, r, roi_size = make_padding(X.shape[2], cropsize, offset) - X_pad = np.pad(X, ((0, 0), (0, 0), (l, r)), mode="constant") - y_pad = np.pad(y, ((0, 0), (0, 0), (l, r)), mode="constant") - - len_dataset = int(np.ceil(X.shape[2] / roi_size)) - for j in range(len_dataset): - outpath = os.path.join(patch_dir, "{}_p{}.npz".format(basename, j)) - start = j * roi_size - if not os.path.exists(outpath): - np.savez( - outpath, - X=X_pad[:, :, start : start + cropsize], - y=y_pad[:, :, start : start + cropsize], - ) - patch_list.append(outpath) - - return VocalRemoverValidationSet(patch_list) diff --git a/spaces/RichardMB1217/blip/models/__init__.py b/spaces/RichardMB1217/blip/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Riksarkivet/htr_demo/src/htr_pipeline/utils/order_of_object.py b/spaces/Riksarkivet/htr_demo/src/htr_pipeline/utils/order_of_object.py deleted file mode 100644 index d99f9daa333ac5af6674284c4818727a11cda1f3..0000000000000000000000000000000000000000 --- a/spaces/Riksarkivet/htr_demo/src/htr_pipeline/utils/order_of_object.py +++ /dev/null @@ -1,92 +0,0 @@ -import numpy as np -import pandas as pd - - -class OrderObject: - def __init__(self): - pass - - def order_lines(self, line_image, line_spacing_factor=0.5): - bounding_boxes = line_image.pred_instances.bboxes.tolist() - center_points = [(box[1] + box[3]) / 2 for box in bounding_boxes] - horizontal_positions = [(box[0] + box[2]) / 2 for box in bounding_boxes] - - # Calculate the threshold distance - threshold_distance = self._calculate_threshold_distance(bounding_boxes, line_spacing_factor) - - # Sort the indices based on vertical center points and horizontal positions - indices = list(range(len(bounding_boxes))) - indices.sort( - key=lambda i: ( - center_points[i] // threshold_distance, - horizontal_positions[i], - ) - ) - - # Order text 
lines - return indices - - def _calculate_threshold_distance(self, bounding_boxes, line_spacing_factor=0.5): - # Calculate the average height of the text lines - total_height = sum(box[3] - box[1] for box in bounding_boxes) - average_height = total_height / len(bounding_boxes) - - # Calculate the threshold distance, Set a factor for the threshold distance (adjust as needed) - threshold_distance = average_height * line_spacing_factor - - # Return the threshold distance - return threshold_distance - - def order_regions_marginalia(self, region_image, margin_ratio=0.2, histogram_bins=50, histogram_dip_ratio=0.5): - bounding_boxes = region_image.pred_instances.bboxes.tolist() - img_width = region_image.metainfo["ori_shape"][1] - - regions = [[i, x[0], x[1], x[0] + x[2], x[1] + x[3]] for i, x in enumerate(bounding_boxes)] - - # Create a pandas DataFrame from the regions - df = pd.DataFrame(regions, columns=["region_id", "x_min", "y_min", "x_max", "y_max"]) - - # Calculate the centroids of the bounding boxes - df["centroid_x"] = (df["x_min"] + df["x_max"]) / 2 - df["centroid_y"] = (df["y_min"] + df["y_max"]) / 2 - - # Calculate a histogram of the x-coordinates of the centroids - histogram, bin_edges = np.histogram(df["centroid_x"], bins=histogram_bins) - - # Determine if there's a significant dip in the histogram, which would suggest a two-page layout - is_two_pages = np.min(histogram) < np.max(histogram) * histogram_dip_ratio - - if is_two_pages: - # Determine which page each region is on - page_width = int(img_width / 2) - df["page"] = (df["centroid_x"] > page_width).astype(int) - - # Determine if the region is in the margin - margin_width = page_width * margin_ratio - df["is_margin"] = ((df["page"] == 0) & (df["centroid_x"] < margin_width)) | ( - (df["page"] == 1) & (df["centroid_x"] > img_width - margin_width) - ) - else: - df["page"] = 0 - df["is_margin"] = (df["centroid_x"] < img_width * margin_ratio) | ( - df["centroid_x"] > img_width - page_width * margin_ratio - ) - - # Define a custom sorting function - sort_regions = lambda row: ( - row["page"], - row["is_margin"], - row["centroid_y"], - row["centroid_x"], - ) - - # Sort the DataFrame using the custom function - df["sort_key"] = df.apply(sort_regions, axis=1) - df = df.sort_values("sort_key") - - # Return the ordered regions - return df["region_id"].tolist() - - -if __name__ == "__main__": - pass diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/anchor/builder.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/anchor/builder.py deleted file mode 100644 index d79b448ebca9f2b21d455046623172c48c5c3ef0..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/anchor/builder.py +++ /dev/null @@ -1,7 +0,0 @@ -from mmcv.utils import Registry, build_from_cfg - -ANCHOR_GENERATORS = Registry('Anchor generator') - - -def build_anchor_generator(cfg, default_args=None): - return build_from_cfg(cfg, ANCHOR_GENERATORS, default_args) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/losses/utils.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/losses/utils.py deleted file mode 100644 index 4756d7fcefd7cda1294c2662b4ca3e90c0a8e124..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/losses/utils.py +++ /dev/null @@ -1,100 +0,0 @@ -import functools - -import mmcv -import torch.nn.functional as F - - -def reduce_loss(loss, reduction): - """Reduce 
loss as specified. - - Args: - loss (Tensor): Elementwise loss tensor. - reduction (str): Options are "none", "mean" and "sum". - - Return: - Tensor: Reduced loss tensor. - """ - reduction_enum = F._Reduction.get_enum(reduction) - # none: 0, elementwise_mean:1, sum: 2 - if reduction_enum == 0: - return loss - elif reduction_enum == 1: - return loss.mean() - elif reduction_enum == 2: - return loss.sum() - - -@mmcv.jit(derivate=True, coderize=True) -def weight_reduce_loss(loss, weight=None, reduction='mean', avg_factor=None): - """Apply element-wise weight and reduce loss. - - Args: - loss (Tensor): Element-wise loss. - weight (Tensor): Element-wise weights. - reduction (str): Same as built-in losses of PyTorch. - avg_factor (float): Avarage factor when computing the mean of losses. - - Returns: - Tensor: Processed loss values. - """ - # if weight is specified, apply element-wise weight - if weight is not None: - loss = loss * weight - - # if avg_factor is not specified, just reduce the loss - if avg_factor is None: - loss = reduce_loss(loss, reduction) - else: - # if reduction is mean, then average the loss by avg_factor - if reduction == 'mean': - loss = loss.sum() / avg_factor - # if reduction is 'none', then do nothing, otherwise raise an error - elif reduction != 'none': - raise ValueError('avg_factor can not be used with reduction="sum"') - return loss - - -def weighted_loss(loss_func): - """Create a weighted version of a given loss function. - - To use this decorator, the loss function must have the signature like - `loss_func(pred, target, **kwargs)`. The function only needs to compute - element-wise loss without any reduction. This decorator will add weight - and reduction arguments to the function. The decorated function will have - the signature like `loss_func(pred, target, weight=None, reduction='mean', - avg_factor=None, **kwargs)`. - - :Example: - - >>> import torch - >>> @weighted_loss - >>> def l1_loss(pred, target): - >>> return (pred - target).abs() - - >>> pred = torch.Tensor([0, 2, 3]) - >>> target = torch.Tensor([1, 1, 1]) - >>> weight = torch.Tensor([1, 0, 1]) - - >>> l1_loss(pred, target) - tensor(1.3333) - >>> l1_loss(pred, target, weight) - tensor(1.) 
- >>> l1_loss(pred, target, reduction='none') - tensor([1., 1., 2.]) - >>> l1_loss(pred, target, weight, avg_factor=2) - tensor(1.5000) - """ - - @functools.wraps(loss_func) - def wrapper(pred, - target, - weight=None, - reduction='mean', - avg_factor=None, - **kwargs): - # get element-wise loss - loss = loss_func(pred, target, **kwargs) - loss = weight_reduce_loss(loss, weight, reduction, avg_factor) - return loss - - return wrapper diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/backbones/res2net.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/backbones/res2net.py deleted file mode 100644 index 7901b7f2fa29741d72328bdbdbf92fc4d5c5f847..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/backbones/res2net.py +++ /dev/null @@ -1,351 +0,0 @@ -import math - -import torch -import torch.nn as nn -import torch.utils.checkpoint as cp -from mmcv.cnn import (build_conv_layer, build_norm_layer, constant_init, - kaiming_init) -from mmcv.runner import load_checkpoint -from torch.nn.modules.batchnorm import _BatchNorm - -from mmdet.utils import get_root_logger -from ..builder import BACKBONES -from .resnet import Bottleneck as _Bottleneck -from .resnet import ResNet - - -class Bottle2neck(_Bottleneck): - expansion = 4 - - def __init__(self, - inplanes, - planes, - scales=4, - base_width=26, - base_channels=64, - stage_type='normal', - **kwargs): - """Bottle2neck block for Res2Net. - - If style is "pytorch", the stride-two layer is the 3x3 conv layer, if - it is "caffe", the stride-two layer is the first 1x1 conv layer. - """ - super(Bottle2neck, self).__init__(inplanes, planes, **kwargs) - assert scales > 1, 'Res2Net degenerates to ResNet when scales = 1.' 
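        # Res2Net idea: the bottleneck's single 3x3 conv is replaced by `scales`
        # hierarchical groups of `width` channels each, so conv1 below expands the
        # input to width * scales channels and forward() splits them with torch.split.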
- width = int(math.floor(self.planes * (base_width / base_channels))) - - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, width * scales, postfix=1) - self.norm3_name, norm3 = build_norm_layer( - self.norm_cfg, self.planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - self.conv_cfg, - self.inplanes, - width * scales, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - - if stage_type == 'stage' and self.conv2_stride != 1: - self.pool = nn.AvgPool2d( - kernel_size=3, stride=self.conv2_stride, padding=1) - convs = [] - bns = [] - - fallback_on_stride = False - if self.with_dcn: - fallback_on_stride = self.dcn.pop('fallback_on_stride', False) - if not self.with_dcn or fallback_on_stride: - for i in range(scales - 1): - convs.append( - build_conv_layer( - self.conv_cfg, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - bias=False)) - bns.append( - build_norm_layer(self.norm_cfg, width, postfix=i + 1)[1]) - self.convs = nn.ModuleList(convs) - self.bns = nn.ModuleList(bns) - else: - assert self.conv_cfg is None, 'conv_cfg must be None for DCN' - for i in range(scales - 1): - convs.append( - build_conv_layer( - self.dcn, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - bias=False)) - bns.append( - build_norm_layer(self.norm_cfg, width, postfix=i + 1)[1]) - self.convs = nn.ModuleList(convs) - self.bns = nn.ModuleList(bns) - - self.conv3 = build_conv_layer( - self.conv_cfg, - width * scales, - self.planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - self.stage_type = stage_type - self.scales = scales - self.width = width - delattr(self, 'conv2') - delattr(self, self.norm2_name) - - def forward(self, x): - """Forward function.""" - - def _inner_forward(x): - identity = x - - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv1_plugin_names) - - spx = torch.split(out, self.width, 1) - sp = self.convs[0](spx[0].contiguous()) - sp = self.relu(self.bns[0](sp)) - out = sp - for i in range(1, self.scales - 1): - if self.stage_type == 'stage': - sp = spx[i] - else: - sp = sp + spx[i] - sp = self.convs[i](sp.contiguous()) - sp = self.relu(self.bns[i](sp)) - out = torch.cat((out, sp), 1) - - if self.stage_type == 'normal' or self.conv2_stride == 1: - out = torch.cat((out, spx[self.scales - 1]), 1) - elif self.stage_type == 'stage': - out = torch.cat((out, self.pool(spx[self.scales - 1])), 1) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv2_plugin_names) - - out = self.conv3(out) - out = self.norm3(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv3_plugin_names) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -class Res2Layer(nn.Sequential): - """Res2Layer to build Res2Net style backbone. - - Args: - block (nn.Module): block used to build ResLayer. - inplanes (int): inplanes of block. - planes (int): planes of block. - num_blocks (int): number of blocks. - stride (int): stride of the first block. 
Default: 1 - avg_down (bool): Use AvgPool instead of stride conv when - downsampling in the bottle2neck. Default: False - conv_cfg (dict): dictionary to construct and config conv layer. - Default: None - norm_cfg (dict): dictionary to construct and config norm layer. - Default: dict(type='BN') - scales (int): Scales used in Res2Net. Default: 4 - base_width (int): Basic width of each scale. Default: 26 - """ - - def __init__(self, - block, - inplanes, - planes, - num_blocks, - stride=1, - avg_down=True, - conv_cfg=None, - norm_cfg=dict(type='BN'), - scales=4, - base_width=26, - **kwargs): - self.block = block - - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = nn.Sequential( - nn.AvgPool2d( - kernel_size=stride, - stride=stride, - ceil_mode=True, - count_include_pad=False), - build_conv_layer( - conv_cfg, - inplanes, - planes * block.expansion, - kernel_size=1, - stride=1, - bias=False), - build_norm_layer(norm_cfg, planes * block.expansion)[1], - ) - - layers = [] - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=stride, - downsample=downsample, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - scales=scales, - base_width=base_width, - stage_type='stage', - **kwargs)) - inplanes = planes * block.expansion - for i in range(1, num_blocks): - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - scales=scales, - base_width=base_width, - **kwargs)) - super(Res2Layer, self).__init__(*layers) - - -@BACKBONES.register_module() -class Res2Net(ResNet): - """Res2Net backbone. - - Args: - scales (int): Scales used in Res2Net. Default: 4 - base_width (int): Basic width of each scale. Default: 26 - depth (int): Depth of res2net, from {50, 101, 152}. - in_channels (int): Number of input image channels. Default: 3. - num_stages (int): Res2net stages. Default: 4. - strides (Sequence[int]): Strides of the first block of each stage. - dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two - layer is the 3x3 conv layer, otherwise the stride-two layer is - the first 1x1 conv layer. - deep_stem (bool): Replace 7x7 conv in input stem with 3 3x3 conv - avg_down (bool): Use AvgPool instead of stride conv when - downsampling in the bottle2neck. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. - norm_cfg (dict): Dictionary to construct and config norm layer. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. - plugins (list[dict]): List of plugins for stages, each dict contains: - - - cfg (dict, required): Cfg dict to build plugin. - - position (str, required): Position inside block to insert - plugin, options are 'after_conv1', 'after_conv2', 'after_conv3'. - - stages (tuple[bool], optional): Stages to apply plugin, length - should be same as 'num_stages'. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. - zero_init_residual (bool): Whether to use zero init for last norm layer - in resblocks to let them behave as identity. 
- - Example: - >>> from mmdet.models import Res2Net - >>> import torch - >>> self = Res2Net(depth=50, scales=4, base_width=26) - >>> self.eval() - >>> inputs = torch.rand(1, 3, 32, 32) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... print(tuple(level_out.shape)) - (1, 256, 8, 8) - (1, 512, 4, 4) - (1, 1024, 2, 2) - (1, 2048, 1, 1) - """ - - arch_settings = { - 50: (Bottle2neck, (3, 4, 6, 3)), - 101: (Bottle2neck, (3, 4, 23, 3)), - 152: (Bottle2neck, (3, 8, 36, 3)) - } - - def __init__(self, - scales=4, - base_width=26, - style='pytorch', - deep_stem=True, - avg_down=True, - **kwargs): - self.scales = scales - self.base_width = base_width - super(Res2Net, self).__init__( - style='pytorch', deep_stem=True, avg_down=True, **kwargs) - - def make_res_layer(self, **kwargs): - return Res2Layer( - scales=self.scales, - base_width=self.base_width, - base_channels=self.base_channels, - **kwargs) - - def init_weights(self, pretrained=None): - """Initialize the weights in backbone. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. - """ - if isinstance(pretrained, str): - logger = get_root_logger() - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, (_BatchNorm, nn.GroupNorm)): - constant_init(m, 1) - - if self.dcn is not None: - for m in self.modules(): - if isinstance(m, Bottle2neck): - # dcn in Res2Net bottle2neck is in ModuleList - for n in m.convs: - if hasattr(n, 'conv_offset'): - constant_init(n.conv_offset, 0) - - if self.zero_init_residual: - for m in self.modules(): - if isinstance(m, Bottle2neck): - constant_init(m.norm3, 0) - else: - raise TypeError('pretrained must be a str or None') diff --git a/spaces/SemanticTypography/Word-As-Image/app.py b/spaces/SemanticTypography/Word-As-Image/app.py deleted file mode 100644 index 0fd910be20f58a85c9fb35323345b9f616ce0b6d..0000000000000000000000000000000000000000 --- a/spaces/SemanticTypography/Word-As-Image/app.py +++ /dev/null @@ -1,365 +0,0 @@ -import gradio as gr -import os -import argparse -from easydict import EasyDict as edict -import yaml -import os.path as osp -import random -import numpy.random as npr -import sys - -sys.path.append('/home/user/app/code') - -# set up diffvg -os.system('git clone https://github.com/BachiLi/diffvg.git') -os.chdir('diffvg') -os.system('git submodule update --init --recursive') -os.system('python setup.py install --user') -sys.path.append("/home/user/.local/lib/python3.8/site-packages/diffvg-0.0.1-py3.8-linux-x86_64.egg") - -os.chdir('/home/user/app') - -import torch -from diffusers import StableDiffusionPipeline - - -device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - -model = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", - torch_dtype=torch.float16, use_auth_token=os.environ['HF_TOKEN']).to(device) - -from typing import Mapping -from tqdm import tqdm -import torch -from torch.optim.lr_scheduler import LambdaLR -import pydiffvg -import save_svg -from losses import SDSLoss, ToneLoss, ConformalLoss -from utils import ( - edict_2_dict, - update, - check_and_create_dir, - get_data_augs, - save_image, - preprocess, - learning_rate_decay, - combine_word) -import warnings - -TITLE="""

      Word-As-Image for Semantic Typography
      """ -TITLE2="""
      SIGGRAPH 2023 - Honorable Mention Award
      """ -DESCRIPTION="""A demo for [Word-As-Image for Semantic Typography](https://wordasimage.github.io/Word-As-Image-Page/). By using Word-as-Image, a visual representation of the meaning of the word is created while maintaining legibility of the text and font style. -Please select a semantic concept word and a letter you wish to generate, it will take ~5 minutes to perform 500 iterations.""" - -DESCRIPTION += '\n

      This demo is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

      ' - -if (SPACE_ID := os.getenv('SPACE_ID')) is not None: - DESCRIPTION += f'\n

      For faster inference without waiting in queue, you may duplicate the space and upgrade to GPU in settings.
      ' - - -warnings.filterwarnings("ignore") - -pydiffvg.set_print_timing(False) -gamma = 1.0 - - -def set_config(semantic_concept, word, letter, font_name, num_steps): - - cfg_d = edict() - cfg_d.config = "code/config/base.yaml" - cfg_d.experiment = "demo" - - with open(cfg_d.config, 'r') as f: - cfg_full = yaml.load(f, Loader=yaml.FullLoader) - - cfg_key = cfg_d.experiment - cfgs = [cfg_d] - while cfg_key: - cfgs.append(cfg_full[cfg_key]) - cfg_key = cfgs[-1].get('parent_config', 'baseline') - - cfg = edict() - for options in reversed(cfgs): - update(cfg, options) - del cfgs - - cfg.semantic_concept = semantic_concept - cfg.word = word - cfg.optimized_letter = letter - cfg.font = font_name - cfg.seed = 0 - cfg.num_iter = num_steps - - if ' ' in cfg.word: - raise gr.Error(f'should be only one word') - cfg.caption = f"a {cfg.semantic_concept}. {cfg.prompt_suffix}" - cfg.log_dir = f"output/{cfg.experiment}_{cfg.word}" - if cfg.optimized_letter in cfg.word: - cfg.optimized_letter = cfg.optimized_letter - else: - raise gr.Error(f'letter should be in word') - - cfg.letter = f"{cfg.font}_{cfg.optimized_letter}_scaled" - cfg.target = f"code/data/init/{cfg.letter}" - - # set experiment dir - signature = f"{cfg.letter}_concept_{cfg.semantic_concept}_seed_{cfg.seed}" - cfg.experiment_dir = \ - osp.join(cfg.log_dir, cfg.font, signature) - configfile = osp.join(cfg.experiment_dir, 'config.yaml') - - # create experiment dir and save config - check_and_create_dir(configfile) - with open(osp.join(configfile), 'w') as f: - yaml.dump(edict_2_dict(cfg), f) - - if cfg.seed is not None: - random.seed(cfg.seed) - npr.seed(cfg.seed) - torch.manual_seed(cfg.seed) - torch.backends.cudnn.benchmark = False - else: - assert False - return cfg - - -def init_shapes(svg_path, trainable: Mapping[str, bool]): - svg = f'{svg_path}.svg' - canvas_width, canvas_height, shapes_init, shape_groups_init = pydiffvg.svg_to_scene(svg) - - parameters = edict() - - # path points - if trainable.point: - parameters.point = [] - for path in shapes_init: - path.points.requires_grad = True - parameters.point.append(path.points) - - return shapes_init, shape_groups_init, parameters - - -def run_main_ex(semantic_concept, word, letter, font_name, num_steps): - return list(next(run_main_app(semantic_concept, word, letter, font_name, num_steps, 1))) - -def run_main_app(semantic_concept, word, letter, font_name, num_steps, example=0): - - cfg = set_config(semantic_concept, word, letter, font_name, num_steps) - - pydiffvg.set_use_gpu(torch.cuda.is_available()) - - print("preprocessing") - preprocess(cfg.font, cfg.word, cfg.optimized_letter, cfg.level_of_cc) - filename_init = os.path.join("code/data/init/", f"{cfg.font}_{cfg.word}_scaled.svg").replace(" ", "_") - if not example: - yield gr.update(value=filename_init,visible=True),gr.update(visible=False),gr.update(visible=False) - - sds_loss = SDSLoss(cfg, device, model) - - h, w = cfg.render_size, cfg.render_size - - data_augs = get_data_augs(cfg.cut_size) - - render = pydiffvg.RenderFunction.apply - - # initialize shape - print('initializing shape') - shapes, shape_groups, parameters = init_shapes(svg_path=cfg.target, trainable=cfg.trainable) - - scene_args = pydiffvg.RenderFunction.serialize_scene(w, h, shapes, shape_groups) - img_init = render(w, h, 2, 2, 0, None, *scene_args) - img_init = img_init[:, :, 3:4] * img_init[:, :, :3] + \ - torch.ones(img_init.shape[0], img_init.shape[1], 3, device=device) * (1 - img_init[:, :, 3:4]) - img_init = img_init[:, :, :3] - - tone_loss = ToneLoss(cfg) 
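    # The tone loss is anchored to the rasterized initial letter so the optimized
    # glyph keeps a similar overall ink distribution; in the loop below it is added
    # to the SDS loss and to the conformal (angle-preservation) loss.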
- tone_loss.set_image_init(img_init) - - num_iter = cfg.num_iter - pg = [{'params': parameters["point"], 'lr': cfg.lr_base["point"]}] - optim = torch.optim.Adam(pg, betas=(0.9, 0.9), eps=1e-6) - - conformal_loss = ConformalLoss(parameters, device, cfg.optimized_letter, shape_groups) - - lr_lambda = lambda step: learning_rate_decay(step, cfg.lr.lr_init, cfg.lr.lr_final, num_iter, - lr_delay_steps=cfg.lr.lr_delay_steps, - lr_delay_mult=cfg.lr.lr_delay_mult) / cfg.lr.lr_init - - scheduler = LambdaLR(optim, lr_lambda=lr_lambda, last_epoch=-1) # lr.base * lrlambda_f - - print("start training") - # training loop - t_range = tqdm(range(num_iter)) - for step in t_range: - optim.zero_grad() - - # render image - scene_args = pydiffvg.RenderFunction.serialize_scene(w, h, shapes, shape_groups) - img = render(w, h, 2, 2, step, None, *scene_args) - - # compose image with white background - img = img[:, :, 3:4] * img[:, :, :3] + torch.ones(img.shape[0], img.shape[1], 3, device=device) * ( - 1 - img[:, :, 3:4]) - img = img[:, :, :3] - - filename = os.path.join( - cfg.experiment_dir, "video-svg", f"iter{step:04d}.svg") - check_and_create_dir(filename) - save_svg.save_svg(filename, w, h, shapes, shape_groups) - if not example: - yield gr.update(visible=True),gr.update(value=filename, label=f'iters: {step} / {num_iter}', visible=True),gr.update(visible=False) - - x = img.unsqueeze(0).permute(0, 3, 1, 2) # HWC -> NCHW - x = x.repeat(cfg.batch_size, 1, 1, 1) - x_aug = data_augs.forward(x) - - # compute diffusion loss per pixel - loss = sds_loss(x_aug) - - tone_loss_res = tone_loss(x, step) - loss = loss + tone_loss_res - - loss_angles = conformal_loss() - loss_angles = cfg.loss.conformal.angeles_w * loss_angles - loss = loss + loss_angles - - loss.backward() - optim.step() - scheduler.step() - - - filename = os.path.join( - cfg.experiment_dir, "output-svg", "output.svg") - check_and_create_dir(filename) - save_svg.save_svg( - filename, w, h, shapes, shape_groups) - - combine_word(cfg.word, cfg.optimized_letter, cfg.font, cfg.experiment_dir) - - image = os.path.join(cfg.experiment_dir,f"{cfg.font}_{cfg.word}_{cfg.optimized_letter}.svg") - yield gr.update(value=filename_init,visible=True),gr.update(visible=False),gr.update(value=image,visible=True) - - -with gr.Blocks() as demo: - - gr.HTML(TITLE) - gr.HTML(TITLE2) - gr.Markdown(DESCRIPTION) - - with gr.Row(): - with gr.Column(): - - semantic_concept = gr.Text( - label='Semantic Concept', - max_lines=1, - placeholder= - 'Enter a semantic concept. For example: BUNNY' - ) - - word = gr.Text( - label='Word', - max_lines=1, - placeholder= - 'Enter a word. For example: BUNNY' - ) - - letter = gr.Text( - label='Letter', - max_lines=1, - placeholder= - 'Choose a letter in the word to optimize. 
For example: Y' - ) - - num_steps = gr.Slider(label='Optimization Iterations', - minimum=0, - maximum=500, - step=10, - value=500) - - font_name = gr.Text(value=None,visible=False,label="Font Name") - gallery = gr.Gallery(value=[(os.path.join("images","KaushanScript-Regular.png"),"KaushanScript-Regular"), (os.path.join("images","IndieFlower-Regular.png"),"IndieFlower-Regular"),(os.path.join("images","Quicksand.png"),"Quicksand"), - (os.path.join("images","Saira-Regular.png"),"Saira-Regular"), (os.path.join("images","LuckiestGuy-Regular.png"),"LuckiestGuy-Regular"),(os.path.join("images","DeliusUnicase-Regular.png"),"DeliusUnicase-Regular"), - (os.path.join("images","Noteworthy-Bold.png"),"Noteworthy-Bold"), (os.path.join("images","HobeauxRococeaux-Sherman.png"),"HobeauxRococeaux-Sherman")],label="Font Name").style(grid=4) - - def on_select(evt: gr.SelectData): - return evt.value - - gallery.select(fn=on_select, inputs=None, outputs=font_name) - - run = gr.Button('Generate') - - with gr.Column(): - result0 = gr.Image(type="filepath", label="Initial Word").style(height=333) - result1 = gr.Image(type="filepath", label="Optimization Process").style(height=110) - result2 = gr.Image(type="filepath", label="Final Result",visible=False).style(height=333) - - - with gr.Row(): - # examples - examples = [ - [ - "BUNNY", - "BUNNY", - "Y", - "KaushanScript-Regular", - 500 - ], - [ - "LION", - "LION", - "O", - "Quicksand", - 500 - ], - [ - "FROG", - "FROG", - "G", - "IndieFlower-Regular", - 500 - ], - [ - "CAT", - "CAT", - "C", - "LuckiestGuy-Regular", - 500 - ], - ] - demo.queue(max_size=10, concurrency_count=2) - gr.Examples(examples=examples, - inputs=[ - semantic_concept, - word, - letter, - font_name, - num_steps - ], - outputs=[ - result0, - result1, - result2 - ], - fn=run_main_ex, - cache_examples=True) - - - # inputs - inputs = [ - semantic_concept, - word, - letter, - font_name, - num_steps - ] - - outputs = [ - result0, - result1, - result2 - ] - - run.click(fn=run_main_app, inputs=inputs, outputs=outputs, queue=True) - - -demo.launch(share=False) \ No newline at end of file diff --git a/spaces/Serg4451D/DALLE2STANDARD/README.md b/spaces/Serg4451D/DALLE2STANDARD/README.md deleted file mode 100644 index 0f56976071f5c338b1a3533522b0285c039c04d2..0000000000000000000000000000000000000000 --- a/spaces/Serg4451D/DALLE2STANDARD/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: DALLE2STANDARD -emoji: 🚀 -colorFrom: indigo -colorTo: indigo -sdk: streamlit -sdk_version: 1.19.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/SeyedAli/Persian-Speech-Emotion-Detection/modeling_outputs.py b/spaces/SeyedAli/Persian-Speech-Emotion-Detection/modeling_outputs.py deleted file mode 100644 index ae5f581ef3f3a2aebe8eed9976498fe1d78b2e30..0000000000000000000000000000000000000000 --- a/spaces/SeyedAli/Persian-Speech-Emotion-Detection/modeling_outputs.py +++ /dev/null @@ -1,12 +0,0 @@ -from dataclasses import dataclass -from typing import Optional, Tuple -import torch -from transformers.file_utils import ModelOutput - - -@dataclass -class SpeechClassifierOutput(ModelOutput): - loss: Optional[torch.FloatTensor] = None - logits: torch.FloatTensor = None - hidden_states: Optional[Tuple[torch.FloatTensor]] = None - attentions: Optional[Tuple[torch.FloatTensor]] = None diff --git a/spaces/SeyedAli/Persian-Text-NER/README.md b/spaces/SeyedAli/Persian-Text-NER/README.md deleted file mode 100644 index 
fd14ad32b4109ef8fde852d0d8d5362bf54b2130..0000000000000000000000000000000000000000 --- a/spaces/SeyedAli/Persian-Text-NER/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Persian Text NER -emoji: 📝 -colorFrom: gray -colorTo: red -sdk: gradio -sdk_version: 3.42.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Shrey-Patel/background-remover/README.md b/spaces/Shrey-Patel/background-remover/README.md deleted file mode 100644 index a4d7243f4e22f90521eba0b370cdbe064c83dbde..0000000000000000000000000000000000000000 --- a/spaces/Shrey-Patel/background-remover/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Background Remover -emoji: ⚡ -colorFrom: purple -colorTo: red -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Sudhir87/Intervupro.ai/app.py b/spaces/Sudhir87/Intervupro.ai/app.py deleted file mode 100644 index 3ae3822a670a6615d167a97544bb8b056cfa4e59..0000000000000000000000000000000000000000 --- a/spaces/Sudhir87/Intervupro.ai/app.py +++ /dev/null @@ -1,263 +0,0 @@ - -import openai -import streamlit as st -import fitz - -import streamlit as st - -def developer_details(): - st.sidebar.markdown("# Developer Details") - - developers = [ - { - "name": "Sudhir Sharma", - "role": "B.Tech CSE - IIT Bhilai 2024", - "email": "sudhirsharma@iitbhilai.ac.in", - "github": "https://github.com/Sudhir878786", - "linkedin": "https://www.linkedin.com/in/sudhirsharma87/", - "avatar": "https://avatars.githubusercontent.com/u/92601949?v=4", # Replace with actual image URL - }, - # Add more developers if needed - ] - - for developer in developers: - st.sidebar.markdown(f"## {developer['name']}") - st.sidebar.image(developer['avatar'], width=150) - st.sidebar.write(f"**Role:** {developer['role']}") - st.sidebar.write(f"**Email:** {developer['email']}") - st.sidebar.write(f"**GitHub:** {developer['github']}") - st.sidebar.write(f"**LinkedIn:** {developer['linkedin']}") - st.sidebar.markdown("---") - -# Call the function to display the container in the sidebar -developer_details() - - -def prompt1(company,position,round,intervier_level): - prompt=f""" - I want you to act as a {intervier_level} for {round} interviews on {company} company for the position of {position}. You'll suggest the characteristics of this job interview and the characteristics of the {company}'s interview. - The return format should bullet point be like this: - Characteristics of this job interview: - - name: detail explanation - Characteristics of {company}: - - name: detail explanation - """ - return prompt - -def followup1(company,position,round,bullet_point): - prompt=f""" - Can you talk the detail about {bullet_point} for {round} interviews on {company} company for the position of {position}? - The return format should bullet point be like this: - - name: detail explanation - """ - return prompt - -def prompt2(position): - prompt=f""" - I want you to act as a talent recruiter for the position of {position}'s interviews. You'll suggest the characteristics of this job interview for both behavior requirement and technical requirement. 
- The return format should bullet point be like this: - Characteristics of this behavior requirement: - - name: detail explanation - Characteristics of technical requirement: - - name: detail explanation - """ - return prompt - -def followup2(position,bullet_point): - prompt=f""" - Can you talk the detail about {bullet_point} for the interview of {position}? - The return format should bullet point be like this: - - name: detail impovement advice - """ - return prompt - -def prompt3(position,resume,dis_num,ad_num): - prompt=f""" - I want you to act as a talent recruiter for hiring {position}. I will give you a resume and you'll suggest what are {dis_num}+ disadvantages and {ad_num}+ advantages of the {position} position. Please remember, the return should be highly related to the {position} position. - The return format should bullet point be like this: - Disadvatage: - - name: Suggestions for improvements to make the resume more appropriate for the {position} position and link to specific sentences on the resume. - Advantage: - - name: use one sentence to explain why it is advantageous for the {position} position. - - Here is the resume: - {resume}. - """ - return prompt - -def followup3(position,resume,bullet_point): - prompt=f""" - Can you talk the detail about {bullet_point} for the position of {position} based on {resume}? - The return format should bullet point be like this: - - name: detail impovement advice - """ - return prompt - - -def ask(prompt): - rsp = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages=[ - {"role": "system", "content": prompt}, - ] - ) - - output=rsp.choices[0].message.content - return output - -#################### - -st.title('IntervuPro.Ai') -st.subheader('A tool using to help people prepare their interview by using gpt3.5') - -# environment setup -key=st.text_input('Please input your OpenAI key') -openai.api_key=key - -option = st.selectbox( - 'How would you like to prepare for the interview?', - ('Prepare for a specific interview', - 'Understand the requirement of a specific position', - 'Analyze resume', - )) - -if "submit" not in st.session_state: - st.session_state["submit"] = False - -st.write('') -###### option 1 -if option=='Prepare for a specific interview': - col1, col2= st.columns(2) - with col1: - company=st.text_input('Input your company','Amazon') - with col2: - position=st.text_input('Input your position','Data engineer') - - col3, col4 = st.columns(2) - with col3: - intervier_level=st.text_input('Input your interviewer level: ','Talent Recuriter') - with col4: - round=st.radio('Select your round: ',('Phone Screen','Behavior Interview','Technical Interview', 'General')) - if round=='General': - round='' - - st.write('Click button again if you want to regenerate a new ansewer.') - submit_button=st.button('Submit') - - if st.session_state.get('button') != True: - st.session_state['button'] = submit_button - if st.session_state['button'] == True: - prompt=prompt1(company,position,round,intervier_level) - output=ask(prompt) - st.write(output) - - followup_time=0 - followup='' - while True: - output_list=output.split('\n') - indexes = [i for i, word in enumerate(output_list) if '- ' in word] - new_list = [output_list[i] for i in indexes] - Cq=[i.split(':')[0].strip('- ') for i in new_list] - Cq = ['None']+ Cq - - followup_radio = st.radio('I want to follow up:', tuple(Cq),key='0') - if followup_radio: - followup_time +=1 - if followup_radio == 'None': - break - else: - if followup_radio != 'None': - followup = followup1(company, 
position, round, followup_radio) - output = ask(followup) - st.write(output) - if followup_time>5: - break - - -###### option 2 -if option =='Understand the requirement of a specific position': - position=st.text_input('Input your position','Data engineer') - - st.write('Click button again if you want to regenerate a new ansewer.') - - #submit=st.checkbox('submit') - submit=st.button('submit') - - if submit: - prompt=prompt2(position) - output=ask(prompt) - st.write(output) - - followup='' - output_list=output.split('\n') - indexes = [i for i, word in enumerate(output_list) if '- ' in word] - new_list = [output_list[i] for i in indexes] - Cq=[i.split(':')[0].strip('- ') for i in new_list] - Cq = ['None']+ Cq - - - followup_radio = st.radio('I want to follow up:', tuple(Cq)) - if followup_radio!='None': - followup = followup2(position,followup_radio) - op = ask(followup) - st.write(op) - - -###### option 3 - -if option =='Analyze resume': - - # col31, col32,col33= st.columns(3) - # with col31: - # position=st.text_input('Input your position','data engineer') - # with col32: - # dis_num=st.text_input('Input your dis_num','6') - # with col33: - # ad_num=st.text_input('Input your ad_num','4') - - position=st.text_input('Input your position','data engineer') - dis_num=6 - ad_num=4 - - uploaded_file = st.file_uploader("Upload your resume", type=["pdf"]) - if uploaded_file: - - if "submit" not in st.session_state: - st.session_state["submit"] = False - - doc = fitz.open(stream=uploaded_file.read(), filetype="pdf") - resume = "" - for page in doc: - resume += page.get_text() - - st.write('Click button again if you want to regenerate a new ansewer.') - submit_button=st.button('Submit') - - if st.session_state.get('button') != True: - st.session_state['button'] = submit_button - if st.session_state['button'] == True: - prompt=prompt3(position,resume,dis_num,ad_num) - output=ask(prompt) - st.write(output) - - followup_time=0 - while True: - output_list=output.split('\n') - output_list= [element for element in output_list if element != ''] - - ind = [i for i, word in enumerate(output_list) if 'Advantage:' in word] - Cdis_list=output_list[1:int(dis_num)+1] - Cdis=[i.split(':')[0].strip('- ') for i in Cdis_list] - Cdis = ['None']+ Cdis - - followup_radio = st.radio('I want to follow up:', tuple(Cdis),key=followup_time) - followup_time +=1 - if followup_radio == 'None': - break - else: - followup = followup3(position,resume,followup_radio) - output = ask(followup) - st.write(output) - if followup_time>4: - break - diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/Hdf5StubImagePlugin.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/Hdf5StubImagePlugin.py deleted file mode 100644 index bba05ed65a72c6b859f1722cefd0c75a59c43a37..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/Hdf5StubImagePlugin.py +++ /dev/null @@ -1,73 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# HDF5 stub adapter -# -# Copyright (c) 2000-2003 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -from . import Image, ImageFile - -_handler = None - - -def register_handler(handler): - """ - Install application-specific HDF5 image handler. - - :param handler: Handler object. 
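    A suitable handler is any object that exposes ``open(im)`` and, for writing,
    ``save(im, fp, filename)``; the stub format driver only forwards to it.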
- """ - global _handler - _handler = handler - - -# -------------------------------------------------------------------- -# Image adapter - - -def _accept(prefix): - return prefix[:8] == b"\x89HDF\r\n\x1a\n" - - -class HDF5StubImageFile(ImageFile.StubImageFile): - format = "HDF5" - format_description = "HDF5" - - def _open(self): - offset = self.fp.tell() - - if not _accept(self.fp.read(8)): - msg = "Not an HDF file" - raise SyntaxError(msg) - - self.fp.seek(offset) - - # make something up - self.mode = "F" - self._size = 1, 1 - - loader = self._load() - if loader: - loader.open(self) - - def _load(self): - return _handler - - -def _save(im, fp, filename): - if _handler is None or not hasattr(_handler, "save"): - msg = "HDF5 save handler not installed" - raise OSError(msg) - _handler.save(im, fp, filename) - - -# -------------------------------------------------------------------- -# Registry - -Image.register_open(HDF5StubImageFile.format, HDF5StubImageFile, _accept) -Image.register_save(HDF5StubImageFile.format, _save) - -Image.register_extensions(HDF5StubImageFile.format, [".h5", ".hdf"]) diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/data/benchmark.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/data/benchmark.py deleted file mode 100644 index bfd650582c83cd032b4fe76303517cdfd9a2a8b4..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/data/benchmark.py +++ /dev/null @@ -1,225 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import logging -import numpy as np -from itertools import count -from typing import List, Tuple -import torch -import tqdm -from fvcore.common.timer import Timer - -from annotator.oneformer.detectron2.utils import comm - -from .build import build_batch_data_loader -from .common import DatasetFromList, MapDataset -from .samplers import TrainingSampler - -logger = logging.getLogger(__name__) - - -class _EmptyMapDataset(torch.utils.data.Dataset): - """ - Map anything to emptiness. - """ - - def __init__(self, dataset): - self.ds = dataset - - def __len__(self): - return len(self.ds) - - def __getitem__(self, idx): - _ = self.ds[idx] - return [0] - - -def iter_benchmark( - iterator, num_iter: int, warmup: int = 5, max_time_seconds: float = 60 -) -> Tuple[float, List[float]]: - """ - Benchmark an iterator/iterable for `num_iter` iterations with an extra - `warmup` iterations of warmup. - End early if `max_time_seconds` time is spent on iterations. - - Returns: - float: average time (seconds) per iteration - list[float]: time spent on each iteration. Sometimes useful for further analysis. - """ - num_iter, warmup = int(num_iter), int(warmup) - - iterator = iter(iterator) - for _ in range(warmup): - next(iterator) - timer = Timer() - all_times = [] - for curr_iter in tqdm.trange(num_iter): - start = timer.seconds() - if start > max_time_seconds: - num_iter = curr_iter - break - next(iterator) - all_times.append(timer.seconds() - start) - avg = timer.seconds() / num_iter - return avg, all_times - - -class DataLoaderBenchmark: - """ - Some common benchmarks that help understand perf bottleneck of a standard dataloader - made of dataset, mapper and sampler. 
- """ - - def __init__( - self, - dataset, - *, - mapper, - sampler=None, - total_batch_size, - num_workers=0, - max_time_seconds: int = 90, - ): - """ - Args: - max_time_seconds (int): maximum time to spent for each benchmark - other args: same as in `build.py:build_detection_train_loader` - """ - if isinstance(dataset, list): - dataset = DatasetFromList(dataset, copy=False, serialize=True) - if sampler is None: - sampler = TrainingSampler(len(dataset)) - - self.dataset = dataset - self.mapper = mapper - self.sampler = sampler - self.total_batch_size = total_batch_size - self.num_workers = num_workers - self.per_gpu_batch_size = self.total_batch_size // comm.get_world_size() - - self.max_time_seconds = max_time_seconds - - def _benchmark(self, iterator, num_iter, warmup, msg=None): - avg, all_times = iter_benchmark(iterator, num_iter, warmup, self.max_time_seconds) - if msg is not None: - self._log_time(msg, avg, all_times) - return avg, all_times - - def _log_time(self, msg, avg, all_times, distributed=False): - percentiles = [np.percentile(all_times, k, interpolation="nearest") for k in [1, 5, 95, 99]] - if not distributed: - logger.info( - f"{msg}: avg={1.0/avg:.1f} it/s, " - f"p1={percentiles[0]:.2g}s, p5={percentiles[1]:.2g}s, " - f"p95={percentiles[2]:.2g}s, p99={percentiles[3]:.2g}s." - ) - return - avg_per_gpu = comm.all_gather(avg) - percentiles_per_gpu = comm.all_gather(percentiles) - if comm.get_rank() > 0: - return - for idx, avg, percentiles in zip(count(), avg_per_gpu, percentiles_per_gpu): - logger.info( - f"GPU{idx} {msg}: avg={1.0/avg:.1f} it/s, " - f"p1={percentiles[0]:.2g}s, p5={percentiles[1]:.2g}s, " - f"p95={percentiles[2]:.2g}s, p99={percentiles[3]:.2g}s." - ) - - def benchmark_dataset(self, num_iter, warmup=5): - """ - Benchmark the speed of taking raw samples from the dataset. - """ - - def loader(): - while True: - for k in self.sampler: - yield self.dataset[k] - - self._benchmark(loader(), num_iter, warmup, "Dataset Alone") - - def benchmark_mapper(self, num_iter, warmup=5): - """ - Benchmark the speed of taking raw samples from the dataset and map - them in a single process. - """ - - def loader(): - while True: - for k in self.sampler: - yield self.mapper(self.dataset[k]) - - self._benchmark(loader(), num_iter, warmup, "Single Process Mapper (sec/sample)") - - def benchmark_workers(self, num_iter, warmup=10): - """ - Benchmark the dataloader by tuning num_workers to [0, 1, self.num_workers]. - """ - candidates = [0, 1] - if self.num_workers not in candidates: - candidates.append(self.num_workers) - - dataset = MapDataset(self.dataset, self.mapper) - for n in candidates: - loader = build_batch_data_loader( - dataset, - self.sampler, - self.total_batch_size, - num_workers=n, - ) - self._benchmark( - iter(loader), - num_iter * max(n, 1), - warmup * max(n, 1), - f"DataLoader ({n} workers, bs={self.per_gpu_batch_size})", - ) - del loader - - def benchmark_IPC(self, num_iter, warmup=10): - """ - Benchmark the dataloader where each worker outputs nothing. This - eliminates the IPC overhead compared to the regular dataloader. - - PyTorch multiprocessing's IPC only optimizes for torch tensors. - Large numpy arrays or other data structure may incur large IPC overhead. 
- """ - n = self.num_workers - dataset = _EmptyMapDataset(MapDataset(self.dataset, self.mapper)) - loader = build_batch_data_loader( - dataset, self.sampler, self.total_batch_size, num_workers=n - ) - self._benchmark( - iter(loader), - num_iter * max(n, 1), - warmup * max(n, 1), - f"DataLoader ({n} workers, bs={self.per_gpu_batch_size}) w/o comm", - ) - - def benchmark_distributed(self, num_iter, warmup=10): - """ - Benchmark the dataloader in each distributed worker, and log results of - all workers. This helps understand the final performance as well as - the variances among workers. - - It also prints startup time (first iter) of the dataloader. - """ - gpu = comm.get_world_size() - dataset = MapDataset(self.dataset, self.mapper) - n = self.num_workers - loader = build_batch_data_loader( - dataset, self.sampler, self.total_batch_size, num_workers=n - ) - - timer = Timer() - loader = iter(loader) - next(loader) - startup_time = timer.seconds() - logger.info("Dataloader startup time: {:.2f} seconds".format(startup_time)) - - comm.synchronize() - - avg, all_times = self._benchmark(loader, num_iter * max(n, 1), warmup * max(n, 1)) - del loader - self._log_time( - f"DataLoader ({gpu} GPUs x {n} workers, total bs={self.total_batch_size})", - avg, - all_times, - True, - ) diff --git a/spaces/TEnngal/bingo/src/app/loading.css b/spaces/TEnngal/bingo/src/app/loading.css deleted file mode 100644 index eaaab6a86a228334c4eca3c5368ae6f0f593d405..0000000000000000000000000000000000000000 --- a/spaces/TEnngal/bingo/src/app/loading.css +++ /dev/null @@ -1,68 +0,0 @@ -::-webkit-scrollbar { - width: 10px; - height: 10px; - display: none; -} - -::-webkit-scrollbar-button:start:decrement, -::-webkit-scrollbar-button:end:increment { - height: 30px; - background-color: transparent; -} - -::-webkit-scrollbar-track-piece { - background-color: #3b3b3b; - -webkit-border-radius: 16px; -} - -::-webkit-scrollbar-thumb:vertical { - height: 50px; - background-color: #666; - border: 1px solid #eee; - -webkit-border-radius: 6px; -} - -/* loading start */ -.loading-spinner { - display: flex; - justify-content: center; - align-items: center; - height: 100vh; - opacity: 1; - transition: opacity .8s ease-out; -} - -.loading-spinner.hidden { - opacity: 0; -} - -.loading-spinner>div { - width: 30px; - height: 30px; - background: linear-gradient(90deg, #2870EA 10.79%, #1B4AEF 87.08%); - - border-radius: 100%; - display: inline-block; - animation: sk-bouncedelay 1.4s infinite ease-in-out both; -} - -.loading-spinner .bounce1 { - animation-delay: -0.32s; -} - -.loading-spinner .bounce2 { - animation-delay: -0.16s; -} - -@keyframes sk-bouncedelay { - - 0%, - 80%, - 100% { - transform: scale(0); - } - - 40% { - transform: scale(1.0); - } -} diff --git a/spaces/TEnngal/bingo/src/components/ui/input.tsx b/spaces/TEnngal/bingo/src/components/ui/input.tsx deleted file mode 100644 index 684a857f3d769b78818fb13de1abaebfb09ca79c..0000000000000000000000000000000000000000 --- a/spaces/TEnngal/bingo/src/components/ui/input.tsx +++ /dev/null @@ -1,25 +0,0 @@ -import * as React from 'react' - -import { cn } from '@/lib/utils' - -export interface InputProps - extends React.InputHTMLAttributes {} - -const Input = React.forwardRef( - ({ className, type, ...props }, ref) => { - return ( - - ) - } -) -Input.displayName = 'Input' - -export { Input } diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/version.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/version.py 
deleted file mode 100644 index ec253c414474677d3a5977511cfe901bfb786740..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/version.py +++ /dev/null @@ -1,6 +0,0 @@ -from ._importlib import metadata - -try: - __version__ = metadata.version('setuptools') or '0.dev0+unknown' -except Exception: - __version__ = '0.dev0+unknown' diff --git a/spaces/TandCAcceptMe/face-swap-docker/roop/ui_tk.py b/spaces/TandCAcceptMe/face-swap-docker/roop/ui_tk.py deleted file mode 100644 index b19b894e5b6c7bafd0fad1a8369afc328c1c9b1d..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/roop/ui_tk.py +++ /dev/null @@ -1,429 +0,0 @@ -import os -import customtkinter as ctk -import webbrowser -import cv2 -import roop.globals -import roop.metadata - -from typing import Callable, Tuple -from PIL import Image, ImageOps -from roop.face_helper import get_many_faces, get_one_face, extract_face_images -from roop.capturer import get_video_frame, get_video_frame_total -from roop.utilities import is_image, is_video, resolve_relative_path, open_with_default_app, compute_cosine_distance, has_extension - -ROOT = None -ROOT_HEIGHT = 550 -ROOT_WIDTH = 600 - -PREVIEW = None -PREVIEW_MAX_HEIGHT = 700 -PREVIEW_MAX_WIDTH = 1200 -IMAGE_BUTTON_WIDTH = 200 -IMAGE_BUTTON_HEIGHT = 200 - -RECENT_DIRECTORY_SOURCE = None -RECENT_DIRECTORY_TARGET = None -RECENT_DIRECTORY_OUTPUT = None - -preview_label = None -preview_slider = None -source_label = None -target_label = None -status_label = None -FACE_BUTTONS = [] -INPUT_FACES_DATA = None -OUTPUT_FACES_DATA = None - - -def init(start: Callable, destroy: Callable) -> ctk.CTk: - global ROOT, PREVIEW, FACE_SELECT - - ROOT = create_root(start, destroy) - PREVIEW = create_preview(ROOT) - FACE_SELECT = create_select_faces_win(ROOT) - return ROOT - - - -def create_root(start: Callable, destroy: Callable) -> ctk.CTk: - global source_button, target_button, status_label - - ctk.deactivate_automatic_dpi_awareness() - ctk.set_appearance_mode('system') - ctk.set_default_color_theme(resolve_relative_path('ui.json')) - root = ctk.CTk() - root.minsize(ROOT_WIDTH, ROOT_HEIGHT) - root.title(f'{roop.metadata.name} {roop.metadata.version}') - root.configure() - root.protocol('WM_DELETE_WINDOW', lambda: destroy()) - - base_x1 = 0.075 - base_x2 = 0.575 - base_y = 0.635 - - source_button = ctk.CTkButton(root, text='Select image with face(s)', width=IMAGE_BUTTON_WIDTH, height=IMAGE_BUTTON_HEIGHT, compound='top', anchor='center', command=lambda: select_source_path()) - source_button.place(relx=base_x1, rely=0.05) - - target_button = ctk.CTkButton(root, text='Select target image/video', width=IMAGE_BUTTON_WIDTH, height=IMAGE_BUTTON_HEIGHT, compound='top', anchor='center', command=lambda: select_target_path()) - target_button.place(relx=base_x2, rely=0.05) - - enhance_label = ctk.CTkLabel(root, text='Select face enhancement engine', anchor='w') - enhance_label.place(relx=base_x1, rely=0.49) - enhance_label.configure(text_color=ctk.ThemeManager.theme.get('RoopDonate').get('text_color')) - - enhancer_cb = ctk.CTkComboBox(root, values=["None", "Codeformer", "DMDNet (unavailable)", "GFPGAN"], width=IMAGE_BUTTON_WIDTH, command=select_enhancer) - enhancer_cb.set("None") - enhancer_cb.place(relx=base_x1, rely=0.532) - - keep_fps_value = ctk.BooleanVar(value=roop.globals.keep_fps) - keep_fps_checkbox = ctk.CTkSwitch(root, text='Keep fps', variable=keep_fps_value, command=lambda: setattr(roop.globals, 'keep_fps', not 
roop.globals.keep_fps)) - keep_fps_checkbox.place(relx=base_x1, rely=base_y) - - keep_frames_value = ctk.BooleanVar(value=roop.globals.keep_frames) - keep_frames_switch = ctk.CTkSwitch(root, text='Keep frames', variable=keep_frames_value, command=lambda: setattr(roop.globals, 'keep_frames', keep_frames_value.get())) - keep_frames_switch.place(relx=base_x1, rely=0.68) - - skip_audio_value = ctk.BooleanVar(value=roop.globals.skip_audio) - skip_audio_switch = ctk.CTkSwitch(root, text='Skip audio', variable=skip_audio_value, command=lambda: setattr(roop.globals, 'skip_audio', skip_audio_value.get())) - skip_audio_switch.place(relx=base_x2, rely=base_y) - - many_faces_value = ctk.BooleanVar(value=roop.globals.many_faces) - many_faces_switch = ctk.CTkSwitch(root, text='Many faces', variable=many_faces_value, command=lambda: setattr(roop.globals, 'many_faces', many_faces_value.get())) - many_faces_switch.place(relx=base_x2, rely=0.68) - - use_batch_value = ctk.BooleanVar(value=roop.globals.use_batch) - use_batch_switch = ctk.CTkSwitch(root, text='Batch process folder', variable=use_batch_value, command=lambda: setattr(roop.globals, 'use_batch', use_batch_value.get())) - use_batch_switch.place(relx=base_x1, rely=0.725) - - - - base_y = 0.84 - - start_button = ctk.CTkButton(root, text='Start', command=lambda: select_output_path(start)) - start_button.place(relx=base_x1, rely=base_y, relwidth=0.15, relheight=0.05) - - stop_button = ctk.CTkButton(root, text='Destroy', command=lambda: destroy()) - stop_button.place(relx=0.35, rely=base_y, relwidth=0.15, relheight=0.05) - - preview_button = ctk.CTkButton(root, text='Preview', command=lambda: toggle_preview()) - preview_button.place(relx=0.55, rely=base_y, relwidth=0.15, relheight=0.05) - - result_button = ctk.CTkButton(root, text='Show Result', command=lambda: show_result()) - result_button.place(relx=0.75, rely=base_y, relwidth=0.15, relheight=0.05) - - status_label = ctk.CTkLabel(root, text=None, justify='center') - status_label.place(relx=base_x1, rely=0.9, relwidth=0.8) - - donate_label = ctk.CTkLabel(root, text='Visit the Github Page', justify='center', cursor='hand2') - donate_label.place(relx=0.1, rely=0.95, relwidth=0.8) - donate_label.configure(text_color=ctk.ThemeManager.theme.get('RoopDonate').get('text_color')) - donate_label.bind(' - -

      - {panel === 'camera-mode' &&
      } -
- - ) -} diff --git a/spaces/paufeldman/vv/meshSubplot.py b/spaces/paufeldman/vv/meshSubplot.py deleted file mode 100644 index 9d15d8dc733934efcbd8d21b4417cf3813ca7700..0000000000000000000000000000000000000000 --- a/spaces/paufeldman/vv/meshSubplot.py +++ /dev/null @@ -1,111 +0,0 @@ -import uuid -from ipywidgets import Output, HBox -import meshplot as mp -rendertype = 'JUPYTER' -import numpy as np -import networkx as nx - -class Subplot(): - def __init__(self, data, view, s): - if data == None: - self.rows = [] - self.hboxes = [] - else: - self.rows = data.rows - if s[0] != 1 or s[1] != 1: - if data == None: # Intialize subplot array - cnt = 0 - for r in range(s[0]): - row = [] - for c in range(s[1]): - row.append(Output()) - cnt += 1 - self.rows.append(row) - - for r in self.rows: - hbox = HBox(r) - if rendertype == "JUPYTER": - display(hbox) - self.hboxes.append(hbox) - - out = self.rows[int(s[2]/s[1])][s[2]%s[1]] - if rendertype == "JUPYTER": - with out: - display(view._renderer) - self.rows[int(s[2]/s[1])][s[2]%s[1]] = view - - def save(self, filename=""): - if filename == "": - uid = str(uuid.uuid4()) + ".html" - else: - filename = filename.replace(".html", "") - uid = filename + '.html' - - s = "" - imports = True - for r in self.rows: - for v in r: - s1 = v.to_html(imports=imports, html_frame=False) - s = s + s1 - imports = False - - s = "\n\n" + s + "\n\n" - with open(uid, "w") as f: - f.write(s) - print("Plot saved to file %s."%uid) - - def to_html(self, imports=True, html_frame=True): - s = "" - for r in self.rows: - for v in r: - s1 = v.to_html(imports=imports, html_frame=html_frame) - s = s + s1 - imports = False - - return s - -def subplot(f, c = 'red', uv=None, n=None, shading={}, s=[1, 1, 0], data=None, **kwargs): - - shading={'point_size':0.05, "point_color": c, "line_color": c, "width":400, "height":400} - view = mp.Viewer(settings = {"width": 500, "height": 500, "antialias": True, "scale": 1.5, "background": "#ffffff", - "fov": 30}) - - #obj = view.add_points(np.array([ f.nodes[v]['posicion'] for v in f.nodes]), shading=shading) - obj = view.add_points( np.array([f.nodes[v]['posicion'] for v in f.nodes if f.nodes[v]['root'] == True]), shading={'point_size':.02, 'point_color':'red'}) - if len(np.array([ f.nodes[v]['posicion'] for v in f.nodes if f.nodes[v]['root'] == False])) != 0: - obj = view.add_points( np.array([f.nodes[v]['posicion'] for v in f.nodes if f.nodes[v]['root'] == False]), shading={'point_size':.02, 'point_color':'black'}) - for arista in f.edges: - obj = view.add_lines( f.nodes[arista[0]]['posicion'], f.nodes[arista[1]]['posicion'], shading = shading) - - subplot = Subplot(data, view, s) - return subplot - -def plotTree( root, dec ): - graph = nx.Graph() - root.toGraph( graph, 0, dec, 0) - edges=nx.get_edge_attributes(graph,'procesada') - - p = mp.plot( np.array([ graph.nodes[v]['posicion'] for v in graph.nodes if graph.nodes[v]['root'] == True]), shading={'point_size':0.05, 'point_color':'red'}, return_plot=True) - if len(np.array([ graph.nodes[v]['posicion'] for v in graph.nodes if graph.nodes[v]['root'] == False])) != 0: - p.add_points( np.array([graph.nodes[v]['posicion'] for v in graph.nodes if graph.nodes[v]['root'] == False]), shading={'point_size':.05, 'point_color':'black'}) - for arista in graph.edges: - p.add_lines( graph.nodes[arista[0]]['posicion'], graph.nodes[arista[1]]['posicion']) - - return - - -def sTree( root, dec, s, c, d=None): - "plot trees next to each other" - graph = nx.Graph() - root.toGraph( graph, 0, dec, 0) - - if d: - 
subplot(graph, c=c, s=s, data = d) - else: - - d = subplot(graph, c=c, s=s) - - return d - - - diff --git a/spaces/pikto/Elite-freegpt-webui/g4f/Provider/Providers/Aichat.py b/spaces/pikto/Elite-freegpt-webui/g4f/Provider/Providers/Aichat.py deleted file mode 100644 index d78375ce7e62b634c82e163c693a5557b8e2f860..0000000000000000000000000000000000000000 --- a/spaces/pikto/Elite-freegpt-webui/g4f/Provider/Providers/Aichat.py +++ /dev/null @@ -1,35 +0,0 @@ -import requests -import os -import json -from ...typing import sha256, Dict, get_type_hints - -url = 'https://hteyun.com' -model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-3.5-turbo-16k-0613', 'gpt-3.5-turbo-0613'] -supports_stream = True -needs_auth = False - -def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs): - headers = { - 'Content-Type': 'application/json', - } - data = { - 'model': model, - 'temperature': 0.7, - 'presence_penalty': 0, - 'messages': messages, - } - response = requests.post(url + '/api/chat-stream', - json=data, stream=True) - - if stream: - for chunk in response.iter_content(chunk_size=None): - chunk = chunk.decode('utf-8') - if chunk.strip(): - message = json.loads(chunk)['choices'][0]['message']['content'] - yield message - else: - message = response.json()['choices'][0]['message']['content'] - yield message - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) \ No newline at end of file diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/packaging/metadata.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/packaging/metadata.py deleted file mode 100644 index e76a60c395eb62d5f05d7248cf67210cdd10740d..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/packaging/metadata.py +++ /dev/null @@ -1,408 +0,0 @@ -import email.feedparser -import email.header -import email.message -import email.parser -import email.policy -import sys -import typing -from typing import Dict, List, Optional, Tuple, Union, cast - -if sys.version_info >= (3, 8): # pragma: no cover - from typing import TypedDict -else: # pragma: no cover - if typing.TYPE_CHECKING: - from typing_extensions import TypedDict - else: - try: - from typing_extensions import TypedDict - except ImportError: - - class TypedDict: - def __init_subclass__(*_args, **_kwargs): - pass - - -# The RawMetadata class attempts to make as few assumptions about the underlying -# serialization formats as possible. The idea is that as long as a serialization -# formats offer some very basic primitives in *some* way then we can support -# serializing to and from that format. -class RawMetadata(TypedDict, total=False): - """A dictionary of raw core metadata. - - Each field in core metadata maps to a key of this dictionary (when data is - provided). The key is lower-case and underscores are used instead of dashes - compared to the equivalent core metadata field. Any core metadata field that - can be specified multiple times or can hold multiple values in a single - field have a key with a plural name. - - Core metadata fields that can be specified multiple times are stored as a - list or dict depending on which is appropriate for the field. 
Any fields - which hold multiple values in a single field are stored as a list. - - """ - - # Metadata 1.0 - PEP 241 - metadata_version: str - name: str - version: str - platforms: List[str] - summary: str - description: str - keywords: List[str] - home_page: str - author: str - author_email: str - license: str - - # Metadata 1.1 - PEP 314 - supported_platforms: List[str] - download_url: str - classifiers: List[str] - requires: List[str] - provides: List[str] - obsoletes: List[str] - - # Metadata 1.2 - PEP 345 - maintainer: str - maintainer_email: str - requires_dist: List[str] - provides_dist: List[str] - obsoletes_dist: List[str] - requires_python: str - requires_external: List[str] - project_urls: Dict[str, str] - - # Metadata 2.0 - # PEP 426 attempted to completely revamp the metadata format - # but got stuck without ever being able to build consensus on - # it and ultimately ended up withdrawn. - # - # However, a number of tools had started emiting METADATA with - # `2.0` Metadata-Version, so for historical reasons, this version - # was skipped. - - # Metadata 2.1 - PEP 566 - description_content_type: str - provides_extra: List[str] - - # Metadata 2.2 - PEP 643 - dynamic: List[str] - - # Metadata 2.3 - PEP 685 - # No new fields were added in PEP 685, just some edge case were - # tightened up to provide better interoptability. - - -_STRING_FIELDS = { - "author", - "author_email", - "description", - "description_content_type", - "download_url", - "home_page", - "license", - "maintainer", - "maintainer_email", - "metadata_version", - "name", - "requires_python", - "summary", - "version", -} - -_LIST_STRING_FIELDS = { - "classifiers", - "dynamic", - "obsoletes", - "obsoletes_dist", - "platforms", - "provides", - "provides_dist", - "provides_extra", - "requires", - "requires_dist", - "requires_external", - "supported_platforms", -} - - -def _parse_keywords(data: str) -> List[str]: - """Split a string of comma-separate keyboards into a list of keywords.""" - return [k.strip() for k in data.split(",")] - - -def _parse_project_urls(data: List[str]) -> Dict[str, str]: - """Parse a list of label/URL string pairings separated by a comma.""" - urls = {} - for pair in data: - # Our logic is slightly tricky here as we want to try and do - # *something* reasonable with malformed data. - # - # The main thing that we have to worry about, is data that does - # not have a ',' at all to split the label from the Value. There - # isn't a singular right answer here, and we will fail validation - # later on (if the caller is validating) so it doesn't *really* - # matter, but since the missing value has to be an empty str - # and our return value is dict[str, str], if we let the key - # be the missing value, then they'd have multiple '' values that - # overwrite each other in a accumulating dict. - # - # The other potentional issue is that it's possible to have the - # same label multiple times in the metadata, with no solid "right" - # answer with what to do in that case. As such, we'll do the only - # thing we can, which is treat the field as unparseable and add it - # to our list of unparsed fields. - parts = [p.strip() for p in pair.split(",", 1)] - parts.extend([""] * (max(0, 2 - len(parts)))) # Ensure 2 items - - # TODO: The spec doesn't say anything about if the keys should be - # considered case sensitive or not... logically they should - # be case-preserving and case-insensitive, but doing that - # would open up more cases where we might have duplicate - # entries. 
- label, url = parts - if label in urls: - # The label already exists in our set of urls, so this field - # is unparseable, and we can just add the whole thing to our - # unparseable data and stop processing it. - raise KeyError("duplicate labels in project urls") - urls[label] = url - - return urls - - -def _get_payload(msg: email.message.Message, source: Union[bytes, str]) -> str: - """Get the body of the message.""" - # If our source is a str, then our caller has managed encodings for us, - # and we don't need to deal with it. - if isinstance(source, str): - payload: str = msg.get_payload() - return payload - # If our source is a bytes, then we're managing the encoding and we need - # to deal with it. - else: - bpayload: bytes = msg.get_payload(decode=True) - try: - return bpayload.decode("utf8", "strict") - except UnicodeDecodeError: - raise ValueError("payload in an invalid encoding") - - -# The various parse_FORMAT functions here are intended to be as lenient as -# possible in their parsing, while still returning a correctly typed -# RawMetadata. -# -# To aid in this, we also generally want to do as little touching of the -# data as possible, except where there are possibly some historic holdovers -# that make valid data awkward to work with. -# -# While this is a lower level, intermediate format than our ``Metadata`` -# class, some light touch ups can make a massive difference in usability. - -# Map METADATA fields to RawMetadata. -_EMAIL_TO_RAW_MAPPING = { - "author": "author", - "author-email": "author_email", - "classifier": "classifiers", - "description": "description", - "description-content-type": "description_content_type", - "download-url": "download_url", - "dynamic": "dynamic", - "home-page": "home_page", - "keywords": "keywords", - "license": "license", - "maintainer": "maintainer", - "maintainer-email": "maintainer_email", - "metadata-version": "metadata_version", - "name": "name", - "obsoletes": "obsoletes", - "obsoletes-dist": "obsoletes_dist", - "platform": "platforms", - "project-url": "project_urls", - "provides": "provides", - "provides-dist": "provides_dist", - "provides-extra": "provides_extra", - "requires": "requires", - "requires-dist": "requires_dist", - "requires-external": "requires_external", - "requires-python": "requires_python", - "summary": "summary", - "supported-platform": "supported_platforms", - "version": "version", -} - - -def parse_email(data: Union[bytes, str]) -> Tuple[RawMetadata, Dict[str, List[str]]]: - """Parse a distribution's metadata. - - This function returns a two-item tuple of dicts. The first dict is of - recognized fields from the core metadata specification. Fields that can be - parsed and translated into Python's built-in types are converted - appropriately. All other fields are left as-is. Fields that are allowed to - appear multiple times are stored as lists. - - The second dict contains all other fields from the metadata. This includes - any unrecognized fields. It also includes any fields which are expected to - be parsed into a built-in type but were not formatted appropriately. Finally, - any fields that are expected to appear only once but are repeated are - included in this dict. 
- - """ - raw: Dict[str, Union[str, List[str], Dict[str, str]]] = {} - unparsed: Dict[str, List[str]] = {} - - if isinstance(data, str): - parsed = email.parser.Parser(policy=email.policy.compat32).parsestr(data) - else: - parsed = email.parser.BytesParser(policy=email.policy.compat32).parsebytes(data) - - # We have to wrap parsed.keys() in a set, because in the case of multiple - # values for a key (a list), the key will appear multiple times in the - # list of keys, but we're avoiding that by using get_all(). - for name in frozenset(parsed.keys()): - # Header names in RFC are case insensitive, so we'll normalize to all - # lower case to make comparisons easier. - name = name.lower() - - # We use get_all() here, even for fields that aren't multiple use, - # because otherwise someone could have e.g. two Name fields, and we - # would just silently ignore it rather than doing something about it. - headers = parsed.get_all(name) - - # The way the email module works when parsing bytes is that it - # unconditionally decodes the bytes as ascii using the surrogateescape - # handler. When you pull that data back out (such as with get_all() ), - # it looks to see if the str has any surrogate escapes, and if it does - # it wraps it in a Header object instead of returning the string. - # - # As such, we'll look for those Header objects, and fix up the encoding. - value = [] - # Flag if we have run into any issues processing the headers, thus - # signalling that the data belongs in 'unparsed'. - valid_encoding = True - for h in headers: - # It's unclear if this can return more types than just a Header or - # a str, so we'll just assert here to make sure. - assert isinstance(h, (email.header.Header, str)) - - # If it's a header object, we need to do our little dance to get - # the real data out of it. In cases where there is invalid data - # we're going to end up with mojibake, but there's no obvious, good - # way around that without reimplementing parts of the Header object - # ourselves. - # - # That should be fine since, if mojibacked happens, this key is - # going into the unparsed dict anyways. - if isinstance(h, email.header.Header): - # The Header object stores it's data as chunks, and each chunk - # can be independently encoded, so we'll need to check each - # of them. - chunks: List[Tuple[bytes, Optional[str]]] = [] - for bin, encoding in email.header.decode_header(h): - try: - bin.decode("utf8", "strict") - except UnicodeDecodeError: - # Enable mojibake. - encoding = "latin1" - valid_encoding = False - else: - encoding = "utf8" - chunks.append((bin, encoding)) - - # Turn our chunks back into a Header object, then let that - # Header object do the right thing to turn them into a - # string for us. - value.append(str(email.header.make_header(chunks))) - # This is already a string, so just add it. - else: - value.append(h) - - # We've processed all of our values to get them into a list of str, - # but we may have mojibake data, in which case this is an unparsed - # field. - if not valid_encoding: - unparsed[name] = value - continue - - raw_name = _EMAIL_TO_RAW_MAPPING.get(name) - if raw_name is None: - # This is a bit of a weird situation, we've encountered a key that - # we don't know what it means, so we don't know whether it's meant - # to be a list or not. - # - # Since we can't really tell one way or another, we'll just leave it - # as a list, even though it may be a single item list, because that's - # what makes the most sense for email headers. 
- unparsed[name] = value - continue - - # If this is one of our string fields, then we'll check to see if our - # value is a list of a single item. If it is then we'll assume that - # it was emitted as a single string, and unwrap the str from inside - # the list. - # - # If it's any other kind of data, then we haven't the faintest clue - # what we should parse it as, and we have to just add it to our list - # of unparsed stuff. - if raw_name in _STRING_FIELDS and len(value) == 1: - raw[raw_name] = value[0] - # If this is one of our list of string fields, then we can just assign - # the value, since email *only* has strings, and our get_all() call - # above ensures that this is a list. - elif raw_name in _LIST_STRING_FIELDS: - raw[raw_name] = value - # Special Case: Keywords - # The keywords field is implemented in the metadata spec as a str, - # but it conceptually is a list of strings, and is serialized using - # ", ".join(keywords), so we'll do some light data massaging to turn - # this into what it logically is. - elif raw_name == "keywords" and len(value) == 1: - raw[raw_name] = _parse_keywords(value[0]) - # Special Case: Project-URL - # The project urls is implemented in the metadata spec as a list of - # specially-formatted strings that represent a key and a value, which - # is fundamentally a mapping, however the email format doesn't support - # mappings in a sane way, so it was crammed into a list of strings - # instead. - # - # We will do a little light data massaging to turn this into a map as - # it logically should be. - elif raw_name == "project_urls": - try: - raw[raw_name] = _parse_project_urls(value) - except KeyError: - unparsed[name] = value - # Nothing that we've done has managed to parse this, so it'll just - # throw it in our unparseable data and move on. - else: - unparsed[name] = value - - # We need to support getting the Description from the message payload in - # addition to getting it from the the headers. This does mean, though, there - # is the possibility of it being set both ways, in which case we put both - # in 'unparsed' since we don't know which is right. - try: - payload = _get_payload(parsed, data) - except ValueError: - unparsed.setdefault("description", []).append( - parsed.get_payload(decode=isinstance(data, bytes)) - ) - else: - if payload: - # Check to see if we've already got a description, if so then both - # it, and this body move to unparseable. - if "description" in raw: - description_header = cast(str, raw.pop("description")) - unparsed.setdefault("description", []).extend( - [description_header, payload] - ) - elif "description" in unparsed: - unparsed["description"].append(payload) - else: - raw["description"] = payload - - # We need to cast our `raw` to a metadata, because a TypedDict only support - # literal key names, but we're computing our key names on purpose, but the - # way this function is implemented, our `TypedDict` can only have valid key - # names. 
- return cast(RawMetadata, raw), unparsed diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/platformdirs/unix.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/platformdirs/unix.py deleted file mode 100644 index 9aca5a030545d3ba26fa96fbfd7cae1c31dcdd15..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/platformdirs/unix.py +++ /dev/null @@ -1,181 +0,0 @@ -from __future__ import annotations - -import os -import sys -from configparser import ConfigParser -from pathlib import Path - -from .api import PlatformDirsABC - -if sys.platform.startswith("linux"): # pragma: no branch # no op check, only to please the type checker - from os import getuid -else: - - def getuid() -> int: - raise RuntimeError("should only be used on Linux") - - -class Unix(PlatformDirsABC): - """ - On Unix/Linux, we follow the - `XDG Basedir Spec `_. The spec allows - overriding directories with environment variables. The examples show are the default values, alongside the name of - the environment variable that overrides them. Makes use of the - `appname `, - `version `, - `multipath `, - `opinion `. - """ - - @property - def user_data_dir(self) -> str: - """ - :return: data directory tied to the user, e.g. ``~/.local/share/$appname/$version`` or - ``$XDG_DATA_HOME/$appname/$version`` - """ - path = os.environ.get("XDG_DATA_HOME", "") - if not path.strip(): - path = os.path.expanduser("~/.local/share") - return self._append_app_name_and_version(path) - - @property - def site_data_dir(self) -> str: - """ - :return: data directories shared by users (if `multipath ` is - enabled and ``XDG_DATA_DIR`` is set and a multi path the response is also a multi path separated by the OS - path separator), e.g. ``/usr/local/share/$appname/$version`` or ``/usr/share/$appname/$version`` - """ - # XDG default for $XDG_DATA_DIRS; only first, if multipath is False - path = os.environ.get("XDG_DATA_DIRS", "") - if not path.strip(): - path = f"/usr/local/share{os.pathsep}/usr/share" - return self._with_multi_path(path) - - def _with_multi_path(self, path: str) -> str: - path_list = path.split(os.pathsep) - if not self.multipath: - path_list = path_list[0:1] - path_list = [self._append_app_name_and_version(os.path.expanduser(p)) for p in path_list] - return os.pathsep.join(path_list) - - @property - def user_config_dir(self) -> str: - """ - :return: config directory tied to the user, e.g. ``~/.config/$appname/$version`` or - ``$XDG_CONFIG_HOME/$appname/$version`` - """ - path = os.environ.get("XDG_CONFIG_HOME", "") - if not path.strip(): - path = os.path.expanduser("~/.config") - return self._append_app_name_and_version(path) - - @property - def site_config_dir(self) -> str: - """ - :return: config directories shared by users (if `multipath ` - is enabled and ``XDG_DATA_DIR`` is set and a multi path the response is also a multi path separated by the OS - path separator), e.g. ``/etc/xdg/$appname/$version`` - """ - # XDG default for $XDG_CONFIG_DIRS only first, if multipath is False - path = os.environ.get("XDG_CONFIG_DIRS", "") - if not path.strip(): - path = "/etc/xdg" - return self._with_multi_path(path) - - @property - def user_cache_dir(self) -> str: - """ - :return: cache directory tied to the user, e.g. 
``~/.cache/$appname/$version`` or - ``~/$XDG_CACHE_HOME/$appname/$version`` - """ - path = os.environ.get("XDG_CACHE_HOME", "") - if not path.strip(): - path = os.path.expanduser("~/.cache") - return self._append_app_name_and_version(path) - - @property - def user_state_dir(self) -> str: - """ - :return: state directory tied to the user, e.g. ``~/.local/state/$appname/$version`` or - ``$XDG_STATE_HOME/$appname/$version`` - """ - path = os.environ.get("XDG_STATE_HOME", "") - if not path.strip(): - path = os.path.expanduser("~/.local/state") - return self._append_app_name_and_version(path) - - @property - def user_log_dir(self) -> str: - """ - :return: log directory tied to the user, same as `user_state_dir` if not opinionated else ``log`` in it - """ - path = self.user_state_dir - if self.opinion: - path = os.path.join(path, "log") - return path - - @property - def user_documents_dir(self) -> str: - """ - :return: documents directory tied to the user, e.g. ``~/Documents`` - """ - documents_dir = _get_user_dirs_folder("XDG_DOCUMENTS_DIR") - if documents_dir is None: - documents_dir = os.environ.get("XDG_DOCUMENTS_DIR", "").strip() - if not documents_dir: - documents_dir = os.path.expanduser("~/Documents") - - return documents_dir - - @property - def user_runtime_dir(self) -> str: - """ - :return: runtime directory tied to the user, e.g. ``/run/user/$(id -u)/$appname/$version`` or - ``$XDG_RUNTIME_DIR/$appname/$version`` - """ - path = os.environ.get("XDG_RUNTIME_DIR", "") - if not path.strip(): - path = f"/run/user/{getuid()}" - return self._append_app_name_and_version(path) - - @property - def site_data_path(self) -> Path: - """:return: data path shared by users. Only return first item, even if ``multipath`` is set to ``True``""" - return self._first_item_as_path_if_multipath(self.site_data_dir) - - @property - def site_config_path(self) -> Path: - """:return: config path shared by the users. Only return first item, even if ``multipath`` is set to ``True``""" - return self._first_item_as_path_if_multipath(self.site_config_dir) - - def _first_item_as_path_if_multipath(self, directory: str) -> Path: - if self.multipath: - # If multipath is True, the first path is returned. - directory = directory.split(os.pathsep)[0] - return Path(directory) - - -def _get_user_dirs_folder(key: str) -> str | None: - """Return directory from user-dirs.dirs config file. 
See https://freedesktop.org/wiki/Software/xdg-user-dirs/""" - user_dirs_config_path = os.path.join(Unix().user_config_dir, "user-dirs.dirs") - if os.path.exists(user_dirs_config_path): - parser = ConfigParser() - - with open(user_dirs_config_path) as stream: - # Add fake section header, so ConfigParser doesn't complain - parser.read_string(f"[top]\n{stream.read()}") - - if key not in parser["top"]: - return None - - path = parser["top"][key].strip('"') - # Handle relative home paths - path = path.replace("$HOME", os.path.expanduser("~")) - return path - - return None - - -__all__ = [ - "Unix", -] diff --git a/spaces/ppsingh/cpu-demo/README.md b/spaces/ppsingh/cpu-demo/README.md deleted file mode 100644 index 3219958e7ca3f2f56ec23ed10ae73f8807cb9bb3..0000000000000000000000000000000000000000 --- a/spaces/ppsingh/cpu-demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Cpu Demo -emoji: 🦀 -colorFrom: blue -colorTo: pink -sdk: streamlit -sdk_version: 1.19.0 -app_file: app.py -pinned: false -duplicated_from: ppsingh/cpu-staging ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/certifi/core.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/certifi/core.py deleted file mode 100644 index de028981b97e1fcc8ef4ab2c817cc8731b9c8738..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/certifi/core.py +++ /dev/null @@ -1,108 +0,0 @@ -""" -certifi.py -~~~~~~~~~~ - -This module returns the installation location of cacert.pem or its contents. -""" -import sys - - -if sys.version_info >= (3, 11): - - from importlib.resources import as_file, files - - _CACERT_CTX = None - _CACERT_PATH = None - - def where() -> str: - # This is slightly terrible, but we want to delay extracting the file - # in cases where we're inside of a zipimport situation until someone - # actually calls where(), but we don't want to re-extract the file - # on every call of where(), so we'll do it once then store it in a - # global variable. - global _CACERT_CTX - global _CACERT_PATH - if _CACERT_PATH is None: - # This is slightly janky, the importlib.resources API wants you to - # manage the cleanup of this file, so it doesn't actually return a - # path, it returns a context manager that will give you the path - # when you enter it and will do any cleanup when you leave it. In - # the common case of not needing a temporary file, it will just - # return the file system location and the __exit__() is a no-op. - # - # We also have to hold onto the actual context manager, because - # it will do the cleanup whenever it gets garbage collected, so - # we will also store that at the global level as well. - _CACERT_CTX = as_file(files("certifi").joinpath("cacert.pem")) - _CACERT_PATH = str(_CACERT_CTX.__enter__()) - - return _CACERT_PATH - - def contents() -> str: - return files("certifi").joinpath("cacert.pem").read_text(encoding="ascii") - -elif sys.version_info >= (3, 7): - - from importlib.resources import path as get_path, read_text - - _CACERT_CTX = None - _CACERT_PATH = None - - def where() -> str: - # This is slightly terrible, but we want to delay extracting the - # file in cases where we're inside of a zipimport situation until - # someone actually calls where(), but we don't want to re-extract - # the file on every call of where(), so we'll do it once then store - # it in a global variable. 
- global _CACERT_CTX - global _CACERT_PATH - if _CACERT_PATH is None: - # This is slightly janky, the importlib.resources API wants you - # to manage the cleanup of this file, so it doesn't actually - # return a path, it returns a context manager that will give - # you the path when you enter it and will do any cleanup when - # you leave it. In the common case of not needing a temporary - # file, it will just return the file system location and the - # __exit__() is a no-op. - # - # We also have to hold onto the actual context manager, because - # it will do the cleanup whenever it gets garbage collected, so - # we will also store that at the global level as well. - _CACERT_CTX = get_path("certifi", "cacert.pem") - _CACERT_PATH = str(_CACERT_CTX.__enter__()) - - return _CACERT_PATH - - def contents() -> str: - return read_text("certifi", "cacert.pem", encoding="ascii") - -else: - import os - import types - from typing import Union - - Package = Union[types.ModuleType, str] - Resource = Union[str, "os.PathLike"] - - # This fallback will work for Python versions prior to 3.7 that lack the - # importlib.resources module but relies on the existing `where` function - # so won't address issues with environments like PyOxidizer that don't set - # __file__ on modules. - def read_text( - package: Package, - resource: Resource, - encoding: str = 'utf-8', - errors: str = 'strict' - ) -> str: - with open(where(), encoding=encoding) as data: - return data.read() - - # If we don't have importlib.resources, then we will just do the old logic - # of assuming we're on the filesystem and munge the path directly. - def where() -> str: - f = os.path.dirname(__file__) - - return os.path.join(f, "cacert.pem") - - def contents() -> str: - return read_text("certifi", "cacert.pem", encoding="ascii") diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I_J_.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I_J_.py deleted file mode 100644 index bc8fe92aac9d18bfd5ee565588d8cebf7d00afd1..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I_J_.py +++ /dev/null @@ -1,5 +0,0 @@ -from .T_S_I_V_ import table_T_S_I_V_ - - -class table_T_S_I_J_(table_T_S_I_V_): - pass diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/cli/commands/components/show.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/cli/commands/components/show.py deleted file mode 100644 index afd63fb9a319642137b1f0a2a3b7834f79f8d296..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/cli/commands/components/show.py +++ /dev/null @@ -1,70 +0,0 @@ -import inspect - -from rich.console import Console -from rich.table import Table - -import gradio._simple_templates -import gradio.components -import gradio.layouts -from gradio.blocks import BlockContext -from gradio.components import Component, FormComponent - -_IGNORE = { - "Text", - "Dataframe", - "Highlightedtext", - "Annotatedimage", - "Checkboxgroup", - "Json", - "Highlight", - "Component", - "Form", - "Dataset", - "FormComponent", - "Fallback", - "State", -} - -_BEGINNER_FRIENDLY = {"Slider", "Radio", "Checkbox", "Number", "CheckboxGroup", "File"} - - -def _get_table_items(module): - items = [] - for name in module.__all__: - gr_cls = getattr(module, name) - if not ( - inspect.isclass(gr_cls) and 
issubclass(gr_cls, (Component, BlockContext)) - ) or (name in _IGNORE): - continue - tags = [] - if "Simple" in name or name in _BEGINNER_FRIENDLY: - tags.append(":seedling::handshake:Beginner Friendly:seedling::handshake:") - if issubclass(gr_cls, FormComponent): - tags.append(":pencil::jigsaw:Form Component:pencil::jigsaw:") - if name in gradio.layouts.__all__: - tags.append(":triangular_ruler:Layout:triangular_ruler:") - doc = inspect.getdoc(gr_cls) or "No description available." - doc = doc.split(".")[0] - if tags: - doc = f"[{', '.join(tags)}]" + " " + doc - items.append((name, doc)) - - return items - - -def _show(): - items = ( - _get_table_items(gradio._simple_templates) - + _get_table_items(gradio.components) - + _get_table_items(gradio.layouts) - ) - table = Table(show_header=True, header_style="orange1", show_lines=True) - table.add_column("Name", justify="center") - table.add_column("Description", justify="center") - - for item in items: - table.add_row(*item) - - console = Console() - with console.pager(): - console.print(table) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Index-e24fc675.css b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Index-e24fc675.css deleted file mode 100644 index df50826e3be6b232e0cdd096afc4a71bee8b3422..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Index-e24fc675.css +++ /dev/null @@ -1 +0,0 @@ -.container.svelte-ju12zg.svelte-ju12zg{display:flex;flex-direction:column;gap:var(--spacing-sm);padding:var(--block-padding)}.hl.svelte-ju12zg+.hl.svelte-ju12zg{margin-left:var(--size-1)}.textspan.svelte-ju12zg:last-child>.label.svelte-ju12zg{margin-right:0}.category-legend.svelte-ju12zg.svelte-ju12zg{display:flex;flex-wrap:wrap;gap:var(--spacing-sm);color:#000}.category-label.svelte-ju12zg.svelte-ju12zg{cursor:pointer;border-radius:var(--radius-xs);padding-right:var(--size-2);padding-left:var(--size-2);font-weight:var(--weight-semibold)}.color-legend.svelte-ju12zg.svelte-ju12zg{display:flex;justify-content:space-between;border-radius:var(--radius-xs);background:linear-gradient(to right,var(--color-purple),rgba(255,255,255,0),var(--color-red));padding:var(--size-1) var(--size-2);font-weight:var(--weight-semibold)}.textfield.svelte-ju12zg.svelte-ju12zg{box-sizing:border-box;border-radius:var(--radius-xs);background:var(--background-fill-primary);background-color:transparent;max-width:var(--size-full);line-height:var(--scale-4);word-break:break-all}.textspan.svelte-ju12zg.svelte-ju12zg{transition:.15s;border-radius:var(--radius-xs);padding-top:2.5px;padding-right:var(--size-1);padding-bottom:3.5px;padding-left:var(--size-1);color:#000}.label.svelte-ju12zg.svelte-ju12zg{transition:.15s;margin-top:1px;border-radius:var(--radius-xs);padding:1px 5px;color:var(--body-text-color);color:#fff;font-weight:var(--weight-bold);font-size:var(--text-sm);text-transform:uppercase}.text.svelte-ju12zg.svelte-ju12zg{color:#000;white-space:pre-wrap}.score-text.svelte-ju12zg 
.text.svelte-ju12zg{color:var(--body-text-color)}.score-text.svelte-ju12zg.svelte-ju12zg{margin-right:var(--size-1);padding:var(--size-1)}.no-cat.svelte-ju12zg.svelte-ju12zg,.no-label.svelte-ju12zg.svelte-ju12zg{color:var(--body-text-color)}.selectable.svelte-ju12zg.svelte-ju12zg{cursor:pointer}.label-input.svelte-1cag2po{transition:.15s;margin-top:1px;margin-right:calc(var(--size-1));border-radius:var(--radius-xs);padding:1px 5px;color:#000;font-weight:var(--weight-bold);font-size:var(--text-sm);text-transform:uppercase;line-height:1;color:#fff}.label-input.svelte-1cag2po::placeholder{color:#01010180}.label-clear-button.svelte-1ozsnjl.svelte-1ozsnjl{display:none;border-radius:var(--radius-xs);padding-top:2.5px;padding-right:var(--size-1);padding-bottom:3.5px;padding-left:var(--size-1);color:#000;background-color:var(--background-fill-secondary);user-select:none;position:relative;left:-3px;border-radius:0 var(--radius-xs) var(--radius-xs) 0;color:var(--block-label-text-color)}.text-class_or_confidence-container.svelte-1ozsnjl:hover .label-clear-button.svelte-1ozsnjl,.text-class_or_confidence-container.svelte-1ozsnjl:focus-within .label-clear-button.svelte-1ozsnjl,.score-text-container.svelte-1ozsnjl:hover .label-clear-button.svelte-1ozsnjl,.score-text-container.svelte-1ozsnjl:focus-within .label-clear-button.svelte-1ozsnjl{display:inline}.text-class_or_confidence-container.svelte-1ozsnjl:hover .textspan.hl.svelte-1ozsnjl,.text-class_or_confidence-container.svelte-1ozsnjl:focus-within .textspan.hl.svelte-1ozsnjl,.score-text.svelte-1ozsnjl.svelte-1ozsnjl:hover{border-radius:var(--radius-xs) 0 0 var(--radius-xs)}.container.svelte-1ozsnjl.svelte-1ozsnjl{display:flex;flex-direction:column;gap:var(--spacing-sm);padding:var(--block-padding)}.hl.svelte-1ozsnjl.svelte-1ozsnjl{margin-left:var(--size-1);transition:background-color .3s;user-select:none}.textspan.svelte-1ozsnjl:last-child>.label.svelte-1ozsnjl{margin-right:0}.class_or_confidence-legend.svelte-1ozsnjl.svelte-1ozsnjl{display:flex;flex-wrap:wrap;gap:var(--spacing-sm);color:#000}.class_or_confidence-label.svelte-1ozsnjl.svelte-1ozsnjl{cursor:pointer;border-radius:var(--radius-xs);padding-right:var(--size-2);padding-left:var(--size-2);font-weight:var(--weight-semibold)}.color-legend.svelte-1ozsnjl.svelte-1ozsnjl{display:flex;justify-content:space-between;border-radius:var(--radius-xs);background:linear-gradient(to right,var(--color-purple),rgba(255,255,255,0),var(--color-red));padding:var(--size-1) var(--size-2);font-weight:var(--weight-semibold)}.textfield.svelte-1ozsnjl.svelte-1ozsnjl{box-sizing:border-box;border-radius:var(--radius-xs);background:var(--background-fill-primary);background-color:transparent;max-width:var(--size-full);line-height:var(--scale-4);word-break:break-all}.textspan.svelte-1ozsnjl.svelte-1ozsnjl{transition:.15s;border-radius:var(--radius-xs);padding-top:2.5px;padding-right:var(--size-1);padding-bottom:3.5px;padding-left:var(--size-1);color:#000;cursor:text}.label.svelte-1ozsnjl.svelte-1ozsnjl{transition:.15s;margin-top:1px;border-radius:var(--radius-xs);padding:1px 5px;color:var(--body-text-color);color:#fff;font-weight:var(--weight-bold);font-size:var(--text-sm);text-transform:uppercase;user-select:none}.text.svelte-1ozsnjl.svelte-1ozsnjl{color:#000;white-space:pre-wrap}.textspan.hl.svelte-1ozsnjl.svelte-1ozsnjl{user-select:none}.score-text-container.svelte-1ozsnjl.svelte-1ozsnjl{margin-right:var(--size-1)}.score-text.svelte-1ozsnjl 
.text.svelte-1ozsnjl,.no-cat.svelte-1ozsnjl.svelte-1ozsnjl{color:var(--body-text-color)}.no-label.svelte-1ozsnjl.svelte-1ozsnjl{color:var(--body-text-color);user-select:text}.selectable.svelte-1ozsnjl.svelte-1ozsnjl{cursor:text;user-select:text} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/index.html b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/index.html deleted file mode 100644 index 5508c038af15b495b49a2f81914e7a49d32223fa..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/index.html +++ /dev/null @@ -1,87 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tests/test_backend_tools.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tests/test_backend_tools.py deleted file mode 100644 index cc05a1a98f78f6b7267cf3d861013b3cf4416624..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tests/test_backend_tools.py +++ /dev/null @@ -1,20 +0,0 @@ -import pytest - -from matplotlib.backend_tools import ToolHelpBase - - -@pytest.mark.parametrize('rc_shortcut,expected', [ - ('home', 'Home'), - ('backspace', 'Backspace'), - ('f1', 'F1'), - ('ctrl+a', 'Ctrl+A'), - ('ctrl+A', 'Ctrl+Shift+A'), - ('a', 'a'), - ('A', 'A'), - ('ctrl+shift+f1', 'Ctrl+Shift+F1'), - ('1', '1'), - ('cmd+p', 'Cmd+P'), - ('cmd+1', 'Cmd+1'), -]) -def test_format_shortcut(rc_shortcut, expected): - assert ToolHelpBase.format_shortcut(rc_shortcut) == expected diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/mpl_toolkits/axisartist/axisline_style.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/mpl_toolkits/axisartist/axisline_style.py deleted file mode 100644 index 5ae188021bb8989bc0a51ebab8bb032e193a859d..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/mpl_toolkits/axisartist/axisline_style.py +++ /dev/null @@ -1,193 +0,0 @@ -""" -Provides classes to style the axis lines. -""" -import math - -import numpy as np - -import matplotlib as mpl -from matplotlib.patches import _Style, FancyArrowPatch -from matplotlib.path import Path -from matplotlib.transforms import IdentityTransform - - -class _FancyAxislineStyle: - class SimpleArrow(FancyArrowPatch): - """The artist class that will be returned for SimpleArrow style.""" - _ARROW_STYLE = "->" - - def __init__(self, axis_artist, line_path, transform, - line_mutation_scale): - self._axis_artist = axis_artist - self._line_transform = transform - self._line_path = line_path - self._line_mutation_scale = line_mutation_scale - - FancyArrowPatch.__init__(self, - path=self._line_path, - arrowstyle=self._ARROW_STYLE, - patchA=None, - patchB=None, - shrinkA=0., - shrinkB=0., - mutation_scale=line_mutation_scale, - mutation_aspect=None, - transform=IdentityTransform(), - ) - - def set_line_mutation_scale(self, scale): - self.set_mutation_scale(scale*self._line_mutation_scale) - - def _extend_path(self, path, mutation_size=10): - """ - Extend the path to make a room for drawing arrow. 
- """ - (x0, y0), (x1, y1) = path.vertices[-2:] - theta = math.atan2(y1 - y0, x1 - x0) - x2 = x1 + math.cos(theta) * mutation_size - y2 = y1 + math.sin(theta) * mutation_size - if path.codes is None: - return Path(np.concatenate([path.vertices, [[x2, y2]]])) - else: - return Path(np.concatenate([path.vertices, [[x2, y2]]]), - np.concatenate([path.codes, [Path.LINETO]])) - - def set_path(self, path): - self._line_path = path - - def draw(self, renderer): - """ - Draw the axis line. - 1) Transform the path to the display coordinate. - 2) Extend the path to make a room for arrow. - 3) Update the path of the FancyArrowPatch. - 4) Draw. - """ - path_in_disp = self._line_transform.transform_path(self._line_path) - mutation_size = self.get_mutation_scale() # line_mutation_scale() - extended_path = self._extend_path(path_in_disp, - mutation_size=mutation_size) - self._path_original = extended_path - FancyArrowPatch.draw(self, renderer) - - def get_window_extent(self, renderer=None): - - path_in_disp = self._line_transform.transform_path(self._line_path) - mutation_size = self.get_mutation_scale() # line_mutation_scale() - extended_path = self._extend_path(path_in_disp, - mutation_size=mutation_size) - self._path_original = extended_path - return FancyArrowPatch.get_window_extent(self, renderer) - - class FilledArrow(SimpleArrow): - """The artist class that will be returned for FilledArrow style.""" - _ARROW_STYLE = "-|>" - - def __init__(self, axis_artist, line_path, transform, - line_mutation_scale, facecolor): - super().__init__(axis_artist, line_path, transform, - line_mutation_scale) - self.set_facecolor(facecolor) - - -class AxislineStyle(_Style): - """ - A container class which defines style classes for AxisArtists. - - An instance of any axisline style class is a callable object, - whose call signature is :: - - __call__(self, axis_artist, path, transform) - - When called, this should return an `.Artist` with the following methods:: - - def set_path(self, path): - # set the path for axisline. - - def set_line_mutation_scale(self, scale): - # set the scale - - def draw(self, renderer): - # draw - """ - - _style_list = {} - - class _Base: - # The derived classes are required to be able to be initialized - # w/o arguments, i.e., all its argument (except self) must have - # the default values. - - def __init__(self): - """ - initialization. - """ - super().__init__() - - def __call__(self, axis_artist, transform): - """ - Given the AxisArtist instance, and transform for the path (set_path - method), return the Matplotlib artist for drawing the axis line. - """ - return self.new_line(axis_artist, transform) - - class SimpleArrow(_Base): - """ - A simple arrow. - """ - - ArrowAxisClass = _FancyAxislineStyle.SimpleArrow - - def __init__(self, size=1): - """ - Parameters - ---------- - size : float - Size of the arrow as a fraction of the ticklabel size. - """ - - self.size = size - super().__init__() - - def new_line(self, axis_artist, transform): - - linepath = Path([(0, 0), (0, 1)]) - axisline = self.ArrowAxisClass(axis_artist, linepath, transform, - line_mutation_scale=self.size) - return axisline - - _style_list["->"] = SimpleArrow - - class FilledArrow(SimpleArrow): - """ - An arrow with a filled head. - """ - - ArrowAxisClass = _FancyAxislineStyle.FilledArrow - - def __init__(self, size=1, facecolor=None): - """ - Parameters - ---------- - size : float - Size of the arrow as a fraction of the ticklabel size. - facecolor : color, default: :rc:`axes.edgecolor` - Fill color. - - .. 
versionadded:: 3.7 - """ - - if facecolor is None: - facecolor = mpl.rcParams['axes.edgecolor'] - self.size = size - self._facecolor = facecolor - super().__init__(size=size) - - def new_line(self, axis_artist, transform): - linepath = Path([(0, 0), (0, 1)]) - axisline = self.ArrowAxisClass(axis_artist, linepath, transform, - line_mutation_scale=self.size, - facecolor=self._facecolor) - return axisline - - _style_list["-|>"] = FilledArrow diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/config_init.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/config_init.py deleted file mode 100644 index 765b24fbc78683847d0027728fcaef62e422da18..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/config_init.py +++ /dev/null @@ -1,903 +0,0 @@ -""" -This module is imported from the pandas package __init__.py file -in order to ensure that the core.config options registered here will -be available as soon as the user loads the package. if register_option -is invoked inside specific modules, they will not be registered until that -module is imported, which may or may not be a problem. - -If you need to make sure options are available even before a certain -module is imported, register them here rather than in the module. - -""" -from __future__ import annotations - -import os -from typing import Callable - -import pandas._config.config as cf -from pandas._config.config import ( - is_bool, - is_callable, - is_instance_factory, - is_int, - is_nonnegative_int, - is_one_of_factory, - is_str, - is_text, -) - -# compute - -use_bottleneck_doc = """ -: bool - Use the bottleneck library to accelerate if it is installed, - the default is True - Valid values: False,True -""" - - -def use_bottleneck_cb(key) -> None: - from pandas.core import nanops - - nanops.set_use_bottleneck(cf.get_option(key)) - - -use_numexpr_doc = """ -: bool - Use the numexpr library to accelerate computation if it is installed, - the default is True - Valid values: False,True -""" - - -def use_numexpr_cb(key) -> None: - from pandas.core.computation import expressions - - expressions.set_use_numexpr(cf.get_option(key)) - - -use_numba_doc = """ -: bool - Use the numba engine option for select operations if it is installed, - the default is False - Valid values: False,True -""" - - -def use_numba_cb(key) -> None: - from pandas.core.util import numba_ - - numba_.set_use_numba(cf.get_option(key)) - - -with cf.config_prefix("compute"): - cf.register_option( - "use_bottleneck", - True, - use_bottleneck_doc, - validator=is_bool, - cb=use_bottleneck_cb, - ) - cf.register_option( - "use_numexpr", True, use_numexpr_doc, validator=is_bool, cb=use_numexpr_cb - ) - cf.register_option( - "use_numba", False, use_numba_doc, validator=is_bool, cb=use_numba_cb - ) -# -# options from the "display" namespace - -pc_precision_doc = """ -: int - Floating point output precision in terms of number of places after the - decimal, for regular formatting as well as scientific notation. Similar - to ``precision`` in :meth:`numpy.set_printoptions`. -""" - -pc_colspace_doc = """ -: int - Default space for DataFrame columns. -""" - -pc_max_rows_doc = """ -: int - If max_rows is exceeded, switch to truncate view. Depending on - `large_repr`, objects are either centrally truncated or printed as - a summary view. 'None' value means unlimited. 
- - In case python/IPython is running in a terminal and `large_repr` - equals 'truncate' this can be set to 0 and pandas will auto-detect - the height of the terminal and print a truncated object which fits - the screen height. The IPython notebook, IPython qtconsole, or - IDLE do not run in a terminal and hence it is not possible to do - correct auto-detection. -""" - -pc_min_rows_doc = """ -: int - The numbers of rows to show in a truncated view (when `max_rows` is - exceeded). Ignored when `max_rows` is set to None or 0. When set to - None, follows the value of `max_rows`. -""" - -pc_max_cols_doc = """ -: int - If max_cols is exceeded, switch to truncate view. Depending on - `large_repr`, objects are either centrally truncated or printed as - a summary view. 'None' value means unlimited. - - In case python/IPython is running in a terminal and `large_repr` - equals 'truncate' this can be set to 0 or None and pandas will auto-detect - the width of the terminal and print a truncated object which fits - the screen width. The IPython notebook, IPython qtconsole, or IDLE - do not run in a terminal and hence it is not possible to do - correct auto-detection and defaults to 20. -""" - -pc_max_categories_doc = """ -: int - This sets the maximum number of categories pandas should output when - printing out a `Categorical` or a Series of dtype "category". -""" - -pc_max_info_cols_doc = """ -: int - max_info_columns is used in DataFrame.info method to decide if - per column information will be printed. -""" - -pc_nb_repr_h_doc = """ -: boolean - When True, IPython notebook will use html representation for - pandas objects (if it is available). -""" - -pc_pprint_nest_depth = """ -: int - Controls the number of nested levels to process when pretty-printing -""" - -pc_multi_sparse_doc = """ -: boolean - "sparsify" MultiIndex display (don't display repeated - elements in outer levels within groups) -""" - -float_format_doc = """ -: callable - The callable should accept a floating point number and return - a string with the desired format of the number. This is used - in some places like SeriesFormatter. - See formats.format.EngFormatter for an example. -""" - -max_colwidth_doc = """ -: int or None - The maximum width in characters of a column in the repr of - a pandas data structure. When the column overflows, a "..." - placeholder is embedded in the output. A 'None' value means unlimited. -""" - -colheader_justify_doc = """ -: 'left'/'right' - Controls the justification of column headers. used by DataFrameFormatter. -""" - -pc_expand_repr_doc = """ -: boolean - Whether to print out the full DataFrame repr for wide DataFrames across - multiple lines, `max_columns` is still respected, but the output will - wrap-around across multiple "pages" if its width exceeds `display.width`. -""" - -pc_show_dimensions_doc = """ -: boolean or 'truncate' - Whether to print out dimensions at the end of DataFrame repr. - If 'truncate' is specified, only print out the dimensions if the - frame is truncated (e.g. not display all rows and/or columns) -""" - -pc_east_asian_width_doc = """ -: boolean - Whether to use the Unicode East Asian Width to calculate the display text - width. - Enabling this may affect to the performance (default: False) -""" - -pc_ambiguous_as_wide_doc = """ -: boolean - Whether to handle Unicode characters belong to Ambiguous as Wide (width=2) - (default: False) -""" - -pc_table_schema_doc = """ -: boolean - Whether to publish a Table Schema representation for frontends - that support it. 
- (default: False) -""" - -pc_html_border_doc = """ -: int - A ``border=value`` attribute is inserted in the ```` tag - for the DataFrame HTML repr. -""" - -pc_html_use_mathjax_doc = """\ -: boolean - When True, Jupyter notebook will process table contents using MathJax, - rendering mathematical expressions enclosed by the dollar symbol. - (default: True) -""" - -pc_max_dir_items = """\ -: int - The number of items that will be added to `dir(...)`. 'None' value means - unlimited. Because dir is cached, changing this option will not immediately - affect already existing dataframes until a column is deleted or added. - - This is for instance used to suggest columns from a dataframe to tab - completion. -""" - -pc_width_doc = """ -: int - Width of the display in characters. In case python/IPython is running in - a terminal this can be set to None and pandas will correctly auto-detect - the width. - Note that the IPython notebook, IPython qtconsole, or IDLE do not run in a - terminal and hence it is not possible to correctly detect the width. -""" - -pc_chop_threshold_doc = """ -: float or None - if set to a float value, all float values smaller than the given threshold - will be displayed as exactly 0 by repr and friends. -""" - -pc_max_seq_items = """ -: int or None - When pretty-printing a long sequence, no more then `max_seq_items` - will be printed. If items are omitted, they will be denoted by the - addition of "..." to the resulting string. - - If set to None, the number of items to be printed is unlimited. -""" - -pc_max_info_rows_doc = """ -: int or None - df.info() will usually show null-counts for each column. - For large frames this can be quite slow. max_info_rows and max_info_cols - limit this null check only to frames with smaller dimensions than - specified. -""" - -pc_large_repr_doc = """ -: 'truncate'/'info' - For DataFrames exceeding max_rows/max_cols, the repr (and HTML repr) can - show a truncated table, or switch to the view from - df.info() (the behaviour in earlier versions of pandas). -""" - -pc_memory_usage_doc = """ -: bool, string or None - This specifies if the memory usage of a DataFrame should be displayed when - df.info() is called. Valid values True,False,'deep' -""" - - -def table_schema_cb(key) -> None: - from pandas.io.formats.printing import enable_data_resource_formatter - - enable_data_resource_formatter(cf.get_option(key)) - - -def is_terminal() -> bool: - """ - Detect if Python is running in a terminal. - - Returns True if Python is running in a terminal or False if not. 
- """ - try: - # error: Name 'get_ipython' is not defined - ip = get_ipython() # type: ignore[name-defined] - except NameError: # assume standard Python interpreter in a terminal - return True - else: - if hasattr(ip, "kernel"): # IPython as a Jupyter kernel - return False - else: # IPython in a terminal - return True - - -with cf.config_prefix("display"): - cf.register_option("precision", 6, pc_precision_doc, validator=is_nonnegative_int) - cf.register_option( - "float_format", - None, - float_format_doc, - validator=is_one_of_factory([None, is_callable]), - ) - cf.register_option( - "max_info_rows", - 1690785, - pc_max_info_rows_doc, - validator=is_instance_factory((int, type(None))), - ) - cf.register_option("max_rows", 60, pc_max_rows_doc, validator=is_nonnegative_int) - cf.register_option( - "min_rows", - 10, - pc_min_rows_doc, - validator=is_instance_factory([type(None), int]), - ) - cf.register_option("max_categories", 8, pc_max_categories_doc, validator=is_int) - - cf.register_option( - "max_colwidth", - 50, - max_colwidth_doc, - validator=is_nonnegative_int, - ) - if is_terminal(): - max_cols = 0 # automatically determine optimal number of columns - else: - max_cols = 20 # cannot determine optimal number of columns - cf.register_option( - "max_columns", max_cols, pc_max_cols_doc, validator=is_nonnegative_int - ) - cf.register_option( - "large_repr", - "truncate", - pc_large_repr_doc, - validator=is_one_of_factory(["truncate", "info"]), - ) - cf.register_option("max_info_columns", 100, pc_max_info_cols_doc, validator=is_int) - cf.register_option( - "colheader_justify", "right", colheader_justify_doc, validator=is_text - ) - cf.register_option("notebook_repr_html", True, pc_nb_repr_h_doc, validator=is_bool) - cf.register_option("pprint_nest_depth", 3, pc_pprint_nest_depth, validator=is_int) - cf.register_option("multi_sparse", True, pc_multi_sparse_doc, validator=is_bool) - cf.register_option("expand_frame_repr", True, pc_expand_repr_doc) - cf.register_option( - "show_dimensions", - "truncate", - pc_show_dimensions_doc, - validator=is_one_of_factory([True, False, "truncate"]), - ) - cf.register_option("chop_threshold", None, pc_chop_threshold_doc) - cf.register_option("max_seq_items", 100, pc_max_seq_items) - cf.register_option( - "width", 80, pc_width_doc, validator=is_instance_factory([type(None), int]) - ) - cf.register_option( - "memory_usage", - True, - pc_memory_usage_doc, - validator=is_one_of_factory([None, True, False, "deep"]), - ) - cf.register_option( - "unicode.east_asian_width", False, pc_east_asian_width_doc, validator=is_bool - ) - cf.register_option( - "unicode.ambiguous_as_wide", False, pc_east_asian_width_doc, validator=is_bool - ) - cf.register_option( - "html.table_schema", - False, - pc_table_schema_doc, - validator=is_bool, - cb=table_schema_cb, - ) - cf.register_option("html.border", 1, pc_html_border_doc, validator=is_int) - cf.register_option( - "html.use_mathjax", True, pc_html_use_mathjax_doc, validator=is_bool - ) - cf.register_option( - "max_dir_items", 100, pc_max_dir_items, validator=is_nonnegative_int - ) - -tc_sim_interactive_doc = """ -: boolean - Whether to simulate interactive mode for purposes of testing -""" - -with cf.config_prefix("mode"): - cf.register_option("sim_interactive", False, tc_sim_interactive_doc) - -use_inf_as_na_doc = """ -: boolean - True means treat None, NaN, INF, -INF as NA (old way), - False means None and NaN are null, but INF, -INF are not NA - (new way). 
- - This option is deprecated in pandas 2.1.0 and will be removed in 3.0. -""" - -# We don't want to start importing everything at the global context level -# or we'll hit circular deps. - - -def use_inf_as_na_cb(key) -> None: - from pandas.core.dtypes.missing import _use_inf_as_na - - _use_inf_as_na(key) - - -with cf.config_prefix("mode"): - cf.register_option("use_inf_as_na", False, use_inf_as_na_doc, cb=use_inf_as_na_cb) - -cf.deprecate_option( - # GH#51684 - "mode.use_inf_as_na", - "use_inf_as_na option is deprecated and will be removed in a future " - "version. Convert inf values to NaN before operating instead.", -) - -data_manager_doc = """ -: string - Internal data manager type; can be "block" or "array". Defaults to "block", - unless overridden by the 'PANDAS_DATA_MANAGER' environment variable (needs - to be set before pandas is imported). -""" - - -with cf.config_prefix("mode"): - cf.register_option( - "data_manager", - # Get the default from an environment variable, if set, otherwise defaults - # to "block". This environment variable can be set for testing. - os.environ.get("PANDAS_DATA_MANAGER", "block"), - data_manager_doc, - validator=is_one_of_factory(["block", "array"]), - ) - - -# TODO better name? -copy_on_write_doc = """ -: bool - Use new copy-view behaviour using Copy-on-Write. Defaults to False, - unless overridden by the 'PANDAS_COPY_ON_WRITE' environment variable - (if set to "1" for True, needs to be set before pandas is imported). -""" - - -with cf.config_prefix("mode"): - cf.register_option( - "copy_on_write", - # Get the default from an environment variable, if set, otherwise defaults - # to False. This environment variable can be set for testing. - os.environ.get("PANDAS_COPY_ON_WRITE", "0") == "1", - copy_on_write_doc, - validator=is_bool, - ) - - -# user warnings -chained_assignment = """ -: string - Raise an exception, warn, or no action if trying to use chained assignment, - The default is warn -""" - -with cf.config_prefix("mode"): - cf.register_option( - "chained_assignment", - "warn", - chained_assignment, - validator=is_one_of_factory([None, "warn", "raise"]), - ) - - -string_storage_doc = """ -: string - The default storage for StringDtype. This option is ignored if - ``future.infer_string`` is set to True. -""" - -with cf.config_prefix("mode"): - cf.register_option( - "string_storage", - "python", - string_storage_doc, - validator=is_one_of_factory(["python", "pyarrow", "pyarrow_numpy"]), - ) - - -# Set up the io.excel specific reader configuration. -reader_engine_doc = """ -: string - The default Excel reader engine for '{ext}' files. Available options: - auto, {others}. 
-""" - -_xls_options = ["xlrd"] -_xlsm_options = ["xlrd", "openpyxl"] -_xlsx_options = ["xlrd", "openpyxl"] -_ods_options = ["odf"] -_xlsb_options = ["pyxlsb"] - - -with cf.config_prefix("io.excel.xls"): - cf.register_option( - "reader", - "auto", - reader_engine_doc.format(ext="xls", others=", ".join(_xls_options)), - validator=is_one_of_factory(_xls_options + ["auto"]), - ) - -with cf.config_prefix("io.excel.xlsm"): - cf.register_option( - "reader", - "auto", - reader_engine_doc.format(ext="xlsm", others=", ".join(_xlsm_options)), - validator=is_one_of_factory(_xlsm_options + ["auto"]), - ) - - -with cf.config_prefix("io.excel.xlsx"): - cf.register_option( - "reader", - "auto", - reader_engine_doc.format(ext="xlsx", others=", ".join(_xlsx_options)), - validator=is_one_of_factory(_xlsx_options + ["auto"]), - ) - - -with cf.config_prefix("io.excel.ods"): - cf.register_option( - "reader", - "auto", - reader_engine_doc.format(ext="ods", others=", ".join(_ods_options)), - validator=is_one_of_factory(_ods_options + ["auto"]), - ) - -with cf.config_prefix("io.excel.xlsb"): - cf.register_option( - "reader", - "auto", - reader_engine_doc.format(ext="xlsb", others=", ".join(_xlsb_options)), - validator=is_one_of_factory(_xlsb_options + ["auto"]), - ) - -# Set up the io.excel specific writer configuration. -writer_engine_doc = """ -: string - The default Excel writer engine for '{ext}' files. Available options: - auto, {others}. -""" - -_xlsm_options = ["openpyxl"] -_xlsx_options = ["openpyxl", "xlsxwriter"] -_ods_options = ["odf"] - - -with cf.config_prefix("io.excel.xlsm"): - cf.register_option( - "writer", - "auto", - writer_engine_doc.format(ext="xlsm", others=", ".join(_xlsm_options)), - validator=str, - ) - - -with cf.config_prefix("io.excel.xlsx"): - cf.register_option( - "writer", - "auto", - writer_engine_doc.format(ext="xlsx", others=", ".join(_xlsx_options)), - validator=str, - ) - - -with cf.config_prefix("io.excel.ods"): - cf.register_option( - "writer", - "auto", - writer_engine_doc.format(ext="ods", others=", ".join(_ods_options)), - validator=str, - ) - - -# Set up the io.parquet specific configuration. -parquet_engine_doc = """ -: string - The default parquet reader/writer engine. Available options: - 'auto', 'pyarrow', 'fastparquet', the default is 'auto' -""" - -with cf.config_prefix("io.parquet"): - cf.register_option( - "engine", - "auto", - parquet_engine_doc, - validator=is_one_of_factory(["auto", "pyarrow", "fastparquet"]), - ) - - -# Set up the io.sql specific configuration. -sql_engine_doc = """ -: string - The default sql reader/writer engine. Available options: - 'auto', 'sqlalchemy', the default is 'auto' -""" - -with cf.config_prefix("io.sql"): - cf.register_option( - "engine", - "auto", - sql_engine_doc, - validator=is_one_of_factory(["auto", "sqlalchemy"]), - ) - -# -------- -# Plotting -# --------- - -plotting_backend_doc = """ -: str - The plotting backend to use. The default value is "matplotlib", the - backend provided with pandas. Other backends can be specified by - providing the name of the module that implements the backend. 
-""" - - -def register_plotting_backend_cb(key) -> None: - if key == "matplotlib": - # We defer matplotlib validation, since it's the default - return - from pandas.plotting._core import _get_plot_backend - - _get_plot_backend(key) - - -with cf.config_prefix("plotting"): - cf.register_option( - "backend", - defval="matplotlib", - doc=plotting_backend_doc, - validator=register_plotting_backend_cb, - ) - - -register_converter_doc = """ -: bool or 'auto'. - Whether to register converters with matplotlib's units registry for - dates, times, datetimes, and Periods. Toggling to False will remove - the converters, restoring any converters that pandas overwrote. -""" - - -def register_converter_cb(key) -> None: - from pandas.plotting import ( - deregister_matplotlib_converters, - register_matplotlib_converters, - ) - - if cf.get_option(key): - register_matplotlib_converters() - else: - deregister_matplotlib_converters() - - -with cf.config_prefix("plotting.matplotlib"): - cf.register_option( - "register_converters", - "auto", - register_converter_doc, - validator=is_one_of_factory(["auto", True, False]), - cb=register_converter_cb, - ) - -# ------ -# Styler -# ------ - -styler_sparse_index_doc = """ -: bool - Whether to sparsify the display of a hierarchical index. Setting to False will - display each explicit level element in a hierarchical key for each row. -""" - -styler_sparse_columns_doc = """ -: bool - Whether to sparsify the display of hierarchical columns. Setting to False will - display each explicit level element in a hierarchical key for each column. -""" - -styler_render_repr = """ -: str - Determine which output to use in Jupyter Notebook in {"html", "latex"}. -""" - -styler_max_elements = """ -: int - The maximum number of data-cell (
) elements that will be rendered before - trimming will occur over columns, rows or both if needed. -""" - -styler_max_rows = """ -: int, optional - The maximum number of rows that will be rendered. May still be reduced to - satisfy ``max_elements``, which takes precedence. -""" - -styler_max_columns = """ -: int, optional - The maximum number of columns that will be rendered. May still be reduced to - satisfy ``max_elements``, which takes precedence. -""" - -styler_precision = """ -: int - The precision for floats and complex numbers. -""" - -styler_decimal = """ -: str - The character representation for the decimal separator for floats and complex. -""" - -styler_thousands = """ -: str, optional - The character representation for thousands separator for floats, int and complex. -""" - -styler_na_rep = """ -: str, optional - The string representation for values identified as missing. -""" - -styler_escape = """ -: str, optional - Whether to escape certain characters according to the given context; html or latex. -""" - -styler_formatter = """ -: str, callable, dict, optional - A formatter object to be used as default within ``Styler.format``. -""" - -styler_multirow_align = """ -: {"c", "t", "b"} - The specifier for vertical alignment of sparsified LaTeX multirows. -""" - -styler_multicol_align = r""" -: {"r", "c", "l", "naive-l", "naive-r"} - The specifier for horizontal alignment of sparsified LaTeX multicolumns. Pipe - decorators can also be added to non-naive values to draw vertical - rules, e.g. "\|r" will draw a rule on the left side of right aligned merged cells. -""" - -styler_hrules = """ -: bool - Whether to add horizontal rules on top and bottom and below the headers. -""" - -styler_environment = """ -: str - The environment to replace ``\\begin{table}``. If "longtable" is used results - in a specific longtable environment format. -""" - -styler_encoding = """ -: str - The encoding used for output HTML and LaTeX files. -""" - -styler_mathjax = """ -: bool - If False will render special CSS classes to table attributes that indicate Mathjax - will not be used in Jupyter Notebook. 
-""" - -with cf.config_prefix("styler"): - cf.register_option("sparse.index", True, styler_sparse_index_doc, validator=is_bool) - - cf.register_option( - "sparse.columns", True, styler_sparse_columns_doc, validator=is_bool - ) - - cf.register_option( - "render.repr", - "html", - styler_render_repr, - validator=is_one_of_factory(["html", "latex"]), - ) - - cf.register_option( - "render.max_elements", - 2**18, - styler_max_elements, - validator=is_nonnegative_int, - ) - - cf.register_option( - "render.max_rows", - None, - styler_max_rows, - validator=is_nonnegative_int, - ) - - cf.register_option( - "render.max_columns", - None, - styler_max_columns, - validator=is_nonnegative_int, - ) - - cf.register_option("render.encoding", "utf-8", styler_encoding, validator=is_str) - - cf.register_option("format.decimal", ".", styler_decimal, validator=is_str) - - cf.register_option( - "format.precision", 6, styler_precision, validator=is_nonnegative_int - ) - - cf.register_option( - "format.thousands", - None, - styler_thousands, - validator=is_instance_factory([type(None), str]), - ) - - cf.register_option( - "format.na_rep", - None, - styler_na_rep, - validator=is_instance_factory([type(None), str]), - ) - - cf.register_option( - "format.escape", - None, - styler_escape, - validator=is_one_of_factory([None, "html", "latex", "latex-math"]), - ) - - cf.register_option( - "format.formatter", - None, - styler_formatter, - validator=is_instance_factory([type(None), dict, Callable, str]), - ) - - cf.register_option("html.mathjax", True, styler_mathjax, validator=is_bool) - - cf.register_option( - "latex.multirow_align", - "c", - styler_multirow_align, - validator=is_one_of_factory(["c", "t", "b", "naive"]), - ) - - val_mca = ["r", "|r|", "|r", "r|", "c", "|c|", "|c", "c|", "l", "|l|", "|l", "l|"] - val_mca += ["naive-l", "naive-r"] - cf.register_option( - "latex.multicol_align", - "r", - styler_multicol_align, - validator=is_one_of_factory(val_mca), - ) - - cf.register_option("latex.hrules", False, styler_hrules, validator=is_bool) - - cf.register_option( - "latex.environment", - None, - styler_environment, - validator=is_instance_factory([type(None), str]), - ) - - -with cf.config_prefix("future"): - cf.register_option( - "infer_string", - False, - "Whether to infer sequence of str objects as pyarrow string " - "dtype, which will be the default in pandas 3.0 " - "(at which point this option will be deprecated).", - validator=is_one_of_factory([True, False]), - ) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/window/numba_.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/window/numba_.py deleted file mode 100644 index 9357945e78c631a3fada24ec3015ca0cf183b99c..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/window/numba_.py +++ /dev/null @@ -1,351 +0,0 @@ -from __future__ import annotations - -import functools -from typing import ( - TYPE_CHECKING, - Any, - Callable, -) - -import numpy as np - -from pandas.compat._optional import import_optional_dependency - -from pandas.core.util.numba_ import jit_user_function - -if TYPE_CHECKING: - from pandas._typing import Scalar - - -@functools.cache -def generate_numba_apply_func( - func: Callable[..., Scalar], - nopython: bool, - nogil: bool, - parallel: bool, -): - """ - Generate a numba jitted apply function specified by values from engine_kwargs. - - 1. jit the user's function - 2. 
Return a rolling apply function with the jitted function inline - - Configurations specified in engine_kwargs apply to both the user's - function _AND_ the rolling apply function. - - Parameters - ---------- - func : function - function to be applied to each window and will be JITed - nopython : bool - nopython to be passed into numba.jit - nogil : bool - nogil to be passed into numba.jit - parallel : bool - parallel to be passed into numba.jit - - Returns - ------- - Numba function - """ - numba_func = jit_user_function(func) - if TYPE_CHECKING: - import numba - else: - numba = import_optional_dependency("numba") - - @numba.jit(nopython=nopython, nogil=nogil, parallel=parallel) - def roll_apply( - values: np.ndarray, - begin: np.ndarray, - end: np.ndarray, - minimum_periods: int, - *args: Any, - ) -> np.ndarray: - result = np.empty(len(begin)) - for i in numba.prange(len(result)): - start = begin[i] - stop = end[i] - window = values[start:stop] - count_nan = np.sum(np.isnan(window)) - if len(window) - count_nan >= minimum_periods: - result[i] = numba_func(window, *args) - else: - result[i] = np.nan - return result - - return roll_apply - - -@functools.cache -def generate_numba_ewm_func( - nopython: bool, - nogil: bool, - parallel: bool, - com: float, - adjust: bool, - ignore_na: bool, - deltas: tuple, - normalize: bool, -): - """ - Generate a numba jitted ewm mean or sum function specified by values - from engine_kwargs. - - Parameters - ---------- - nopython : bool - nopython to be passed into numba.jit - nogil : bool - nogil to be passed into numba.jit - parallel : bool - parallel to be passed into numba.jit - com : float - adjust : bool - ignore_na : bool - deltas : tuple - normalize : bool - - Returns - ------- - Numba function - """ - if TYPE_CHECKING: - import numba - else: - numba = import_optional_dependency("numba") - - @numba.jit(nopython=nopython, nogil=nogil, parallel=parallel) - def ewm( - values: np.ndarray, - begin: np.ndarray, - end: np.ndarray, - minimum_periods: int, - ) -> np.ndarray: - result = np.empty(len(values)) - alpha = 1.0 / (1.0 + com) - old_wt_factor = 1.0 - alpha - new_wt = 1.0 if adjust else alpha - - for i in numba.prange(len(begin)): - start = begin[i] - stop = end[i] - window = values[start:stop] - sub_result = np.empty(len(window)) - - weighted = window[0] - nobs = int(not np.isnan(weighted)) - sub_result[0] = weighted if nobs >= minimum_periods else np.nan - old_wt = 1.0 - - for j in range(1, len(window)): - cur = window[j] - is_observation = not np.isnan(cur) - nobs += is_observation - if not np.isnan(weighted): - if is_observation or not ignore_na: - if normalize: - # note that len(deltas) = len(vals) - 1 and deltas[i] - # is to be used in conjunction with vals[i+1] - old_wt *= old_wt_factor ** deltas[start + j - 1] - else: - weighted = old_wt_factor * weighted - if is_observation: - if normalize: - # avoid numerical errors on constant series - if weighted != cur: - weighted = old_wt * weighted + new_wt * cur - if normalize: - weighted = weighted / (old_wt + new_wt) - if adjust: - old_wt += new_wt - else: - old_wt = 1.0 - else: - weighted += cur - elif is_observation: - weighted = cur - - sub_result[j] = weighted if nobs >= minimum_periods else np.nan - - result[start:stop] = sub_result - - return result - - return ewm - - -@functools.cache -def generate_numba_table_func( - func: Callable[..., np.ndarray], - nopython: bool, - nogil: bool, - parallel: bool, -): - """ - Generate a numba jitted function to apply window calculations table-wise. 
- - Func will be passed a M window size x N number of columns array, and - must return a 1 x N number of columns array. Func is intended to operate - row-wise, but the result will be transposed for axis=1. - - 1. jit the user's function - 2. Return a rolling apply function with the jitted function inline - - Parameters - ---------- - func : function - function to be applied to each window and will be JITed - nopython : bool - nopython to be passed into numba.jit - nogil : bool - nogil to be passed into numba.jit - parallel : bool - parallel to be passed into numba.jit - - Returns - ------- - Numba function - """ - numba_func = jit_user_function(func) - if TYPE_CHECKING: - import numba - else: - numba = import_optional_dependency("numba") - - @numba.jit(nopython=nopython, nogil=nogil, parallel=parallel) - def roll_table( - values: np.ndarray, - begin: np.ndarray, - end: np.ndarray, - minimum_periods: int, - *args: Any, - ): - result = np.empty((len(begin), values.shape[1])) - min_periods_mask = np.empty(result.shape) - for i in numba.prange(len(result)): - start = begin[i] - stop = end[i] - window = values[start:stop] - count_nan = np.sum(np.isnan(window), axis=0) - sub_result = numba_func(window, *args) - nan_mask = len(window) - count_nan >= minimum_periods - min_periods_mask[i, :] = nan_mask - result[i, :] = sub_result - result = np.where(min_periods_mask, result, np.nan) - return result - - return roll_table - - -# This function will no longer be needed once numba supports -# axis for all np.nan* agg functions -# https://github.com/numba/numba/issues/1269 -@functools.cache -def generate_manual_numpy_nan_agg_with_axis(nan_func): - if TYPE_CHECKING: - import numba - else: - numba = import_optional_dependency("numba") - - @numba.jit(nopython=True, nogil=True, parallel=True) - def nan_agg_with_axis(table): - result = np.empty(table.shape[1]) - for i in numba.prange(table.shape[1]): - partition = table[:, i] - result[i] = nan_func(partition) - return result - - return nan_agg_with_axis - - -@functools.cache -def generate_numba_ewm_table_func( - nopython: bool, - nogil: bool, - parallel: bool, - com: float, - adjust: bool, - ignore_na: bool, - deltas: tuple, - normalize: bool, -): - """ - Generate a numba jitted ewm mean or sum function applied table wise specified - by values from engine_kwargs. 
- - Parameters - ---------- - nopython : bool - nopython to be passed into numba.jit - nogil : bool - nogil to be passed into numba.jit - parallel : bool - parallel to be passed into numba.jit - com : float - adjust : bool - ignore_na : bool - deltas : tuple - normalize: bool - - Returns - ------- - Numba function - """ - if TYPE_CHECKING: - import numba - else: - numba = import_optional_dependency("numba") - - @numba.jit(nopython=nopython, nogil=nogil, parallel=parallel) - def ewm_table( - values: np.ndarray, - begin: np.ndarray, - end: np.ndarray, - minimum_periods: int, - ) -> np.ndarray: - alpha = 1.0 / (1.0 + com) - old_wt_factor = 1.0 - alpha - new_wt = 1.0 if adjust else alpha - old_wt = np.ones(values.shape[1]) - - result = np.empty(values.shape) - weighted = values[0].copy() - nobs = (~np.isnan(weighted)).astype(np.int64) - result[0] = np.where(nobs >= minimum_periods, weighted, np.nan) - for i in range(1, len(values)): - cur = values[i] - is_observations = ~np.isnan(cur) - nobs += is_observations.astype(np.int64) - for j in numba.prange(len(cur)): - if not np.isnan(weighted[j]): - if is_observations[j] or not ignore_na: - if normalize: - # note that len(deltas) = len(vals) - 1 and deltas[i] - # is to be used in conjunction with vals[i+1] - old_wt[j] *= old_wt_factor ** deltas[i - 1] - else: - weighted[j] = old_wt_factor * weighted[j] - if is_observations[j]: - if normalize: - # avoid numerical errors on constant series - if weighted[j] != cur[j]: - weighted[j] = ( - old_wt[j] * weighted[j] + new_wt * cur[j] - ) - if normalize: - weighted[j] = weighted[j] / (old_wt[j] + new_wt) - if adjust: - old_wt[j] += new_wt - else: - old_wt[j] = 1.0 - else: - weighted[j] += cur[j] - elif is_observations[j]: - weighted[j] = cur[j] - - result[i] = np.where(nobs >= minimum_periods, weighted, np.nan) - - return result - - return ewm_table diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/methods/test_add_prefix_suffix.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/methods/test_add_prefix_suffix.py deleted file mode 100644 index 289a56b98b7e123fb1c1edf5cc9ff41369d51122..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/methods/test_add_prefix_suffix.py +++ /dev/null @@ -1,41 +0,0 @@ -import pytest - -from pandas import Index -import pandas._testing as tm - - -def test_add_prefix_suffix(string_series): - with_prefix = string_series.add_prefix("foo#") - expected = Index([f"foo#{c}" for c in string_series.index]) - tm.assert_index_equal(with_prefix.index, expected) - - with_suffix = string_series.add_suffix("#foo") - expected = Index([f"{c}#foo" for c in string_series.index]) - tm.assert_index_equal(with_suffix.index, expected) - - with_pct_prefix = string_series.add_prefix("%") - expected = Index([f"%{c}" for c in string_series.index]) - tm.assert_index_equal(with_pct_prefix.index, expected) - - with_pct_suffix = string_series.add_suffix("%") - expected = Index([f"{c}%" for c in string_series.index]) - tm.assert_index_equal(with_pct_suffix.index, expected) - - -def test_add_prefix_suffix_axis(string_series): - # GH 47819 - with_prefix = string_series.add_prefix("foo#", axis=0) - expected = Index([f"foo#{c}" for c in string_series.index]) - tm.assert_index_equal(with_prefix.index, expected) - - with_pct_suffix = string_series.add_suffix("#foo", axis=0) - expected = Index([f"{c}#foo" for c in string_series.index]) - 
tm.assert_index_equal(with_pct_suffix.index, expected) - - -def test_add_prefix_suffix_invalid_axis(string_series): - with pytest.raises(ValueError, match="No axis named 1 for object type Series"): - string_series.add_prefix("foo#", axis=1) - - with pytest.raises(ValueError, match="No axis named 1 for object type Series"): - string_series.add_suffix("foo#", axis=1) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/models/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/models/__init__.py deleted file mode 100644 index 7855226e4b500142deef8fb247cd33a9a991d122..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/models/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -"""A package that contains models that represent entities. -""" diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/self_outdated_check.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/self_outdated_check.py deleted file mode 100644 index 7300e0ea4c0d06ced25a6abdeab0769354167920..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/self_outdated_check.py +++ /dev/null @@ -1,189 +0,0 @@ -import datetime -import hashlib -import json -import logging -import optparse -import os.path -import sys -from typing import Any, Dict - -from pip._vendor.packaging.version import parse as parse_version - -from pip._internal.index.collector import LinkCollector -from pip._internal.index.package_finder import PackageFinder -from pip._internal.metadata import get_default_environment -from pip._internal.models.selection_prefs import SelectionPreferences -from pip._internal.network.session import PipSession -from pip._internal.utils.filesystem import adjacent_tmp_file, check_path_owner, replace -from pip._internal.utils.misc import ensure_dir - -SELFCHECK_DATE_FMT = "%Y-%m-%dT%H:%M:%SZ" - - -logger = logging.getLogger(__name__) - - -def _get_statefile_name(key: str) -> str: - key_bytes = key.encode() - name = hashlib.sha224(key_bytes).hexdigest() - return name - - -class SelfCheckState: - def __init__(self, cache_dir: str) -> None: - self.state: Dict[str, Any] = {} - self.statefile_path = None - - # Try to load the existing state - if cache_dir: - self.statefile_path = os.path.join( - cache_dir, "selfcheck", _get_statefile_name(self.key) - ) - try: - with open(self.statefile_path, encoding="utf-8") as statefile: - self.state = json.load(statefile) - except (OSError, ValueError, KeyError): - # Explicitly suppressing exceptions, since we don't want to - # error out if the cache file is invalid. - pass - - @property - def key(self) -> str: - return sys.prefix - - def save(self, pypi_version: str, current_time: datetime.datetime) -> None: - # If we do not have a path to cache in, don't bother saving. - if not self.statefile_path: - return - - # Check to make sure that we own the directory - if not check_path_owner(os.path.dirname(self.statefile_path)): - return - - # Now that we've ensured the directory is owned by this user, we'll go - # ahead and make sure that all our directories are created. - ensure_dir(os.path.dirname(self.statefile_path)) - - state = { - # Include the key so it's easy to tell which pip wrote the - # file. 
- "key": self.key, - "last_check": current_time.strftime(SELFCHECK_DATE_FMT), - "pypi_version": pypi_version, - } - - text = json.dumps(state, sort_keys=True, separators=(",", ":")) - - with adjacent_tmp_file(self.statefile_path) as f: - f.write(text.encode()) - - try: - # Since we have a prefix-specific state file, we can just - # overwrite whatever is there, no need to check. - replace(f.name, self.statefile_path) - except OSError: - # Best effort. - pass - - -def was_installed_by_pip(pkg: str) -> bool: - """Checks whether pkg was installed by pip - - This is used not to display the upgrade message when pip is in fact - installed by system package manager, such as dnf on Fedora. - """ - dist = get_default_environment().get_distribution(pkg) - return dist is not None and "pip" == dist.installer - - -def pip_self_version_check(session: PipSession, options: optparse.Values) -> None: - """Check for an update for pip. - - Limit the frequency of checks to once per week. State is stored either in - the active virtualenv or in the user's USER_CACHE_DIR keyed off the prefix - of the pip script path. - """ - installed_dist = get_default_environment().get_distribution("pip") - if not installed_dist: - return - - pip_version = installed_dist.version - pypi_version = None - - try: - state = SelfCheckState(cache_dir=options.cache_dir) - - current_time = datetime.datetime.utcnow() - # Determine if we need to refresh the state - if "last_check" in state.state and "pypi_version" in state.state: - last_check = datetime.datetime.strptime( - state.state["last_check"], SELFCHECK_DATE_FMT - ) - if (current_time - last_check).total_seconds() < 7 * 24 * 60 * 60: - pypi_version = state.state["pypi_version"] - - # Refresh the version if we need to or just see if we need to warn - if pypi_version is None: - # Lets use PackageFinder to see what the latest pip version is - link_collector = LinkCollector.create( - session, - options=options, - suppress_no_index=True, - ) - - # Pass allow_yanked=False so we don't suggest upgrading to a - # yanked version. - selection_prefs = SelectionPreferences( - allow_yanked=False, - allow_all_prereleases=False, # Explicitly set to False - ) - - finder = PackageFinder.create( - link_collector=link_collector, - selection_prefs=selection_prefs, - use_deprecated_html5lib=( - "html5lib" in options.deprecated_features_enabled - ), - ) - best_candidate = finder.find_best_candidate("pip").best_candidate - if best_candidate is None: - return - pypi_version = str(best_candidate.version) - - # save that we've performed a check - state.save(pypi_version, current_time) - - remote_version = parse_version(pypi_version) - - local_version_is_older = ( - pip_version < remote_version - and pip_version.base_version != remote_version.base_version - and was_installed_by_pip("pip") - ) - - # Determine if our pypi_version is older - if not local_version_is_older: - return - - # We cannot tell how the current pip is available in the current - # command context, so be pragmatic here and suggest the command - # that's always available. This does not accommodate spaces in - # `sys.executable` on purpose as it is not possible to do it - # correctly without knowing the user's shell. Thus, - # it won't be done until possible through the standard library. - # Do not be tempted to use the undocumented subprocess.list2cmdline. - # It is considered an internal implementation detail for a reason. 
- pip_cmd = f"{sys.executable} -m pip" - logger.warning( - "You are using pip version %s; however, version %s is " - "available.\nYou should consider upgrading via the " - "'%s install --upgrade pip' command.", - pip_version, - pypi_version, - pip_cmd, - ) - except Exception: - logger.debug( - "There was an error checking the latest version of pip", - exc_info=True, - ) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/columns.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/columns.py deleted file mode 100644 index 669a3a7074f9a9e1af29cb4bc78b05851df67959..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/columns.py +++ /dev/null @@ -1,187 +0,0 @@ -from collections import defaultdict -from itertools import chain -from operator import itemgetter -from typing import Dict, Iterable, List, Optional, Tuple - -from .align import Align, AlignMethod -from .console import Console, ConsoleOptions, RenderableType, RenderResult -from .constrain import Constrain -from .measure import Measurement -from .padding import Padding, PaddingDimensions -from .table import Table -from .text import TextType -from .jupyter import JupyterMixin - - -class Columns(JupyterMixin): - """Display renderables in neat columns. - - Args: - renderables (Iterable[RenderableType]): Any number of Rich renderables (including str). - width (int, optional): The desired width of the columns, or None to auto detect. Defaults to None. - padding (PaddingDimensions, optional): Optional padding around cells. Defaults to (0, 1). - expand (bool, optional): Expand columns to full width. Defaults to False. - equal (bool, optional): Arrange in to equal sized columns. Defaults to False. - column_first (bool, optional): Align items from top to bottom (rather than left to right). Defaults to False. - right_to_left (bool, optional): Start column from right hand side. Defaults to False. - align (str, optional): Align value ("left", "right", or "center") or None for default. Defaults to None. - title (TextType, optional): Optional title for Columns. - """ - - def __init__( - self, - renderables: Optional[Iterable[RenderableType]] = None, - padding: PaddingDimensions = (0, 1), - *, - width: Optional[int] = None, - expand: bool = False, - equal: bool = False, - column_first: bool = False, - right_to_left: bool = False, - align: Optional[AlignMethod] = None, - title: Optional[TextType] = None, - ) -> None: - self.renderables = list(renderables or []) - self.width = width - self.padding = padding - self.expand = expand - self.equal = equal - self.column_first = column_first - self.right_to_left = right_to_left - self.align: Optional[AlignMethod] = align - self.title = title - - def add_renderable(self, renderable: RenderableType) -> None: - """Add a renderable to the columns. - - Args: - renderable (RenderableType): Any renderable object. 
- """ - self.renderables.append(renderable) - - def __rich_console__( - self, console: Console, options: ConsoleOptions - ) -> RenderResult: - render_str = console.render_str - renderables = [ - render_str(renderable) if isinstance(renderable, str) else renderable - for renderable in self.renderables - ] - if not renderables: - return - _top, right, _bottom, left = Padding.unpack(self.padding) - width_padding = max(left, right) - max_width = options.max_width - widths: Dict[int, int] = defaultdict(int) - column_count = len(renderables) - - get_measurement = Measurement.get - renderable_widths = [ - get_measurement(console, options, renderable).maximum - for renderable in renderables - ] - if self.equal: - renderable_widths = [max(renderable_widths)] * len(renderable_widths) - - def iter_renderables( - column_count: int, - ) -> Iterable[Tuple[int, Optional[RenderableType]]]: - item_count = len(renderables) - if self.column_first: - width_renderables = list(zip(renderable_widths, renderables)) - - column_lengths: List[int] = [item_count // column_count] * column_count - for col_no in range(item_count % column_count): - column_lengths[col_no] += 1 - - row_count = (item_count + column_count - 1) // column_count - cells = [[-1] * column_count for _ in range(row_count)] - row = col = 0 - for index in range(item_count): - cells[row][col] = index - column_lengths[col] -= 1 - if column_lengths[col]: - row += 1 - else: - col += 1 - row = 0 - for index in chain.from_iterable(cells): - if index == -1: - break - yield width_renderables[index] - else: - yield from zip(renderable_widths, renderables) - # Pad odd elements with spaces - if item_count % column_count: - for _ in range(column_count - (item_count % column_count)): - yield 0, None - - table = Table.grid(padding=self.padding, collapse_padding=True, pad_edge=False) - table.expand = self.expand - table.title = self.title - - if self.width is not None: - column_count = (max_width) // (self.width + width_padding) - for _ in range(column_count): - table.add_column(width=self.width) - else: - while column_count > 1: - widths.clear() - column_no = 0 - for renderable_width, _ in iter_renderables(column_count): - widths[column_no] = max(widths[column_no], renderable_width) - total_width = sum(widths.values()) + width_padding * ( - len(widths) - 1 - ) - if total_width > max_width: - column_count = len(widths) - 1 - break - else: - column_no = (column_no + 1) % column_count - else: - break - - get_renderable = itemgetter(1) - _renderables = [ - get_renderable(_renderable) - for _renderable in iter_renderables(column_count) - ] - if self.equal: - _renderables = [ - None - if renderable is None - else Constrain(renderable, renderable_widths[0]) - for renderable in _renderables - ] - if self.align: - align = self.align - _Align = Align - _renderables = [ - None if renderable is None else _Align(renderable, align) - for renderable in _renderables - ] - - right_to_left = self.right_to_left - add_row = table.add_row - for start in range(0, len(_renderables), column_count): - row = _renderables[start : start + column_count] - if right_to_left: - row = row[::-1] - add_row(*row) - yield table - - -if __name__ == "__main__": # pragma: no cover - import os - - console = Console() - - files = [f"{i} {s}" for i, s in enumerate(sorted(os.listdir()))] - columns = Columns(files, padding=(0, 1), expand=False, equal=False) - console.print(columns) - console.rule() - columns.column_first = True - console.print(columns) - columns.right_to_left = True - console.rule() - 
console.print(columns) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/urllib3/util/retry.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/urllib3/util/retry.py deleted file mode 100644 index 3398323fd7c985a18740bd65b854ce90fe007e27..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/urllib3/util/retry.py +++ /dev/null @@ -1,620 +0,0 @@ -from __future__ import absolute_import - -import email -import logging -import re -import time -import warnings -from collections import namedtuple -from itertools import takewhile - -from ..exceptions import ( - ConnectTimeoutError, - InvalidHeader, - MaxRetryError, - ProtocolError, - ProxyError, - ReadTimeoutError, - ResponseError, -) -from ..packages import six - -log = logging.getLogger(__name__) - - -# Data structure for representing the metadata of requests that result in a retry. -RequestHistory = namedtuple( - "RequestHistory", ["method", "url", "error", "status", "redirect_location"] -) - - -# TODO: In v2 we can remove this sentinel and metaclass with deprecated options. -_Default = object() - - -class _RetryMeta(type): - @property - def DEFAULT_METHOD_WHITELIST(cls): - warnings.warn( - "Using 'Retry.DEFAULT_METHOD_WHITELIST' is deprecated and " - "will be removed in v2.0. Use 'Retry.DEFAULT_ALLOWED_METHODS' instead", - DeprecationWarning, - ) - return cls.DEFAULT_ALLOWED_METHODS - - @DEFAULT_METHOD_WHITELIST.setter - def DEFAULT_METHOD_WHITELIST(cls, value): - warnings.warn( - "Using 'Retry.DEFAULT_METHOD_WHITELIST' is deprecated and " - "will be removed in v2.0. Use 'Retry.DEFAULT_ALLOWED_METHODS' instead", - DeprecationWarning, - ) - cls.DEFAULT_ALLOWED_METHODS = value - - @property - def DEFAULT_REDIRECT_HEADERS_BLACKLIST(cls): - warnings.warn( - "Using 'Retry.DEFAULT_REDIRECT_HEADERS_BLACKLIST' is deprecated and " - "will be removed in v2.0. Use 'Retry.DEFAULT_REMOVE_HEADERS_ON_REDIRECT' instead", - DeprecationWarning, - ) - return cls.DEFAULT_REMOVE_HEADERS_ON_REDIRECT - - @DEFAULT_REDIRECT_HEADERS_BLACKLIST.setter - def DEFAULT_REDIRECT_HEADERS_BLACKLIST(cls, value): - warnings.warn( - "Using 'Retry.DEFAULT_REDIRECT_HEADERS_BLACKLIST' is deprecated and " - "will be removed in v2.0. Use 'Retry.DEFAULT_REMOVE_HEADERS_ON_REDIRECT' instead", - DeprecationWarning, - ) - cls.DEFAULT_REMOVE_HEADERS_ON_REDIRECT = value - - @property - def BACKOFF_MAX(cls): - warnings.warn( - "Using 'Retry.BACKOFF_MAX' is deprecated and " - "will be removed in v2.0. Use 'Retry.DEFAULT_BACKOFF_MAX' instead", - DeprecationWarning, - ) - return cls.DEFAULT_BACKOFF_MAX - - @BACKOFF_MAX.setter - def BACKOFF_MAX(cls, value): - warnings.warn( - "Using 'Retry.BACKOFF_MAX' is deprecated and " - "will be removed in v2.0. Use 'Retry.DEFAULT_BACKOFF_MAX' instead", - DeprecationWarning, - ) - cls.DEFAULT_BACKOFF_MAX = value - - -@six.add_metaclass(_RetryMeta) -class Retry(object): - """Retry configuration. - - Each retry attempt will create a new Retry object with updated values, so - they can be safely reused. 
- - Retries can be defined as a default for a pool:: - - retries = Retry(connect=5, read=2, redirect=5) - http = PoolManager(retries=retries) - response = http.request('GET', 'http://example.com/') - - Or per-request (which overrides the default for the pool):: - - response = http.request('GET', 'http://example.com/', retries=Retry(10)) - - Retries can be disabled by passing ``False``:: - - response = http.request('GET', 'http://example.com/', retries=False) - - Errors will be wrapped in :class:`~urllib3.exceptions.MaxRetryError` unless - retries are disabled, in which case the causing exception will be raised. - - :param int total: - Total number of retries to allow. Takes precedence over other counts. - - Set to ``None`` to remove this constraint and fall back on other - counts. - - Set to ``0`` to fail on the first retry. - - Set to ``False`` to disable and imply ``raise_on_redirect=False``. - - :param int connect: - How many connection-related errors to retry on. - - These are errors raised before the request is sent to the remote server, - which we assume has not triggered the server to process the request. - - Set to ``0`` to fail on the first retry of this type. - - :param int read: - How many times to retry on read errors. - - These errors are raised after the request was sent to the server, so the - request may have side-effects. - - Set to ``0`` to fail on the first retry of this type. - - :param int redirect: - How many redirects to perform. Limit this to avoid infinite redirect - loops. - - A redirect is a HTTP response with a status code 301, 302, 303, 307 or - 308. - - Set to ``0`` to fail on the first retry of this type. - - Set to ``False`` to disable and imply ``raise_on_redirect=False``. - - :param int status: - How many times to retry on bad status codes. - - These are retries made on responses, where status code matches - ``status_forcelist``. - - Set to ``0`` to fail on the first retry of this type. - - :param int other: - How many times to retry on other errors. - - Other errors are errors that are not connect, read, redirect or status errors. - These errors might be raised after the request was sent to the server, so the - request might have side-effects. - - Set to ``0`` to fail on the first retry of this type. - - If ``total`` is not set, it's a good idea to set this to 0 to account - for unexpected edge cases and avoid infinite retry loops. - - :param iterable allowed_methods: - Set of uppercased HTTP method verbs that we should retry on. - - By default, we only retry on methods which are considered to be - idempotent (multiple requests with the same parameters end with the - same state). See :attr:`Retry.DEFAULT_ALLOWED_METHODS`. - - Set to a ``False`` value to retry on any verb. - - .. warning:: - - Previously this parameter was named ``method_whitelist``, that - usage is deprecated in v1.26.0 and will be removed in v2.0. - - :param iterable status_forcelist: - A set of integer HTTP status codes that we should force a retry on. - A retry is initiated if the request method is in ``allowed_methods`` - and the response status code is in ``status_forcelist``. - - By default, this is disabled with ``None``. - - :param float backoff_factor: - A backoff factor to apply between attempts after the second try - (most errors are resolved immediately by a second try without a - delay). urllib3 will sleep for:: - - {backoff factor} * (2 ** ({number of total retries} - 1)) - - seconds. If the backoff_factor is 0.1, then :func:`.sleep` will sleep - for [0.0s, 0.2s, 0.4s, ...] 
between retries. It will never be longer - than :attr:`Retry.DEFAULT_BACKOFF_MAX`. - - By default, backoff is disabled (set to 0). - - :param bool raise_on_redirect: Whether, if the number of redirects is - exhausted, to raise a MaxRetryError, or to return a response with a - response code in the 3xx range. - - :param bool raise_on_status: Similar meaning to ``raise_on_redirect``: - whether we should raise an exception, or return a response, - if status falls in ``status_forcelist`` range and retries have - been exhausted. - - :param tuple history: The history of the request encountered during - each call to :meth:`~Retry.increment`. The list is in the order - the requests occurred. Each list item is of class :class:`RequestHistory`. - - :param bool respect_retry_after_header: - Whether to respect Retry-After header on status codes defined as - :attr:`Retry.RETRY_AFTER_STATUS_CODES` or not. - - :param iterable remove_headers_on_redirect: - Sequence of headers to remove from the request when a response - indicating a redirect is returned before firing off the redirected - request. - """ - - #: Default methods to be used for ``allowed_methods`` - DEFAULT_ALLOWED_METHODS = frozenset( - ["HEAD", "GET", "PUT", "DELETE", "OPTIONS", "TRACE"] - ) - - #: Default status codes to be used for ``status_forcelist`` - RETRY_AFTER_STATUS_CODES = frozenset([413, 429, 503]) - - #: Default headers to be used for ``remove_headers_on_redirect`` - DEFAULT_REMOVE_HEADERS_ON_REDIRECT = frozenset(["Authorization"]) - - #: Maximum backoff time. - DEFAULT_BACKOFF_MAX = 120 - - def __init__( - self, - total=10, - connect=None, - read=None, - redirect=None, - status=None, - other=None, - allowed_methods=_Default, - status_forcelist=None, - backoff_factor=0, - raise_on_redirect=True, - raise_on_status=True, - history=None, - respect_retry_after_header=True, - remove_headers_on_redirect=_Default, - # TODO: Deprecated, remove in v2.0 - method_whitelist=_Default, - ): - - if method_whitelist is not _Default: - if allowed_methods is not _Default: - raise ValueError( - "Using both 'allowed_methods' and " - "'method_whitelist' together is not allowed. " - "Instead only use 'allowed_methods'" - ) - warnings.warn( - "Using 'method_whitelist' with Retry is deprecated and " - "will be removed in v2.0. 
Use 'allowed_methods' instead", - DeprecationWarning, - stacklevel=2, - ) - allowed_methods = method_whitelist - if allowed_methods is _Default: - allowed_methods = self.DEFAULT_ALLOWED_METHODS - if remove_headers_on_redirect is _Default: - remove_headers_on_redirect = self.DEFAULT_REMOVE_HEADERS_ON_REDIRECT - - self.total = total - self.connect = connect - self.read = read - self.status = status - self.other = other - - if redirect is False or total is False: - redirect = 0 - raise_on_redirect = False - - self.redirect = redirect - self.status_forcelist = status_forcelist or set() - self.allowed_methods = allowed_methods - self.backoff_factor = backoff_factor - self.raise_on_redirect = raise_on_redirect - self.raise_on_status = raise_on_status - self.history = history or tuple() - self.respect_retry_after_header = respect_retry_after_header - self.remove_headers_on_redirect = frozenset( - [h.lower() for h in remove_headers_on_redirect] - ) - - def new(self, **kw): - params = dict( - total=self.total, - connect=self.connect, - read=self.read, - redirect=self.redirect, - status=self.status, - other=self.other, - status_forcelist=self.status_forcelist, - backoff_factor=self.backoff_factor, - raise_on_redirect=self.raise_on_redirect, - raise_on_status=self.raise_on_status, - history=self.history, - remove_headers_on_redirect=self.remove_headers_on_redirect, - respect_retry_after_header=self.respect_retry_after_header, - ) - - # TODO: If already given in **kw we use what's given to us - # If not given we need to figure out what to pass. We decide - # based on whether our class has the 'method_whitelist' property - # and if so we pass the deprecated 'method_whitelist' otherwise - # we use 'allowed_methods'. Remove in v2.0 - if "method_whitelist" not in kw and "allowed_methods" not in kw: - if "method_whitelist" in self.__dict__: - warnings.warn( - "Using 'method_whitelist' with Retry is deprecated and " - "will be removed in v2.0. Use 'allowed_methods' instead", - DeprecationWarning, - ) - params["method_whitelist"] = self.allowed_methods - else: - params["allowed_methods"] = self.allowed_methods - - params.update(kw) - return type(self)(**params) - - @classmethod - def from_int(cls, retries, redirect=True, default=None): - """Backwards-compatibility for the old retries format.""" - if retries is None: - retries = default if default is not None else cls.DEFAULT - - if isinstance(retries, Retry): - return retries - - redirect = bool(redirect) and None - new_retries = cls(retries, redirect=redirect) - log.debug("Converted retries value: %r -> %r", retries, new_retries) - return new_retries - - def get_backoff_time(self): - """Formula for computing the current backoff - - :rtype: float - """ - # We want to consider only the last consecutive errors sequence (Ignore redirects). 
- consecutive_errors_len = len( - list( - takewhile(lambda x: x.redirect_location is None, reversed(self.history)) - ) - ) - if consecutive_errors_len <= 1: - return 0 - - backoff_value = self.backoff_factor * (2 ** (consecutive_errors_len - 1)) - return min(self.DEFAULT_BACKOFF_MAX, backoff_value) - - def parse_retry_after(self, retry_after): - # Whitespace: https://tools.ietf.org/html/rfc7230#section-3.2.4 - if re.match(r"^\s*[0-9]+\s*$", retry_after): - seconds = int(retry_after) - else: - retry_date_tuple = email.utils.parsedate_tz(retry_after) - if retry_date_tuple is None: - raise InvalidHeader("Invalid Retry-After header: %s" % retry_after) - if retry_date_tuple[9] is None: # Python 2 - # Assume UTC if no timezone was specified - # On Python2.7, parsedate_tz returns None for a timezone offset - # instead of 0 if no timezone is given, where mktime_tz treats - # a None timezone offset as local time. - retry_date_tuple = retry_date_tuple[:9] + (0,) + retry_date_tuple[10:] - - retry_date = email.utils.mktime_tz(retry_date_tuple) - seconds = retry_date - time.time() - - if seconds < 0: - seconds = 0 - - return seconds - - def get_retry_after(self, response): - """Get the value of Retry-After in seconds.""" - - retry_after = response.getheader("Retry-After") - - if retry_after is None: - return None - - return self.parse_retry_after(retry_after) - - def sleep_for_retry(self, response=None): - retry_after = self.get_retry_after(response) - if retry_after: - time.sleep(retry_after) - return True - - return False - - def _sleep_backoff(self): - backoff = self.get_backoff_time() - if backoff <= 0: - return - time.sleep(backoff) - - def sleep(self, response=None): - """Sleep between retry attempts. - - This method will respect a server's ``Retry-After`` response header - and sleep the duration of the time requested. If that is not present, it - will use an exponential backoff. By default, the backoff factor is 0 and - this method will return immediately. - """ - - if self.respect_retry_after_header and response: - slept = self.sleep_for_retry(response) - if slept: - return - - self._sleep_backoff() - - def _is_connection_error(self, err): - """Errors when we're fairly sure that the server did not receive the - request, so it should be safe to retry. - """ - if isinstance(err, ProxyError): - err = err.original_error - return isinstance(err, ConnectTimeoutError) - - def _is_read_error(self, err): - """Errors that occur after the request has been started, so we should - assume that the server began processing it. - """ - return isinstance(err, (ReadTimeoutError, ProtocolError)) - - def _is_method_retryable(self, method): - """Checks if a given HTTP method should be retried upon, depending if - it is included in the allowed_methods - """ - # TODO: For now favor if the Retry implementation sets its own method_whitelist - # property outside of our constructor to avoid breaking custom implementations. - if "method_whitelist" in self.__dict__: - warnings.warn( - "Using 'method_whitelist' with Retry is deprecated and " - "will be removed in v2.0. Use 'allowed_methods' instead", - DeprecationWarning, - ) - allowed_methods = self.method_whitelist - else: - allowed_methods = self.allowed_methods - - if allowed_methods and method.upper() not in allowed_methods: - return False - return True - - def is_retry(self, method, status_code, has_retry_after=False): - """Is this method/status code retryable? 
(Based on allowlists and control - variables such as the number of total retries to allow, whether to - respect the Retry-After header, whether this header is present, and - whether the returned status code is on the list of status codes to - be retried upon on the presence of the aforementioned header) - """ - if not self._is_method_retryable(method): - return False - - if self.status_forcelist and status_code in self.status_forcelist: - return True - - return ( - self.total - and self.respect_retry_after_header - and has_retry_after - and (status_code in self.RETRY_AFTER_STATUS_CODES) - ) - - def is_exhausted(self): - """Are we out of retries?""" - retry_counts = ( - self.total, - self.connect, - self.read, - self.redirect, - self.status, - self.other, - ) - retry_counts = list(filter(None, retry_counts)) - if not retry_counts: - return False - - return min(retry_counts) < 0 - - def increment( - self, - method=None, - url=None, - response=None, - error=None, - _pool=None, - _stacktrace=None, - ): - """Return a new Retry object with incremented retry counters. - - :param response: A response object, or None, if the server did not - return a response. - :type response: :class:`~urllib3.response.HTTPResponse` - :param Exception error: An error encountered during the request, or - None if the response was received successfully. - - :return: A new ``Retry`` object. - """ - if self.total is False and error: - # Disabled, indicate to re-raise the error. - raise six.reraise(type(error), error, _stacktrace) - - total = self.total - if total is not None: - total -= 1 - - connect = self.connect - read = self.read - redirect = self.redirect - status_count = self.status - other = self.other - cause = "unknown" - status = None - redirect_location = None - - if error and self._is_connection_error(error): - # Connect retry? - if connect is False: - raise six.reraise(type(error), error, _stacktrace) - elif connect is not None: - connect -= 1 - - elif error and self._is_read_error(error): - # Read retry? - if read is False or not self._is_method_retryable(method): - raise six.reraise(type(error), error, _stacktrace) - elif read is not None: - read -= 1 - - elif error: - # Other retry? - if other is not None: - other -= 1 - - elif response and response.get_redirect_location(): - # Redirect retry? 
- if redirect is not None: - redirect -= 1 - cause = "too many redirects" - redirect_location = response.get_redirect_location() - status = response.status - - else: - # Incrementing because of a server error like a 500 in - # status_forcelist and the given method is in the allowed_methods - cause = ResponseError.GENERIC_ERROR - if response and response.status: - if status_count is not None: - status_count -= 1 - cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) - status = response.status - - history = self.history + ( - RequestHistory(method, url, error, status, redirect_location), - ) - - new_retry = self.new( - total=total, - connect=connect, - read=read, - redirect=redirect, - status=status_count, - other=other, - history=history, - ) - - if new_retry.is_exhausted(): - raise MaxRetryError(_pool, url, error or ResponseError(cause)) - - log.debug("Incremented Retry for (url='%s'): %r", url, new_retry) - - return new_retry - - def __repr__(self): - return ( - "{cls.__name__}(total={self.total}, connect={self.connect}, " - "read={self.read}, redirect={self.redirect}, status={self.status})" - ).format(cls=type(self), self=self) - - def __getattr__(self, item): - if item == "method_whitelist": - # TODO: Remove this deprecated alias in v2.0 - warnings.warn( - "Using 'method_whitelist' with Retry is deprecated and " - "will be removed in v2.0. Use 'allowed_methods' instead", - DeprecationWarning, - ) - return self.allowed_methods - try: - return getattr(super(Retry, self), item) - except AttributeError: - return getattr(Retry, item) - - -# For backwards compatibility (equivalent to pre-v1.9): -Retry.DEFAULT = Retry(3) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/textfmts.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/textfmts.py deleted file mode 100644 index c7cfb6d0415eebb432d9b2ef9adcb6e184c5a14b..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/textfmts.py +++ /dev/null @@ -1,436 +0,0 @@ -""" - pygments.lexers.textfmts - ~~~~~~~~~~~~~~~~~~~~~~~~ - - Lexers for various text formats. - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -import re - -from pygments.lexers import guess_lexer, get_lexer_by_name -from pygments.lexer import RegexLexer, bygroups, default, include -from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ - Number, Generic, Literal, Punctuation -from pygments.util import ClassNotFound - -__all__ = ['IrcLogsLexer', 'TodotxtLexer', 'HttpLexer', 'GettextLexer', - 'NotmuchLexer', 'KernelLogLexer'] - - -class IrcLogsLexer(RegexLexer): - """ - Lexer for IRC logs in *irssi*, *xchat* or *weechat* style. - """ - - name = 'IRC logs' - aliases = ['irc'] - filenames = ['*.weechatlog'] - mimetypes = ['text/x-irclog'] - - flags = re.VERBOSE | re.MULTILINE - timestamp = r""" - ( - # irssi / xchat and others - (?: \[|\()? # Opening bracket or paren for the timestamp - (?: # Timestamp - (?: (?:\d{1,4} [-/])* # Date as - or /-separated groups of digits - (?:\d{1,4}) - [T ])? 
# Date/time separator: T or space - (?: \d?\d [:.])* # Time as :/.-separated groups of 1 or 2 digits - (?: \d?\d) - ) - (?: \]|\))?\s+ # Closing bracket or paren for the timestamp - | - # weechat - \d{4}\s\w{3}\s\d{2}\s # Date - \d{2}:\d{2}:\d{2}\s+ # Time + Whitespace - | - # xchat - \w{3}\s\d{2}\s # Date - \d{2}:\d{2}:\d{2}\s+ # Time + Whitespace - )? - """ - tokens = { - 'root': [ - # log start/end - (r'^\*\*\*\*(.*)\*\*\*\*$', Comment), - # hack - ("^" + timestamp + r'(\s*<[^>]*>\s*)$', bygroups(Comment.Preproc, Name.Tag)), - # normal msgs - ("^" + timestamp + r""" - (\s*<.*?>\s*) # Nick """, - bygroups(Comment.Preproc, Name.Tag), 'msg'), - # /me msgs - ("^" + timestamp + r""" - (\s*[*]\s+) # Star - (\S+\s+.*?\n) # Nick + rest of message """, - bygroups(Comment.Preproc, Keyword, Generic.Inserted)), - # join/part msgs - ("^" + timestamp + r""" - (\s*(?:\*{3}|?)\s*) # Star(s) or symbols - (\S+\s+) # Nick + Space - (.*?\n) # Rest of message """, - bygroups(Comment.Preproc, Keyword, String, Comment)), - (r"^.*?\n", Text), - ], - 'msg': [ - (r"\S+:(?!//)", Name.Attribute), # Prefix - (r".*\n", Text, '#pop'), - ], - } - - -class GettextLexer(RegexLexer): - """ - Lexer for Gettext catalog files. - - .. versionadded:: 0.9 - """ - name = 'Gettext Catalog' - aliases = ['pot', 'po'] - filenames = ['*.pot', '*.po'] - mimetypes = ['application/x-gettext', 'text/x-gettext', 'text/gettext'] - - tokens = { - 'root': [ - (r'^#,\s.*?$', Keyword.Type), - (r'^#:\s.*?$', Keyword.Declaration), - # (r'^#$', Comment), - (r'^(#|#\.\s|#\|\s|#~\s|#\s).*$', Comment.Single), - (r'^(")([A-Za-z-]+:)(.*")$', - bygroups(String, Name.Property, String)), - (r'^".*"$', String), - (r'^(msgid|msgid_plural|msgstr|msgctxt)(\s+)(".*")$', - bygroups(Name.Variable, Text, String)), - (r'^(msgstr\[)(\d)(\])(\s+)(".*")$', - bygroups(Name.Variable, Number.Integer, Name.Variable, Text, String)), - ] - } - - -class HttpLexer(RegexLexer): - """ - Lexer for HTTP sessions. - - .. versionadded:: 1.5 - """ - - name = 'HTTP' - aliases = ['http'] - - flags = re.DOTALL - - def get_tokens_unprocessed(self, text, stack=('root',)): - """Reset the content-type state.""" - self.content_type = None - return RegexLexer.get_tokens_unprocessed(self, text, stack) - - def header_callback(self, match): - if match.group(1).lower() == 'content-type': - content_type = match.group(5).strip() - if ';' in content_type: - content_type = content_type[:content_type.find(';')].strip() - self.content_type = content_type - yield match.start(1), Name.Attribute, match.group(1) - yield match.start(2), Text, match.group(2) - yield match.start(3), Operator, match.group(3) - yield match.start(4), Text, match.group(4) - yield match.start(5), Literal, match.group(5) - yield match.start(6), Text, match.group(6) - - def continuous_header_callback(self, match): - yield match.start(1), Text, match.group(1) - yield match.start(2), Literal, match.group(2) - yield match.start(3), Text, match.group(3) - - def content_callback(self, match): - content_type = getattr(self, 'content_type', None) - content = match.group() - offset = match.start() - if content_type: - from pygments.lexers import get_lexer_for_mimetype - possible_lexer_mimetypes = [content_type] - if '+' in content_type: - # application/calendar+xml can be treated as application/xml - # if there's not a better match. 
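-                # Rewrite 'type/subtype+suffix' as 'type/suffix' so a generic lexer can serve as a fallback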
- general_type = re.sub(r'^(.*)/.*\+(.*)$', r'\1/\2', - content_type) - possible_lexer_mimetypes.append(general_type) - - for i in possible_lexer_mimetypes: - try: - lexer = get_lexer_for_mimetype(i) - except ClassNotFound: - pass - else: - for idx, token, value in lexer.get_tokens_unprocessed(content): - yield offset + idx, token, value - return - yield offset, Text, content - - tokens = { - 'root': [ - (r'([a-zA-Z][-_a-zA-Z]+)( +)([^ ]+)( +)' - r'(HTTP)(/)(1\.[01]|2(?:\.0)?|3)(\r?\n|\Z)', - bygroups(Name.Function, Text, Name.Namespace, Text, - Keyword.Reserved, Operator, Number, Text), - 'headers'), - (r'(HTTP)(/)(1\.[01]|2(?:\.0)?|3)( +)(\d{3})(?:( +)([^\r\n]*))?(\r?\n|\Z)', - bygroups(Keyword.Reserved, Operator, Number, Text, Number, Text, - Name.Exception, Text), - 'headers'), - ], - 'headers': [ - (r'([^\s:]+)( *)(:)( *)([^\r\n]*)(\r?\n|\Z)', header_callback), - (r'([\t ]+)([^\r\n]+)(\r?\n|\Z)', continuous_header_callback), - (r'\r?\n', Text, 'content') - ], - 'content': [ - (r'.+', content_callback) - ] - } - - def analyse_text(text): - return any ( - re.search(pattern, text) is not None - for pattern in ( - r'^([a-zA-Z][-_a-zA-Z]+)( +)([^ ]+)( +)(HTTP)(/)(1\.[01]|2(?:\.0)?|3)(\r?\n|\Z)', - r'^(HTTP)(/)(1\.[01]|2(?:\.0)?|3)( +)(\d{3})(?:( +)([^\r\n]*))?(\r?\n|\Z)', - ) - ) - - -class TodotxtLexer(RegexLexer): - """ - Lexer for Todo.txt todo list format. - - .. versionadded:: 2.0 - """ - - name = 'Todotxt' - url = 'http://todotxt.com/' - aliases = ['todotxt'] - # *.todotxt is not a standard extension for Todo.txt files; including it - # makes testing easier, and also makes autodetecting file type easier. - filenames = ['todo.txt', '*.todotxt'] - mimetypes = ['text/x-todo'] - - # Aliases mapping standard token types of Todo.txt format concepts - CompleteTaskText = Operator # Chosen to de-emphasize complete tasks - IncompleteTaskText = Text # Incomplete tasks should look like plain text - - # Priority should have most emphasis to indicate importance of tasks - Priority = Generic.Heading - # Dates should have next most emphasis because time is important - Date = Generic.Subheading - - # Project and context should have equal weight, and be in different colors - Project = Generic.Error - Context = String - - # If tag functionality is added, it should have the same weight as Project - # and Context, and a different color. Generic.Traceback would work well. - - # Regex patterns for building up rules; dates, priorities, projects, and - # contexts are all atomic - # TODO: Make date regex more ISO 8601 compliant - date_regex = r'\d{4,}-\d{2}-\d{2}' - priority_regex = r'\([A-Z]\)' - project_regex = r'\+\S+' - context_regex = r'@\S+' - - # Compound regex expressions - complete_one_date_regex = r'(x )(' + date_regex + r')' - complete_two_date_regex = (complete_one_date_regex + r'( )(' + - date_regex + r')') - priority_date_regex = r'(' + priority_regex + r')( )(' + date_regex + r')' - - tokens = { - # Should parse starting at beginning of line; each line is a task - 'root': [ - # Complete task entry points: two total: - # 1. Complete task with two dates - (complete_two_date_regex, bygroups(CompleteTaskText, Date, - CompleteTaskText, Date), - 'complete'), - # 2. Complete task with one date - (complete_one_date_regex, bygroups(CompleteTaskText, Date), - 'complete'), - - # Incomplete task entry points: six total: - # 1. Priority plus date - (priority_date_regex, bygroups(Priority, IncompleteTaskText, Date), - 'incomplete'), - # 2. Priority only - (priority_regex, Priority, 'incomplete'), - # 3. 
Leading date - (date_regex, Date, 'incomplete'), - # 4. Leading context - (context_regex, Context, 'incomplete'), - # 5. Leading project - (project_regex, Project, 'incomplete'), - # 6. Non-whitespace catch-all - (r'\S+', IncompleteTaskText, 'incomplete'), - ], - - # Parse a complete task - 'complete': [ - # Newline indicates end of task, should return to root - (r'\s*\n', CompleteTaskText, '#pop'), - # Tokenize contexts and projects - (context_regex, Context), - (project_regex, Project), - # Tokenize non-whitespace text - (r'\S+', CompleteTaskText), - # Tokenize whitespace not containing a newline - (r'\s+', CompleteTaskText), - ], - - # Parse an incomplete task - 'incomplete': [ - # Newline indicates end of task, should return to root - (r'\s*\n', IncompleteTaskText, '#pop'), - # Tokenize contexts and projects - (context_regex, Context), - (project_regex, Project), - # Tokenize non-whitespace text - (r'\S+', IncompleteTaskText), - # Tokenize whitespace not containing a newline - (r'\s+', IncompleteTaskText), - ], - } - - -class NotmuchLexer(RegexLexer): - """ - For Notmuch email text format. - - .. versionadded:: 2.5 - - Additional options accepted: - - `body_lexer` - If given, highlight the contents of the message body with the specified - lexer, else guess it according to the body content (default: ``None``). - """ - - name = 'Notmuch' - url = 'https://notmuchmail.org/' - aliases = ['notmuch'] - - def _highlight_code(self, match): - code = match.group(1) - - try: - if self.body_lexer: - lexer = get_lexer_by_name(self.body_lexer) - else: - lexer = guess_lexer(code.strip()) - except ClassNotFound: - lexer = get_lexer_by_name('text') - - yield from lexer.get_tokens_unprocessed(code) - - tokens = { - 'root': [ - (r'\fmessage\{\s*', Keyword, ('message', 'message-attr')), - ], - 'message-attr': [ - (r'(\s*id:\s*)(\S+)', bygroups(Name.Attribute, String)), - (r'(\s*(?:depth|match|excluded):\s*)(\d+)', - bygroups(Name.Attribute, Number.Integer)), - (r'(\s*filename:\s*)(.+\n)', - bygroups(Name.Attribute, String)), - default('#pop'), - ], - 'message': [ - (r'\fmessage\}\n', Keyword, '#pop'), - (r'\fheader\{\n', Keyword, 'header'), - (r'\fbody\{\n', Keyword, 'body'), - ], - 'header': [ - (r'\fheader\}\n', Keyword, '#pop'), - (r'((?:Subject|From|To|Cc|Date):\s*)(.*\n)', - bygroups(Name.Attribute, String)), - (r'(.*)(\s*\(.*\))(\s*\(.*\)\n)', - bygroups(Generic.Strong, Literal, Name.Tag)), - ], - 'body': [ - (r'\fpart\{\n', Keyword, 'part'), - (r'\f(part|attachment)\{\s*', Keyword, ('part', 'part-attr')), - (r'\fbody\}\n', Keyword, '#pop'), - ], - 'part-attr': [ - (r'(ID:\s*)(\d+)', bygroups(Name.Attribute, Number.Integer)), - (r'(,\s*)((?:Filename|Content-id):\s*)([^,]+)', - bygroups(Punctuation, Name.Attribute, String)), - (r'(,\s*)(Content-type:\s*)(.+\n)', - bygroups(Punctuation, Name.Attribute, String)), - default('#pop'), - ], - 'part': [ - (r'\f(?:part|attachment)\}\n', Keyword, '#pop'), - (r'\f(?:part|attachment)\{\s*', Keyword, ('#push', 'part-attr')), - (r'^Non-text part: .*\n', Comment), - (r'(?s)(.*?(?=\f(?:part|attachment)\}\n))', _highlight_code), - ], - } - - def analyse_text(text): - return 1.0 if text.startswith('\fmessage{') else 0.0 - - def __init__(self, **options): - self.body_lexer = options.get('body_lexer', None) - RegexLexer.__init__(self, **options) - - -class KernelLogLexer(RegexLexer): - """ - For Linux Kernel log ("dmesg") output. - - .. 
versionadded:: 2.6 - """ - name = 'Kernel log' - aliases = ['kmsg', 'dmesg'] - filenames = ['*.kmsg', '*.dmesg'] - - tokens = { - 'root': [ - (r'^[^:]+:debug : (?=\[)', Text, 'debug'), - (r'^[^:]+:info : (?=\[)', Text, 'info'), - (r'^[^:]+:warn : (?=\[)', Text, 'warn'), - (r'^[^:]+:notice: (?=\[)', Text, 'warn'), - (r'^[^:]+:err : (?=\[)', Text, 'error'), - (r'^[^:]+:crit : (?=\[)', Text, 'error'), - (r'^(?=\[)', Text, 'unknown'), - ], - 'unknown': [ - (r'^(?=.+(warning|notice|audit|deprecated))', Text, 'warn'), - (r'^(?=.+(error|critical|fail|Bug))', Text, 'error'), - default('info'), - ], - 'base': [ - (r'\[[0-9. ]+\] ', Number), - (r'(?<=\] ).+?:', Keyword), - (r'\n', Text, '#pop'), - ], - 'debug': [ - include('base'), - (r'.+\n', Comment, '#pop') - ], - 'info': [ - include('base'), - (r'.+\n', Text, '#pop') - ], - 'warn': [ - include('base'), - (r'.+\n', Generic.Strong, '#pop') - ], - 'error': [ - include('base'), - (r'.+\n', Generic.Error, '#pop') - ] - } diff --git a/spaces/prognosis/inference-bloom-doc-qa/Dockerfile b/spaces/prognosis/inference-bloom-doc-qa/Dockerfile deleted file mode 100644 index 6d9ce8f6fb9a9a9fdaf065f46ecc1e4d366024ea..0000000000000000000000000000000000000000 --- a/spaces/prognosis/inference-bloom-doc-qa/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM python:3.9 - -USER root - -WORKDIR /code - -COPY ./requirements.txt /code/requirements.txt - -RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt - -RUN mkdir -p /.cache/huggingface && chown root:root /.cache -RUN mkdir -p / cachedir && chown root:root / cachedir -RUN chmod -R 777 /.cache -RUN chmod -R 777 /.cache/huggingface -#RUN chmod -R 777 /cachedir - -#RUN mkdir -p /.cache && chown : /.cache - -COPY . . - -CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "7860"] \ No newline at end of file diff --git a/spaces/pszemraj/FLAN-grammar-correction/README.md b/spaces/pszemraj/FLAN-grammar-correction/README.md deleted file mode 100644 index 2e0eb1f5072bb0a6117de6a366a4a4dee1b13658..0000000000000000000000000000000000000000 --- a/spaces/pszemraj/FLAN-grammar-correction/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: FLAN Grammar Correction -emoji: 🔥 -colorFrom: blue -colorTo: gray -sdk: gradio -sdk_version: 3.16.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/puuuw/pu/README.md b/spaces/puuuw/pu/README.md deleted file mode 100644 index 9ef86c160d55ca85799f7a5351a3a1011c101949..0000000000000000000000000000000000000000 --- a/spaces/puuuw/pu/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Pu -emoji: 🌖 -colorFrom: yellow -colorTo: green -sdk: docker -pinned: false -license: mit -app_port: 8080 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/qinzhu/diy-girlfriend/attentions.py b/spaces/qinzhu/diy-girlfriend/attentions.py deleted file mode 100644 index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000 --- a/spaces/qinzhu/diy-girlfriend/attentions.py +++ /dev/null @@ -1,300 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - 
self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = 
window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." 
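-                # Build a band mask of width block_length around the diagonal; scores outside it are pushed to -1e4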
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Casio Fx Es Plus Emulator ((NEW)).md b/spaces/quidiaMuxgu/Expedit-SAM/Casio Fx Es Plus Emulator ((NEW)).md deleted file mode 100644 index d8f78b669fcc1ae699ecbccb8115bcccb998b8ec..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Casio Fx Es Plus Emulator ((NEW)).md +++ /dev/null @@ -1,32 +0,0 @@ -
-

How to Use Casio Fx Es Plus Emulator for Teaching and Learning

-

Casio Fx Es Plus Emulator is a software that simulates the operation of scientific calculators from the Casio Fx Es Plus series on your computer. It is ideal for preparing teaching materials and presenting in the classroom, as well as for self-study and practice. In this article, we will show you how to use Casio Fx Es Plus Emulator for different purposes and models.

-

What is Casio Fx Es Plus Emulator?

-

Casio Fx Es Plus Emulator is an emulator of fx-ES PLUS series, which are popular scientific calculators used in schools and colleges. The emulator has the same basic functionality and appearance as the real calculators, such as fx-82ES PLUS, fx-85ES PLUS, fx-350ES PLUS, fx-570ES PLUS, fx-991ES PLUS, and more[^1^]. You can use the emulator to perform calculations, graph functions, solve equations, and perform other operations with a similar operating feel.

-

Casio Fx Es Plus Emulator


Download Ziphttps://geags.com/2uCrPw



-

Why Use Casio Fx Es Plus Emulator?

-

Casio Fx Es Plus Emulator has many benefits for both teachers and students. Here are some of them:

-
    -
  • It provides strong support to teachers with many of the functions most appropriate and needed for educational-material preparation and the teaching process, such as screen capture, zoom display, and key logging functions[^1^].
  • -
  • It allows teachers to demonstrate how to use the calculators effectively and clearly on a large screen or projector. Teachers can also create worksheets and quizzes based on the emulator's output.
  • -
  • It enables students to practice and learn how to use the calculators at their own pace and convenience. Students can also check their answers and review their steps using the emulator.
  • -
  • It helps students to develop their mathematical skills and understanding by exploring different features and modes of the calculators.
  • -
  • It saves time and money by eliminating the need to buy or maintain physical calculators.
  • -
-

How to Use Casio Fx Es Plus Emulator?

-

Casio Fx Es Plus Emulator is easy to use and install. Here are the steps to follow:

-
    -
  1. Download the emulator software from Casio's official website. You can choose from various models and licensing options according to your needs.
  2. -
  3. Install the emulator software on your computer by following the instructions on the screen. You will need an activation code to activate the software.
  4. -
  5. Launch the emulator software from your desktop or start menu. You will see a virtual calculator on your screen that resembles the model you selected.
  6. -
  7. Use your mouse or keyboard to operate the virtual calculator. You can also use the menu bar or toolbar to access different functions and settings.
  8. -
  9. To capture a screenshot of the calculator display, click on the camera icon on the toolbar or press Ctrl+C. You can save or copy the image to use in your documents or presentations.
  10. -
  11. To zoom in or out of the calculator display, click on the magnifying glass icon on the toolbar or press Ctrl+Z. You can adjust the zoom level by dragging the slider or using the mouse wheel.
  12. -
  13. To record or playback your keystrokes, click on the record or play icon on the toolbar or press Ctrl+R or Ctrl+P. You can use this feature to review your steps or demonstrate a procedure.
  14. -
  15. To change the calculator model, click on the model name on the menu bar or press Ctrl+M. You can switch between different models without exiting the emulator.
  16. -
-

Conclusion

-

Casio Fx Es Plus Emulator is a useful tool for teaching and learning mathematics using scientific calculators. It simulates the operation of various models of fx-ES PLUS series with a similar operating feel. It also provides many functions that support educational-material preparation and presentation. You can download and install Casio Fx Es Plus Emulator from Casio's official website and use it for different purposes and models.

-

d5da3c52bf
-
-
\ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Fifa 07 Crack Download [2021] Pc.md b/spaces/quidiaMuxgu/Expedit-SAM/Fifa 07 Crack Download [2021] Pc.md deleted file mode 100644 index 93be3874cda2e6f7d5d755f319df2dd6ab53863c..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Fifa 07 Crack Download [2021] Pc.md +++ /dev/null @@ -1,34 +0,0 @@ - -

How to Download and Install FIFA 07 Crack for PC

-

FIFA 07 is one of the most popular football simulation games ever released by EA Sports. It features realistic gameplay, stunning graphics, and licensed teams and players from around the world. If you want to enjoy this classic game on your PC, you will need to download and install FIFA 07 crack first. Here is a step-by-step guide on how to do it.

-

fifa 07 crack download pc


DOWNLOAD > https://geags.com/2uCq7W



-
    -
  1. Download FIFA 07 from a reliable source. You can use one of the links below to download the game files. Make sure you have enough space on your hard drive and a fast internet connection.
  2. -
  3. Extract the downloaded files using a program like WinRAR or 7-Zip. You will get a folder with the game files and a folder with the crack files.
  4. -
  5. Mount or burn the game ISO file using a program like Daemon Tools or PowerISO. You will get a virtual drive with the game disc.
  6. -
  7. Run the setup.exe file from the game disc and follow the instructions to install the game. You will need to enter a serial key during the installation. You can use one of these keys:
    • QK2C-RWMV-4FTW-HH42-DWFG
    • KWXM-6ANL-ECU8-4E48-2GDT
    • 6L33-2D43-6S96-36S4-WRLD
  8. -
  9. Copy the crack files from the crack folder to the game installation folder. You will need to replace the original files with the cracked ones.
  10. -
  11. Run the game from the game.exe file or create a shortcut on your desktop. Enjoy playing FIFA 07 on your PC!
  12. -
-

Note: This is a cracked version of FIFA 07 and it may not work properly on some systems or with some antivirus programs. Use it at your own risk and discretion. We do not condone piracy and we recommend buying the original game if you like it.

- -

What are the Features of FIFA 07?

-

FIFA 07 is not just a game of soccer, but also a game of features. It offers a variety of modes, options, and extras that will keep you entertained for hours. Here are some of the features that you can enjoy in FIFA 07:

-
    -
  • Manager Mode: This is the most in-depth and realistic mode in FIFA 07, where you can take control of any club from any league and manage every aspect of it. You can buy and sell players, scout for new talent, negotiate contracts, set tactics, train your team, and more. You can also play or simulate every match in your season, and deal with injuries, suspensions, morale, and media pressure. You can even create your own custom club and start from scratch.
  • -
  • Interactive Leagues: This is a new feature in FIFA 07 that lets you compete online with other players in real-time leagues based on real-world fixtures. You can choose from the English Premier League, the German Bundesliga, the French Ligue 1, or the Mexican Primera División. You can play every match of your chosen league as it happens in real life, and see how your results affect the league table and your ranking. You can also chat with other players, check stats and news, and watch highlights of other matches.
  • -
  • Challenge Mode: This is a mode where you can test your skills and knowledge of soccer by completing various challenges based on real-life scenarios. You can choose from different categories such as Goalscoring, Passing, Defending, Goalkeeping, and more. You can also create your own custom challenges and share them with other players online.
  • -
  • Fan Shop: This is a feature where you can spend points that you earn by playing the game to unlock various items and bonuses. You can unlock new balls, stadiums, kits, boots, players, teams, and more. You can also unlock classic teams and players from past FIFA games.
  • -
  • Gameplay Features: FIFA 07 also boasts a number of gameplay features that make it more realistic and enjoyable than ever. Some of these features are:
  • -
      -
    • The new ball physics system that makes the ball behave more naturally and unpredictably.
    • -
    • The new shooting system that takes into account the player's position, balance, strength, and preferred foot.
    • -
    • The new trick system that lets you perform various moves and skills with different combinations of buttons.
    • -
    • The new AI system that makes the players more intelligent and responsive to your actions and tactics.
    • -
    • The new commentary system that provides more insight and analysis from Martin Tyler (Xbox 360 only) and Andy Gray (all versions).
    • -
    -
-

FIFA 07 is a game that offers something for everyone who loves soccer. Whether you want to manage your favorite club, compete online with other players, or just have fun playing a quick match, FIFA 07 has it all. Download FIFA 07 crack today and enjoy the ultimate soccer experience on your PC!

-

d5da3c52bf
-
-
\ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/L2 Walker High Five Download Hit.md b/spaces/quidiaMuxgu/Expedit-SAM/L2 Walker High Five Download Hit.md deleted file mode 100644 index 48bfdd14b30106756f7db0c495f2287f97df5295..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/L2 Walker High Five Download Hit.md +++ /dev/null @@ -1,8 +0,0 @@ -

L2 Walker High Five Download Hit


Download 🌟 https://geags.com/2uCrfh



- -September 18, 2554 BC — Discussion of the official Request L2 walker High Five server in the Lin2 Exploits, Hacks, Bots, Tools & Lineage 2 forum macros ... - Discussion of the official part of Lineage 2 High Five, C4 server - Forum of the official part of Lineage 2 ... -— Discussion of the official part of Lineage 2 High Five, C4 server — Forum of the official part of Lineage 2 -— Lineage 2 High Five official part discussion, C4 server — Lineage 2 official part forum 8a78ff9644
-
-
-

diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Loquendo Text To Speech 7.5.4 [Multilenguaje].rar Download Pc.md b/spaces/quidiaMuxgu/Expedit-SAM/Loquendo Text To Speech 7.5.4 [Multilenguaje].rar Download Pc.md deleted file mode 100644 index d5d5f8ef790e5c037650fd61b6201e8751635a8a..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Loquendo Text To Speech 7.5.4 [Multilenguaje].rar Download Pc.md +++ /dev/null @@ -1,6 +0,0 @@ -

Loquendo Text To Speech 7.5.4 [Multilenguaje].rar download pc


Download Ziphttps://geags.com/2uCsIU



-
-Download Loquendo Text to Speech 7.5.4 Multilingual. Loquendo Text to ... http://rapidshare.com/files/322585817/1724LTTSM.part1.rar.html 4d29de3e1b
-
-
-

diff --git a/spaces/r3gm/RVC_HF/tools/infer_cli.py b/spaces/r3gm/RVC_HF/tools/infer_cli.py deleted file mode 100644 index bbe0a53c1aac6a8f2d42613d554b2bdd07abea2d..0000000000000000000000000000000000000000 --- a/spaces/r3gm/RVC_HF/tools/infer_cli.py +++ /dev/null @@ -1,67 +0,0 @@ -import argparse -import os -import sys - -now_dir = os.getcwd() -sys.path.append(now_dir) -from dotenv import load_dotenv -from scipy.io import wavfile - -from configs.config import Config -from infer.modules.vc.modules import VC - -#### -# USAGE -# -# In your Terminal or CMD or whatever - - -def arg_parse() -> tuple: - parser = argparse.ArgumentParser() - parser.add_argument("--f0up_key", type=int, default=0) - parser.add_argument("--input_path", type=str, help="input path") - parser.add_argument("--index_path", type=str, help="index path") - parser.add_argument("--f0method", type=str, default="harvest", help="harvest or pm") - parser.add_argument("--opt_path", type=str, help="opt path") - parser.add_argument("--model_name", type=str, help="store in assets/weight_root") - parser.add_argument("--index_rate", type=float, default=0.66, help="index rate") - parser.add_argument("--device", type=str, help="device") - parser.add_argument("--is_half", type=bool, help="use half -> True") - parser.add_argument("--filter_radius", type=int, default=3, help="filter radius") - parser.add_argument("--resample_sr", type=int, default=0, help="resample sr") - parser.add_argument("--rms_mix_rate", type=float, default=1, help="rms mix rate") - parser.add_argument("--protect", type=float, default=0.33, help="protect") - - args = parser.parse_args() - sys.argv = sys.argv[:1] - - return args - - -def main(): - load_dotenv() - args = arg_parse() - config = Config() - config.device = args.device if args.device else config.device - config.is_half = args.is_half if args.is_half else config.is_half - vc = VC(config) - vc.get_vc(args.model_name) - _, wav_opt = vc.vc_single( - 0, - args.input_path, - args.f0up_key, - None, - args.f0method, - args.index_path, - None, - args.index_rate, - args.filter_radius, - args.resample_sr, - args.rms_mix_rate, - args.protect, - ) - wavfile.write(args.opt_path, wav_opt[0], wav_opt[1]) - - -if __name__ == "__main__": - main() diff --git a/spaces/raaec/Pix2Pix-Video-prv/share_btn.py b/spaces/raaec/Pix2Pix-Video-prv/share_btn.py deleted file mode 100644 index 66e0de15dce2d65f4cd0ef512c7bd8adad0beb77..0000000000000000000000000000000000000000 --- a/spaces/raaec/Pix2Pix-Video-prv/share_btn.py +++ /dev/null @@ -1,73 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = """async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - - async function getVideoBlobFile(videoEL){ - const res = await fetch(videoEL.src); - const blob = await res.blob(); - const videoId = Date.now() % 200; - const fileName = `vid-pix2pix-${{videoId}}.wav`; - const videoBlob = new File([blob], fileName, { type: 'video/mp4' }); - console.log(videoBlob); - return videoBlob; - } - - const gradioEl = document.querySelector("gradio-app").shadowRoot || document.querySelector('body > gradio-app'); - const captionTxt = gradioEl.querySelector('#prompt-in textarea').value; - const inputVidEl = 
gradioEl.querySelector('#input-vid video'); - const outputVideo = gradioEl.querySelector('#video-output video'); - - - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - if(!outputVideo){ - return; - }; - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - - const inputFile = await getVideoBlobFile(inputVidEl); - const urlInputVid = await uploadFile(inputFile); - const videoOutFile = await getVideoBlobFile(outputVideo); - const dataOutputVid = await uploadFile(videoOutFile); - - const descriptionMd = ` -#### Video input: -${urlInputVid} - -#### Pix2Pix result: -${dataOutputVid} -`; - const params = new URLSearchParams({ - title: captionTxt, - description: descriptionMd, - }); - const paramsStr = params.toString(); - window.open(`https://huggingface.co/spaces/fffiloni/Pix2Pix-Video/discussions/new?${paramsStr}`, '_blank'); - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" \ No newline at end of file diff --git a/spaces/radames/Candle-T5-Generation-Wasm/build/m-quantized.js b/spaces/radames/Candle-T5-Generation-Wasm/build/m-quantized.js deleted file mode 100644 index d6b84e834c74b3004e5984c9c3541b20699b45e4..0000000000000000000000000000000000000000 --- a/spaces/radames/Candle-T5-Generation-Wasm/build/m-quantized.js +++ /dev/null @@ -1,734 +0,0 @@ -let wasm; - -const heap = new Array(128).fill(undefined); - -heap.push(undefined, null, true, false); - -function getObject(idx) { return heap[idx]; } - -let WASM_VECTOR_LEN = 0; - -let cachedUint8Memory0 = null; - -function getUint8Memory0() { - if (cachedUint8Memory0 === null || cachedUint8Memory0.byteLength === 0) { - cachedUint8Memory0 = new Uint8Array(wasm.memory.buffer); - } - return cachedUint8Memory0; -} - -const cachedTextEncoder = (typeof TextEncoder !== 'undefined' ? new TextEncoder('utf-8') : { encode: () => { throw Error('TextEncoder not available') } } ); - -const encodeString = (typeof cachedTextEncoder.encodeInto === 'function' - ? 
function (arg, view) { - return cachedTextEncoder.encodeInto(arg, view); -} - : function (arg, view) { - const buf = cachedTextEncoder.encode(arg); - view.set(buf); - return { - read: arg.length, - written: buf.length - }; -}); - -function passStringToWasm0(arg, malloc, realloc) { - - if (realloc === undefined) { - const buf = cachedTextEncoder.encode(arg); - const ptr = malloc(buf.length, 1) >>> 0; - getUint8Memory0().subarray(ptr, ptr + buf.length).set(buf); - WASM_VECTOR_LEN = buf.length; - return ptr; - } - - let len = arg.length; - let ptr = malloc(len, 1) >>> 0; - - const mem = getUint8Memory0(); - - let offset = 0; - - for (; offset < len; offset++) { - const code = arg.charCodeAt(offset); - if (code > 0x7F) break; - mem[ptr + offset] = code; - } - - if (offset !== len) { - if (offset !== 0) { - arg = arg.slice(offset); - } - ptr = realloc(ptr, len, len = offset + arg.length * 3, 1) >>> 0; - const view = getUint8Memory0().subarray(ptr + offset, ptr + len); - const ret = encodeString(arg, view); - - offset += ret.written; - } - - WASM_VECTOR_LEN = offset; - return ptr; -} - -function isLikeNone(x) { - return x === undefined || x === null; -} - -let cachedInt32Memory0 = null; - -function getInt32Memory0() { - if (cachedInt32Memory0 === null || cachedInt32Memory0.byteLength === 0) { - cachedInt32Memory0 = new Int32Array(wasm.memory.buffer); - } - return cachedInt32Memory0; -} - -let heap_next = heap.length; - -function dropObject(idx) { - if (idx < 132) return; - heap[idx] = heap_next; - heap_next = idx; -} - -function takeObject(idx) { - const ret = getObject(idx); - dropObject(idx); - return ret; -} - -const cachedTextDecoder = (typeof TextDecoder !== 'undefined' ? new TextDecoder('utf-8', { ignoreBOM: true, fatal: true }) : { decode: () => { throw Error('TextDecoder not available') } } ); - -if (typeof TextDecoder !== 'undefined') { cachedTextDecoder.decode(); }; - -function getStringFromWasm0(ptr, len) { - ptr = ptr >>> 0; - return cachedTextDecoder.decode(getUint8Memory0().subarray(ptr, ptr + len)); -} - -function addHeapObject(obj) { - if (heap_next === heap.length) heap.push(heap.length + 1); - const idx = heap_next; - heap_next = heap[idx]; - - heap[idx] = obj; - return idx; -} - -let cachedFloat64Memory0 = null; - -function getFloat64Memory0() { - if (cachedFloat64Memory0 === null || cachedFloat64Memory0.byteLength === 0) { - cachedFloat64Memory0 = new Float64Array(wasm.memory.buffer); - } - return cachedFloat64Memory0; -} - -let cachedBigInt64Memory0 = null; - -function getBigInt64Memory0() { - if (cachedBigInt64Memory0 === null || cachedBigInt64Memory0.byteLength === 0) { - cachedBigInt64Memory0 = new BigInt64Array(wasm.memory.buffer); - } - return cachedBigInt64Memory0; -} - -function debugString(val) { - // primitive types - const type = typeof val; - if (type == 'number' || type == 'boolean' || val == null) { - return `${val}`; - } - if (type == 'string') { - return `"${val}"`; - } - if (type == 'symbol') { - const description = val.description; - if (description == null) { - return 'Symbol'; - } else { - return `Symbol(${description})`; - } - } - if (type == 'function') { - const name = val.name; - if (typeof name == 'string' && name.length > 0) { - return `Function(${name})`; - } else { - return 'Function'; - } - } - // objects - if (Array.isArray(val)) { - const length = val.length; - let debug = '['; - if (length > 0) { - debug += debugString(val[0]); - } - for(let i = 1; i < length; i++) { - debug += ', ' + debugString(val[i]); - } - debug += ']'; - return debug; - } 
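-    // Not an array: fall through and parse the class name out of the '[object ClassName]' tag below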
- // Test for built-in - const builtInMatches = /\[object ([^\]]+)\]/.exec(toString.call(val)); - let className; - if (builtInMatches.length > 1) { - className = builtInMatches[1]; - } else { - // Failed to match the standard '[object ClassName]' - return toString.call(val); - } - if (className == 'Object') { - // we're a user defined class or Object - // JSON.stringify avoids problems with cycles, and is generally much - // easier than looping through ownProperties of `val`. - try { - return 'Object(' + JSON.stringify(val) + ')'; - } catch (_) { - return 'Object'; - } - } - // errors - if (val instanceof Error) { - return `${val.name}: ${val.message}\n${val.stack}`; - } - // TODO we could test for more things here, like `Set`s and `Map`s. - return className; -} - -function passArray8ToWasm0(arg, malloc) { - const ptr = malloc(arg.length * 1, 1) >>> 0; - getUint8Memory0().set(arg, ptr / 1); - WASM_VECTOR_LEN = arg.length; - return ptr; -} - -function handleError(f, args) { - try { - return f.apply(this, args); - } catch (e) { - wasm.__wbindgen_exn_store(addHeapObject(e)); - } -} -/** -*/ -export class ModelConditionalGeneration { - - static __wrap(ptr) { - ptr = ptr >>> 0; - const obj = Object.create(ModelConditionalGeneration.prototype); - obj.__wbg_ptr = ptr; - - return obj; - } - - __destroy_into_raw() { - const ptr = this.__wbg_ptr; - this.__wbg_ptr = 0; - - return ptr; - } - - free() { - const ptr = this.__destroy_into_raw(); - wasm.__wbg_modelconditionalgeneration_free(ptr); - } - /** - * @param {Uint8Array} weights - * @param {Uint8Array} tokenizer - * @param {Uint8Array} config - */ - constructor(weights, tokenizer, config) { - try { - const retptr = wasm.__wbindgen_add_to_stack_pointer(-16); - const ptr0 = passArray8ToWasm0(weights, wasm.__wbindgen_malloc); - const len0 = WASM_VECTOR_LEN; - const ptr1 = passArray8ToWasm0(tokenizer, wasm.__wbindgen_malloc); - const len1 = WASM_VECTOR_LEN; - const ptr2 = passArray8ToWasm0(config, wasm.__wbindgen_malloc); - const len2 = WASM_VECTOR_LEN; - wasm.modelconditionalgeneration_load(retptr, ptr0, len0, ptr1, len1, ptr2, len2); - var r0 = getInt32Memory0()[retptr / 4 + 0]; - var r1 = getInt32Memory0()[retptr / 4 + 1]; - var r2 = getInt32Memory0()[retptr / 4 + 2]; - if (r2) { - throw takeObject(r1); - } - return ModelConditionalGeneration.__wrap(r0); - } finally { - wasm.__wbindgen_add_to_stack_pointer(16); - } - } - /** - * @param {any} input - * @returns {any} - */ - decode(input) { - try { - const retptr = wasm.__wbindgen_add_to_stack_pointer(-16); - wasm.modelconditionalgeneration_decode(retptr, this.__wbg_ptr, addHeapObject(input)); - var r0 = getInt32Memory0()[retptr / 4 + 0]; - var r1 = getInt32Memory0()[retptr / 4 + 1]; - var r2 = getInt32Memory0()[retptr / 4 + 2]; - if (r2) { - throw takeObject(r1); - } - return takeObject(r0); - } finally { - wasm.__wbindgen_add_to_stack_pointer(16); - } - } -} -/** -*/ -export class ModelEncoder { - - static __wrap(ptr) { - ptr = ptr >>> 0; - const obj = Object.create(ModelEncoder.prototype); - obj.__wbg_ptr = ptr; - - return obj; - } - - __destroy_into_raw() { - const ptr = this.__wbg_ptr; - this.__wbg_ptr = 0; - - return ptr; - } - - free() { - const ptr = this.__destroy_into_raw(); - wasm.__wbg_modelencoder_free(ptr); - } - /** - * @param {Uint8Array} weights - * @param {Uint8Array} tokenizer - * @param {Uint8Array} config - */ - constructor(weights, tokenizer, config) { - try { - const retptr = wasm.__wbindgen_add_to_stack_pointer(-16); - const ptr0 = passArray8ToWasm0(weights, 
wasm.__wbindgen_malloc); - const len0 = WASM_VECTOR_LEN; - const ptr1 = passArray8ToWasm0(tokenizer, wasm.__wbindgen_malloc); - const len1 = WASM_VECTOR_LEN; - const ptr2 = passArray8ToWasm0(config, wasm.__wbindgen_malloc); - const len2 = WASM_VECTOR_LEN; - wasm.modelencoder_load(retptr, ptr0, len0, ptr1, len1, ptr2, len2); - var r0 = getInt32Memory0()[retptr / 4 + 0]; - var r1 = getInt32Memory0()[retptr / 4 + 1]; - var r2 = getInt32Memory0()[retptr / 4 + 2]; - if (r2) { - throw takeObject(r1); - } - return ModelEncoder.__wrap(r0); - } finally { - wasm.__wbindgen_add_to_stack_pointer(16); - } - } - /** - * @param {any} input - * @returns {any} - */ - decode(input) { - try { - const retptr = wasm.__wbindgen_add_to_stack_pointer(-16); - wasm.modelencoder_decode(retptr, this.__wbg_ptr, addHeapObject(input)); - var r0 = getInt32Memory0()[retptr / 4 + 0]; - var r1 = getInt32Memory0()[retptr / 4 + 1]; - var r2 = getInt32Memory0()[retptr / 4 + 2]; - if (r2) { - throw takeObject(r1); - } - return takeObject(r0); - } finally { - wasm.__wbindgen_add_to_stack_pointer(16); - } - } -} - -async function __wbg_load(module, imports) { - if (typeof Response === 'function' && module instanceof Response) { - if (typeof WebAssembly.instantiateStreaming === 'function') { - try { - return await WebAssembly.instantiateStreaming(module, imports); - - } catch (e) { - if (module.headers.get('Content-Type') != 'application/wasm') { - console.warn("`WebAssembly.instantiateStreaming` failed because your server does not serve wasm with `application/wasm` MIME type. Falling back to `WebAssembly.instantiate` which is slower. Original error:\n", e); - - } else { - throw e; - } - } - } - - const bytes = await module.arrayBuffer(); - return await WebAssembly.instantiate(bytes, imports); - - } else { - const instance = await WebAssembly.instantiate(module, imports); - - if (instance instanceof WebAssembly.Instance) { - return { instance, module }; - - } else { - return instance; - } - } -} - -function __wbg_get_imports() { - const imports = {}; - imports.wbg = {}; - imports.wbg.__wbindgen_string_get = function(arg0, arg1) { - const obj = getObject(arg1); - const ret = typeof(obj) === 'string' ? obj : undefined; - var ptr1 = isLikeNone(ret) ? 0 : passStringToWasm0(ret, wasm.__wbindgen_malloc, wasm.__wbindgen_realloc); - var len1 = WASM_VECTOR_LEN; - getInt32Memory0()[arg0 / 4 + 1] = len1; - getInt32Memory0()[arg0 / 4 + 0] = ptr1; - }; - imports.wbg.__wbindgen_object_drop_ref = function(arg0) { - takeObject(arg0); - }; - imports.wbg.__wbindgen_error_new = function(arg0, arg1) { - const ret = new Error(getStringFromWasm0(arg0, arg1)); - return addHeapObject(ret); - }; - imports.wbg.__wbindgen_is_bigint = function(arg0) { - const ret = typeof(getObject(arg0)) === 'bigint'; - return ret; - }; - imports.wbg.__wbindgen_bigint_from_u64 = function(arg0) { - const ret = BigInt.asUintN(64, arg0); - return addHeapObject(ret); - }; - imports.wbg.__wbindgen_jsval_eq = function(arg0, arg1) { - const ret = getObject(arg0) === getObject(arg1); - return ret; - }; - imports.wbg.__wbindgen_boolean_get = function(arg0) { - const v = getObject(arg0); - const ret = typeof(v) === 'boolean' ? (v ? 
1 : 0) : 2; - return ret; - }; - imports.wbg.__wbindgen_is_object = function(arg0) { - const val = getObject(arg0); - const ret = typeof(val) === 'object' && val !== null; - return ret; - }; - imports.wbg.__wbindgen_is_undefined = function(arg0) { - const ret = getObject(arg0) === undefined; - return ret; - }; - imports.wbg.__wbindgen_in = function(arg0, arg1) { - const ret = getObject(arg0) in getObject(arg1); - return ret; - }; - imports.wbg.__wbindgen_number_get = function(arg0, arg1) { - const obj = getObject(arg1); - const ret = typeof(obj) === 'number' ? obj : undefined; - getFloat64Memory0()[arg0 / 8 + 1] = isLikeNone(ret) ? 0 : ret; - getInt32Memory0()[arg0 / 4 + 0] = !isLikeNone(ret); - }; - imports.wbg.__wbindgen_object_clone_ref = function(arg0) { - const ret = getObject(arg0); - return addHeapObject(ret); - }; - imports.wbg.__wbindgen_jsval_loose_eq = function(arg0, arg1) { - const ret = getObject(arg0) == getObject(arg1); - return ret; - }; - imports.wbg.__wbg_String_4370c5505c674d30 = function(arg0, arg1) { - const ret = String(getObject(arg1)); - const ptr1 = passStringToWasm0(ret, wasm.__wbindgen_malloc, wasm.__wbindgen_realloc); - const len1 = WASM_VECTOR_LEN; - getInt32Memory0()[arg0 / 4 + 1] = len1; - getInt32Memory0()[arg0 / 4 + 0] = ptr1; - }; - imports.wbg.__wbindgen_number_new = function(arg0) { - const ret = arg0; - return addHeapObject(ret); - }; - imports.wbg.__wbindgen_string_new = function(arg0, arg1) { - const ret = getStringFromWasm0(arg0, arg1); - return addHeapObject(ret); - }; - imports.wbg.__wbg_getwithrefkey_d1f0d12f1f1b63ea = function(arg0, arg1) { - const ret = getObject(arg0)[getObject(arg1)]; - return addHeapObject(ret); - }; - imports.wbg.__wbg_set_bd72c078edfa51ad = function(arg0, arg1, arg2) { - getObject(arg0)[takeObject(arg1)] = takeObject(arg2); - }; - imports.wbg.__wbg_new_abda76e883ba8a5f = function() { - const ret = new Error(); - return addHeapObject(ret); - }; - imports.wbg.__wbg_stack_658279fe44541cf6 = function(arg0, arg1) { - const ret = getObject(arg1).stack; - const ptr1 = passStringToWasm0(ret, wasm.__wbindgen_malloc, wasm.__wbindgen_realloc); - const len1 = WASM_VECTOR_LEN; - getInt32Memory0()[arg0 / 4 + 1] = len1; - getInt32Memory0()[arg0 / 4 + 0] = ptr1; - }; - imports.wbg.__wbg_error_f851667af71bcfc6 = function(arg0, arg1) { - let deferred0_0; - let deferred0_1; - try { - deferred0_0 = arg0; - deferred0_1 = arg1; - console.error(getStringFromWasm0(arg0, arg1)); - } finally { - wasm.__wbindgen_free(deferred0_0, deferred0_1, 1); - } - }; - imports.wbg.__wbg_log_d03200ce29166fbd = function(arg0, arg1) { - console.log(getStringFromWasm0(arg0, arg1)); - }; - imports.wbg.__wbg_crypto_c48a774b022d20ac = function(arg0) { - const ret = getObject(arg0).crypto; - return addHeapObject(ret); - }; - imports.wbg.__wbg_process_298734cf255a885d = function(arg0) { - const ret = getObject(arg0).process; - return addHeapObject(ret); - }; - imports.wbg.__wbg_versions_e2e78e134e3e5d01 = function(arg0) { - const ret = getObject(arg0).versions; - return addHeapObject(ret); - }; - imports.wbg.__wbg_node_1cd7a5d853dbea79 = function(arg0) { - const ret = getObject(arg0).node; - return addHeapObject(ret); - }; - imports.wbg.__wbindgen_is_string = function(arg0) { - const ret = typeof(getObject(arg0)) === 'string'; - return ret; - }; - imports.wbg.__wbg_msCrypto_bcb970640f50a1e8 = function(arg0) { - const ret = getObject(arg0).msCrypto; - return addHeapObject(ret); - }; - imports.wbg.__wbg_require_8f08ceecec0f4fee = function() { return handleError(function () 
{ - const ret = module.require; - return addHeapObject(ret); - }, arguments) }; - imports.wbg.__wbindgen_is_function = function(arg0) { - const ret = typeof(getObject(arg0)) === 'function'; - return ret; - }; - imports.wbg.__wbg_getRandomValues_37fa2ca9e4e07fab = function() { return handleError(function (arg0, arg1) { - getObject(arg0).getRandomValues(getObject(arg1)); - }, arguments) }; - imports.wbg.__wbg_randomFillSync_dc1e9a60c158336d = function() { return handleError(function (arg0, arg1) { - getObject(arg0).randomFillSync(takeObject(arg1)); - }, arguments) }; - imports.wbg.__wbg_get_44be0491f933a435 = function(arg0, arg1) { - const ret = getObject(arg0)[arg1 >>> 0]; - return addHeapObject(ret); - }; - imports.wbg.__wbg_length_fff51ee6522a1a18 = function(arg0) { - const ret = getObject(arg0).length; - return ret; - }; - imports.wbg.__wbg_new_898a68150f225f2e = function() { - const ret = new Array(); - return addHeapObject(ret); - }; - imports.wbg.__wbg_newnoargs_581967eacc0e2604 = function(arg0, arg1) { - const ret = new Function(getStringFromWasm0(arg0, arg1)); - return addHeapObject(ret); - }; - imports.wbg.__wbg_next_526fc47e980da008 = function(arg0) { - const ret = getObject(arg0).next; - return addHeapObject(ret); - }; - imports.wbg.__wbg_next_ddb3312ca1c4e32a = function() { return handleError(function (arg0) { - const ret = getObject(arg0).next(); - return addHeapObject(ret); - }, arguments) }; - imports.wbg.__wbg_done_5c1f01fb660d73b5 = function(arg0) { - const ret = getObject(arg0).done; - return ret; - }; - imports.wbg.__wbg_value_1695675138684bd5 = function(arg0) { - const ret = getObject(arg0).value; - return addHeapObject(ret); - }; - imports.wbg.__wbg_iterator_97f0c81209c6c35a = function() { - const ret = Symbol.iterator; - return addHeapObject(ret); - }; - imports.wbg.__wbg_get_97b561fb56f034b5 = function() { return handleError(function (arg0, arg1) { - const ret = Reflect.get(getObject(arg0), getObject(arg1)); - return addHeapObject(ret); - }, arguments) }; - imports.wbg.__wbg_call_cb65541d95d71282 = function() { return handleError(function (arg0, arg1) { - const ret = getObject(arg0).call(getObject(arg1)); - return addHeapObject(ret); - }, arguments) }; - imports.wbg.__wbg_new_b51585de1b234aff = function() { - const ret = new Object(); - return addHeapObject(ret); - }; - imports.wbg.__wbg_self_1ff1d729e9aae938 = function() { return handleError(function () { - const ret = self.self; - return addHeapObject(ret); - }, arguments) }; - imports.wbg.__wbg_window_5f4faef6c12b79ec = function() { return handleError(function () { - const ret = window.window; - return addHeapObject(ret); - }, arguments) }; - imports.wbg.__wbg_globalThis_1d39714405582d3c = function() { return handleError(function () { - const ret = globalThis.globalThis; - return addHeapObject(ret); - }, arguments) }; - imports.wbg.__wbg_global_651f05c6a0944d1c = function() { return handleError(function () { - const ret = global.global; - return addHeapObject(ret); - }, arguments) }; - imports.wbg.__wbg_set_502d29070ea18557 = function(arg0, arg1, arg2) { - getObject(arg0)[arg1 >>> 0] = takeObject(arg2); - }; - imports.wbg.__wbg_isArray_4c24b343cb13cfb1 = function(arg0) { - const ret = Array.isArray(getObject(arg0)); - return ret; - }; - imports.wbg.__wbg_instanceof_ArrayBuffer_39ac22089b74fddb = function(arg0) { - let result; - try { - result = getObject(arg0) instanceof ArrayBuffer; - } catch { - result = false; - } - const ret = result; - return ret; - }; - imports.wbg.__wbg_call_01734de55d61e11d = function() { 
return handleError(function (arg0, arg1, arg2) { - const ret = getObject(arg0).call(getObject(arg1), getObject(arg2)); - return addHeapObject(ret); - }, arguments) }; - imports.wbg.__wbg_isSafeInteger_bb8e18dd21c97288 = function(arg0) { - const ret = Number.isSafeInteger(getObject(arg0)); - return ret; - }; - imports.wbg.__wbg_buffer_085ec1f694018c4f = function(arg0) { - const ret = getObject(arg0).buffer; - return addHeapObject(ret); - }; - imports.wbg.__wbg_newwithbyteoffsetandlength_6da8e527659b86aa = function(arg0, arg1, arg2) { - const ret = new Uint8Array(getObject(arg0), arg1 >>> 0, arg2 >>> 0); - return addHeapObject(ret); - }; - imports.wbg.__wbg_new_8125e318e6245eed = function(arg0) { - const ret = new Uint8Array(getObject(arg0)); - return addHeapObject(ret); - }; - imports.wbg.__wbg_set_5cf90238115182c3 = function(arg0, arg1, arg2) { - getObject(arg0).set(getObject(arg1), arg2 >>> 0); - }; - imports.wbg.__wbg_length_72e2208bbc0efc61 = function(arg0) { - const ret = getObject(arg0).length; - return ret; - }; - imports.wbg.__wbg_instanceof_Uint8Array_d8d9cb2b8e8ac1d4 = function(arg0) { - let result; - try { - result = getObject(arg0) instanceof Uint8Array; - } catch { - result = false; - } - const ret = result; - return ret; - }; - imports.wbg.__wbg_newwithlength_e5d69174d6984cd7 = function(arg0) { - const ret = new Uint8Array(arg0 >>> 0); - return addHeapObject(ret); - }; - imports.wbg.__wbg_subarray_13db269f57aa838d = function(arg0, arg1, arg2) { - const ret = getObject(arg0).subarray(arg1 >>> 0, arg2 >>> 0); - return addHeapObject(ret); - }; - imports.wbg.__wbindgen_bigint_get_as_i64 = function(arg0, arg1) { - const v = getObject(arg1); - const ret = typeof(v) === 'bigint' ? v : undefined; - getBigInt64Memory0()[arg0 / 8 + 1] = isLikeNone(ret) ? 
BigInt(0) : ret; - getInt32Memory0()[arg0 / 4 + 0] = !isLikeNone(ret); - }; - imports.wbg.__wbindgen_debug_string = function(arg0, arg1) { - const ret = debugString(getObject(arg1)); - const ptr1 = passStringToWasm0(ret, wasm.__wbindgen_malloc, wasm.__wbindgen_realloc); - const len1 = WASM_VECTOR_LEN; - getInt32Memory0()[arg0 / 4 + 1] = len1; - getInt32Memory0()[arg0 / 4 + 0] = ptr1; - }; - imports.wbg.__wbindgen_throw = function(arg0, arg1) { - throw new Error(getStringFromWasm0(arg0, arg1)); - }; - imports.wbg.__wbindgen_memory = function() { - const ret = wasm.memory; - return addHeapObject(ret); - }; - - return imports; -} - -function __wbg_init_memory(imports, maybe_memory) { - -} - -function __wbg_finalize_init(instance, module) { - wasm = instance.exports; - __wbg_init.__wbindgen_wasm_module = module; - cachedBigInt64Memory0 = null; - cachedFloat64Memory0 = null; - cachedInt32Memory0 = null; - cachedUint8Memory0 = null; - - wasm.__wbindgen_start(); - return wasm; -} - -function initSync(module) { - if (wasm !== undefined) return wasm; - - const imports = __wbg_get_imports(); - - __wbg_init_memory(imports); - - if (!(module instanceof WebAssembly.Module)) { - module = new WebAssembly.Module(module); - } - - const instance = new WebAssembly.Instance(module, imports); - - return __wbg_finalize_init(instance, module); -} - -async function __wbg_init(input) { - if (wasm !== undefined) return wasm; - - if (typeof input === 'undefined') { - input = new URL('m-quantized_bg.wasm', import.meta.url); - } - const imports = __wbg_get_imports(); - - if (typeof input === 'string' || (typeof Request === 'function' && input instanceof Request) || (typeof URL === 'function' && input instanceof URL)) { - input = fetch(input); - } - - __wbg_init_memory(imports); - - const { instance, module } = await __wbg_load(await input, imports); - - return __wbg_finalize_init(instance, module); -} - -export { initSync } -export default __wbg_init; diff --git a/spaces/radames/Real-Time-Latent-Consistency-Model/img2img/tailwind.config.js b/spaces/radames/Real-Time-Latent-Consistency-Model/img2img/tailwind.config.js deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/radames/gradio-chatbot-read-query-param/app.py b/spaces/radames/gradio-chatbot-read-query-param/app.py deleted file mode 100644 index b8acfc519bd9dad49f2581a1b443a6d2c056a37b..0000000000000000000000000000000000000000 --- a/spaces/radames/gradio-chatbot-read-query-param/app.py +++ /dev/null @@ -1,50 +0,0 @@ -import gradio as gr -import time -import random -import json - -get_window_url_params = """ - function() { - const params = new URLSearchParams(window.location.search); - const url_params = Object.fromEntries(params); - return url_params; - } - """ - -with gr.Blocks() as demo: - gr.Markdown("""## Gradio send queryparam to chatbot -type `read query` -""") - url_params = gr.JSON({}, visible=False, label="URL Params") - chatbot = gr.Chatbot().style(height=500) - msg = gr.Textbox() - clear = gr.Button("Clear") - - def user(user_message, url_params, history): - return "", history + [[user_message, None]] - - def bot(history, url_params): - if "read query" in history[-1][0]: - bot_message = f""" - here your URL params: - {json.dumps(url_params, indent=4)} - """ - else: - bot_message = random.choice(["Yes", "No"]) - history[-1][1] = bot_message - time.sleep(1) - return history - - msg.submit(user, inputs=[msg, url_params, chatbot], outputs=[msg, chatbot], queue=False).then( - 
fn=bot, inputs=[chatbot, url_params], outputs=[chatbot] - ) - clear.click(lambda: None, None, chatbot, queue=False) - demo.load( - fn=lambda x: x, - inputs=[url_params], - outputs=[url_params], - _js=get_window_url_params, - queue=False - ) - -demo.launch() diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Alea Jacta Est Hannibal Terror Of Rome Torrent Download [addons] WORK.md b/spaces/raedeXanto/academic-chatgpt-beta/Alea Jacta Est Hannibal Terror Of Rome Torrent Download [addons] WORK.md deleted file mode 100644 index e772f574d337246721e4b231312c7c3d5dfa1d3c..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Alea Jacta Est Hannibal Terror Of Rome Torrent Download [addons] WORK.md +++ /dev/null @@ -1,31 +0,0 @@ -
-

Alea Jacta Est: Hannibal Terror of Rome Torrent Download [addons]

-

If you are a fan of historical strategy games, you might be interested in Alea Jacta Est: Hannibal Terror of Rome, a game that lets you relive the epic wars between Rome and Carthage. You can play as either side and lead your armies across the Mediterranean, Africa, and Spain. You can also experience the famous battles of Cannae, Zama, and Trasimene.

-

Alea Jacta Est: Hannibal Terror of Rome Torrent Download [addons]


Download File ————— https://tinourl.com/2uL1Ty



-

But what if you want to enhance your gaming experience with some addons? Well, you are in luck, because there are several torrent sites that offer Alea Jacta Est: Hannibal Terror of Rome torrent download with addons. These addons can add new features, scenarios, units, graphics, and more to the game. Some of the most popular addons are:

-
    -
  • The Punic Wars: This addon adds a new campaign that covers the entire period of the Punic Wars, from 264 BC to 146 BC. You can choose from 10 different factions and fight for supremacy over the Mediterranean.
  • -
  • The Rise of Rome: This addon focuses on the early expansion of Rome, from 280 BC to 201 BC. You can play as Rome or one of its enemies, such as Pyrrhus, the Samnites, or the Gauls.
  • -
  • The Roman Civil Wars: This addon depicts the turbulent era of the Roman Republic, from 91 BC to 31 BC. You can take part in the conflicts between Marius and Sulla, Caesar and Pompey, Antony and Octavian, and more.
  • -
-

To download these addons, you need to have a torrent client installed on your computer. Then, you can visit one of the torrent sites that offer Alea Jacta Est: Hannibal Terror of Rome torrent download with addons. Some of these sites are:

-
    -
  • The Pirate Bay: This is one of the most popular and reliable torrent sites on the internet. You can find many torrents for Alea Jacta Est: Hannibal Terror of Rome with addons here.
  • -
  • Kickass Torrents: This is another well-known torrent site that has a large collection of games torrents. You can also find Alea Jacta Est: Hannibal Terror of Rome with addons here.
  • -
  • RARBG: This is a torrent site that specializes in high-quality torrents. You can download Alea Jacta Est: Hannibal Terror of Rome with addons in HD here.
  • -
-

      However, before you download any torrents, you should be aware of the risks involved. Downloading copyrighted games through torrents is illegal in many countries and can expose you to malware, viruses, and hackers. You should also use a VPN service to protect your privacy and security online. A VPN can hide your IP address and encrypt your traffic, making it harder for anyone to track or spy on you.
      

-

-

Alea Jacta Est: Hannibal Terror of Rome is a great game for history buffs and strategy fans alike. With the help of some addons, you can make it even better. Just remember to be careful when downloading torrents and use a VPN service to stay safe online.

- -

If you are wondering what makes Alea Jacta Est: Hannibal Terror of Rome different from other strategy games, here are some of its features:

-
    -
  • Historical accuracy: The game is based on historical sources and research, and tries to recreate the historical events and situations as faithfully as possible. You can learn about the history and culture of the ancient world while playing.
  • -
  • Complex gameplay: The game has a deep and rich gameplay system that covers many aspects of warfare, diplomacy, politics, economy, and culture. You have to manage your resources, plan your strategies, deal with your allies and enemies, and face random events and challenges.
  • -
  • Multiple scenarios: The game offers several scenarios that cover different periods and regions of the ancient world. You can play the main campaign that follows the Punic Wars, or choose from other scenarios that focus on specific wars or regions.
  • -
  • Moddability: The game is highly moddable and allows you to customize your gaming experience. You can create your own scenarios, factions, units, events, and more. You can also download mods created by other players and enjoy new content and features.
  • -
-

      Alea Jacta Est: Hannibal Terror of Rome will appeal to anyone who loves history and strategy. It challenges your skills and knowledge, immerses you in the ancient world, and offers hours upon hours of play without getting stale.
      

-

So what are you waiting for? Download Alea Jacta Est: Hannibal Terror of Rome torrent with addons today and enjoy this amazing game. And don't forget to use a VPN service to protect yourself online. Happy gaming!

      
-
-
\ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Encore enlpc 2s1p driver Download and update the latest version for Windows and Linux.md b/spaces/raedeXanto/academic-chatgpt-beta/Encore enlpc 2s1p driver Download and update the latest version for Windows and Linux.md deleted file mode 100644 index d7ed0df9c258d00e4cb0803414890227e124571f..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Encore enlpc 2s1p driver Download and update the latest version for Windows and Linux.md +++ /dev/null @@ -1,114 +0,0 @@ - -

Encore enlpc 2s1p driver: What is it and how to install it?

-

      If you are looking for a way to connect up to three devices at the same time, such as printers, scanners, barcode readers, and more, you might want to consider the Encore enlpc 2s1p card and its driver. The card is a PCI adapter that adds two serial ports and one parallel port to your desktop computer. In this article, we will explain what the Encore enlpc 2s1p driver is, what it can do for you, and how to install it on your PC.
      

-

Encore enlpc 2s1p driver


Download Filehttps://tinourl.com/2uL2DV



-

Features

-

Encore enlpc 2s1p driver has many features that make it a great choice for expanding your connectivity options. Here are some of them:

-
    -
        • It is compatible with the 16C550 UART, which supports automatic half-duplex receive/transmit switching and IrDA infrared encoding and decoding.
      
  • -
        • It supports communication baud rates of up to 115,200 bps, the usual maximum for a standard PC serial port.
      
  • -
  • It supports various serial modes, such as RS232, RS422, and RS485, as well as different parallel modes, such as SPP, Nibble, Byte, PS/2, EPP, and ECP.
  • -
  • It supports hardware and software flow control, as well as custom baud rate by programming internal PLL or external clock.
  • -
  • It supports remote wakeup and power management features, as well as serial port transceiver shutdown support.
  • -
  • It supports both legacy and MSI interrupt, as well as PCIe power management.
  • -
-

Compatibility

-

Encore enlpc 2s1p driver is compatible with various operating systems and devices. Here are some of them:

-
    -
  • It supports Windows XP, Vista, 7, 8, and Linux operating systems.
  • -
  • It can be used with any device that has a serial or parallel port interface, such as printers, scanners, barcode readers, modems, PDAs, digital cameras, etc.
  • -
  • It can be installed on any desktop computer that has a PCI Express slot.
  • -
-

Installation

-

Installing Encore enlpc 2s1p driver is easy and straightforward. Just follow these steps:

-

Encore enlpc 2s1p driver download
-Encore enlpc 2s1p driver windows 10
-Encore enlpc 2s1p driver update
-Encore enlpc 2s1p driver installation
-Encore enlpc 2s1p driver error
-Encore enlpc 2s1p driver compatibility
-Encore enlpc 2s1p driver manual
-Encore enlpc 2s1p driver review
-Encore enlpc 2s1p driver troubleshooting
-Encore enlpc 2s1p driver support
-Encore enlpc 2s1p pci card
-Encore enlpc 2s1p pci card driver
-Encore enlpc 2s1p pci card installation
-Encore enlpc 2s1p pci card error
-Encore enlpc 2s1p pci card compatibility
-Encore enlpc 2s1p pci card manual
-Encore enlpc 2s1p pci card review
-Encore enlpc 2s1p pci card troubleshooting
-Encore enlpc 2s1p pci card support
-Encore electronics enlpc 2s1p
-Encore electronics enlpc 2s1p driver
-Encore electronics enlpc 2s1p driver download
-Encore electronics enlpc 2s1p driver windows 10
-Encore electronics enlpc 2s1p driver update
-Encore electronics enlpc 2s1p driver installation
-Encore electronics enlpc 2s1p driver error
-Encore electronics enlpc 2s1p driver compatibility
-Encore electronics enlpc 2s1p driver manual
-Encore electronics enlpc 2s1p driver review
-Encore electronics enlpc 2s1p driver troubleshooting
-Encore electronics enlpc 2s1p driver support
-Encore electronics enlpc 2s1p pci card
-Encore electronics enlpc 2s1p pci card driver
-Encore electronics enlpc 2s1p pci card installation
-Encore electronics enlpc 2s1p pci card error
-Encore electronics enlpc 2s1p pci card compatibility
-Encore electronics enlpc 2s1p pci card manual
-Encore electronics enlpc 2s1p pci card review
-Encore electronics enlpc 2s1p pci card troubleshooting
-Encore electronics enlpc 2s1p pci card support
-How to install encore enlpc 2s1p driver
-How to update encore enlpc 2s1p driver
-How to fix encore enlpc 2s1p driver error
-How to check encore enlpc 2s1p driver compatibility
-How to use encore enlpc 2s1p driver manual
-How to uninstall encore enlpc 2s1p driver
-How to contact encore enlpc 2s1p driver support
-Best alternative to encore enlpc 2s1p driver
-Benefits of encore enlpc 2s1p driver
-Drawbacks of encore enlpc 2s1p driver

-
    -
  1. Turn off your computer and unplug the power cord.
  2. -
  3. Open the computer case and locate an available PCI Express slot.
  4. -
  5. Insert the Encore enlpc 2s1p adapter into the slot and secure it with a screw.
  6. -
  7. Close the computer case and plug the power cord back in.
  8. -
  9. Turn on your computer and wait for the operating system to detect the new hardware.
  10. -
  11. Insert the driver CD that came with the adapter into your CD-ROM drive.
  12. -
  13. Follow the on-screen instructions to install the driver software.
  14. -
  15. Restart your computer if prompted.
  16. -
  17. Connect your devices to the serial or parallel ports of the adapter using appropriate cables.
  18. -
-

Troubleshooting

-

If you encounter any problems with Encore enlpc 2s1p driver, here are some tips that might help you:

-
    -
  • Make sure that you have installed the correct driver software for your operating system.
  • -
  • Make sure that you have connected your devices to the right ports of the adapter.
  • -
  • Make sure that your devices are compatible with the adapter and have the latest drivers installed.
  • -
  • Make sure that your devices are powered on and functioning properly.
  • -
  • If you have multiple devices connected to the adapter, try disconnecting some of them and see if that solves the problem.
  • -
  • If you have other PCI or PCIe cards installed on your computer, try changing their slots or removing them temporarily and see if that solves the problem.
  • -
  • If none of these tips work, contact Encore Electronics customer support for further assistance.
  • -
-

Conclusion

-

      In conclusion, the Encore enlpc 2s1p card and its driver give you a simple way to connect up to three devices at the same time using serial or parallel ports. The card has many features that make it a good choice for expanding your connectivity options, it is easy to install, and it is compatible with a wide range of operating systems and devices. If you are looking for a way to upgrade your desktop computer through a spare expansion slot, the Encore enlpc 2s1p is worth a try.
      
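      If you want to confirm that the new ports actually work after installation, a few lines of Python are enough to list the system's serial ports and open one of them with the settings described above (115,200 bps, 8 data bits, no parity, 1 stop bit, hardware flow control). This is only a rough sketch, not part of the driver package: it assumes the pyserial package is installed and that the operating system exposes the card's ports as ordinary serial ports, and "COM3" is a placeholder device name rather than a value taken from this article.

      ```python
      # Minimal sanity check for the adapter's serial ports (sketch; assumes pyserial).
      import serial
      from serial.tools import list_ports

      # List every serial port the operating system currently exposes,
      # so you can see whether the two new ports have appeared.
      for port in list_ports.comports():
          print(port.device, "-", port.description)

      # Open one of the new ports with the settings the card supports:
      # 115200 baud, 8 data bits, no parity, 1 stop bit, RTS/CTS flow control.
      with serial.Serial(
          "COM3",                      # placeholder; use a device name printed above
          baudrate=115200,
          bytesize=serial.EIGHTBITS,
          parity=serial.PARITY_NONE,
          stopbits=serial.STOPBITS_ONE,
          rtscts=True,
          timeout=1,
      ) as ser:
          ser.write(b"AT\r\n")         # example query for a modem-style device
          print(ser.readline())        # read back one line of the reply
      ```

      On Linux the same ports would typically show up as /dev/ttyS* devices instead of COM names, so only the placeholder port name needs to change.
      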

-

Frequently Asked Questions

-

Here are some frequently asked questions about Encore enlpc 2s1p driver:

-
    -
  1. What is the difference between serial port and parallel port?
  2. -

      A serial port transmits data one bit at a time over a single data line, while a parallel port transmits several bits at once over multiple wires. A classic parallel port moves more data per transfer than a legacy serial port such as RS-232, but it needs more wires, shorter cables, and more physical space; serial ports are simpler to cable and, in their modern forms, reach far higher speeds.
      

    -
  3. What is UART?
  4. -

    UART stands for Universal Asynchronous Receiver/Transmitter. It is a chip that converts parallel data into serial data and vice versa. It also handles communication protocols such as baud rate, parity bit, stop bit, etc. UART is commonly used in serial communication devices such as modems, printers, scanners, etc.

    -
  5. What is IrDA?
  6. -

    IrDA stands for Infrared Data Association. It is a standard for wireless communication using infrared light. IrDA can be used for short-range data transfer between devices such as PDAs, laptops, mobile phones, etc. IrDA requires line-of-sight between devices and has a low data rate compared to other wireless technologies such as Bluetooth or Wi-Fi.

    -
  7. What is EPP/ECP?
  8. -

    EPP stands for Enhanced Parallel Port. It is a mode of parallel port that allows bi-directional data transfer at high speed. ECP stands for Extended Capability Port. It is another mode of parallel port that allows bi-directional data transfer at high speed with additional features such as DMA (Direct Memory Access) support. EPP/ECP modes are commonly used by devices such as scanners or external hard drives that require fast data transfer over parallel port.

    -
  9. What is MSI interrupt?
  10. -

    MSI stands for Message Signaled Interrupt. It is a type of interrupt mechanism that uses messages instead of physical signals to notify devices of events. MSI interrupt reduces latency and improves performance compared to legacy interrupt mechanisms such as IRQ (Interrupt Request) lines or INTx pins. MSI interrupt is supported by PCI Express specification.

    -
- "

      
-
-
\ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Epson T1110 Adjustment Program Free Full How It Works and What It Can Do for Your Printer.md b/spaces/raedeXanto/academic-chatgpt-beta/Epson T1110 Adjustment Program Free Full How It Works and What It Can Do for Your Printer.md deleted file mode 100644 index 4c351bd1505dc23479e350c6de436b608fd3f6a4..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Epson T1110 Adjustment Program Free Full How It Works and What It Can Do for Your Printer.md +++ /dev/null @@ -1,130 +0,0 @@ - -
- How to download Epson T1110 Adjustment Program for free from a reliable source
- How to install and run Epson T1110 Adjustment Program on your computer
- How to use Epson T1110 Adjustment Program to reset your printer's waste ink counter and other settings
      - Conclusion: Summary of the main points and benefits of using Epson T1110 Adjustment Program

      H2: What is Epson T1110 Adjustment Program and why do you need it?
      - Explain what Epson T1110 Adjustment Program is and what it can do for your printer
      
- Mention some common problems that Epson T1110 Adjustment Program can fix, such as waste ink overflow, paper jam, print quality issues, etc.
      - Emphasize the importance of using Epson T1110 Adjustment Program regularly to maintain your printer's performance and avoid costly repairs

      H2: How to download Epson T1110 Adjustment Program for free from a reliable source
      - Warn the readers about the risks of downloading Epson T1110 Adjustment Program from untrusted websites, such as viruses, malware, or fake programs
      
- Recommend a trusted website that offers Epson T1110 Adjustment Program for free download, such as https://www.resetterepson.com/epson-t1110-resetter-adjustment-program-free-download/
      - Provide a step-by-step guide on how to download Epson T1110 Adjustment Program from the website, including screenshots and links

      H2: How to install and run Epson T1110 Adjustment Program on your computer
      - Explain the system requirements and compatibility of Epson T1110 Adjustment Program with different operating systems
      
- Provide a step-by-step guide on how to install Epson T1110 Adjustment Program on your computer, including screenshots and instructions
      - Provide a step-by-step guide on how to run Epson T1110 Adjustment Program on your computer, including screenshots and instructions

      H2: How to use Epson T1110 Adjustment Program to reset your printer's waste ink counter and other settings
      - Explain what the waste ink counter is and why it needs to be reset periodically
      
- Provide a step-by-step guide on how to use Epson T1110 Adjustment Program to reset the waste ink counter, including screenshots and instructions
- Explain what other settings can be adjusted with Epson T1110 Adjustment Program, such as print head alignment, nozzle check, cleaning, etc.
      - Provide a step-by-step guide on how to use Epson T1110 Adjustment Program to adjust other settings, including screenshots and instructions

      H2: Conclusion: Summary of the main points and benefits of using Epson T1110 Adjustment Program
      - Summarize the main points of the article in a few sentences
      
- Highlight the benefits of using Epson T1110 Adjustment Program for your printer, such as saving money, time, and hassle
      - Encourage the readers to try Epson T1110 Adjustment Program for themselves and share their feedback

      H3: FAQs
      - Provide 5 unique FAQs related to the topic of the article, such as:
      
Q: How often should I use Epson T1110 Adjustment Program?
A: It depends on how frequently you use your printer and how much ink you consume. Generally, it is recommended to use Epson T1110 Adjustment Program once every 6 months or when you notice any problem with your printer.
Q: Will using Epson T1110 Adjustment Program void my printer's warranty?
A: No, using Epson T1110 Adjustment Program will not affect your printer's warranty. However, you should always follow the instructions carefully and use the program at your own risk.
Q: Can I use Epson T1110 Adjustment Program for other models of printers?
A: No, Epson T1110 Adjustment Program is designed specifically for Epson Stylus Office T1110 printers. Using it for other models may cause damage or malfunction. You should always use the appropriate adjustment program for your printer model.
Q: Where can I find more information about Epson T1110 Adjustment Program?
A: You can find more information about Epson T1110 Adjustment Program on the official website of ResetterEpson.com or by contacting their customer support team. They will be happy to assist you with any questions or issues you may have.
Q: What are some alternatives to using Epson T1110 Adjustment Program?
A: If you don't want to use Epson T1110 Adjustment Program or if it doesn't work for you, you can try some other methods to fix your printer problems. For example, you can manually clean the print head or replace the ink cartridges. However, these methods may not be as effective or convenient as using Epson T1110 Adjustment Program. | Article with HTML formatting:

How to Download and Use Epson T1110 Adjustment Program for Free

-

If you own an Epson Stylus Office T1110 printer, you may have encountered some problems with it over time. For example, you may have noticed that your printer's waste ink pad is full or that your print quality is poor. These problems can affect your printer's performance and functionality.

-

epson t1110 adjustment program free Full


Download Zip ✺✺✺ https://tinourl.com/2uL3be



-

Luckily, there is a simple solution for these problems. You can use a software tool called Epson T1110 Adjustment Program to reset your printer's waste ink counter and adjust other settings. This tool can help you fix your printer issues and save you money and time.

-

In this article, we will show you how to download and use Epson T1110 Adjustment Program for free from a reliable source. We will also explain what this tool can do for your printer and why you need it. By following our guide, you will be able to enjoy your printer's optimal performance again.

-

What is Epson T1110 Adjustment Program and why do you need it?

-

Epson T1110 Adjustment Program is a software tool that allows you to reset your printer's waste ink counter and adjust other settings. It is also known as a resetter or a maintenance utility.

-

The waste ink counter is a feature that tracks how much ink is used by your printer during cleaning cycles and printing processes. When the waste ink counter reaches a certain limit, your printer will stop working and display an error message such as "Service required" or "Parts inside your printer are near the end of their service life". This is because your printer's waste ink pad is full and needs to be replaced.

-

Epson T1110 Adjustment Program can help you reset the waste ink counter so that your printer can resume working normally. You don't need to replace the waste ink pad or take your printer to a service center. You can do it yourself at home with just a few clicks.

-

Besides resetting the waste ink counter, Epson T1110 Adjustment Program can also help you adjust other settings such as print head alignment, nozzle check, cleaning, etc. These settings can affect your print quality and performance. By using this tool regularly, you can maintain your printer's optimal condition and prevent future problems.

-

epson t1110 resetter software free download full version
-how to use epson t1110 adjustment program for free
-epson t1110 printer service required solution free
-epson t1110 waste ink pad resetter free download
-epson t1110 ink level reset software free full
-epson t1110 adjustment program cracked free download
-epson t1110 head cleaning software free full version
-epson t1110 maintenance mode software free download
-epson t1110 error code resetter free full
-epson t1110 firmware update tool free download full
-epson t1110 paper jam reset software free full
-epson t1110 print quality adjustment program free
-epson t1110 nozzle check software free download full
-epson t1110 alignment tool free full version
-epson t1110 counter reset software free download
-epson t1110 ink charge software free full version
-epson t1110 power cleaning software free download
-epson t1110 manual feed adjustment program free
-epson t1110 borderless printing software free full
-epson t1110 duplex printing software free download
-epson t1110 color management software free full version
-epson t1110 network setup software free download
-epson t1110 wireless printing software free full version
-epson t1110 bluetooth printing software free download
-epson t1110 cloud printing software free full version
-epson t1110 scan to email software free download
-epson t1110 scan to pdf software free full version
-epson t1110 ocr software free download full version
-epson t1110 driver for windows 10 64 bit free download
-epson t1110 driver for mac os x free download full version
-epson t1110 driver for linux ubuntu free download full version
-epson t1110 driver for android phone free download full version
-epson t1110 driver for iphone ios free download full version
-epson t1110 driver for chromebook free download full version
-epson t1110 driver for windows 7 32 bit free download full version
-epson t1110 driver for windows 8.1 64 bit free download full version
-epson t1110 driver for windows xp sp3 free download full version
-epson t1110 driver for windows vista 32 bit free download full version
-epson t1110 driver for windows server 2012 r2 free download full version
-epson t1110 driver for windows server 2008 r2 free download full version
-epson t1110 driver for windows server 2003 r2 free download full version
-epson t1110 driver for windows server 2016 r2 free download full version
-epson t1110 driver for windows server 2019 r2 free download full version
-epson t1110 driver for mac os x lion 10.7.5 free download full version
-epson t1110 driver for mac os x mountain lion 10.8.5 free download full version
-epson t1110 driver for mac os x mavericks 10.9.5 free download full version
-epson t1110 driver for mac os x yosemite 10.10.5 free download full version
-epson t1110 driver for mac os x el capitan 10.11.6 free download full version
-epson t1110 driver for mac os x sierra 10.12.6 free download full version
-epson t1110 driver for mac os x high sierra 10.13.6 free download full version

-

Epson T1110 Adjustment Program is an essential tool for any owner of an Epson Stylus Office T1110 printer. It can help you fix common problems and extend your printer's lifespan.

-

How to download Epson T1110 Adjustment Program for free from a reliable source

-

If you want to use Epson T1110 Adjustment Program for free, you need to download it from a reliable source. There are many websites that offer this tool for free download, but not all of them are trustworthy. Some of them may contain viruses, malware, or fake programs that can harm your computer or printer.

-

To avoid these risks, we recommend that you download Epson T1110 Adjustment Program from ResetterEpson.com. This website is one of the most trusted sources for downloading adjustment programs for various models of printers. They offer high-quality software tools that are safe and effective.

-

To download Epson T1110 Adjustment Program from ResetterEpson.com, follow these steps:

-
    -
  1. Go to https://www.resetterepson.com/epson-t1110-resetter-adjustment-program-free-download/
  2. -
  3. Click on the "Download" button at the bottom of the page
  4. -
  5. You will be redirected to another page where you need to enter a password
  6. -
  7. The password is "resetterepsons.com" (without quotes)
  8. -
  9. Type in the password and click on "Submit"
  10. -110XXXW_T1110_T1110N_T1110W_T1110X_T1110XL_T1110XN_T1110XW_T1110XXL_T1110XXN_T1110XXW_T1110XXXL_T1110XXXN_T1110XXXW.rar" -
  11. Click on the link and save the file to your computer
  12. -
  13. You have successfully downloaded Epson T1110 Adjustment Program for free
  14. -
-

How to install and run Epson T1110 Adjustment Program on your computer

-

After downloading Epson T1110 Adjustment Program, you need to install and run it on your computer. Here are the steps to do so:

-
    -
  1. Extract the file "Epson_T110_T110N_T110W_T110X_T110XL_T110XN_T110XW_T110XXL_T110XXN_T110XXW_T110XXXL_T110XXXN_T 110XXXW_T1110_T1110N_T1110W_T1110X_T1110XL_T1110XN_T1110XW_T1110XXL_T1110XXN_T1110XXW_T1110XXXL_T1110XXXN_T1110XXXW.rar" using a software such as WinRAR or 7-Zip
  2. -
  3. You will see a folder named "Epson T110-T110N-T110W-T110X-T110XL-T110XN-T110XW-T110XXL-T110XXN-T110XXW-T 110XXXL-T110XXXN-T110XXXW-T1110-T1110N-T1110W-T1110X-T1110XL-T1110XN-T1110XW-T 1110XXL-T1110XXN-T1110XXW-T 1110XXXN-T1110XXXW"
  4. -
  5. Open the folder and double-click on the file "AdjProg.exe" to launch Epson T1110 Adjustment Program
  6. -
  7. You may see a warning message from your antivirus software or Windows Defender. This is normal and you can ignore it. Click on "Run anyway" or "Allow" to proceed
  8. -
  9. You will see the main window of Epson T1110 Adjustment Program
  10. -
  11. You have successfully installed and run Epson T1110 Adjustment Program on your computer
  12. -
-

How to use Epson T1110 Adjustment Program to reset your printer's waste ink counter and other settings

-

Now that you have installed and run Epson T1110 Adjustment Program on your computer, you can use it to reset your printer's waste ink counter and other settings. Here are the steps to do so:

-
    -
  1. Make sure that your printer is connected to your computer via USB cable and turned on
  2. -
  3. In the main window of Epson T1110 Adjustment Program, click on "Select"
  4. -
  5. In the pop-up window, select your printer model (Epson Stylus Office T1110) and port (Auto Selection)
  6. -
  7. Click on "OK"
  8. -
  9. In the main window of Epson T1110 Adjustment Program, click on "Particular adjustment mode"
  10. -
  11. In the next window, select "Waste ink pad counter" and click on "OK"
  12. -
  13. In the next window, check the box next to "Main pad counter" and click on "Check"
  14. -
  15. You will see the current value of your waste ink counter. If it is above 100%, you need to reset it
  16. -
  17. Click on "Initialization" and wait for a few seconds
  18. -
  19. You will see a message saying "Please turn off printer". Click on "OK" and turn off your printer
  20. -
  21. Wait for 10 seconds and turn on your printer again
  22. -
  23. You have successfully reset your printer's waste ink counter
  24. -
-

Besides resetting the waste ink counter, you can also use Epson T1110 Adjustment Program to adjust other settings such as print head alignment, nozzle check, cleaning, etc. To do so, follow these steps:

-
    -
  1. In the main window of Epson T1110 Adjustment Program, click on "Particular adjustment mode"
  2. -
  3. In the next window, select the setting that you want to adjust from the list. For example, if you want to align the print head, select "Head alignment"
  4. -
  5. Click on "OK"
  6. -
  7. Follow the instructions on the screen to complete the adjustment process. For example, if you are aligning the print head, you will need to print a test pattern and choose the best alignment option
  8. -
  9. You have successfully adjusted your printer's setting
  10. -
-

Conclusion: Summary of the main points and benefits of using Epson T1110 Adjustment Program

-

In this article, we have shown you how to download and use Epson T1110 Adjustment Program for free from a reliable source. We have also explained what this tool can do for your printer and why you need it.

-

Epson T1110 Adjustment Program is a software tool that allows you to reset your printer's waste ink counter and adjust other settings. It can help you fix common problems and extend your printer's lifespan.

-

By using this tool regularly, you can maintain your printer's optimal performance and avoid costly repairs. You can also save money and time by doing it yourself at home with just a few clicks.

-

We hope that you have found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you.

-

FAQs

-
    -
  • Q: How often should I use Epson T1110 Adjustment Program?
  • -
  • A: It depends on how frequently you use your printer and how much ink you consume. Generally, it is recommended to use Epson T1110 Adjustment Program once every 6 months or when you notice any problem with your printer.
  • -
  • Q: Will using Epson T1110 Adjustment Program void my printer's warranty?
  • -
  • A: No, using Epson T1110 Adjustment Program will not affect your printer's warranty. However, you should always follow the instructions carefully and use the program at your own risk.
  • -
  • Q: Can I use Epson T1110 Adjustment Program for other models of printers?
  • -
  • A: No, Epson T1110 Adjustment Program is designed specifically for Epson Stylus Office T1110 printers. Using it for other models may cause damage or malfunction. You should always use the appropriate adjustment program for your printer model.
  • -
  • Q: Where can I find more information about Epson T1110 Adjustment Program?
  • -
  • A: You can find more information about Epson T1110 Adjustment Program on the official website of ResetterEpson.com or by contacting their customer support team. They will be happy to assist you with any questions or issues you may have.
  • -
  • Q: What are some alternatives to using Epson T1110 Adjustment Program?
  • -
  • A: If you don't want to use Epson T1110 Adjustment Program or if it doesn't work for you, you can try some other methods to fix your printer problems. For example, you can manually clean the print head or replace the ink cartridges. However, these methods may not be as effective or convenient as using Epson T1110 Adjustment Program.
  • -

      
    -
    -
    \ No newline at end of file diff --git a/spaces/raelfromgenesis/oai-proxy/greeting.md b/spaces/raelfromgenesis/oai-proxy/greeting.md deleted file mode 100644 index 5408875a87eb4fe56ebaebb7dc35cd68b997b3c4..0000000000000000000000000000000000000000 --- a/spaces/raelfromgenesis/oai-proxy/greeting.md +++ /dev/null @@ -1 +0,0 @@ -https://youtu.be/tax7ZlWyYvA?si=DW634X48ywl8bqPY \ No newline at end of file diff --git a/spaces/rd13/Pix2Pix-Video/app.py b/spaces/rd13/Pix2Pix-Video/app.py deleted file mode 100644 index 9504a98dc7f12dfcae08af834153bef32f3759b3..0000000000000000000000000000000000000000 --- a/spaces/rd13/Pix2Pix-Video/app.py +++ /dev/null @@ -1,248 +0,0 @@ -import gradio as gr -import os -import cv2 -import numpy as np -from moviepy.editor import * -from share_btn import community_icon_html, loading_icon_html, share_js - -from diffusers import DiffusionPipeline, EulerAncestralDiscreteScheduler -import torch -from PIL import Image -import time -import psutil -import random - -is_shared_ui = True if "AIFILMS/Pix2Pix-Video" in os.environ['SPACE_ID'] else False - -pipe = DiffusionPipeline.from_pretrained("timbrooks/instruct-pix2pix", torch_dtype=torch.float16, safety_checker=None) -pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) -if(not is_shared_ui): - pipe.enable_xformers_memory_efficient_attention() -pipe.unet.to(memory_format=torch.channels_last) - -device = "GPU 🔥" if torch.cuda.is_available() else "CPU 🥶" - -if torch.cuda.is_available(): - pipe = pipe.to("cuda") - -def pix2pix( - prompt, - text_guidance_scale, - image_guidance_scale, - image, - steps, - neg_prompt="", - width=512, - height=512, - seed=0, -): - print(psutil.virtual_memory()) # print memory usage - - if seed == 0: - seed = random.randint(0, 2147483647) - - generator = torch.Generator("cuda").manual_seed(seed) - - try: - image = Image.open(image) - ratio = min(height / image.height, width / image.width) - image = image.resize((int(image.width * ratio), int(image.height * ratio)), Image.LANCZOS) - - result = pipe( - prompt, - negative_prompt=neg_prompt, - image=image, - num_inference_steps=int(steps), - image_guidance_scale=image_guidance_scale, - guidance_scale=text_guidance_scale, - generator=generator, - ) - - # return replace_nsfw_images(result) - return result.images, result.nsfw_content_detected, seed - except Exception as e: - return None, None, error_str(e) - -def error_str(error, title="Error"): - return ( - f"""#### {title} - {error}""" - if error - else "" - ) - -def get_frames(video_in): - frames = [] - #resize the video - clip = VideoFileClip(video_in) - - #check fps - if clip.fps > 30: - print("vide rate is over 30, resetting to 30") - clip_resized = clip.resize(height=512) - clip_resized.write_videofile("video_resized.mp4", fps=30) - else: - print("video rate is OK") - clip_resized = clip.resize(height=512) - clip_resized.write_videofile("video_resized.mp4", fps=clip.fps) - - print("video resized to 512 height") - - # Opens the Video file with CV2 - cap= cv2.VideoCapture("video_resized.mp4") - - fps = cap.get(cv2.CAP_PROP_FPS) - print("video fps: " + str(fps)) - i=0 - while(cap.isOpened()): - ret, frame = cap.read() - if ret == False: - break - cv2.imwrite('kang'+str(i)+'.jpg',frame) - frames.append('kang'+str(i)+'.jpg') - i+=1 - - cap.release() - cv2.destroyAllWindows() - print("broke the video into frames") - - return frames, fps - - -def create_video(frames, fps): - print("building video result") - clip = ImageSequenceClip(frames, fps=fps) - 
clip.write_videofile("movie.mp4", fps=fps) - - return 'movie.mp4' - - -def infer(prompt,video_in, seed_in, trim_value): - if(is_shared_ui): - raise gr.Error("This Space doesn't work on this shared UI.") - print(prompt) - break_vid = get_frames(video_in) - - frames_list= break_vid[0] - fps = break_vid[1] - n_frame = int(trim_value*fps) - - if n_frame >= len(frames_list): - print("video is shorter than the cut value") - n_frame = len(frames_list) - - result_frames = [] - print("set stop frames to: " + str(n_frame)) - - for i in frames_list[0:int(n_frame)]: - pix2pix_img = pix2pix(prompt,5.5,1.5,i,15,"",512,512,seed_in) - images = pix2pix_img[0] - rgb_im = images[0].convert("RGB") - - # exporting the image - rgb_im.save(f"result_img-{i}.jpg") - result_frames.append(f"result_img-{i}.jpg") - print("frame " + i + "/" + str(n_frame) + ": done;") - - final_vid = create_video(result_frames, fps) - print("finished !") - - return final_vid, gr.Group.update(visible=True) - -title = """ -
    -
    -

    - Pix2Pix Video -

    -
    -

    - Apply Instruct Pix2Pix Diffusion to a video -

    -
    -""" - -article = """ - - -
    -

    You may also like:

    -
    - - - - - - - -
    - -
    - -""" - -with gr.Blocks(css='style.css') as demo: - if(is_shared_ui): - with gr.Box(): - top_description = gr.HTML(f''' -
    -

    Attention - This Space doesn't work in this shared UI

    -

    For it to work, you can access the original or duplicate this Space and run it on your own profile using a GPU.  Duplicate Space

    -
    - ''') - with gr.Column(elem_id="col-container"): - gr.HTML(title) - with gr.Row(): - with gr.Column(): - video_inp = gr.Video(label="Video source", source="upload", type="filepath", elem_id="input-vid") - prompt = gr.Textbox(label="Prompt", placeholder="enter prompt", show_label=False, elem_id="prompt-in") - with gr.Row(): - seed_inp = gr.Slider(label="Seed", minimum=0, maximum=2147483647, step=1, value=123456) - trim_in = gr.Slider(label="Cut video at (s)", minimun=1, maximum=3, step=1, value=1) - with gr.Column(): - video_out = gr.Video(label="Pix2pix video result", elem_id="video-output") - gr.HTML(""" - Duplicate Space - work with longer videos / skip the queue: - """, elem_id="duplicate-container") - submit_btn = gr.Button("Generate Pix2Pix video") - - with gr.Group(elem_id="share-btn-container", visible=False) as share_group: - community_icon = gr.HTML(community_icon_html) - loading_icon = gr.HTML(loading_icon_html) - share_button = gr.Button("Share to community", elem_id="share-btn") - - inputs = [prompt,video_inp,seed_inp, trim_in] - outputs = [video_out, share_group] - - ex = gr.Examples( - [ - ["Make it a marble sculpture", "./examples/pexels-jill-burrow-7665249_512x512.mp4", 422112651, 4], - ["Make it molten lava", "./examples/Ocean_Pexels_ 8953474_512x512.mp4", 43571876, 4] - ], - inputs=inputs, - outputs=outputs, - fn=infer, - cache_examples=False, - ) - - gr.HTML(article) - - submit_btn.click(infer, inputs, outputs) - share_button.click(None, [], [], _js=share_js) - - - -demo.launch().queue(max_size=12) diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Aces High III Torrent Download [FULL] [HOT].md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Aces High III Torrent Download [FULL] [HOT].md deleted file mode 100644 index 285b43b13f812da704b053a7ab4b253fc8c5a67a..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Aces High III Torrent Download [FULL] [HOT].md +++ /dev/null @@ -1,30 +0,0 @@ -

    Aces High III Torrent Download [FULL]


    Download ✯✯✯ https://urlgoal.com/2uCKxV



    -
    -Force the enemy to reveal his true identity as he goes into hiding on a giant map. Charge into enemy territory with the help of a drone and attack a concentration of enemy units. Even more! There is a brand new naval battleship battle map to help you take the fight to the enemy in a WW2 submarine scenario. Every level brings you closer to the heart of the war and you'll never lose sight of the goal. On a side note, you'll be flying two new aircraft: the German WWII Bf 109 and the British Spitfire Mk IX. - -Features - -Fly the legendary Fw 190D-9/R4 on a mission to shoot down the entire enemy air force in a massive air battle. - -Escape enemy pilots using the new redesigned control scheme, the Second Wind system and your newly-gained heat vision. - -Experience a brand new submarine-based action in a WW2 setting that will have you jumping into the water and swimming for dear life as you try to escape from the enemy. - -Use your new spy drone to locate the enemy and destroy them while trying to avoid enemy anti-aircraft batteries. - -Fight in the sea with the new battleship map. - -As soon as you beat the level, the next one will be unlocked. - -As a free update, you can also add your name to the credits. - -Game Modes - -- Arcade - In the Arcade mode, you'll need to complete the levels of every new level, kill the enemy before he does and score as many points as you can. If you get hit, you'll lose points and the enemy will start scoring, too. The more you score the more the enemy loses. Once you beat all the levels, you'll get a high score and you'll be able to view it in-game. Note: It takes more time to unlock the next game than it should! - -- Survival - In the survival mode, you can only use the Second Wind. There's a lot of enemies and you only have 1 life per level. There are no scores. When you die, you will instantly lose points and the enemy will start scoring. - -- Campaign - The Campaign mode takes you into the heart of the war as you try to destroy the enemy. Your first objective will be to destroy the enemy fighter plane that will lead the others into your territory. You will unlock more units and pilots as the mission progresses and more and more planes will be sent to your side as 4fefd39f24
    -
    -
    -

    diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Afterfall.InSanity-KaOs Cheat Engine.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Afterfall.InSanity-KaOs Cheat Engine.md deleted file mode 100644 index bc4b4db7e8db982e6dd793c0fa80a4afd8d157e4..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Afterfall.InSanity-KaOs Cheat Engine.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Afterfall.InSanity-KaOs cheat engine


    Download File » https://urlgoal.com/2uCKT4



    -
    -Afterfall.InSanity-KaOs Cheat Engine on May 3, 2019 in Worldwide at Worldwide. Afterfall.InSanity-KaOs Cheat Engine Description: Release-Title:---------... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Auto Click For Conquer Online 2 [REPACK].md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Auto Click For Conquer Online 2 [REPACK].md deleted file mode 100644 index f2aee0230a47456bb26bf240bac446e1de30c02c..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Auto Click For Conquer Online 2 [REPACK].md +++ /dev/null @@ -1,78 +0,0 @@ -

    Auto Click for Conquer Online 2


    Download Zip ===== https://urlgoal.com/2uCLWT



    -
    -Check it out nowGuests can still arrive and depart from Terminus station, but - -you can only walk across the parking lot to reach the train station. - -The RER J (to the west) will be closed for - -about a year to allow for additional platforms and train - -lounges to be added. - -No Parking is available on Jourdan Boulevard during this time. - -The following restrictions apply to your parking permit when traveling - -to the airport by public transportation: - -Parking in the hotel garage or parking at the USO - -is no longer available. Please use the - -approved parking areas on private property or near parkways. - -You can use the parking - -provided by Paris Air Services. - -Your parking permit may be used for parking in the meters on - -the street - -You may be asked to show proof of your - -parking when parking in the authorized garages or on private - -property. Please have the parking permit available and know the - -address. - -If you have an issue with your parking permit, - -be sure to follow the - -Paris Air Services (PAS) - -instructions on how to retrieve your permit. - -Parking Garage - -We encourage you to use the ACAP Paris - -Air Services parking garage, accessible - -from Jourdan Blvd. It is located at 2200 Jourdan Blvd (between the - -USO and the Charles de Gaulle Airpot).Q: - -Error while trying to convert.avi to.mp4 with ffmpeg - -I'm trying to convert.avi to.mp4 using ffmpeg but it always gives me the following error. - -Cannot find an encoder for codec avi. - -The command I'm using is: - -ffmpeg -i /media/ray/79F2FFE8D8DCEEC1/Saved/1946/file.avi -b:a 96k -vcodec copy -acodec copy file.mp4 - -When I remove -vcodec copy and -acodec copy, the file is correctly converted to.mp4, but it doesn't do what I need. - -I have tried to change the codecs for the input file, the command, or both, but nothing works. - -A: - -The first step is to specify the muxer. This is normally done by making use of the videoinfo metadata, as defined here. 4fefd39f24
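    The ffmpeg question above is cut off before a complete answer. As a hedged sketch (not part of the original post): when copying an AVI's streams straight into an MP4 container fails, a common fix is to re-encode to codecs the MP4 muxer accepts; the file names below are placeholders.

    ```python
    import subprocess

    # Re-encode the video as H.264 and the audio as AAC so the MP4 muxer
    # accepts the streams, instead of copying the original AVI codecs verbatim.
    subprocess.run(
        [
            "ffmpeg", "-i", "input.avi",
            "-c:v", "libx264",             # H.264 video
            "-c:a", "aac", "-b:a", "96k",  # AAC audio at 96 kb/s
            "output.mp4",
        ],
        check=True,
    )
    ```
    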
    -
    -
    -

    diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Crack ((FULL)) ESET NOD32 Antivirus V9.0.386.0 32Bit.exe.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Crack ((FULL)) ESET NOD32 Antivirus V9.0.386.0 32Bit.exe.md deleted file mode 100644 index 43f774f7ad2e02ae5d6c34a3ba83b2e19affcbe6..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Crack ((FULL)) ESET NOD32 Antivirus V9.0.386.0 32Bit.exe.md +++ /dev/null @@ -1,40 +0,0 @@ -

    CRACK ESET NOD32 Antivirus v9.0.386.0 32Bit.exe


    DOWNLOAD ✔✔✔ https://urlgoal.com/2uCM3V



    -
    -: - -This file is part of Windows Server 2003 Service Pack 1 - -P2P-Zombies - -P2P-Zombies is a program that can be used to detect any malicious behavior in your files. - -Features: - -1. Portable - -2. Detects both copy and move - -3. Detects both read and write operation - -4. Detects only executable, script and config files - -5. Detects malicious program that are self-registration executable, that is, self-registration, self-installation - -Kaspersky Endpoint Protection 2011 is the newest version of the popular Anti-Virus and Anti-Malware suite for Windows operating systems. This software is highly advanced and includes a full suite of Internet security, email and PC security and anti-spam tools. - -With the help of this free security software, users can keep their system highly secure, safe and virus free. There is also a built-in firewall that helps to protect your system against online threats. Users can also update their system with the latest software updates. - -]]>Norton SafeZone 2011 is a free to use security software that has been designed to protect users against various online threats and harmful programs. The user-friendly interface of this software makes it easy to install and use. - -With Norton SafeZone, users can protect their system against various online threats. The software is designed to provide you with full security and protection against different types of harmful software and online threats. - -The software also blocks certain Internet threats and blocks the access to harmful websites. Users can also protect their system against malicious websites, Malware, Phishing, Spyware, Trojans, Worms, and other Online threats. - -]]>Brixware 2014 is a free to use security software which includes all the tools to keep your system safe and secure from online threats. The software also has the tools to keep your system safe from all types of online threats. - -With the help of Brixware 2014, users can protect their system against various online threats. Brixware also has a built-in firewall that helps to protect your system against online threats. With the help of the firewall, users can protect their system against different online threats and harmful programs. - -Brixware 2014 can detect and remove various types of Trojan, Rootkit, 4fefd39f24
    -
    -
    -

    diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Free Jamal Hartwell Dvd.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Free Jamal Hartwell Dvd.md deleted file mode 100644 index 6f9c073b1b3d41b968771a3dcc3519a918c13467..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Free Jamal Hartwell Dvd.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Free Jamal Hartwell Dvd


    Download File ===== https://urlgoal.com/2uCJJV



    -
    -jamal hartwell, jamal hartwell basketball, jamal hartwell ii, jamal hartwell dvd, jamal hartwell website, jamal hartwell piano tutorials, jamal hartwell george mason ... 1fdad05405
    -
    -
    -

    diff --git a/spaces/renumics/cifar100-enriched/README.md b/spaces/renumics/cifar100-enriched/README.md deleted file mode 100644 index b17d4a265405a737d1280e2722ef4ccb2ee7322a..0000000000000000000000000000000000000000 --- a/spaces/renumics/cifar100-enriched/README.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: Explore CIFAR-100 Enriched with Spotlight -emoji: 📊 -colorFrom: gray -colorTo: blue -sdk: docker -pinned: false -license: mit -app_file: run.py -datasets: -- renumics/cifar100-enriched -- cifar100 -tags: -- renumics -- spotlight -- EDA -- enriched ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/robinhad/ukrainian-tts/README.md b/spaces/robinhad/ukrainian-tts/README.md deleted file mode 100644 index ad2463d5985644cb340680baed012a1a789e930b..0000000000000000000000000000000000000000 --- a/spaces/robinhad/ukrainian-tts/README.md +++ /dev/null @@ -1,109 +0,0 @@ ---- -title: "Ukrainian TTS" -emoji: 🐌 -colorFrom: blue -colorTo: yellow -sdk: gradio -sdk_version : 3.40.1 -python_version: 3.10.3 -app_file: app.py -pinned: false ---- - -# Ukrainian TTS 📢🤖 -Ukrainian TTS (text-to-speech) using ESPNET. - -![pytest](https://github.com/robinhad/ukrainian-tts/actions/workflows/hf-sync.yml/badge.svg) -[![Open In HF🤗 Space ](https://img.shields.io/badge/Open%20Demo-%F0%9F%A4%97%20Space-yellow)](https://huggingface.co/spaces/robinhad/ukrainian-tts) -[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/robinhad/ukrainian-tts/blob/main/tts_example.ipynb) -[![Open Bot](https://img.shields.io/badge/Open%20Bot%20🤖-Telegram-blue)](https://t.me/uk_tts_bot) -[![chat](https://img.shields.io/badge/chat-Telegram-blue)](https://t.me/speech_recognition_uk) - -## Озвучуйте до 4 разів швидше по посиланню: [https://brehunets.com](https://brehunets.com) - -Link to online demo -> [https://huggingface.co/spaces/robinhad/ukrainian-tts](https://huggingface.co/spaces/robinhad/ukrainian-tts) -Note: online demo saves user input to improve user experience; by using it, you consent to analyze this data. -Link to source code and models -> [https://github.com/robinhad/ukrainian-tts](https://github.com/robinhad/ukrainian-tts) -Telegram bot -> [https://t.me/uk_tts_bot](https://t.me/uk_tts_bot) - -# Features ⚙️ -- Completely offline -- Multiple voices -- Automatic stress with priority queue: `acute` -> `user-defined` > `dictionary` > `model` -- Control speech speed -- Python package works on Windows, Mac (x86/M1), Linux(x86/ARM) -- Inference on mobile devices (inference models through `espnet_onnx` without cleaners) - - -# Support ❤️ -If you like my work, please support ❤️ -> [https://send.monobank.ua/jar/48iHq4xAXm](https://send.monobank.ua/jar/48iHq4xAXm) -You're welcome to join UA Speech Recognition and Synthesis community: [Telegram https://t.me/speech_recognition_uk](https://t.me/speech_recognition_uk) -# Examples 🤖 - -`Oleksa (male)`: - -https://github.com/robinhad/ukrainian-tts/assets/5759207/ace842ef-06d0-4b1f-ad49-5fda92999dbb - - -
    - More voices 📢🤖 - -`Tetiana (female)`: - -https://github.com/robinhad/ukrainian-tts/assets/5759207/a6ecacf6-62ae-4fc5-b6d5-41e6cdd3d992 - -`Dmytro (male)`: - -https://github.com/robinhad/ukrainian-tts/assets/5759207/67d3dac9-6626-40ef-98e5-ec194096bbe0 - -`Lada (female)`: - -https://github.com/robinhad/ukrainian-tts/assets/5759207/fcf558b2-3ff9-4539-ad9e-8455b52223a4 - -`Mykyta (male)`: - -https://github.com/robinhad/ukrainian-tts/assets/5759207/033f5215-3f09-4021-ba19-1f55158445ca - - -
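    A quick way to compare the voices above is to loop over them with the package API shown in the Quickstart below. This is a minimal sketch that assumes `Voices` is a standard Python `Enum` (as the Quickstart import suggests); the output file names are placeholders.

    ```python
    from ukrainian_tts.tts import TTS, Voices, Stress

    tts = TTS(device="cpu")  # "gpu" or "mps" can be tried, as in the Quickstart

    # Render the same sentence once per available voice and print the accented text.
    for voice in Voices:
        with open(f"demo_{voice.name}.wav", mode="wb") as file:
            _, accented = tts.tts(
                "Привіт, як у тебе справи?",
                voice.value,
                Stress.Dictionary.value,
                file,
            )
        print(voice.name, "->", accented)
    ```
    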
    - - -# How to use: 📢 - -## Quickstart - -Install using: -```bash -!pip install git+https://github.com/robinhad/ukrainian-tts.git -``` -Code example: -```python -from ukrainian_tts.tts import TTS, Voices, Stress -import IPython.display as ipd - -tts = TTS(device="cpu") # can try gpu, mps -with open("test.wav", mode="wb") as file: - _, output_text = tts.tts("Привіт, як у тебе справи?", Voices.Dmytro.value, Stress.Dictionary.value, file) -print("Accented text:", output_text) - -ipd.Audio(filename="test.wav") -``` - -See example notebook: [tts_example.ipynb](./tts_example.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/robinhad/ukrainian-tts/blob/main/tts_example.ipynb) - -# How to contribute: 🙌 - -Look into this list with current problems: https://github.com/robinhad/ukrainian-tts/issues/35 - -# How to train: 🏋️ -Link to guide: [training/STEPS.md](training/STEPS.md) - - -# Attribution 🤝 - -- Model training - [Yurii Paniv @robinhad](https://github.com/robinhad) -- [Open Source Ukrainian Text-to-Speech dataset](https://github.com/egorsmkv/ukrainian-tts-datasets) - [Yehor Smoliakov @egorsmkv](https://github.com/egorsmkv) -- Dmytro voice - [Dmytro Chaplynskyi @dchaplinsky](https://github.com/dchaplinsky) -- Silence cutting using [HMM-GMM](https://github.com/proger/uk) - [Volodymyr Kyrylov @proger](https://github.com/proger) -- Autostress (with dictionary) using [ukrainian-word-stress](https://github.com/lang-uk/ukrainian-word-stress) - [Oleksiy Syvokon @asivokon](https://github.com/asivokon) -- Autostress (with model) using [ukrainian-accentor](https://github.com/egorsmkv/ukrainian-accentor) - [Bohdan Mykhailenko @NeonBohdan](https://github.com/NeonBohdan) + [Yehor Smoliakov @egorsmkv](https://github.com/egorsmkv) diff --git a/spaces/rorallitri/biomedical-language-models/logs/Dhanush New Movie Songs in Hindi The Secret Behind His Success as a Singer Lyricist and Actor.md b/spaces/rorallitri/biomedical-language-models/logs/Dhanush New Movie Songs in Hindi The Secret Behind His Success as a Singer Lyricist and Actor.md deleted file mode 100644 index 2e44d521d96c052f8dd577bdae3bfb9c1f742eac..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Dhanush New Movie Songs in Hindi The Secret Behind His Success as a Singer Lyricist and Actor.md +++ /dev/null @@ -1,5 +0,0 @@ - -

    Thanks for this. Do you also provide Carnatic-style notes (swarams) for these songs? I would like to learn to play popular movie songs (mostly Tamil) on the keyboard, and I would like to know if you conduct online classes.
    

    -

    dhanush new movie songs in hindi


    Download >>> https://tinurll.com/2uzlXD



    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/Gta San Andreas Patch Srbija TOP Download Pc Torrent.md b/spaces/rorallitri/biomedical-language-models/logs/Gta San Andreas Patch Srbija TOP Download Pc Torrent.md deleted file mode 100644 index 23d67552933623e82840f83b9bb126a7a2465ad7..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Gta San Andreas Patch Srbija TOP Download Pc Torrent.md +++ /dev/null @@ -1,50 +0,0 @@ -

    Gta San Andreas Patch Srbija Download Pc Torrent


    DOWNLOAD · https://tinurll.com/2uzlQJ



    - -5 as well as .1. - -GTA San Andreas is an open world action-adventure video game developed by Rockstar North and originally released for the PlayStation 2 in June 2006. Set in the fictional city of San Andreas, it is the first installment of the Grand Theft Auto series. - -The game offers new vehicles, new weapons, new clothing, new features, and a new engine. - -SPECTATORS CAN WATCH LIVE ON UBISOFT, WATCH LIVE ON IGN, UBISOFT & IGN TV AND UBISOFT STREAMS - ----SEASON PREMIER--- - --Thursday, May 24- - ---DATE/TIME-- - -The launch of Grand Theft Auto 5 was announced to be coming early-mid 2014. On this day Rockstar released a press release informing that they have developed a new game that will be released as Grand Theft Auto 5. This game will be a new installment for Grand Theft Auto series. On this day Rockstar released a press release informing that Grand Theft Auto 5 will be released on this day in Fall 2014. The game will be released for Playstation 4, XBox One and PC. The game was released on time and on this day it was a big success. - ----GAME CHANGES--- - -There will be NO MAJOR GAME CHANGES. Only the S.O.V.A (for now) car mode is changed and made a bit better. - ----CARS--- - -HERE ARE THE NEW CARS AND VEHICLES IN GRAVITY THEME. - -It is possible that more changes will be made in the future, but for now they are set in stone and in game. - ----TOWN--- - -The whole map is "Changed" to match the gravitation theme. Houses are on different sides than it was before. Also, buildings change their colors according to the theme. There are not so many differences but for sure you will notice it. - ----CITY SCENE--- - --New Cars - - ----Rideables--- - --New Weapons--- - --New Features--- - ----HISTORY--- - -The fact that GTA IV was a huge success, GTA V is about to be launched. - -Rockstar have decided to release a new game in Fall 2014 which is Grand Theft Auto 5. The game is going to be released for Playstation 4, XBox One and PC. 4fefd39f24
    -
    -
    -

    diff --git a/spaces/ruslanmv/Clone-Your-Voice/encoder/params_data.py b/spaces/ruslanmv/Clone-Your-Voice/encoder/params_data.py deleted file mode 100644 index bdb1716ed45617f2b127a7fb8885afe6cc74fb71..0000000000000000000000000000000000000000 --- a/spaces/ruslanmv/Clone-Your-Voice/encoder/params_data.py +++ /dev/null @@ -1,29 +0,0 @@ - -## Mel-filterbank -mel_window_length = 25 # In milliseconds -mel_window_step = 10 # In milliseconds -mel_n_channels = 40 - - -## Audio -sampling_rate = 16000 -# Number of spectrogram frames in a partial utterance -partials_n_frames = 160 # 1600 ms -# Number of spectrogram frames at inference -inference_n_frames = 80 # 800 ms - - -## Voice Activation Detection -# Window size of the VAD. Must be either 10, 20 or 30 milliseconds. -# This sets the granularity of the VAD. Should not need to be changed. -vad_window_length = 30 # In milliseconds -# Number of frames to average together when performing the moving average smoothing. -# The larger this value, the larger the VAD variations must be to not get smoothed out. -vad_moving_average_width = 8 -# Maximum number of consecutive silent frames a segment can have. -vad_max_silence_length = 6 - - -## Audio volume normalization -audio_norm_target_dBFS = -30 - diff --git a/spaces/safi842/FashionGen/TkTorchWindow.py b/spaces/safi842/FashionGen/TkTorchWindow.py deleted file mode 100644 index fbe1ef1cc35c2590b0a5976254af3f146de4d9b3..0000000000000000000000000000000000000000 --- a/spaces/safi842/FashionGen/TkTorchWindow.py +++ /dev/null @@ -1,208 +0,0 @@ -# Copyright 2020 Erik Härkönen. All rights reserved. -# This file is licensed to you under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. You may obtain a copy -# of the License at http://www.apache.org/licenses/LICENSE-2.0 - -# Unless required by applicable law or agreed to in writing, software distributed under -# the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR REPRESENTATIONS -# OF ANY KIND, either express or implied. See the License for the specific language -# governing permissions and limitations under the License. 
- -import tkinter as tk -import numpy as np -import time -from contextlib import contextmanager -import pycuda.driver -from pycuda.gl import graphics_map_flags -from glumpy import gloo, gl -from pyopengltk import OpenGLFrame -import torch -from torch.autograd import Variable - -# TkInter widget that can draw torch tensors directly from GPU memory - -@contextmanager -def cuda_activate(img): - """Context manager simplifying use of pycuda.gl.RegisteredImage""" - mapping = img.map() - yield mapping.array(0,0) - mapping.unmap() - -def create_shared_texture(w, h, c=4, - map_flags=graphics_map_flags.WRITE_DISCARD, - dtype=np.uint8): - """Create and return a Texture2D with gloo and pycuda views.""" - tex = np.zeros((h,w,c), dtype).view(gloo.Texture2D) - tex.activate() # force gloo to create on GPU - tex.deactivate() - cuda_buffer = pycuda.gl.RegisteredImage( - int(tex.handle), tex.target, map_flags) - return tex, cuda_buffer - -# Shape batch as square if possible -def get_grid_dims(B): - S = int(B**0.5 + 0.5) - while B % S != 0: - S -= 1 - return (B // S, S) - -def create_gl_texture(tensor_shape): - if len(tensor_shape) != 4: - raise RuntimeError('Please provide a tensor of shape NCHW') - - N, C, H, W = tensor_shape - - cols, rows = get_grid_dims(N) - tex, cuda_buffer = create_shared_texture(W*cols, H*rows, 4) - - return tex, cuda_buffer - -# Create window with OpenGL context -class TorchImageView(OpenGLFrame): - def __init__(self, root = None, show_fps=True, **kwargs): - self.root = root or tk.Tk() - self.width = kwargs.get('width', 512) - self.height = kwargs.get('height', 512) - self.show_fps = show_fps - self.pycuda_initialized = False - self.animate = 0 # disable internal main loop - OpenGLFrame.__init__(self, root, **kwargs) - - # Called by pyopengltk.BaseOpenGLFrame - # when the frame goes onto the screen - def initgl(self): - if not self.pycuda_initialized: - self.setup_gl(self.width, self.height) - self.pycuda_initialized = True - - """Initalize gl states when the frame is created""" - gl.glViewport(0, 0, self.width, self.height) - gl.glClearColor(0.0, 0.0, 0.0, 0.0) - self.dt_history = [1000/60] - self.t0 = time.time() - self.t_last = self.t0 - self.nframes = 0 - - def setup_gl(self, width, height): - # setup pycuda and torch - import pycuda.gl.autoinit - import pycuda.gl - - assert torch.cuda.is_available(), "PyTorch: CUDA is not available" - print('Using GPU {}'.format(torch.cuda.current_device())) - - # Create tensor to be shared between GL and CUDA - # Always overwritten so no sharing is necessary - dummy = torch.cuda.FloatTensor((1)) - dummy.uniform_() - dummy = Variable(dummy) - - # Create a buffer with pycuda and gloo views, using tensor created above - self.tex, self.cuda_buffer = create_gl_texture((1, 3, width, height)) - - # create a shader to program to draw to the screen - vertex = """ - uniform float scale; - attribute vec2 position; - attribute vec2 texcoord; - varying vec2 v_texcoord; - void main() - { - v_texcoord = texcoord; - gl_Position = vec4(scale*position, 0.0, 1.0); - } """ - fragment = """ - uniform sampler2D tex; - varying vec2 v_texcoord; - void main() - { - gl_FragColor = texture2D(tex, v_texcoord); - } """ - # Build the program and corresponding buffers (with 4 vertices) - self.screen = gloo.Program(vertex, fragment, count=4) - - # NDC coordinates: Texcoords: Vertex order, - # (-1, +1) (+1, +1) (0,0) (1,0) triangle strip: - # +-------+ +----+ 1----3 - # | NDC | | | | / | - # | SPACE | | | | / | - # +-------+ +----+ 2----4 - # (-1, -1) (+1, -1) (0,1) (1,1) - - 
# Upload data to GPU - self.screen['position'] = [(-1,+1), (-1,-1), (+1,+1), (+1,-1)] - self.screen['texcoord'] = [(0,0), (0,1), (1,0), (1,1)] - self.screen['scale'] = 1.0 - self.screen['tex'] = self.tex - - # Don't call directly, use update() instead - def redraw(self): - t_now = time.time() - dt = t_now - self.t_last - self.t_last = t_now - - self.dt_history = ([dt] + self.dt_history)[:50] - dt_mean = sum(self.dt_history) / len(self.dt_history) - - if self.show_fps and self.nframes % 60 == 0: - self.master.title('FPS: {:.0f}'.format(1 / dt_mean)) - - def draw(self, img): - assert len(img.shape) == 4, "Please provide an NCHW image tensor" - assert img.device.type == "cuda", "Please provide a CUDA tensor" - - if img.dtype.is_floating_point: - img = (255*img).byte() - - # Tile images - N, C, H, W = img.shape - - if N > 1: - cols, rows = get_grid_dims(N) - img = img.reshape(cols, rows, C, H, W) - img = img.permute(2, 1, 3, 0, 4) # [C, rows, H, cols, W] - img = img.reshape(1, C, rows*H, cols*W) - - tensor = img.squeeze().permute(1, 2, 0).data # CHW => HWC - if C == 3: - tensor = torch.cat((tensor, tensor[:,:,0:1]),2) # add the alpha channel - tensor[:,:,3] = 1 # set alpha - - tensor = tensor.contiguous() - - tex_h, tex_w, _ = self.tex.shape - tensor_h, tensor_w, _ = tensor.shape - - if (tex_h, tex_w) != (tensor_h, tensor_w): - print(f'Resizing texture to {tensor_w}*{tensor_h}') - self.tex, self.cuda_buffer = create_gl_texture((N, C, H, W)) # original shape - self.screen['tex'] = self.tex - - # copy from torch into buffer - assert self.tex.nbytes == tensor.numel()*tensor.element_size(), "Tensor and texture shape mismatch!" - with cuda_activate(self.cuda_buffer) as ary: - cpy = pycuda.driver.Memcpy2D() - cpy.set_src_device(tensor.data_ptr()) - cpy.set_dst_array(ary) - cpy.width_in_bytes = cpy.src_pitch = cpy.dst_pitch = self.tex.nbytes//tensor_h - cpy.height = tensor_h - cpy(aligned=False) - torch.cuda.synchronize() - - # draw to screen - self.screen.draw(gl.GL_TRIANGLE_STRIP) - - def update(self): - self.update_idletasks() - self.tkMakeCurrent() - self.redraw() - self.tkSwapBuffers() - -# USAGE: -# root = tk.Tk() -# iv = TorchImageView(root, width=512, height=512) -# iv.pack(fill='both', expand=True) -# while True: -# iv.draw(nchw_tensor) -# root.update() -# iv.update() \ No newline at end of file diff --git a/spaces/samuelinferences/TabPFN/TabPFN/datasets/utils.py b/spaces/samuelinferences/TabPFN/TabPFN/datasets/utils.py deleted file mode 100644 index d193dbe021c4daa3808fa0f8823a6decfe3f634e..0000000000000000000000000000000000000000 --- a/spaces/samuelinferences/TabPFN/TabPFN/datasets/utils.py +++ /dev/null @@ -1,8 +0,0 @@ -def normalize_data(eval_xs): - mean = eval_xs.mean(0) - std = eval_xs.std(0) + .000001 - eval_xs = (eval_xs - mean) / std - - return eval_xs - - diff --git a/spaces/scedlatioru/img-to-music/example/Descargar Flexisign Pro 10 0 1 Mediafire.md b/spaces/scedlatioru/img-to-music/example/Descargar Flexisign Pro 10 0 1 Mediafire.md deleted file mode 100644 index 39c4a9959e0badc62173a4c6b1300d4e2e05b9e3..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Descargar Flexisign Pro 10 0 1 Mediafire.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Descargar Flexisign Pro 10 0 1 Mediafire


    Download File ✸✸✸ https://gohhs.com/2uEzYx



    - -In the left menu, choose the server to download the software securely. ... 4d29de3e1b
    
    -
    -
    -

    diff --git a/spaces/scedlatioru/img-to-music/example/Google Books Download ((BETTER)) V 3.0.1.309 Serial Number.rar.md b/spaces/scedlatioru/img-to-music/example/Google Books Download ((BETTER)) V 3.0.1.309 Serial Number.rar.md deleted file mode 100644 index 64d952451720f9538b8a5ef788a350ad4d0e43c1..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Google Books Download ((BETTER)) V 3.0.1.309 Serial Number.rar.md +++ /dev/null @@ -1,130 +0,0 @@ -
    -

    Google Books Download v 3.0.1.309 serial number.rar: How to Download Books from Google Books for Free

    - -

    Google Books is a service from Google that allows you to search and preview millions of books and magazines online. You can read some books for free, but others are only available for purchase or have limited access. If you want to download books from Google Books for free, you may need a tool like Google Books Download v 3.0.1.309 serial number.rar. In this article, we will explain what this tool is, how it works, and what the advantages and disadvantages of using it are.
    

    - -

    What is Google Books Download v 3.0.1.309 serial number.rar?

    - -

    Google Books Download v 3.0.1.309 serial number.rar is a piece of software that downloads books from Google Books as PDF files. It is a cracked version of the original program, which means that it bypasses the program's protection and lets you use it without paying.
    

    -

    Google Books Download v 3.0.1.309 serial number.rar


    Download File - https://gohhs.com/2uEAsn



    - -

    To use this software, you need to download the file Google Books Download v 3.0.1.309 serial number.rar from a website that offers it, extract it on your computer, install the software, copy the crack file into the installation folder, and run the software.

    - -

    Then, you need to enter the URL of the book you want to download from Google Books, choose the output format (PDF), and click on "Start". The software will download the book and save it on your computer.

    - -

    How does Google Books Download v 3.0.1.309 serial number.rar work?

    - -

    Google Books Download v 3.0.1.309 serial number.rar works by simulating a web browser and capturing the images of the pages of the book from Google Books. It then converts these images into a PDF file and saves it on your computer.

    - -

    The quality and completeness of the PDF file depend on the availability and accessibility of the book on Google Books. Some books are fully readable online, while others are only partially available or have restrictions such as watermarks or blurred pages.

    - -

    The software can only download books that are visible on Google Books, which means that it cannot download books that are not indexed by Google or that are blocked for geographic or legal reasons.
    

    - -

    What are the advantages of using Google Books Download v 3.0.1.309 serial number.rar?

    - -

    Using Google Books Download v 3.0.1.309 serial number.rar can have some advantages, such as:

    - -
      -
    • You can access all the features of the software without limitation.
    • -
    • You can save the cost of buying the software or the books.
    • -
    • You can test the software before buying it or use it for occasional projects.
    • -
    • You can download books from Google Books that are not available for purchase or have limited access.
    • -
    • You can read books offline or on any device that supports PDF files.
    • -
    - -

    What are the disadvantages of using Google Books Download v 3.0.1.309 serial number.rar?

    - -

    Using Google Books Download v 3.0.1.309 serial number.rar can also have some disadvantages and risks, such as:

    -

    - -
      -
    • You are violating the copyright of the software and the books and you may face legal consequences.
    • -
    • You are exposing your computer to viruses or malware that may be hidden in the downloaded file or on the website that offers it.
    • -
    • You are compromising the quality and reliability of your work, as the PDF file may be incomplete, inaccurate, or corrupted.
    • -
    • You are disrespecting the work of the developers of the software and the authors of the books who invested time and resources to create them.
    • -
    - -

    What are the alternatives to using Google Books Download v 3.0.1.309 serial number.rar?

    - -

    Using Google Books Download v 3.0.1.309 serial number.rar is not the only way to get books from Google Books for free. There are other alternatives that are safer and more legal, such as the following (a programmatic way to check preview availability is sketched after this list):
    

    - -
      -
    • Downloading the preview of the book from Google Books. Google Books allows you to download a preview of some books in PDF format for free. You can do this by clicking on the "Download" button on the top right corner of the book page.
    • -
    • Buying the full version of the book from Google Play or other online stores. If you want to read the whole book or support its author, you can buy it from Google Play or other online stores that offer it in digital format.
    • -
    • Using another software or website that allows you to download books from Google Books legally and ethically. There are some software or websites that allow you to download books from Google Books legally and ethically, such as EDS Google Books Downloader or GooReader.
    • -
    - -
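    As a small illustration of the legal route above: the public Google Books API (v1) reports how much of a volume may be viewed, so you can check whether a free preview or full view exists before looking for anything to download. This is a sketch only; the endpoint and field names follow the public API documentation, and the search query is a placeholder.

    ```python
    import json
    import urllib.parse
    import urllib.request

    # Search the public Google Books API and print each result's viewability
    # ("ALL_PAGES", "PARTIAL", or "NO_PAGES"), i.e. whether a free preview exists.
    query = "pride and prejudice"
    url = "https://www.googleapis.com/books/v1/volumes?q=" + urllib.parse.quote(query)

    with urllib.request.urlopen(url) as response:
        data = json.load(response)

    for item in data.get("items", []):
        title = item.get("volumeInfo", {}).get("title", "(no title)")
        viewability = item.get("accessInfo", {}).get("viewability", "UNKNOWN")
        print(f"{title}: {viewability}")
    ```
    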

    Conclusion

    - -

    Google Books is a service from Google that allows you to search and preview millions of books and magazines online. If you want to download books from Google Books for free, you may need a tool like Google Books Download v 3.0.1.309 serial number.rar, which is a cracked version of a software that allows you to download books in PDF format.

    - -

    However, using this tool is illegal and risky, as it may have negative consequences for your computer or your security, as well as for the software and book creators who deserve respect and support for their work.

    - -

    Therefore, we do not recommend using this tool, but rather opting for a safer and more legal alternative, such as downloading the preview of the book from Google Books, buying the full version of the book from an online store, or using another software or website that allows you to download books legally and ethically.

    -

    

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/RA Beauty Retouch Panel 3.2 For Adobe Photoshop MacOS NEW.md b/spaces/scedlatioru/img-to-music/example/RA Beauty Retouch Panel 3.2 For Adobe Photoshop MacOS NEW.md deleted file mode 100644 index de6df5f97be9b19a7d8294142df0a1dbed2e6fc5..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/RA Beauty Retouch Panel 3.2 For Adobe Photoshop MacOS NEW.md +++ /dev/null @@ -1,12 +0,0 @@ -

    RA Beauty Retouch Panel 3.2 for Adobe Photoshop macOS


    DOWNLOAD · https://gohhs.com/2uEyKD



    - -November 10, 2021 - RA panels (Beauty Retouch, MUA Retouch and Pixel Juggler) will be . Adobe Photoshop CC (version 14.0 and above) for both Mac OS and . To customize the menu in Photoshop, open the File > Export tab. -Download Adobe Photoshop CC 2017 for free. -Photoshop free download at . -Adobe Photoshop CC 2017 Russian version. -Adobe Photoshop CC 2017. -Adobe Photoshop CC 2017 - . -Download Adobe 8a78ff9644
    -
    -
    -

    diff --git a/spaces/segments-tobias/conex/espnet/nets/chainer_backend/e2e_asr.py b/spaces/segments-tobias/conex/espnet/nets/chainer_backend/e2e_asr.py deleted file mode 100644 index dc589ef1a1280ad86c42bad64dbe95573a226dc8..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet/nets/chainer_backend/e2e_asr.py +++ /dev/null @@ -1,226 +0,0 @@ -# Copyright 2017 Johns Hopkins University (Shinji Watanabe) -# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0) - -"""RNN sequence-to-sequence speech recognition model (chainer).""" - -import logging -import math - -import chainer -from chainer import reporter -import numpy as np - -from espnet.nets.chainer_backend.asr_interface import ChainerASRInterface -from espnet.nets.chainer_backend.ctc import ctc_for -from espnet.nets.chainer_backend.rnn.attentions import att_for -from espnet.nets.chainer_backend.rnn.decoders import decoder_for -from espnet.nets.chainer_backend.rnn.encoders import encoder_for -from espnet.nets.e2e_asr_common import label_smoothing_dist -from espnet.nets.pytorch_backend.e2e_asr import E2E as E2E_pytorch -from espnet.nets.pytorch_backend.nets_utils import get_subsample - -CTC_LOSS_THRESHOLD = 10000 - - -class E2E(ChainerASRInterface): - """E2E module for chainer backend. - - Args: - idim (int): Dimension of the inputs. - odim (int): Dimension of the outputs. - args (parser.args): Training config. - flag_return (bool): If True, train() would return - additional metrics in addition to the training - loss. - - """ - - @staticmethod - def add_arguments(parser): - """Add arguments.""" - return E2E_pytorch.add_arguments(parser) - - def get_total_subsampling_factor(self): - """Get total subsampling factor.""" - return self.enc.conv_subsampling_factor * int(np.prod(self.subsample)) - - def __init__(self, idim, odim, args, flag_return=True): - """Construct an E2E object. - - :param int idim: dimension of inputs - :param int odim: dimension of outputs - :param Namespace args: argument Namespace containing options - """ - chainer.Chain.__init__(self) - self.mtlalpha = args.mtlalpha - assert 0 <= self.mtlalpha <= 1, "mtlalpha must be [0,1]" - self.etype = args.etype - self.verbose = args.verbose - self.char_list = args.char_list - self.outdir = args.outdir - - # below means the last number becomes eos/sos ID - # note that sos/eos IDs are identical - self.sos = odim - 1 - self.eos = odim - 1 - - # subsample info - self.subsample = get_subsample(args, mode="asr", arch="rnn") - - # label smoothing info - if args.lsm_type: - logging.info("Use label smoothing with " + args.lsm_type) - labeldist = label_smoothing_dist( - odim, args.lsm_type, transcript=args.train_json - ) - else: - labeldist = None - - with self.init_scope(): - # encoder - self.enc = encoder_for(args, idim, self.subsample) - # ctc - self.ctc = ctc_for(args, odim) - # attention - self.att = att_for(args) - # decoder - self.dec = decoder_for(args, odim, self.sos, self.eos, self.att, labeldist) - - self.acc = None - self.loss = None - self.flag_return = flag_return - - def forward(self, xs, ilens, ys): - """E2E forward propagation. - - Args: - xs (chainer.Variable): Batch of padded charactor ids. (B, Tmax) - ilens (chainer.Variable): Batch of length of each input batch. (B,) - ys (chainer.Variable): Batch of padded target features. (B, Lmax, odim) - - Returns: - float: Loss that calculated by attention and ctc loss. - float (optional): Ctc loss. - float (optional): Attention loss. - float (optional): Accuracy. - - """ - # 1. 
encoder - hs, ilens = self.enc(xs, ilens) - - # 3. CTC loss - if self.mtlalpha == 0: - loss_ctc = None - else: - loss_ctc = self.ctc(hs, ys) - - # 4. attention loss - if self.mtlalpha == 1: - loss_att = None - acc = None - else: - loss_att, acc = self.dec(hs, ys) - - self.acc = acc - alpha = self.mtlalpha - if alpha == 0: - self.loss = loss_att - elif alpha == 1: - self.loss = loss_ctc - else: - self.loss = alpha * loss_ctc + (1 - alpha) * loss_att - - if self.loss.data < CTC_LOSS_THRESHOLD and not math.isnan(self.loss.data): - reporter.report({"loss_ctc": loss_ctc}, self) - reporter.report({"loss_att": loss_att}, self) - reporter.report({"acc": acc}, self) - - logging.info("mtl loss:" + str(self.loss.data)) - reporter.report({"loss": self.loss}, self) - else: - logging.warning("loss (=%f) is not correct", self.loss.data) - if self.flag_return: - return self.loss, loss_ctc, loss_att, acc - else: - return self.loss - - def recognize(self, x, recog_args, char_list, rnnlm=None): - """E2E greedy/beam search. - - Args: - x (chainer.Variable): Input tensor for recognition. - recog_args (parser.args): Arguments of config file. - char_list (List[str]): List of Charactors. - rnnlm (Module): RNNLM module defined at `espnet.lm.chainer_backend.lm`. - - Returns: - List[Dict[str, Any]]: Result of recognition. - - """ - # subsample frame - x = x[:: self.subsample[0], :] - ilen = self.xp.array(x.shape[0], dtype=np.int32) - h = chainer.Variable(self.xp.array(x, dtype=np.float32)) - - with chainer.no_backprop_mode(), chainer.using_config("train", False): - # 1. encoder - # make a utt list (1) to use the same interface for encoder - h, _ = self.enc([h], [ilen]) - - # calculate log P(z_t|X) for CTC scores - if recog_args.ctc_weight > 0.0: - lpz = self.ctc.log_softmax(h).data[0] - else: - lpz = None - - # 2. decoder - # decode the first utterance - y = self.dec.recognize_beam(h[0], lpz, recog_args, char_list, rnnlm) - - return y - - def calculate_all_attentions(self, xs, ilens, ys): - """E2E attention calculation. - - Args: - xs (List): List of padded input sequences. [(T1, idim), (T2, idim), ...] - ilens (np.ndarray): Batch of lengths of input sequences. (B) - ys (List): List of character id sequence tensor. [(L1), (L2), (L3), ...] - - Returns: - float np.ndarray: Attention weights. 
(B, Lmax, Tmax) - - """ - hs, ilens = self.enc(xs, ilens) - att_ws = self.dec.calculate_all_attentions(hs, ys) - - return att_ws - - @staticmethod - def custom_converter(subsampling_factor=0): - """Get customconverter of the model.""" - from espnet.nets.chainer_backend.rnn.training import CustomConverter - - return CustomConverter(subsampling_factor=subsampling_factor) - - @staticmethod - def custom_updater(iters, optimizer, converter, device=-1, accum_grad=1): - """Get custom_updater of the model.""" - from espnet.nets.chainer_backend.rnn.training import CustomUpdater - - return CustomUpdater( - iters, optimizer, converter=converter, device=device, accum_grad=accum_grad - ) - - @staticmethod - def custom_parallel_updater(iters, optimizer, converter, devices, accum_grad=1): - """Get custom_parallel_updater of the model.""" - from espnet.nets.chainer_backend.rnn.training import CustomParallelUpdater - - return CustomParallelUpdater( - iters, - optimizer, - converter=converter, - devices=devices, - accum_grad=accum_grad, - ) diff --git a/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/e2e_asr_mix.py b/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/e2e_asr_mix.py deleted file mode 100644 index 1615f7e275e315e214fdb48024751af8ce6f3cd3..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/e2e_asr_mix.py +++ /dev/null @@ -1,827 +0,0 @@ -#!/usr/bin/env python3 - -""" -This script is used to construct End-to-End models of multi-speaker ASR. - -Copyright 2017 Johns Hopkins University (Shinji Watanabe) - Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0) -""" - -import argparse -from itertools import groupby -import logging -import math -import os -import sys - -import editdistance -import numpy as np -import six -import torch - -from espnet.nets.asr_interface import ASRInterface -from espnet.nets.e2e_asr_common import get_vgg2l_odim -from espnet.nets.e2e_asr_common import label_smoothing_dist -from espnet.nets.pytorch_backend.ctc import ctc_for -from espnet.nets.pytorch_backend.e2e_asr import E2E as E2EASR -from espnet.nets.pytorch_backend.e2e_asr import Reporter -from espnet.nets.pytorch_backend.frontends.feature_transform import ( - feature_transform_for, # noqa: H301 -) -from espnet.nets.pytorch_backend.frontends.frontend import frontend_for -from espnet.nets.pytorch_backend.initialization import lecun_normal_init_parameters -from espnet.nets.pytorch_backend.initialization import set_forget_bias_to_one -from espnet.nets.pytorch_backend.nets_utils import get_subsample -from espnet.nets.pytorch_backend.nets_utils import make_pad_mask -from espnet.nets.pytorch_backend.nets_utils import pad_list -from espnet.nets.pytorch_backend.nets_utils import to_device -from espnet.nets.pytorch_backend.nets_utils import to_torch_tensor -from espnet.nets.pytorch_backend.rnn.attentions import att_for -from espnet.nets.pytorch_backend.rnn.decoders import decoder_for -from espnet.nets.pytorch_backend.rnn.encoders import encoder_for as encoder_for_single -from espnet.nets.pytorch_backend.rnn.encoders import RNNP -from espnet.nets.pytorch_backend.rnn.encoders import VGG2L - -CTC_LOSS_THRESHOLD = 10000 - - -class PIT(object): - """Permutation Invariant Training (PIT) module. 
- - :parameter int num_spkrs: number of speakers for PIT process (2 or 3) - """ - - def __init__(self, num_spkrs): - """Initialize PIT module.""" - self.num_spkrs = num_spkrs - - # [[0, 1], [1, 0]] or - # [[0, 1, 2], [0, 2, 1], [1, 0, 2], [1, 2, 0], [2, 1, 0], [2, 0, 1]] - self.perm_choices = [] - initial_seq = np.linspace(0, num_spkrs - 1, num_spkrs, dtype=np.int64) - self.permutationDFS(initial_seq, 0) - - # [[0, 3], [1, 2]] or - # [[0, 4, 8], [0, 5, 7], [1, 3, 8], [1, 5, 6], [2, 4, 6], [2, 3, 7]] - self.loss_perm_idx = np.linspace( - 0, num_spkrs * (num_spkrs - 1), num_spkrs, dtype=np.int64 - ).reshape(1, num_spkrs) - self.loss_perm_idx = (self.loss_perm_idx + np.array(self.perm_choices)).tolist() - - def min_pit_sample(self, loss): - """Compute the PIT loss for each sample. - - :param 1-D torch.Tensor loss: list of losses for one sample, - including [h1r1, h1r2, h2r1, h2r2] or - [h1r1, h1r2, h1r3, h2r1, h2r2, h2r3, h3r1, h3r2, h3r3] - :return minimum loss of best permutation - :rtype torch.Tensor (1) - :return the best permutation - :rtype List: len=2 - - """ - score_perms = ( - torch.stack( - [torch.sum(loss[loss_perm_idx]) for loss_perm_idx in self.loss_perm_idx] - ) - / self.num_spkrs - ) - perm_loss, min_idx = torch.min(score_perms, 0) - permutation = self.perm_choices[min_idx] - return perm_loss, permutation - - def pit_process(self, losses): - """Compute the PIT loss for a batch. - - :param torch.Tensor losses: losses (B, 1|4|9) - :return minimum losses of a batch with best permutation - :rtype torch.Tensor (B) - :return the best permutation - :rtype torch.LongTensor (B, 1|2|3) - - """ - bs = losses.size(0) - ret = [self.min_pit_sample(losses[i]) for i in range(bs)] - - loss_perm = torch.stack([r[0] for r in ret], dim=0).to(losses.device) # (B) - permutation = torch.tensor([r[1] for r in ret]).long().to(losses.device) - return torch.mean(loss_perm), permutation - - def permutationDFS(self, source, start): - """Get permutations with DFS. - - The final result is all permutations of the 'source' sequence. - e.g. [[1, 2], [2, 1]] or - [[1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 2, 1], [3, 1, 2]] - - :param np.ndarray source: (num_spkrs, 1), e.g. [1, 2, ..., N] - :param int start: the start point to permute - - """ - if start == len(source) - 1: # reach final state - self.perm_choices.append(source.tolist()) - for i in range(start, len(source)): - # swap values at position start and i - source[start], source[i] = source[i], source[start] - self.permutationDFS(source, start + 1) - # reverse the swap - source[start], source[i] = source[i], source[start] - - -class E2E(ASRInterface, torch.nn.Module): - """E2E module. 
- - :param int idim: dimension of inputs - :param int odim: dimension of outputs - :param Namespace args: argument Namespace containing options - """ - - @staticmethod - def add_arguments(parser): - """Add arguments.""" - E2EASR.encoder_add_arguments(parser) - E2E.encoder_mix_add_arguments(parser) - E2EASR.attention_add_arguments(parser) - E2EASR.decoder_add_arguments(parser) - return parser - - @staticmethod - def encoder_mix_add_arguments(parser): - """Add arguments for multi-speaker encoder.""" - group = parser.add_argument_group("E2E encoder setting for multi-speaker") - # asr-mix encoder - group.add_argument( - "--spa", - action="store_true", - help="Enable speaker parallel attention " - "for multi-speaker speech recognition task.", - ) - group.add_argument( - "--elayers-sd", - default=4, - type=int, - help="Number of speaker differentiate encoder layers" - "for multi-speaker speech recognition task.", - ) - return parser - - def get_total_subsampling_factor(self): - """Get total subsampling factor.""" - return self.enc.conv_subsampling_factor * int(np.prod(self.subsample)) - - def __init__(self, idim, odim, args): - """Initialize multi-speaker E2E module.""" - super(E2E, self).__init__() - torch.nn.Module.__init__(self) - self.mtlalpha = args.mtlalpha - assert 0.0 <= self.mtlalpha <= 1.0, "mtlalpha should be [0.0, 1.0]" - self.etype = args.etype - self.verbose = args.verbose - # NOTE: for self.build method - args.char_list = getattr(args, "char_list", None) - self.char_list = args.char_list - self.outdir = args.outdir - self.space = args.sym_space - self.blank = args.sym_blank - self.reporter = Reporter() - self.num_spkrs = args.num_spkrs - self.spa = args.spa - self.pit = PIT(self.num_spkrs) - - # below means the last number becomes eos/sos ID - # note that sos/eos IDs are identical - self.sos = odim - 1 - self.eos = odim - 1 - - # subsample info - self.subsample = get_subsample(args, mode="asr", arch="rnn_mix") - - # label smoothing info - if args.lsm_type and os.path.isfile(args.train_json): - logging.info("Use label smoothing with " + args.lsm_type) - labeldist = label_smoothing_dist( - odim, args.lsm_type, transcript=args.train_json - ) - else: - labeldist = None - - if getattr(args, "use_frontend", False): # use getattr to keep compatibility - self.frontend = frontend_for(args, idim) - self.feature_transform = feature_transform_for(args, (idim - 1) * 2) - idim = args.n_mels - else: - self.frontend = None - - # encoder - self.enc = encoder_for(args, idim, self.subsample) - # ctc - self.ctc = ctc_for(args, odim, reduce=False) - # attention - num_att = self.num_spkrs if args.spa else 1 - self.att = att_for(args, num_att) - # decoder - self.dec = decoder_for(args, odim, self.sos, self.eos, self.att, labeldist) - - # weight initialization - self.init_like_chainer() - - # options for beam search - if "report_cer" in vars(args) and (args.report_cer or args.report_wer): - recog_args = { - "beam_size": args.beam_size, - "penalty": args.penalty, - "ctc_weight": args.ctc_weight, - "maxlenratio": args.maxlenratio, - "minlenratio": args.minlenratio, - "lm_weight": args.lm_weight, - "rnnlm": args.rnnlm, - "nbest": args.nbest, - "space": args.sym_space, - "blank": args.sym_blank, - } - - self.recog_args = argparse.Namespace(**recog_args) - self.report_cer = args.report_cer - self.report_wer = args.report_wer - else: - self.report_cer = False - self.report_wer = False - self.rnnlm = None - - self.logzero = -10000000000.0 - self.loss = None - self.acc = None - - def init_like_chainer(self): - 
"""Initialize weight like chainer. - - chainer basically uses LeCun way: W ~ Normal(0, fan_in ** -0.5), b = 0 - pytorch basically uses W, b ~ Uniform(-fan_in**-0.5, fan_in**-0.5) - - however, there are two exceptions as far as I know. - - EmbedID.W ~ Normal(0, 1) - - LSTM.upward.b[forget_gate_range] = 1 (but not used in NStepLSTM) - """ - lecun_normal_init_parameters(self) - # exceptions - # embed weight ~ Normal(0, 1) - self.dec.embed.weight.data.normal_(0, 1) - # forget-bias = 1.0 - # https://discuss.pytorch.org/t/set-forget-gate-bias-of-lstm/1745 - for i in six.moves.range(len(self.dec.decoder)): - set_forget_bias_to_one(self.dec.decoder[i].bias_ih) - - def forward(self, xs_pad, ilens, ys_pad): - """E2E forward. - - :param torch.Tensor xs_pad: batch of padded input sequences (B, Tmax, idim) - :param torch.Tensor ilens: batch of lengths of input sequences (B) - :param torch.Tensor ys_pad: - batch of padded character id sequence tensor (B, num_spkrs, Lmax) - :return: ctc loss value - :rtype: torch.Tensor - :return: attention loss value - :rtype: torch.Tensor - :return: accuracy in attention decoder - :rtype: float - """ - # 0. Frontend - if self.frontend is not None: - hs_pad, hlens, mask = self.frontend(to_torch_tensor(xs_pad), ilens) - if isinstance(hs_pad, list): - hlens_n = [None] * self.num_spkrs - for i in range(self.num_spkrs): - hs_pad[i], hlens_n[i] = self.feature_transform(hs_pad[i], hlens) - hlens = hlens_n - else: - hs_pad, hlens = self.feature_transform(hs_pad, hlens) - else: - hs_pad, hlens = xs_pad, ilens - - # 1. Encoder - if not isinstance( - hs_pad, list - ): # single-channel input xs_pad (single- or multi-speaker) - hs_pad, hlens, _ = self.enc(hs_pad, hlens) - else: # multi-channel multi-speaker input xs_pad - for i in range(self.num_spkrs): - hs_pad[i], hlens[i], _ = self.enc(hs_pad[i], hlens[i]) - - # 2. CTC loss - if self.mtlalpha == 0: - loss_ctc, min_perm = None, None - else: - if not isinstance(hs_pad, list): # single-speaker input xs_pad - loss_ctc = torch.mean(self.ctc(hs_pad, hlens, ys_pad)) - else: # multi-speaker input xs_pad - ys_pad = ys_pad.transpose(0, 1) # (num_spkrs, B, Lmax) - loss_ctc_perm = torch.stack( - [ - self.ctc( - hs_pad[i // self.num_spkrs], - hlens[i // self.num_spkrs], - ys_pad[i % self.num_spkrs], - ) - for i in range(self.num_spkrs ** 2) - ], - dim=1, - ) # (B, num_spkrs^2) - loss_ctc, min_perm = self.pit.pit_process(loss_ctc_perm) - logging.info("ctc loss:" + str(float(loss_ctc))) - - # 3. attention loss - if self.mtlalpha == 1: - loss_att = None - acc = None - else: - if not isinstance(hs_pad, list): # single-speaker input xs_pad - loss_att, acc, _ = self.dec(hs_pad, hlens, ys_pad) - else: - for i in range(ys_pad.size(1)): # B - ys_pad[:, i] = ys_pad[min_perm[i], i] - rslt = [ - self.dec(hs_pad[i], hlens[i], ys_pad[i], strm_idx=i) - for i in range(self.num_spkrs) - ] - loss_att = sum([r[0] for r in rslt]) / float(len(rslt)) - acc = sum([r[1] for r in rslt]) / float(len(rslt)) - self.acc = acc - - # 4. 
compute cer without beam search - if self.mtlalpha == 0 or self.char_list is None: - cer_ctc = None - else: - cers = [] - for ns in range(self.num_spkrs): - y_hats = self.ctc.argmax(hs_pad[ns]).data - for i, y in enumerate(y_hats): - y_hat = [x[0] for x in groupby(y)] - y_true = ys_pad[ns][i] - - seq_hat = [ - self.char_list[int(idx)] for idx in y_hat if int(idx) != -1 - ] - seq_true = [ - self.char_list[int(idx)] for idx in y_true if int(idx) != -1 - ] - seq_hat_text = "".join(seq_hat).replace(self.space, " ") - seq_hat_text = seq_hat_text.replace(self.blank, "") - seq_true_text = "".join(seq_true).replace(self.space, " ") - - hyp_chars = seq_hat_text.replace(" ", "") - ref_chars = seq_true_text.replace(" ", "") - if len(ref_chars) > 0: - cers.append( - editdistance.eval(hyp_chars, ref_chars) / len(ref_chars) - ) - - cer_ctc = sum(cers) / len(cers) if cers else None - - # 5. compute cer/wer - if ( - self.training - or not (self.report_cer or self.report_wer) - or not isinstance(hs_pad, list) - ): - cer, wer = 0.0, 0.0 - else: - if self.recog_args.ctc_weight > 0.0: - lpz = [ - self.ctc.log_softmax(hs_pad[i]).data for i in range(self.num_spkrs) - ] - else: - lpz = None - - word_eds, char_eds, word_ref_lens, char_ref_lens = [], [], [], [] - nbest_hyps = [ - self.dec.recognize_beam_batch( - hs_pad[i], - torch.tensor(hlens[i]), - lpz[i], - self.recog_args, - self.char_list, - self.rnnlm, - strm_idx=i, - ) - for i in range(self.num_spkrs) - ] - # remove and - y_hats = [ - [nbest_hyp[0]["yseq"][1:-1] for nbest_hyp in nbest_hyps[i]] - for i in range(self.num_spkrs) - ] - for i in range(len(y_hats[0])): - hyp_words = [] - hyp_chars = [] - ref_words = [] - ref_chars = [] - for ns in range(self.num_spkrs): - y_hat = y_hats[ns][i] - y_true = ys_pad[ns][i] - - seq_hat = [ - self.char_list[int(idx)] for idx in y_hat if int(idx) != -1 - ] - seq_true = [ - self.char_list[int(idx)] for idx in y_true if int(idx) != -1 - ] - seq_hat_text = "".join(seq_hat).replace(self.recog_args.space, " ") - seq_hat_text = seq_hat_text.replace(self.recog_args.blank, "") - seq_true_text = "".join(seq_true).replace( - self.recog_args.space, " " - ) - - hyp_words.append(seq_hat_text.split()) - ref_words.append(seq_true_text.split()) - hyp_chars.append(seq_hat_text.replace(" ", "")) - ref_chars.append(seq_true_text.replace(" ", "")) - - tmp_word_ed = [ - editdistance.eval( - hyp_words[ns // self.num_spkrs], ref_words[ns % self.num_spkrs] - ) - for ns in range(self.num_spkrs ** 2) - ] # h1r1,h1r2,h2r1,h2r2 - tmp_char_ed = [ - editdistance.eval( - hyp_chars[ns // self.num_spkrs], ref_chars[ns % self.num_spkrs] - ) - for ns in range(self.num_spkrs ** 2) - ] # h1r1,h1r2,h2r1,h2r2 - - word_eds.append(self.pit.min_pit_sample(torch.tensor(tmp_word_ed))[0]) - word_ref_lens.append(len(sum(ref_words, []))) - char_eds.append(self.pit.min_pit_sample(torch.tensor(tmp_char_ed))[0]) - char_ref_lens.append(len("".join(ref_chars))) - - wer = ( - 0.0 - if not self.report_wer - else float(sum(word_eds)) / sum(word_ref_lens) - ) - cer = ( - 0.0 - if not self.report_cer - else float(sum(char_eds)) / sum(char_ref_lens) - ) - - alpha = self.mtlalpha - if alpha == 0: - self.loss = loss_att - loss_att_data = float(loss_att) - loss_ctc_data = None - elif alpha == 1: - self.loss = loss_ctc - loss_att_data = None - loss_ctc_data = float(loss_ctc) - else: - self.loss = alpha * loss_ctc + (1 - alpha) * loss_att - loss_att_data = float(loss_att) - loss_ctc_data = float(loss_ctc) - - loss_data = float(self.loss) - if loss_data < CTC_LOSS_THRESHOLD and not 
math.isnan(loss_data): - self.reporter.report( - loss_ctc_data, loss_att_data, self.acc, cer_ctc, cer, wer, loss_data - ) - else: - logging.warning("loss (=%f) is not correct", loss_data) - return self.loss - - def recognize(self, x, recog_args, char_list, rnnlm=None): - """E2E beam search. - - :param ndarray x: input acoustic feature (T, D) - :param Namespace recog_args: argument Namespace containing options - :param list char_list: list of characters - :param torch.nn.Module rnnlm: language model module - :return: N-best decoding results - :rtype: list - """ - prev = self.training - self.eval() - ilens = [x.shape[0]] - - # subsample frame - x = x[:: self.subsample[0], :] - h = to_device(self, to_torch_tensor(x).float()) - # make a utt list (1) to use the same interface for encoder - hs = h.contiguous().unsqueeze(0) - - # 0. Frontend - if self.frontend is not None: - hs, hlens, mask = self.frontend(hs, ilens) - hlens_n = [None] * self.num_spkrs - for i in range(self.num_spkrs): - hs[i], hlens_n[i] = self.feature_transform(hs[i], hlens) - hlens = hlens_n - else: - hs, hlens = hs, ilens - - # 1. Encoder - if not isinstance(hs, list): # single-channel multi-speaker input x - hs, hlens, _ = self.enc(hs, hlens) - else: # multi-channel multi-speaker input x - for i in range(self.num_spkrs): - hs[i], hlens[i], _ = self.enc(hs[i], hlens[i]) - - # calculate log P(z_t|X) for CTC scores - if recog_args.ctc_weight > 0.0: - lpz = [self.ctc.log_softmax(i)[0] for i in hs] - else: - lpz = None - - # 2. decoder - # decode the first utterance - y = [ - self.dec.recognize_beam( - hs[i][0], lpz[i], recog_args, char_list, rnnlm, strm_idx=i - ) - for i in range(self.num_spkrs) - ] - - if prev: - self.train() - return y - - def recognize_batch(self, xs, recog_args, char_list, rnnlm=None): - """E2E beam search. - - :param ndarray xs: input acoustic feature (T, D) - :param Namespace recog_args: argument Namespace containing options - :param list char_list: list of characters - :param torch.nn.Module rnnlm: language model module - :return: N-best decoding results - :rtype: list - """ - prev = self.training - self.eval() - ilens = np.fromiter((xx.shape[0] for xx in xs), dtype=np.int64) - - # subsample frame - xs = [xx[:: self.subsample[0], :] for xx in xs] - xs = [to_device(self, to_torch_tensor(xx).float()) for xx in xs] - xs_pad = pad_list(xs, 0.0) - - # 0. Frontend - if self.frontend is not None: - hs_pad, hlens, mask = self.frontend(xs_pad, ilens) - hlens_n = [None] * self.num_spkrs - for i in range(self.num_spkrs): - hs_pad[i], hlens_n[i] = self.feature_transform(hs_pad[i], hlens) - hlens = hlens_n - else: - hs_pad, hlens = xs_pad, ilens - - # 1. Encoder - if not isinstance(hs_pad, list): # single-channel multi-speaker input x - hs_pad, hlens, _ = self.enc(hs_pad, hlens) - else: # multi-channel multi-speaker input x - for i in range(self.num_spkrs): - hs_pad[i], hlens[i], _ = self.enc(hs_pad[i], hlens[i]) - - # calculate log P(z_t|X) for CTC scores - if recog_args.ctc_weight > 0.0: - lpz = [self.ctc.log_softmax(hs_pad[i]) for i in range(self.num_spkrs)] - normalize_score = False - else: - lpz = None - normalize_score = True - - # 2. decoder - y = [ - self.dec.recognize_beam_batch( - hs_pad[i], - hlens[i], - lpz[i], - recog_args, - char_list, - rnnlm, - normalize_score=normalize_score, - strm_idx=i, - ) - for i in range(self.num_spkrs) - ] - - if prev: - self.train() - return y - - def enhance(self, xs): - """Forward only the frontend stage. 
- - :param ndarray xs: input acoustic feature (T, C, F) - """ - if self.frontend is None: - raise RuntimeError("Frontend doesn't exist") - prev = self.training - self.eval() - ilens = np.fromiter((xx.shape[0] for xx in xs), dtype=np.int64) - - # subsample frame - xs = [xx[:: self.subsample[0], :] for xx in xs] - xs = [to_device(self, to_torch_tensor(xx).float()) for xx in xs] - xs_pad = pad_list(xs, 0.0) - enhanced, hlensm, mask = self.frontend(xs_pad, ilens) - if prev: - self.train() - - if isinstance(enhanced, (tuple, list)): - enhanced = list(enhanced) - mask = list(mask) - for idx in range(len(enhanced)): # number of speakers - enhanced[idx] = enhanced[idx].cpu().numpy() - mask[idx] = mask[idx].cpu().numpy() - return enhanced, mask, ilens - return enhanced.cpu().numpy(), mask.cpu().numpy(), ilens - - def calculate_all_attentions(self, xs_pad, ilens, ys_pad): - """E2E attention calculation. - - :param torch.Tensor xs_pad: batch of padded input sequences (B, Tmax, idim) - :param torch.Tensor ilens: batch of lengths of input sequences (B) - :param torch.Tensor ys_pad: - batch of padded character id sequence tensor (B, num_spkrs, Lmax) - :return: attention weights with the following shape, - 1) multi-head case => attention weights (B, H, Lmax, Tmax), - 2) other case => attention weights (B, Lmax, Tmax). - :rtype: float ndarray - """ - with torch.no_grad(): - # 0. Frontend - if self.frontend is not None: - hs_pad, hlens, mask = self.frontend(to_torch_tensor(xs_pad), ilens) - hlens_n = [None] * self.num_spkrs - for i in range(self.num_spkrs): - hs_pad[i], hlens_n[i] = self.feature_transform(hs_pad[i], hlens) - hlens = hlens_n - else: - hs_pad, hlens = xs_pad, ilens - - # 1. Encoder - if not isinstance(hs_pad, list): # single-channel multi-speaker input x - hs_pad, hlens, _ = self.enc(hs_pad, hlens) - else: # multi-channel multi-speaker input x - for i in range(self.num_spkrs): - hs_pad[i], hlens[i], _ = self.enc(hs_pad[i], hlens[i]) - - # Permutation - ys_pad = ys_pad.transpose(0, 1) # (num_spkrs, B, Lmax) - if self.num_spkrs <= 3: - loss_ctc = torch.stack( - [ - self.ctc( - hs_pad[i // self.num_spkrs], - hlens[i // self.num_spkrs], - ys_pad[i % self.num_spkrs], - ) - for i in range(self.num_spkrs ** 2) - ], - 1, - ) # (B, num_spkrs^2) - loss_ctc, min_perm = self.pit.pit_process(loss_ctc) - for i in range(ys_pad.size(1)): # B - ys_pad[:, i] = ys_pad[min_perm[i], i] - - # 2. Decoder - att_ws = [ - self.dec.calculate_all_attentions( - hs_pad[i], hlens[i], ys_pad[i], strm_idx=i - ) - for i in range(self.num_spkrs) - ] - - return att_ws - - -class EncoderMix(torch.nn.Module): - """Encoder module for the case of multi-speaker mixture speech. 
- - :param str etype: type of encoder network - :param int idim: number of dimensions of encoder network - :param int elayers_sd: - number of layers of speaker differentiate part in encoder network - :param int elayers_rec: - number of layers of shared recognition part in encoder network - :param int eunits: number of lstm units of encoder network - :param int eprojs: number of projection units of encoder network - :param np.ndarray subsample: list of subsampling numbers - :param float dropout: dropout rate - :param int in_channel: number of input channels - :param int num_spkrs: number of number of speakers - """ - - def __init__( - self, - etype, - idim, - elayers_sd, - elayers_rec, - eunits, - eprojs, - subsample, - dropout, - num_spkrs=2, - in_channel=1, - ): - """Initialize the encoder of single-channel multi-speaker ASR.""" - super(EncoderMix, self).__init__() - typ = etype.lstrip("vgg").rstrip("p") - if typ not in ["lstm", "gru", "blstm", "bgru"]: - logging.error("Error: need to specify an appropriate encoder architecture") - if etype.startswith("vgg"): - if etype[-1] == "p": - self.enc_mix = torch.nn.ModuleList([VGG2L(in_channel)]) - self.enc_sd = torch.nn.ModuleList( - [ - torch.nn.ModuleList( - [ - RNNP( - get_vgg2l_odim(idim, in_channel=in_channel), - elayers_sd, - eunits, - eprojs, - subsample[: elayers_sd + 1], - dropout, - typ=typ, - ) - ] - ) - for i in range(num_spkrs) - ] - ) - self.enc_rec = torch.nn.ModuleList( - [ - RNNP( - eprojs, - elayers_rec, - eunits, - eprojs, - subsample[elayers_sd:], - dropout, - typ=typ, - ) - ] - ) - logging.info("Use CNN-VGG + B" + typ.upper() + "P for encoder") - else: - logging.error( - f"Error: need to specify an appropriate encoder architecture. " - f"Illegal name {etype}" - ) - sys.exit() - else: - logging.error( - f"Error: need to specify an appropriate encoder architecture. " - f"Illegal name {etype}" - ) - sys.exit() - - self.num_spkrs = num_spkrs - - def forward(self, xs_pad, ilens): - """Encodermix forward. 
- - :param torch.Tensor xs_pad: batch of padded input sequences (B, Tmax, D) - :param torch.Tensor ilens: batch of lengths of input sequences (B) - :return: list: batch of hidden state sequences [num_spkrs x (B, Tmax, eprojs)] - :rtype: torch.Tensor - """ - # mixture encoder - for module in self.enc_mix: - xs_pad, ilens, _ = module(xs_pad, ilens) - - # SD and Rec encoder - xs_pad_sd = [xs_pad for i in range(self.num_spkrs)] - ilens_sd = [ilens for i in range(self.num_spkrs)] - for ns in range(self.num_spkrs): - # Encoder_SD: speaker differentiate encoder - for module in self.enc_sd[ns]: - xs_pad_sd[ns], ilens_sd[ns], _ = module(xs_pad_sd[ns], ilens_sd[ns]) - # Encoder_Rec: recognition encoder - for module in self.enc_rec: - xs_pad_sd[ns], ilens_sd[ns], _ = module(xs_pad_sd[ns], ilens_sd[ns]) - - # make mask to remove bias value in padded part - mask = to_device(xs_pad, make_pad_mask(ilens_sd[0]).unsqueeze(-1)) - - return [x.masked_fill(mask, 0.0) for x in xs_pad_sd], ilens_sd, None - - -def encoder_for(args, idim, subsample): - """Construct the encoder.""" - if getattr(args, "use_frontend", False): # use getattr to keep compatibility - # with frontend, the mixed speech are separated as streams for each speaker - return encoder_for_single(args, idim, subsample) - else: - return EncoderMix( - args.etype, - idim, - args.elayers_sd, - args.elayers, - args.eunits, - args.eprojs, - subsample, - args.dropout_rate, - args.num_spkrs, - ) diff --git a/spaces/shaheer/textgeneration/app.py b/spaces/shaheer/textgeneration/app.py deleted file mode 100644 index 00a03e3f7dc1d3844798f8bf6b751e4a368e8877..0000000000000000000000000000000000000000 --- a/spaces/shaheer/textgeneration/app.py +++ /dev/null @@ -1,18 +0,0 @@ -import gradio as gr -from gradio import inputs - -description = 'Story generation with GPT-2' - -interface = gr.Interface.load( - "huggingface/pranavpsv/gpt2-genre-story-generator", - title='Story Generation with GPT-2', - inputs = [ - gr.inputs.Textbox(lines=7, label='Story'), - ], - description=description, - examples = [ - ['Adventurer is approached by a mysterious stranger in the tavern for a new quest'] - ] -) - -interface.launch() \ No newline at end of file diff --git a/spaces/shoukaku/movie_recommendation/models/__init__.py b/spaces/shoukaku/movie_recommendation/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/silencewing/server/youyou/.history/math_20230613231148.html b/spaces/silencewing/server/youyou/.history/math_20230613231148.html deleted file mode 100644 index 6201a86101b7e8468c2056d89d6554cc76f26f05..0000000000000000000000000000000000000000 --- a/spaces/silencewing/server/youyou/.history/math_20230613231148.html +++ /dev/null @@ -1,229 +0,0 @@ - - - - - - - - - - Document - - - - -
    - - - - - - - - - - - - - - - - - - - - - - - - -
    题目
    答案
    正误
    得分
    -
    - - - - diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/CarX Street Customize Your Dream Car and Conquer the Streets on Android 12.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/CarX Street Customize Your Dream Car and Conquer the Streets on Android 12.md deleted file mode 100644 index 9e7755d601e11b002e4db848f15e820c651d6756..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/CarX Street Customize Your Dream Car and Conquer the Streets on Android 12.md +++ /dev/null @@ -1,24 +0,0 @@ - -

    CarX Street: A New Open World Racing Game for Android 12

- Are you a fan of racing games? Do you want to experience the thrill of driving fast cars on realistic streets and highways? If so, you should check out CarX Street, a new open world racing game for Android 12 devices. CarX Street lets you embrace the freedom of being a street racer in the dynamic open world of Sunset City. You can choose from a variety of cars, customize them to your liking, and compete against other players in different game modes. In this article, we will tell you more about CarX Street: how to download and install it on your Android 12 device, what its features and benefits are, and some tips and tricks to help you become the best driver in the city.

    What is CarX Street?

    - CarX Street is a racing game developed by CarX Technologies, LLC, the makers of CarX Drift Racing 2. It is a game that combines realistic races on highways and city streets with top-speed drift races. You can explore every corner of the enormous world of CarX Street and enjoy the exciting car races that will leave you exhilarated.

    A realistic and immersive racing game

    - One of the main attractions of CarX Street is its realistic physics engine that simulates the behavior of cars on the road. You can feel the difference between front-wheel drive, rear-wheel drive, and all-wheel drive cars. You can also feel the impact of different parts and upgrades on your car's performance. You can adjust the engine, transmission, body, suspension, tires, and more to suit your driving style and preferences.

    A dynamic and diverse open world

    - Another feature that makes CarX Street stand out is its dynamic and diverse open world. You can drive around Sunset City, a large city with different districts and landmarks. You can also explore the surrounding areas, such as mountain roads, coastal highways, industrial zones, and more. You can experience different weather conditions and day/night cycles that add to the realism and immersion of the game.

    A variety of cars and customization options

    - CarX Street also offers a variety of cars for you to choose from. You can find nearly-licensed cars from different manufacturers and categories. You can also customize your cars with visual tuning options that let you change the mirrors, headlights, lights, skirt, bumper, rims, and more. You can create a unique look for your car that reflects your personality and style.

    How to Download and Install CarX Street on Android 12?

    - If you are interested in playing CarX Street on your Android 12 device, here are the steps that you need to follow:

    Download from Google Play Store or APK file

- You can download CarX Street from the Google Play Store by searching for it or by clicking on this link. Alternatively, you can download the APK file from other sources such as APKCombo or APKMB. Make sure you download the latest version of the game that is compatible with Android 12.

    Allow installation from unknown sources

- If you download the APK file from outside the Google Play Store, you need to allow installation from unknown sources on your device. To do this, go to Settings > Apps and notifications > Special app access > Install unknown apps. Then find the app that you used to download the APK file and toggle on the permission to install unknown apps.

    Follow the instructions and launch the game

    - Once you have downloaded the APK file and allowed installation from unknown sources, you can install CarX Street on your device by tapping on the file and following the instructions. After the installation is complete, you can launch the game from your app drawer or home screen. You may need to grant some permissions to the game, such as access to your storage, location, and microphone.
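- If you prefer working from a computer, the same install can be done by sideloading the APK with Android's adb tool instead of tapping through the on-device installer. The sketch below is only an optional alternative to the steps above, not the game's official install flow; it assumes USB debugging is enabled in Developer Options, and carx-street.apk is a placeholder for whatever the downloaded file is actually called.

```bash
# Confirm the phone is connected and authorized for adb
adb devices

# Install the downloaded APK (-r replaces an existing install, if any).
# "carx-street.apk" is a placeholder file name -- use the real name of your download.
adb install -r carx-street.apk

# Optionally confirm the package landed on the device
adb shell pm list packages | grep -i carx
```

- After the install finishes, the game can be launched from the app drawer exactly as described above.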

    What are the Features and Benefits of CarX Street?

    - CarX Street is a game that offers many features and benefits for racing enthusiasts. Here are some of them:

    Stunning graphics and physics

    - CarX Street boasts stunning graphics that make the game look realistic and immersive. You can see the details of the cars, the roads, the buildings, and the environment. You can also enjoy the effects of lighting, shadows, reflections, smoke, dust, and more. The game also has a realistic physics engine that simulates the behavior of cars on different surfaces and conditions. You can feel the weight, speed, acceleration, braking, traction, and drifting of your car.

    Multiple game modes and challenges

    - CarX Street offers multiple game modes and challenges for you to enjoy. You can play in free roam mode, where you can explore the open world at your own pace. You can also play in race mode, where you can compete against other players or AI opponents in different types of races, such as sprint, circuit, drift, drag, and more. You can also play in story mode, where you can follow the plot of the game and complete various missions and quests. You can also play in event mode, where you can participate in special events and challenges that change every week.

    Online multiplayer and social features

    - CarX Street also has online multiplayer and social features that make the game more fun and interactive. You can play with or against other players from around the world in real-time races or co-op missions. You can also chat with other players, join clubs, create crews, send gifts, challenge rivals, and more. You can also share your achievements, screenshots, videos, and custom cars with other players on social media platforms such as Facebook, Instagram, or YouTube.

    What are the Tips and Tricks for CarX Street?

- If you want to improve your skills and performance in CarX Street, here are some tips and tricks that you can use:

    Learn your car's characteristics and performance

    - One of the first things that you should do in CarX Street is to learn your car's characteristics and performance. Each car has different stats, such as power, torque, weight, top speed, acceleration, handling, and more. You should know how your car behaves on different roads and conditions, such as asphalt, dirt, wet, dry, and more. You should also know how to adjust your car's settings, such as engine, transmission, suspension, tires, and more. You can test your car's performance in free roam mode or in the garage.

    Use nitro wisely and master drifting techniques

    - Another tip that you should follow in CarX Street is to use nitro wisely and master drifting techniques. Nitro is a boost that can help you gain speed and overtake your opponents. However, nitro is limited and needs to be refilled by performing drifts or stunts. Therefore, you should use nitro only when you need it, such as when you are behind or when you are approaching a straight road. You should also learn how to drift properly, as drifting can help you maintain speed and control while turning corners. You can drift by tapping the brake button while steering or by using the handbrake button.

    Join clubs, defeat bosses, and earn rewards

    - The last tip that we will share with you in CarX Street is to join clubs, defeat bosses, and earn rewards. Clubs are groups of players that share a common interest in racing. You can join a club or create your own club and invite other players. By joining a club, you can access exclusive features, such as club races, club chat, club garage, club shop, and more. You can also compete with other clubs and climb the club leaderboard. Bosses are powerful racers that control different areas of the city. You can challenge them and try to defeat them in races. By defeating bosses, you can unlock new cars, parts, locations, and more. You can also earn rewards by completing missions, quests, events, challenges, achievements, and more.

    Conclusion

    - CarX Street is a new open world racing game for Android 12 devices that offers realistic and immersive racing experiences on highways and city streets. You can choose from a variety of cars, customize them to your liking, and compete against other players in different game modes. You can also explore the dynamic and diverse open world of Sunset City and enjoy the stunning graphics and physics of the game. If you are looking for a fun and exciting racing game for your Android 12 device, you should definitely try CarX Street.

    FAQs

- Here are some frequently asked questions about CarX Street:
- Q: How much space does CarX Street require on my device?
- A: CarX Street requires about 1 GB of free space on your device.
- Q: How can I get more coins and diamonds in CarX Street?
- A: You can get more coins and diamonds by winning races, completing missions, quests, events, challenges, achievements, and more. You can also buy them with real money or watch ads.
- Q: How can I play CarX Street offline?
- A: You can play CarX Street offline in free roam mode or in race mode with AI opponents. However, you will not be able to access some features such as online multiplayer or social features.
- Q: How can I contact the developers of CarX Street?
- A: You can contact the developers of CarX Street by sending an email to support@carx-tech.com or by visiting their website.
- Q: How can I update CarX Street to the latest version?
- A: You can update CarX Street to the latest version by downloading it from the Google Play Store or from the APK file sources mentioned above.

    -

    carx street android 12 apk


    Download File ››› https://ssurll.com/2uNXrI



    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Bowmasters Mod APK with All Characters Unlocked for Free.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Bowmasters Mod APK with All Characters Unlocked for Free.md deleted file mode 100644 index 434a995871330f66c817d48a0165bd65d77604c4..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Bowmasters Mod APK with All Characters Unlocked for Free.md +++ /dev/null @@ -1,111 +0,0 @@ -
    -

    Download Game Bowmasters Mod Apk Unlock All Characters

    -

If you are looking for a fun and addictive multiplayer game with bowmen, then you should try Bowmasters. This game is a hotsy-totsy aim and shoot game with 60+ insane characters from all dimensions, 60+ different weapons for total mayhem, and multiple game modes to challenge your skills. But what if you want to unlock all the characters and enjoy the game without ads and limitations? You can do that by downloading the Bowmasters Mod Apk. In this article, we will tell you what Bowmasters is, what Bowmasters Mod Apk is, how to download and install it, and some tips for playing it. Let's get started!

    -

    download game bowmasters mod apk unlock all characters


    Download File 🆓 https://ssurll.com/2uNW3h



    -

    What is Bowmasters?

    -

Bowmasters is an online game developed by Playgendary Limited. It is available for both iOS and Android devices. The game has been downloaded over 50 million times on the Google Play Store and has a rating of 4.6 out of 5 stars. The game is also featured on the App Store as one of the best multiplayer games.

    -

    Features of Bowmasters

    -

    Bowmasters has many features that make it a fun and exciting game to play. Here are some of them:

    -

    Gameplay modes

    -

    Bowmasters has multiple game modes to suit your preferences. You can shoot birds or fruits down, defeat the enemies in duels and get money for that, or play online multiplayer with your friends or other players around the world. You can also play the zombie mode, where you have to defend your turf from the undead hordes.

    -

    bowmasters mod apk latest version free download
    -how to unlock all characters in bowmasters mod apk
    -bowmasters hack mod apk unlimited money and gems
    -download bowmasters mod apk for android
    -bowmasters mod apk offline play
    -bowmasters mod apk no ads
    -bowmasters mod apk all weapons unlocked
    -bowmasters mod apk unlimited coins and diamonds
    -bowmasters mod apk revdl
    -bowmasters mod apk rexdl
    -bowmasters mod apk happymod
    -download game bowmasters hack apk
    -download game bowmasters cheat apk
    -download game bowmasters premium apk
    -download game bowmasters pro apk
    -download game bowmasters cracked apk
    -download game bowmasters full version apk
    -download game bowmasters unlocked apk
    -download game bowmasters mega mod apk
    -download game bowmasters god mode apk
    -download game bowmasters unlimited everything apk
    -download game bowmasters free shopping apk
    -download game bowmasters vip mod apk
    -download game bowmasters super mod apk
    -download game bowmasters extreme mod apk
    -download game bowmasters 2 mod apk
    -download game bowmasters 3d mod apk
    -download game bowmasters online mod apk
    -download game bowmasters multiplayer mod apk
    -download game bowmasters pvp mod apk
    -download game bowmasters zombie mode mod apk
    -download game bowmasters adventure mode mod apk
    -download game bowmasters tournament mode mod apk
    -download game bowmasters duel mode mod apk
    -download game bowmasters classic mode mod apk
    -download game bowmasters new update mod apk
    -download game bowmasters new characters mod apk
    -download game bowmasters new weapons mod apk
    -download game bowmasters new levels mod apk
    -download game bowmasters new modes mod apk
    -download game bowmasters original version with all characters unlocked
    -how to install and play bowmasters mod apk with all characters unlocked
    -how to get free coins and gems in bowmasters mod apk with all characters unlocked
    -how to upgrade weapons and skills in bowmasters mod apk with all characters unlocked
    -how to complete challenges and achievements in bowmasters mod apk with all characters unlocked
    -how to win duels and tournaments in bowmasters mod apk with all characters unlocked
    -how to have fun fighting in different modes in bowmasters mod apk with all characters unlocked
    -how to enjoy the graphics and sound effects in bowmasters mod apk with all characters unlocked

    -

    Characters and weapons

    -

    Bowmasters has 60+ insane characters from all dimensions that you can choose from. Each character has a unique weapon and a special skill that can help you win the battles. Some of the characters are inspired by famous movies, cartoons, games, or celebrities. For example, you can play as Thor, Deadpool, Spider-Man, Batman, or even Donald Trump. You can also unlock new characters by completing achievements or watching ads.

    -

    Graphics and sound

    -

    Bowmasters has colorful and cartoonish graphics that make the game look appealing and funny. The game also has rag-doll physics that make the characters fly and bounce around when they get hit by the weapons. The sound effects and music are also well-made and match the theme of the game. You can hear the screams, explosions, cheers, and taunts of the characters as you play.

    -

    What is Bowmasters Mod Apk?

    -

    Bowmasters Mod Apk is a modified version of the original Bowmasters game that gives you some extra benefits and features that are not available in the official version. The mod apk is created by third-party developers who modify the original game files to unlock some features or add some cheats.

    -

    Benefits of Bowmasters Mod Apk

    -

    Some of the benefits of using Bowmasters Mod Apk are:

    -

    Unlock all characters

    -

    One of the main benefits of using Bowmasters Mod Apk is that you can unlock all the characters in the game without spending any money or watching any ads. You can access all the 60+ insane characters from all dimensions and use their weapons and skills in any game mode. This way, you can enjoy the game more and have more variety and fun.

    -

    Unlimited coins and gems

    -

    Another benefit of using Bowmasters Mod Apk is that you can get unlimited coins and gems in the game. Coins and gems are the currencies in the game that you can use to buy new characters, weapons, or chests. You can also use them to upgrade your characters and weapons to make them more powerful and effective. With Bowmasters Mod Apk, you can get unlimited coins and gems for free and spend them as much as you want without worrying about running out of them.

    -

    Remove ads

    -

    The last benefit of using Bowmasters Mod Apk is that you can remove all the ads in the game. Ads can be annoying and distracting when you are playing the game, especially when they pop up in the middle of the action or when you are trying to unlock a new character. With Bowmasters Mod Apk, you can enjoy the game without any ads and have a smooth and uninterrupted gaming experience.

    -

    How to download and install Bowmasters Mod Apk?

    -

If you want to download and install Bowmasters Mod Apk on your device, you only need to follow a few simple steps; an optional command-line check of the downloaded file is sketched right after the list. Here are the steps:

    -

    Steps to download and install Bowmasters Mod Apk

    -
      -
    1. First, you need to uninstall the original Bowmasters game from your device if you have it installed.
    2. -
    3. Second, you need to enable the unknown sources option on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
    4. -
    5. Third, you need to download the Bowmasters Mod Apk file from a reliable source. You can search for it on Google or use this link to download it.
    6. -
    7. Fourth, you need to locate the downloaded file on your device and tap on it to start the installation process.
    8. -
    9. Fifth, you need to follow the instructions on the screen and wait for the installation to finish.
    10. -
    11. Sixth, you need to launch the game and enjoy the mod features.
    12. -
    -
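Because the guide stresses downloading the file only from reliable and trusted sources, it can be worth sanity-checking the APK from a computer before the installation step. This is only an optional sketch under stated assumptions: "bowmasters-mod.apk" is a placeholder name for whatever you actually downloaded, and the checksum comparison is only useful if the download source publishes an expected hash.

```bash
# Print the file's SHA-256 so it can be compared with a checksum published by the source (if any).
# "bowmasters-mod.apk" is a placeholder -- substitute the real name of the downloaded file.
sha256sum bowmasters-mod.apk

# An APK is an ordinary ZIP archive, so its contents can be listed without installing it.
unzip -l bowmasters-mod.apk | head -n 20
```

If the printed hash does not match what the source lists, or the archive will not even list cleanly, download the file again from a different source before installing it.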

    Tips to play Bowmasters Mod Apk

    -

    Now that you have downloaded and installed Bowmasters Mod Apk, you might want some tips to play it better. Here are some tips to play Bowmasters Mod Apk:

    -
      -
    • Try different characters and weapons and find the ones that suit your style and preference.
    • -
    • Use your special skills wisely and at the right time to gain an advantage over your opponents.
    • -
    • Aim carefully and adjust your angle and power according to the distance and wind direction.
    • -
    • Collect coins and gems by winning duels, shooting birds or fruits, or opening chests.
    • -
    • Upgrade your characters and weapons to increase their damage, range, accuracy, and speed.
    • -
    • Play online multiplayer mode and challenge other players around the world or invite your friends to join you.
    • -
    -

    Conclusion

    -

    Bowmasters is a fun and addictive multiplayer game with bowmen that has 60+ insane characters from all dimensions, 60+ different weapons for total mayhem, and multiple game modes to challenge your skills. You can download Bowmasters Mod Apk to unlock all the characters, get unlimited coins and gems, and remove all the ads in the game. You can also follow some tips to play Bowmasters Mod Apk better and have more fun. So what are you waiting for? Download Bowmasters Mod Apk now and enjoy the game!

    -

    FAQs

    -

    Here are some frequently asked questions about Bowmasters Mod Apk:

    -
      -
    1. Is Bowmasters Mod Apk safe to use?
    2. -

      Yes, Bowmasters Mod Apk is safe to use as long as you download it from a trusted source. However, you should be careful not to use it on your main account or on any online platform that may ban you for using modded apps.

      -
    3. Do I need root access or jailbreak to use Bowmasters Mod Apk?
    4. -

      No, you do not need root access or jailbreak to use Bowmasters Mod Apk. You just need to enable the unknown sources option on your device and install the mod apk file as instructed above.

      -
    5. Can I play online multiplayer mode with Bowmasters Mod Apk?
    6. -

      Yes, you can play online multiplayer mode with Bowmasters Mod Apk. However, you may face some issues or errors while connecting with other players who are using the official version of the game. You may also get banned by the game developers if they detect that you are using a modded app.

      -
    7. Can I update Bowmasters Mod Apk?
    8. -

      No, you cannot update Bowmasters Mod Apk through the Google Play Store or the App Store. You need to uninstall the mod apk and download the latest version of the mod apk from the same source or a different one. You may also lose your progress and data if you update the mod apk.

      -
    9. Where can I download Bowmasters Mod Apk?
    10. -

      You can download Bowmasters Mod Apk from various sources on the internet. However, you should be careful and only download it from reliable and trusted sources that do not contain any viruses or malware. You can also use this link to download Bowmasters Mod Apk.

      -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/sino72/Passenger_Reconization/detectMotion.py b/spaces/sino72/Passenger_Reconization/detectMotion.py deleted file mode 100644 index 89fd2de7ac0dce5a6622375cd48d577722c9e026..0000000000000000000000000000000000000000 --- a/spaces/sino72/Passenger_Reconization/detectMotion.py +++ /dev/null @@ -1,71 +0,0 @@ -#修改自https://blog.csdn.net/qq_29367075/article/details/122933407 -import cv2 -import numpy as np -import tempfile -import os -import matplotlib -import matplotlib.pyplot as plt -matplotlib.use('Agg') - -kernel1 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)) -mog = cv2.createBackgroundSubtractorMOG2() # 创建混合高斯模型来用于北京建模 - -def motionDetection(inputPath): - cap = cv2.VideoCapture(inputPath)#从inputPath读入视频 - fps = cap.get(cv2.CAP_PROP_FPS) #获取视频的帧率 - size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), - int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))#获取视频的大小 - output_viedo_frame = cv2.VideoWriter()#初始化视频写入 - output_viedo_fmask = cv2.VideoWriter()#初始化视频写入 - outputPath=tempfile.mkdtemp()#创建输出视频的临时文件夹的路径 - fourcc = cv2.VideoWriter_fourcc('X','V','I','D')#视频编码:h264,只有h264格式的mp4文件才能在浏览器直接播放 - video_save_path_frame = os.path.join(outputPath,"frame.mp4")#创建输出视频路径 - video_save_path_fmask = os.path.join(outputPath,"fmask.mp4")#创建输出视频路径 - output_viedo_frame.open(video_save_path_frame , fourcc, fps, size, True) - output_viedo_fmask.open(video_save_path_fmask , fourcc, fps, size, True) - #对每一帧图片进行读取和处理 - while True: - ret, frame = cap.read()#将每一帧图片读取到img当中 - if frame is None: - print("camera is over...") - break - - fmask = mog.apply(frame) # 判断哪些是前景和背景 - - - MORPH_OPEN_1 = cv2.morphologyEx(fmask, cv2.MORPH_OPEN, kernel1) # 开运算,去除噪声和毛刺 - - contours, _ = cv2.findContours(fmask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) # 只检测外边框 - - for cont in contours: - # 计算各个轮廓的面积 - len = cv2.arcLength(cont, True) - if len > 300: # 去除一些小的噪声点 - # 找到一个轮廓 - x,y,w,h = cv2.boundingRect(cont) - # 画出这个矩形 - cv2.rectangle(frame, (x,y), (x+w, y+h), color=(0,255,0), thickness=3) - fmask=cv2.cvtColor(fmask,cv2.COLOR_BGR2RGB) - #print(fmask) - #image_np = np.squeeze(img.render())#用np.squeeze将输出结果降维 - output_viedo_frame.write(frame)#将处理后的图像写入视频 - output_viedo_fmask.write(fmask)#将处理后的图像写入视频 - output_viedo_frame.release()#释放 - output_viedo_fmask.release()#释放 - cap.release()#释放 - return video_save_path_frame,video_save_path_fmask,video_save_path_frame,video_save_path_fmask - - -#下面是直方图相关 -def image_histogram(img): - plt.close() - fig = plt.figure(figsize=(15,12)) - plt.hist(img.ravel(),256, [0, 255]); - return fig - -#img = cv2.imread("C:\\Users\\sino\\Pictures\\fan.jpg") -#image_histogram(img) - - - - diff --git a/spaces/sirfindcent/skimlit/setup.sh b/spaces/sirfindcent/skimlit/setup.sh deleted file mode 100644 index d8f97044be8894928d03fa6fe7e79af09be7edba..0000000000000000000000000000000000000000 --- a/spaces/sirfindcent/skimlit/setup.sh +++ /dev/null @@ -1,11 +0,0 @@ -mkdir -p ~/.streamlit/ -echo "\ -[general]\n\ -email = \"your-email@domain.com\"\n\ -" > ~/.streamlit/credentials.toml -echo "\ -[server]\n\ -headless = true\n\ -enableCORS=false\n\ -port = $PORT\n\ -" > ~/.streamlit/config.toml diff --git a/spaces/skimai/DragGAN_Streamlit/stylegan2/torch_utils/custom_ops.py b/spaces/skimai/DragGAN_Streamlit/stylegan2/torch_utils/custom_ops.py deleted file mode 100644 index 4cc4e43fc6f6ce79f2bd68a44ba87990b9b8564e..0000000000000000000000000000000000000000 --- a/spaces/skimai/DragGAN_Streamlit/stylegan2/torch_utils/custom_ops.py +++ /dev/null @@ -1,126 +0,0 @@ -# Copyright (c) 2021, 
NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -import os -import glob -import torch -import torch.utils.cpp_extension -import importlib -import hashlib -import shutil -from pathlib import Path - -from torch.utils.file_baton import FileBaton - -#---------------------------------------------------------------------------- -# Global options. - -verbosity = 'brief' # Verbosity level: 'none', 'brief', 'full' - -#---------------------------------------------------------------------------- -# Internal helper funcs. - -def _find_compiler_bindir(): - patterns = [ - 'C:/Program Files (x86)/Microsoft Visual Studio/*/Professional/VC/Tools/MSVC/*/bin/Hostx64/x64', - 'C:/Program Files (x86)/Microsoft Visual Studio/*/BuildTools/VC/Tools/MSVC/*/bin/Hostx64/x64', - 'C:/Program Files (x86)/Microsoft Visual Studio/*/Community/VC/Tools/MSVC/*/bin/Hostx64/x64', - 'C:/Program Files (x86)/Microsoft Visual Studio */vc/bin', - ] - for pattern in patterns: - matches = sorted(glob.glob(pattern)) - if len(matches): - return matches[-1] - return None - -#---------------------------------------------------------------------------- -# Main entry point for compiling and loading C++/CUDA plugins. - -_cached_plugins = dict() - -def get_plugin(module_name, sources, **build_kwargs): - assert verbosity in ['none', 'brief', 'full'] - - # Already cached? - if module_name in _cached_plugins: - return _cached_plugins[module_name] - - # Print status. - if verbosity == 'full': - print(f'Setting up PyTorch plugin "{module_name}"...') - elif verbosity == 'brief': - print(f'Setting up PyTorch plugin "{module_name}"... ', end='', flush=True) - - try: # pylint: disable=too-many-nested-blocks - # Make sure we can find the necessary compiler binaries. - if os.name == 'nt' and os.system("where cl.exe >nul 2>nul") != 0: - compiler_bindir = _find_compiler_bindir() - if compiler_bindir is None: - raise RuntimeError(f'Could not find MSVC/GCC/CLANG installation on this computer. Check _find_compiler_bindir() in "{__file__}".') - os.environ['PATH'] += ';' + compiler_bindir - - # Compile and load. - verbose_build = (verbosity == 'full') - - # Incremental build md5sum trickery. Copies all the input source files - # into a cached build directory under a combined md5 digest of the input - # source files. Copying is done only if the combined digest has changed. - # This keeps input file timestamps and filenames the same as in previous - # extension builds, allowing for fast incremental rebuilds. - # - # This optimization is done only in case all the source files reside in - # a single directory (just for simplicity) and if the TORCH_EXTENSIONS_DIR - # environment variable is set (we take this as a signal that the user - # actually cares about this.) - source_dirs_set = set(os.path.dirname(source) for source in sources) - if len(source_dirs_set) == 1 and ('TORCH_EXTENSIONS_DIR' in os.environ): - all_source_files = sorted(list(x for x in Path(list(source_dirs_set)[0]).iterdir() if x.is_file())) - - # Compute a combined hash digest for all source files in the same - # custom op directory (usually .cu, .cpp, .py and .h files). 
- hash_md5 = hashlib.md5() - for src in all_source_files: - with open(src, 'rb') as f: - hash_md5.update(f.read()) - build_dir = torch.utils.cpp_extension._get_build_directory(module_name, verbose=verbose_build) # pylint: disable=protected-access - digest_build_dir = os.path.join(build_dir, hash_md5.hexdigest()) - - if not os.path.isdir(digest_build_dir): - os.makedirs(digest_build_dir, exist_ok=True) - baton = FileBaton(os.path.join(digest_build_dir, 'lock')) - if baton.try_acquire(): - try: - for src in all_source_files: - shutil.copyfile(src, os.path.join(digest_build_dir, os.path.basename(src))) - finally: - baton.release() - else: - # Someone else is copying source files under the digest dir, - # wait until done and continue. - baton.wait() - digest_sources = [os.path.join(digest_build_dir, os.path.basename(x)) for x in sources] - torch.utils.cpp_extension.load(name=module_name, build_directory=build_dir, - verbose=verbose_build, sources=digest_sources, **build_kwargs) - else: - torch.utils.cpp_extension.load(name=module_name, verbose=verbose_build, sources=sources, **build_kwargs) - module = importlib.import_module(module_name) - - except: - if verbosity == 'brief': - print('Failed!') - raise - - # Print status and add to cache. - if verbosity == 'full': - print(f'Done setting up PyTorch plugin "{module_name}".') - elif verbosity == 'brief': - print('Done.') - _cached_plugins[module_name] = module - return module - -#---------------------------------------------------------------------------- diff --git a/spaces/sklearn-docs/SGD-convex-loss/README.md b/spaces/sklearn-docs/SGD-convex-loss/README.md deleted file mode 100644 index 9bf3ea09f6ff4c8688f56668d7d3b6a8d532047f..0000000000000000000000000000000000000000 --- a/spaces/sklearn-docs/SGD-convex-loss/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: SGD Convex Loss -emoji: 🌍 -colorFrom: yellow -colorTo: blue -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/decode_phone.sh b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/decode_phone.sh deleted file mode 100644 index 947342a0b7d8f50bcf4164b284ef3303a1247b64..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/decode_phone.sh +++ /dev/null @@ -1,33 +0,0 @@ -#!/bin/bash - -# decode into phones (and prepare a new data directory for HMM outputs) - -. ./path.sh - -set -eu - -out_dir= # same as in train.sh -dec_lmparam= # LM hyperparameters (e.g., 7.0.0) -dec_exp= -dec_script= -dec_splits="train valid" -dec_data_dir=$out_dir/dec_data # where to write HMM output - -data_dir=${out_dir}/data - -local/decode.sh --nj 40 --graph_name graph \ - --val_sets "$dec_splits" --decode_script $dec_script \ - $out_dir/exp/$dec_exp $data_dir $data_dir/lang_test - -if [ ! 
-z $dec_lmparam ]; then - for x in $dec_splits; do - mkdir -p $dec_data_dir/$x - cp $data_dir/$x/{feats.scp,cmvn.scp,utt2spk,spk2utt} $dec_data_dir/$x/ - - tra=$out_dir/exp/$dec_exp/decode_${x}/scoring/${dec_lmparam}.tra - cat $tra | utils/int2sym.pl -f 2- $data_dir/lang/words.txt | \ - sed 's:::g' | sed 's:::g' > $dec_data_dir/${x}/text - utils/fix_data_dir.sh $dec_data_dir/${x} - echo "WER on ${x} is" $(compute-wer ark:$data_dir/${x}_gt/text ark:$dec_data_dir/$x/text | cut -d" " -f2-) - done -fi diff --git a/spaces/sriramelango/Social_Classification_Public/models/ofa/unify_multihead_attention.py b/spaces/sriramelango/Social_Classification_Public/models/ofa/unify_multihead_attention.py deleted file mode 100644 index 428daf0f9a74be58f9d7d00a4a61c682492e8780..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/models/ofa/unify_multihead_attention.py +++ /dev/null @@ -1,518 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -from typing import Dict, Optional, Tuple - -import torch -import torch.nn.functional as F -from fairseq import utils -from fairseq.incremental_decoding_utils import with_incremental_state -from fairseq.modules.fairseq_dropout import FairseqDropout -from fairseq.modules.quant_noise import quant_noise -from torch import Tensor, nn -from torch.nn import Parameter - - -@with_incremental_state -class MultiheadAttention(nn.Module): - """Multi-headed attention. - - See "Attention Is All You Need" for more details. - """ - - def __init__( - self, - embed_dim, - num_heads, - kdim=None, - vdim=None, - dropout=0.0, - bias=True, - add_bias_kv=False, - add_zero_attn=False, - self_attention=False, - encoder_decoder_attention=False, - q_noise=0.0, - qn_block_size=8, - scale_factor=2, - scale_heads=False - ): - super().__init__() - self.embed_dim = embed_dim - self.kdim = kdim if kdim is not None else embed_dim - self.vdim = vdim if vdim is not None else embed_dim - self.qkv_same_dim = self.kdim == embed_dim and self.vdim == embed_dim - - self.num_heads = num_heads - self.dropout_module = FairseqDropout( - dropout, module_name=self.__class__.__name__ - ) - - self.head_dim = embed_dim // num_heads - assert ( - self.head_dim * num_heads == self.embed_dim - ), "embed_dim must be divisible by num_heads" - self.scaling = float(self.head_dim * scale_factor) ** -0.5 - - self.self_attention = self_attention - self.encoder_decoder_attention = encoder_decoder_attention - self.c_attn = nn.Parameter(torch.ones((self.num_heads,)), requires_grad=True) if scale_heads else None - - assert not self.self_attention or self.qkv_same_dim, ( - "Self-attention requires query, key and " "value to be of the same size" - ) - - self.k_proj = quant_noise( - nn.Linear(self.kdim, embed_dim, bias=bias), q_noise, qn_block_size - ) - self.v_proj = quant_noise( - nn.Linear(self.vdim, embed_dim, bias=bias), q_noise, qn_block_size - ) - self.q_proj = quant_noise( - nn.Linear(embed_dim, embed_dim, bias=bias), q_noise, qn_block_size - ) - - self.out_proj = quant_noise( - nn.Linear(embed_dim, embed_dim, bias=bias), q_noise, qn_block_size - ) - - if add_bias_kv: - self.bias_k = Parameter(torch.Tensor(1, 1, embed_dim)) - self.bias_v = Parameter(torch.Tensor(1, 1, embed_dim)) - else: - self.bias_k = self.bias_v = None - - self.add_zero_attn = add_zero_attn - - self.reset_parameters() - - self.onnx_trace = False - - def 
prepare_for_onnx_export_(self): - self.onnx_trace = True - - def reset_parameters(self): - if self.qkv_same_dim: - # Empirically observed the convergence to be much better with - # the scaled initialization - nn.init.xavier_uniform_(self.k_proj.weight, gain=1 / math.sqrt(2)) - nn.init.xavier_uniform_(self.v_proj.weight, gain=1 / math.sqrt(2)) - nn.init.xavier_uniform_(self.q_proj.weight, gain=1 / math.sqrt(2)) - else: - nn.init.xavier_uniform_(self.k_proj.weight) - nn.init.xavier_uniform_(self.v_proj.weight) - nn.init.xavier_uniform_(self.q_proj.weight) - - nn.init.xavier_uniform_(self.out_proj.weight) - if self.out_proj.bias is not None: - nn.init.constant_(self.out_proj.bias, 0.0) - if self.bias_k is not None: - nn.init.xavier_normal_(self.bias_k) - if self.bias_v is not None: - nn.init.xavier_normal_(self.bias_v) - - def forward( - self, - query, - key: Optional[Tensor], - value: Optional[Tensor], - key_padding_mask: Optional[Tensor] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - need_weights: bool = True, - static_kv: bool = False, - attn_mask: Optional[Tensor] = None, - self_attn_mask: Optional[Tensor] = None, - before_softmax: bool = False, - need_head_weights: bool = False, - attn_bias: Optional[Tensor] = None - ) -> Tuple[Tensor, Optional[Tensor]]: - """Input shape: Time x Batch x Channel - - Args: - key_padding_mask (ByteTensor, optional): mask to exclude - keys that are pads, of shape `(batch, src_len)`, where - padding elements are indicated by 1s. - need_weights (bool, optional): return the attention weights, - averaged over heads (default: False). - attn_mask (ByteTensor, optional): typically used to - implement causal attention, where the mask prevents the - attention from looking forward in time (default: None). - before_softmax (bool, optional): return the raw attention - weights and values before the attention softmax. - need_head_weights (bool, optional): return the attention - weights for each head. Implies *need_weights*. Default: - return the average attention weights over all heads. - """ - if need_head_weights: - need_weights = True - - is_tpu = query.device.type == "xla" - - tgt_len, bsz, embed_dim = query.size() - src_len = tgt_len - assert embed_dim == self.embed_dim, f"query dim {embed_dim} != {self.embed_dim}" - assert list(query.size()) == [tgt_len, bsz, embed_dim] - if key is not None: - src_len, key_bsz, _ = key.size() - if not torch.jit.is_scripting(): - assert key_bsz == bsz - assert value is not None - assert src_len, bsz == value.shape[:2] - - if ( - not self.onnx_trace - and not is_tpu # don't use PyTorch version on TPUs - and incremental_state is None - and not static_kv - # A workaround for quantization to work. Otherwise JIT compilation - # treats bias in linear module as method. 
- and not torch.jit.is_scripting() - and self_attn_mask is None - and attn_bias is None - ): - assert key is not None and value is not None - return F.multi_head_attention_forward( - query, - key, - value, - self.embed_dim, - self.num_heads, - torch.empty([0]), - torch.cat((self.q_proj.bias, self.k_proj.bias, self.v_proj.bias)), - self.bias_k, - self.bias_v, - self.add_zero_attn, - self.dropout_module.p, - self.out_proj.weight, - self.out_proj.bias, - self.training or self.dropout_module.apply_during_inference, - key_padding_mask, - need_weights, - attn_mask, - use_separate_proj_weight=True, - q_proj_weight=self.q_proj.weight, - k_proj_weight=self.k_proj.weight, - v_proj_weight=self.v_proj.weight, - ) - - if incremental_state is not None: - saved_state = self._get_input_buffer(incremental_state) - if saved_state is not None and "prev_key" in saved_state: - # previous time steps are cached - no need to recompute - # key and value if they are static - if static_kv: - assert self.encoder_decoder_attention and not self.self_attention - key = value = None - else: - saved_state = None - - if self.self_attention and self_attn_mask is None: - q = self.q_proj(query) - k = self.k_proj(query) - v = self.v_proj(query) - elif self.encoder_decoder_attention: - # encoder-decoder attention - q = self.q_proj(query) - if key is None: - assert value is None - k = v = None - else: - k = self.k_proj(key) - v = self.v_proj(key) - - else: - assert key is not None and value is not None - q = self.q_proj(query) - k = self.k_proj(key) - v = self.v_proj(value) - q *= self.scaling - - if self.bias_k is not None: - assert self.bias_v is not None - k = torch.cat([k, self.bias_k.repeat(1, bsz, 1)]) - v = torch.cat([v, self.bias_v.repeat(1, bsz, 1)]) - if attn_mask is not None: - attn_mask = torch.cat( - [attn_mask, attn_mask.new_zeros(attn_mask.size(0), 1)], dim=1 - ) - if key_padding_mask is not None: - key_padding_mask = torch.cat( - [ - key_padding_mask, - key_padding_mask.new_zeros(key_padding_mask.size(0), 1), - ], - dim=1, - ) - - q = ( - q.contiguous() - .view(tgt_len, bsz * self.num_heads, self.head_dim) - .transpose(0, 1) - ) - if k is not None: - k = ( - k.contiguous() - .view(-1, bsz * self.num_heads, self.head_dim) - .transpose(0, 1) - ) - if v is not None: - v = ( - v.contiguous() - .view(-1, bsz * self.num_heads, self.head_dim) - .transpose(0, 1) - ) - - if saved_state is not None: - # saved states are stored with shape (bsz, num_heads, seq_len, head_dim) - if "prev_key" in saved_state: - _prev_key = saved_state["prev_key"] - assert _prev_key is not None - prev_key = _prev_key.view(bsz * self.num_heads, -1, self.head_dim) - if static_kv: - k = prev_key - else: - assert k is not None - k = torch.cat([prev_key, k], dim=1) - src_len = k.size(1) - if "prev_value" in saved_state: - _prev_value = saved_state["prev_value"] - assert _prev_value is not None - prev_value = _prev_value.view(bsz * self.num_heads, -1, self.head_dim) - if static_kv: - v = prev_value - else: - assert v is not None - v = torch.cat([prev_value, v], dim=1) - prev_key_padding_mask: Optional[Tensor] = None - if "prev_key_padding_mask" in saved_state: - prev_key_padding_mask = saved_state["prev_key_padding_mask"] - assert k is not None and v is not None - key_padding_mask = MultiheadAttention._append_prev_key_padding_mask( - key_padding_mask=key_padding_mask, - prev_key_padding_mask=prev_key_padding_mask, - batch_size=bsz, - src_len=k.size(1), - static_kv=static_kv, - ) - - saved_state["prev_key"] = k.view(bsz, self.num_heads, -1, 
self.head_dim) - saved_state["prev_value"] = v.view(bsz, self.num_heads, -1, self.head_dim) - saved_state["prev_key_padding_mask"] = key_padding_mask - # In this branch incremental_state is never None - assert incremental_state is not None - incremental_state = self._set_input_buffer(incremental_state, saved_state) - assert k is not None - assert k.size(1) == src_len - - # This is part of a workaround to get around fork/join parallelism - # not supporting Optional types. - if key_padding_mask is not None and key_padding_mask.dim() == 0: - key_padding_mask = None - - if key_padding_mask is not None: - assert key_padding_mask.size(0) == bsz - assert key_padding_mask.size(1) == src_len - - if self.add_zero_attn: - assert v is not None - src_len += 1 - k = torch.cat([k, k.new_zeros((k.size(0), 1) + k.size()[2:])], dim=1) - v = torch.cat([v, v.new_zeros((v.size(0), 1) + v.size()[2:])], dim=1) - if attn_mask is not None: - attn_mask = torch.cat( - [attn_mask, attn_mask.new_zeros(attn_mask.size(0), 1)], dim=1 - ) - if key_padding_mask is not None: - key_padding_mask = torch.cat( - [ - key_padding_mask, - torch.zeros(key_padding_mask.size(0), 1).type_as( - key_padding_mask - ), - ], - dim=1, - ) - - attn_weights = torch.bmm(q, k.transpose(1, 2)) - attn_weights = self.apply_sparse_mask(attn_weights, tgt_len, src_len, bsz) - - assert list(attn_weights.size()) == [bsz * self.num_heads, tgt_len, src_len] - - if attn_bias is not None: - attn_weights += attn_bias - - if attn_mask is not None: - attn_mask = attn_mask.unsqueeze(0) - if self.onnx_trace: - attn_mask = attn_mask.repeat(attn_weights.size(0), 1, 1) - attn_weights += attn_mask - - if self_attn_mask is not None: - self_attn_mask = self_attn_mask.unsqueeze(1).expand(bsz, self.num_heads, tgt_len, src_len) - attn_weights += self_attn_mask.contiguous().view(bsz * self.num_heads, tgt_len, src_len) - - if key_padding_mask is not None: - # don't attend to padding symbols - attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) - if not is_tpu: - attn_weights = attn_weights.masked_fill( - key_padding_mask.unsqueeze(1).unsqueeze(2).to(torch.bool), - float("-inf"), - ) - else: - attn_weights = attn_weights.transpose(0, 2) - attn_weights = attn_weights.masked_fill(key_padding_mask, float("-inf")) - attn_weights = attn_weights.transpose(0, 2) - attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) - - if before_softmax: - return attn_weights, v - - attn_weights_float = utils.softmax( - attn_weights, dim=-1, onnx_trace=self.onnx_trace - ) - attn_weights = attn_weights_float.type_as(attn_weights) - attn_probs = self.dropout_module(attn_weights) - - assert v is not None - attn = torch.bmm(attn_probs, v) - assert list(attn.size()) == [bsz * self.num_heads, tgt_len, self.head_dim] - if self.onnx_trace and attn.size(1) == 1: - # when ONNX tracing a single decoder step (sequence length == 1) - # the transpose is a no-op copy before view, thus unnecessary - attn = attn.contiguous().view(tgt_len, bsz, embed_dim) - else: - attn = attn.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim) - if self.c_attn is not None: - attn = attn.view(tgt_len, bsz, self.num_heads, self.head_dim) - attn = torch.einsum('tbhd,h->tbhd', attn, self.c_attn) - attn = attn.reshape(tgt_len, bsz, self.embed_dim) - attn = self.out_proj(attn) - attn_weights: Optional[Tensor] = None - if need_weights: - attn_weights = attn_weights_float.view( - bsz, self.num_heads, tgt_len, src_len - ).transpose(1, 0) - if not need_head_weights: - # average attention 
weights over heads - attn_weights = attn_weights.mean(dim=0) - - return attn, attn_weights - - @staticmethod - def _append_prev_key_padding_mask( - key_padding_mask: Optional[Tensor], - prev_key_padding_mask: Optional[Tensor], - batch_size: int, - src_len: int, - static_kv: bool, - ) -> Optional[Tensor]: - # saved key padding masks have shape (bsz, seq_len) - if prev_key_padding_mask is not None and static_kv: - new_key_padding_mask = prev_key_padding_mask - elif prev_key_padding_mask is not None and key_padding_mask is not None: - new_key_padding_mask = torch.cat( - [prev_key_padding_mask.float(), key_padding_mask.float()], dim=1 - ) - # During incremental decoding, as the padding token enters and - # leaves the frame, there will be a time when prev or current - # is None - elif prev_key_padding_mask is not None: - if src_len > prev_key_padding_mask.size(1): - filler = torch.zeros( - (batch_size, src_len - prev_key_padding_mask.size(1)), - device=prev_key_padding_mask.device, - ) - new_key_padding_mask = torch.cat( - [prev_key_padding_mask.float(), filler.float()], dim=1 - ) - else: - new_key_padding_mask = prev_key_padding_mask.float() - elif key_padding_mask is not None: - if src_len > key_padding_mask.size(1): - filler = torch.zeros( - (batch_size, src_len - key_padding_mask.size(1)), - device=key_padding_mask.device, - ) - new_key_padding_mask = torch.cat( - [filler.float(), key_padding_mask.float()], dim=1 - ) - else: - new_key_padding_mask = key_padding_mask.float() - else: - new_key_padding_mask = prev_key_padding_mask - return new_key_padding_mask - - @torch.jit.export - def reorder_incremental_state( - self, - incremental_state: Dict[str, Dict[str, Optional[Tensor]]], - new_order: Tensor, - ): - """Reorder buffered internal state (for incremental generation).""" - input_buffer = self._get_input_buffer(incremental_state) - if input_buffer is not None: - for k in input_buffer.keys(): - input_buffer_k = input_buffer[k] - if input_buffer_k is not None: - if self.encoder_decoder_attention and input_buffer_k.size( - 0 - ) == new_order.size(0): - break - input_buffer[k] = input_buffer_k.index_select(0, new_order) - incremental_state = self._set_input_buffer(incremental_state, input_buffer) - return incremental_state - - def _get_input_buffer( - self, incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] - ) -> Dict[str, Optional[Tensor]]: - result = self.get_incremental_state(incremental_state, "attn_state") - if result is not None: - return result - else: - empty_result: Dict[str, Optional[Tensor]] = {} - return empty_result - - def _set_input_buffer( - self, - incremental_state: Dict[str, Dict[str, Optional[Tensor]]], - buffer: Dict[str, Optional[Tensor]], - ): - return self.set_incremental_state(incremental_state, "attn_state", buffer) - - def apply_sparse_mask(self, attn_weights, tgt_len: int, src_len: int, bsz: int): - return attn_weights - - def upgrade_state_dict_named(self, state_dict, name): - prefix = name + "." 
if name != "" else "" - items_to_add = {} - keys_to_remove = [] - for k in state_dict.keys(): - if k.endswith(prefix + "in_proj_weight"): - # in_proj_weight used to be q + k + v with same dimensions - dim = int(state_dict[k].shape[0] / 3) - items_to_add[prefix + "q_proj.weight"] = state_dict[k][:dim] - items_to_add[prefix + "k_proj.weight"] = state_dict[k][dim : 2 * dim] - items_to_add[prefix + "v_proj.weight"] = state_dict[k][2 * dim :] - - keys_to_remove.append(k) - - k_bias = prefix + "in_proj_bias" - if k_bias in state_dict.keys(): - dim = int(state_dict[k].shape[0] / 3) - items_to_add[prefix + "q_proj.bias"] = state_dict[k_bias][:dim] - items_to_add[prefix + "k_proj.bias"] = state_dict[k_bias][ - dim : 2 * dim - ] - items_to_add[prefix + "v_proj.bias"] = state_dict[k_bias][2 * dim :] - - keys_to_remove.append(prefix + "in_proj_bias") - - for k in keys_to_remove: - del state_dict[k] - - for key, value in items_to_add.items(): - state_dict[key] = value diff --git a/spaces/stomexserde/gpt4-ui/Examples/6 Is Mine J-pop Hits Punk-covers 2 Zip.md b/spaces/stomexserde/gpt4-ui/Examples/6 Is Mine J-pop Hits Punk-covers 2 Zip.md deleted file mode 100644 index 7b7dcf684f2b84bdacd3dc8ac4648b804cea940e..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/6 Is Mine J-pop Hits Punk-covers 2 Zip.md +++ /dev/null @@ -1,12 +0,0 @@ -
    -

    Idoltic Punk-Covers: A Review of the Album by 6% is MINE (IDOL)

    -

    6% is MINE is a mysterious masked band that covers J-POP songs with a punk rock twist. Their latest album, Idoltic Punk-Covers, features a collaboration with a secret idol group that sings along to some of the most iconic idol songs in history. The result is a wild and energetic ride that mixes pop melodies with distorted guitars and drums.

    -

    The album consists of nine tracks, each one a cover of a different idol song from various eras and genres. Some of the songs are well-known classics, such as "Akai Sweet Pea" by Seiko Matsuda, "Valentine Kiss" by Sayuri Kokusho, and "Sailor Fuku wo Nugasanai de" by Onyanko Club. Others are more recent hits, such as "Shonichi" by AKB48 and "SECRET BASE ~Kimi ga Kureta Mono~" by ZONE. The album also includes some surprises, such as "Southpaw" by Pink Lady and "Momoiro Kataomoi" by Aya Matsuura.

    -

    6 is mine j-pop hits punk-covers 2 zip


    DOWNLOAD ☆☆☆☆☆ https://urlgoal.com/2uI5Wm



    -

    The band does not shy away from adding their own flair to the songs, changing the tempo, key, and arrangement to suit their punk style. The vocals are also varied, ranging from sweet and cute to rough and raspy. The idol group that joins them adds another layer of contrast and harmony to the songs, creating a unique blend of voices that sometimes clash and sometimes complement each other.

    -

    The album is a fun and refreshing take on some of the most beloved idol songs in Japan. It is a tribute to the diversity and legacy of idol music, as well as a showcase of the band's creativity and talent. Fans of J-POP and punk rock alike will enjoy this album, as it offers something for everyone.

    One of the most intriguing aspects of 6% is MINE is their mysterious identity. The band members wear masks and costumes that conceal their faces and bodies, and they use pseudonyms instead of their real names. The band has never revealed their true identities, and they have kept their fans guessing and speculating for years. Some rumors suggest that they are members of other famous bands, such as SiM, [^2^] while others claim that they are former idols or actors. The band has also collaborated with other masked groups, such as the idol group that appears on Idoltic Punk-Covers, whose members are also unknown.

    -

    Another interesting aspect of 6% is MINE is their diverse and prolific discography. The band has released several albums and singles that cover various genres and themes, such as J-POP, anime, Ghibli, Disney, rock, metal, and more. The band has also performed live at various venues and festivals, such as Summer Sonic, Countdown Japan, and Vans Warped Tour Japan. The band has gained a loyal fan base both in Japan and overseas, thanks to their catchy and energetic songs and their charismatic and humorous stage presence.

    -

    6% is MINE is a band that defies conventions and expectations. They are not afraid to experiment with different styles and sounds, and they are not bound by any labels or categories. They are a band that celebrates music as a form of expression and entertainment, and they invite their listeners to join them in their musical journey. They are a band that proves that punk rock is not dead, but alive and kicking.

    -

    e93f5a0c3f
    -
    -
    \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/A Recent Acquisition !LINK!.md b/spaces/stomexserde/gpt4-ui/Examples/A Recent Acquisition !LINK!.md deleted file mode 100644 index e4d20f6cfa21ec6c2bf9b42ddfeaacf340cfb745..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/A Recent Acquisition !LINK!.md +++ /dev/null @@ -1,18 +0,0 @@ - -

    A recent acquisition: How XYZ Inc. bought ABC Ltd. for $1.2 billion

    -

    XYZ Inc., a leading global provider of software solutions, announced on Monday that it has completed the acquisition of ABC Ltd., a UK-based company that specializes in cloud computing and artificial intelligence. The deal, which was valued at $1.2 billion, is expected to boost XYZ's revenue and market share in the fast-growing cloud and AI sectors.

    -

    A recent acquisition


    DOWNLOAD ►►►►► https://urlgoal.com/2uI7U6



    -

    According to XYZ's CEO, John Smith, the acquisition of ABC is a strategic move that will enhance XYZ's product portfolio and customer base. "We are thrilled to welcome ABC to the XYZ family. ABC has a proven track record of delivering innovative and scalable cloud and AI solutions to clients across various industries. By combining our strengths and expertise, we will be able to offer more value and choice to our customers and accelerate our growth in the digital transformation market."

    -

    ABC's founder and CEO, Jane Doe, also expressed her excitement about joining forces with XYZ. "We are proud of what we have achieved at ABC over the past decade. We have built a strong reputation for delivering cutting-edge cloud and AI solutions that help our customers solve complex problems and optimize their performance. Joining XYZ will enable us to leverage their global reach and resources to expand our offerings and serve more customers around the world."

    -

    The acquisition of ABC is the latest in a series of strategic moves by XYZ to strengthen its position as a leader in the software industry. In the past year, XYZ has also acquired DEF Inc., a US-based company that provides data analytics and business intelligence solutions, and GHI Ltd., a Canadian company that offers cybersecurity and blockchain services.

    - -

    As a result of the acquisition, ABC will operate as a subsidiary of XYZ and retain its brand name and identity. ABC's current management team will continue to lead the company and report to John Smith. The integration process is expected to be completed by the end of the year.

    -

    Both XYZ and ABC have a strong commitment to innovation and customer satisfaction. Together, they will leverage their complementary capabilities and technologies to create more value for their customers and stakeholders. They will also continue to invest in research and development to deliver new and improved solutions that meet the evolving needs of the market.

    -

    The acquisition of ABC is a win-win situation for both companies and their customers. It will create a more diversified and competitive software company that can offer a wider range of solutions and services to clients across various sectors and regions. It will also create new opportunities for growth and collaboration for both XYZ and ABC employees.

    - -

    The acquisition of ABC has also received positive feedback from industry experts and analysts. According to Mark Jones, a senior analyst at XYZ Research, the deal is a smart move by XYZ that will give it a competitive edge in the cloud and AI markets. "XYZ has made a bold and strategic decision to acquire ABC, which is one of the most innovative and respected players in the cloud and AI space. This acquisition will not only boost XYZ's revenue and market share, but also enhance its reputation and credibility as a leader in the software industry."

    -

    -

    Similarly, Lisa Lee, a principal consultant at ABC Consulting, praised the deal as a win-win situation for both companies and their customers. "ABC has been a pioneer and leader in the cloud and AI sector for years, delivering high-quality and cost-effective solutions to clients across various industries. By joining forces with XYZ, ABC will be able to access more resources and opportunities to expand its offerings and reach more customers around the world. At the same time, XYZ will benefit from ABC's expertise and experience in the cloud and AI domain, which will enable it to offer more value and choice to its customers."

    -

    The integration of ABC is expected to be completed in the third quarter of 2023, following the regulatory approvals and customary closing conditions that accompanied the deal. XYZ and ABC will provide more details about the deal and its impact on their customers and employees in the coming weeks.

    7b8c122e87
    -
    -
    \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Bamboo Tablet Software Download [UPDATED] Mac.md b/spaces/stomexserde/gpt4-ui/Examples/Bamboo Tablet Software Download [UPDATED] Mac.md deleted file mode 100644 index 3ba894d1fe286eab86e0ddea1d76372b3ed05cbf..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Bamboo Tablet Software Download [UPDATED] Mac.md +++ /dev/null @@ -1,25 +0,0 @@ - -

    How to Download and Install Bamboo Tablet Software on Mac

    -

    If you have a Bamboo tablet and want to use it on your Mac, you might be wondering how to download and install the software that makes it work. In this article, we will guide you through the steps to get your Bamboo tablet up and running on your Mac.

    -

    Bamboo Tablet Software Download Mac


    Download Zip »»» https://urlgoal.com/2uIbJU



    -

    Step 1: Download the driver from Wacom website

    -

    The driver is the software that allows your Bamboo tablet to communicate with your Mac. To download the driver, go to https://www.wacom.com/en-us/support/product-support/drivers and search for your product name or model number. You can also select your product category from the list. For example, if you have a Bamboo Pen tablet, you can select "Pen Tablets" from the category menu.[^1^]

    -

    Once you find your product, click on "Download" and choose the driver that matches your Mac operating system. For example, if you have macOS 13, you can download Driver 6.4.1-2 (macOS 10.15 - 13). Save the file to your computer and remember where you saved it.

    -

    Step 2: Install the driver on your Mac

    -

    Before you install the driver, make sure your Bamboo tablet is not connected to your Mac. Then, locate the file you downloaded and double-click on it to start the installation process. Follow the instructions on the screen to complete the installation.[^1^]

    -

    Note: Installing the driver on a Mac requires changes to your Security & Privacy settings. You might need to allow the Wacom Center app to access your system preferences and input monitoring. To do this, go to System Preferences > Security & Privacy > Privacy and check the boxes for Wacom Center under Accessibility and Input Monitoring.[^2^]

    -

    -

    Step 3: Restart your Mac and connect your Bamboo tablet

    -

    After installing the driver, restart your Mac and plug the USB cable into your tablet and computer. You should see a blue light on your tablet indicating that it is connected.[^2^]

    -

    If you want to connect your Bamboo tablet via Bluetooth, you can also do that by following these steps:[^2^]

    -
      -
    • Unplug your tablet from your Mac.
    • -
    • Open the Bluetooth settings/preferences on your Mac.
    • -
    • Press the power (middle) button of your Bamboo tablet and the LED will start blinking blue.
    • -
    • On your Mac, select "Wacom Intuos" and then "Pair".
    • -
    -

    Step 4: Get started with your Bamboo tablet

    -

    Congratulations! You have successfully downloaded and installed Bamboo tablet software on your Mac. Now you can start creating with your new Wacom Intuos. To get started, you can check out some tutorials and FAQs on https://www.wacom.com/en-us/getting-started/wacom-intuos. You can also get some complimentary software by signing in or creating a Wacom ID and registering your Intuos.[^2^]

    -

    We hope this article was helpful for you. If you have any questions or issues with your Bamboo tablet software, you can visit https://support.wacom.com/hc/en-us for more support.[^1^]

    81aa517590
    -
    -
    \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Dishpointer License Key __HOT__.md b/spaces/stomexserde/gpt4-ui/Examples/Dishpointer License Key __HOT__.md deleted file mode 100644 index 926c5756e0f66205e64e6e5e9bde36ae7f252976..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Dishpointer License Key __HOT__.md +++ /dev/null @@ -1,25 +0,0 @@ - -

    How to Get a Dishpointer License Key for Free

    -

    If you are looking for a way to align your satellite dish with the best accuracy, you might have heard of Dishpointer, a website that helps you find the optimal elevation and azimuth for your dish based on your location and satellite data. Dishpointer also offers a mobile app that uses augmented reality to show you where the satellites are in the sky, making it easier to point your dish.

    -

    However, to use the full features of the app, you need a dishpointer license key, which costs $19.99 on the Google Play Store or the App Store. That might seem like a lot of money for an app that you might only use once or twice. So, is there a way to get a dishpointer license key for free?

    -

    dishpointer license key


    Download File ✺✺✺ https://urlgoal.com/2uIaZz



    -

    The answer is yes, but you have to be careful. There are some websites and apps that claim to offer free dishpointer license keys, but they might be scams or malware that could harm your device or steal your personal information. For example, one website that claims to offer free dishpointer license keys is soundcloud.com/mainulbrego/dishpointer-license-key-best[^2^], but it is actually a phishing site that tries to trick you into downloading a malicious file.

    -

    The safest way to get a dishpointer license key for free is to use a legitimate website that offers giveaways or discounts for apps. For example, you can check out appsumo.com, which often has deals for popular apps and software. You can also look for coupon codes or promo codes on sites like retailmenot.com or slickdeals.net. Sometimes, you might find a code that gives you a free or discounted dishpointer license key.

    -

    Another option is to use an alternative app that does not require a license key. For example, you can try Satellite-AR, which is also an augmented reality app that shows you where the satellites are in the sky. It is free and available for both Android and iOS devices. However, it might not be as accurate or reliable as Dishpointer, so you should always double-check your dish alignment with a signal meter or receiver.

    -

    In conclusion, getting a dishpointer license key for free is possible, but you have to be careful and avoid scams or malware. You can use legitimate websites that offer giveaways or discounts for apps, or you can use alternative apps that do not require a license key. However, if you want the best accuracy and reliability for your dish alignment, you might want to consider buying a dishpointer license key from the official website or app store.

    - -

    How to Use Dishpointer to Align Your Satellite Dish

    -

    Now that you have a dishpointer license key, you might be wondering how to use it to align your satellite dish. Here are the steps you need to follow:

    -

    -
      -
    1. Download and install the Dishpointer app on your Android or iOS device. You can find it on the Google Play Store or the App Store.
    2. -
    3. Open the app and enter your dishpointer license key when prompted. You can also enter your email address and password if you have an account on the Dishpointer website.
    4. -
    5. Select your satellite from the list of available satellites. You can also search for your satellite by name or frequency.
    6. -
    7. Enter your location by using GPS, entering your address, or selecting a point on the map.
    8. -
    9. The app will show you the elevation and azimuth for your dish based on your location and satellite data. You can also see the LNB skew and dish skew angles, which are the rotations of your LNB and dish to match the polarization of the satellite signal.
    10. -
    11. Use the augmented reality feature of the app to see where the satellite is in the sky. You can use your device's camera to scan the horizon and find the satellite. You can also use a compass mode to see the direction of the satellite.
    12. -
    13. Adjust your dish according to the app's instructions. You can use a signal meter or receiver to fine-tune your dish alignment and get the best signal quality.
    14. -
    -

    Congratulations, you have successfully aligned your satellite dish with Dishpointer!
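    For readers curious about where those elevation and azimuth numbers come from: for a geostationary satellite they can be approximated from your latitude/longitude and the satellite's orbital longitude. The sketch below is a rough spherical-Earth calculation, not Dishpointer's actual algorithm, and the London/Astra 2 coordinates at the end are only an illustrative example.

```python
import math

R_E = 6378.137    # mean Earth radius, km
R_GEO = 42164.0   # geostationary orbit radius, km

def look_angles(site_lat_deg, site_lon_deg, sat_lon_deg):
    """Approximate azimuth/elevation from a site to a geostationary satellite."""
    lat, lon = math.radians(site_lat_deg), math.radians(site_lon_deg)
    sat_lon = math.radians(sat_lon_deg)

    # Positions in Earth-centred coordinates (spherical Earth,
    # satellite assumed to sit in the equatorial plane)
    site = (R_E * math.cos(lat) * math.cos(lon),
            R_E * math.cos(lat) * math.sin(lon),
            R_E * math.sin(lat))
    sat = (R_GEO * math.cos(sat_lon), R_GEO * math.sin(sat_lon), 0.0)
    d = [s - p for s, p in zip(sat, site)]

    # Project the pointing vector onto the local East/North/Up axes
    east = -math.sin(lon) * d[0] + math.cos(lon) * d[1]
    north = (-math.sin(lat) * math.cos(lon) * d[0]
             - math.sin(lat) * math.sin(lon) * d[1]
             + math.cos(lat) * d[2])
    up = (math.cos(lat) * math.cos(lon) * d[0]
          + math.cos(lat) * math.sin(lon) * d[1]
          + math.sin(lat) * d[2])

    rng = math.sqrt(sum(c * c for c in d))
    azimuth = math.degrees(math.atan2(east, north)) % 360.0
    elevation = math.degrees(math.asin(up / rng))
    return azimuth, elevation

# Illustrative example: central London (51.5 N, 0.1 W) to Astra 2 at 28.2 E;
# prints roughly (146, 25): azimuth in degrees from true north, elevation in degrees.
print(look_angles(51.5, -0.1, 28.2))
```

    Treat the output only as a sanity check against what the app reports; magnetic declination, your exact coordinates, and the dish's offset angle all shift the final pointing.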

    81aa517590
    -
    -
    \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Download ((INSTALL)) Film Wazir Full Movie 3gp Download ((INSTALL)).md b/spaces/stomexserde/gpt4-ui/Examples/Download ((INSTALL)) Film Wazir Full Movie 3gp Download ((INSTALL)).md deleted file mode 100644 index 33e2716d30fd03022a8cd8c3bc8454d0dd76921e..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Download ((INSTALL)) Film Wazir Full Movie 3gp Download ((INSTALL)).md +++ /dev/null @@ -1,30 +0,0 @@ - -

    How to Download Film Wazir Full Movie 3GP Download

    -

    If you are looking for a way to download film Wazir full movie 3gp download, you have come to the right place. Wazir is a 2016 Indian thriller film starring Amitabh Bachchan and Farhan Akhtar. The film revolves around a chess master and a suspended cop who team up to take down a mysterious enemy.

    -

    download film Wazir full movie 3gp download


    Downloadhttps://urlgoal.com/2uI9tl



    -

    Downloading film Wazir full movie 3gp download is easy and fast. You just need to follow these simple steps:

    -
      -
    1. Go to the website www.example.com. This is a reliable and safe site that offers high-quality 3gp movies for free.
    2. -
    3. Search for the film Wazir in the search box. You can also browse through the categories or genres to find the film.
    4. -
    5. Click on the film title and you will be redirected to the download page. There you will see the details of the film, such as the synopsis, cast, rating, and runtime.
    6. -
    7. Choose the 3gp format from the available options. You can also select the resolution and size of the file according to your preference.
    8. -
    9. Click on the download button and wait for the file to be downloaded. You can also use a download manager to speed up the process.
    10. -
    11. Enjoy watching film Wazir full movie 3gp download on your mobile device or computer.
    12. -
    -

    That's it! You have successfully downloaded film Wazir full movie 3gp download. You can also share the file with your friends or family. Remember to always use a trusted and legal site to download movies and avoid any viruses or malware.

    - -

    If you liked film Wazir full movie 3gp download, you might also enjoy these other movies in the same genre:

    -
      -
    • Drishyam: A 2015 Indian thriller film starring Ajay Devgn and Tabu. The film follows a man who tries to protect his family from a corrupt cop who suspects them of a murder.
    • -
    • Badla: A 2019 Indian thriller film starring Amitabh Bachchan and Taapsee Pannu. The film revolves around a lawyer who helps a woman accused of killing her lover.
    • -
    • Andhadhun: A 2018 Indian black comedy thriller film starring Ayushmann Khurrana and Tabu. The film tells the story of a blind pianist who witnesses a murder and gets entangled in a web of lies.
    • -
    -

    You can download these movies in 3gp format from the same website www.example.com. You will find a wide range of movies in different languages and genres to suit your taste.

    -

    We hope you enjoyed this article on how to download film Wazir full movie 3gp download. If you have any questions or feedback, please leave a comment below. Happy watching!

    -

    - -

    Downloading film Wazir full movie 3gp download is not only convenient but also economical. You can save a lot of money by avoiding the expensive tickets and popcorn at the cinema. You can also watch the movie at your own pace and comfort. You can pause, rewind, or fast-forward the movie as you wish. You can also watch the movie with subtitles or dubbing if you prefer.

    -

    Downloading film Wazir full movie 3gp download is also a great way to support the film industry. By downloading the movie from a legal and authorized site, you are showing your appreciation for the hard work and creativity of the filmmakers and actors. You are also helping them to earn revenue and recognition for their efforts. You are also encouraging them to make more quality movies in the future.

    -

    Downloading film Wazir full movie 3gp download is a simple and smart choice for any movie lover. You can enjoy the thrilling and engaging story of Wazir on your mobile device or computer anytime and anywhere. You can also share the movie with your friends or family and have a fun time together. Downloading film Wazir full movie 3gp download is a win-win situation for everyone.

    81aa517590
    -
    -
    \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Dr Fone Toolkit Android Data Recovery [HOT].md b/spaces/stomexserde/gpt4-ui/Examples/Dr Fone Toolkit Android Data Recovery [HOT].md deleted file mode 100644 index 51e2624e7f3b8b933fab0496bcbc497c9558c077..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Dr Fone Toolkit Android Data Recovery [HOT].md +++ /dev/null @@ -1,37 +0,0 @@ - -

    How to Recover Deleted Data on Android with Dr.Fone Toolkit

    -

    If you have accidentally deleted some important data on your Android device, such as photos, contacts, messages, or videos, you may be wondering how to get them back. Fortunately, there is a powerful and easy-to-use tool that can help you recover deleted data on Android in just a few steps. It is called Dr.Fone Toolkit for Android Data Recovery.

    -

    Dr fone toolkit android data recovery


    Download Filehttps://urlgoal.com/2uIbWK



    -

    Dr.Fone Toolkit for Android Data Recovery is professional, reliable software that can scan your Android device's internal memory and SD card for deleted data and restore it to your computer or device. It supports over 6000 Android devices and various types of data, such as photos, contacts, messages, call logs, WhatsApp, documents, audio, video, and more. It also has a high success rate and a user-friendly interface.

    -

    Here is how to use Dr.Fone Toolkit for Android Data Recovery to recover deleted data on Android:

    -
      -
    1. Download and install Dr.Fone Toolkit for Android Data Recovery on your computer from here.
    2. -
    3. Launch Dr.Fone and select Data Recovery from the main menu.
    4. -
    5. Connect your Android device to the computer using a USB cable. Make sure you have enabled USB debugging on your device.
    6. -
    7. Select the types of data you want to recover from the supported file types and then select the scan mode. You can choose Standard Mode for a quick scan or Advanced Mode for a deeper scan.
    8. -
    9. Dr.Fone will start to scan the files on your Android device's internal memory and SD card. This may take some time depending on the size of your data.
    10. -
    11. When the scan is completed, you can preview the found data and select the ones you want to recover. You can also use the filter options to narrow down the results.
    12. -
    13. Click on Recover to save the selected data to your computer or restore them to your device.
    14. -
    -

    Congratulations! You have successfully recovered deleted data on Android with Dr.Fone Toolkit for Android Data Recovery. You can now enjoy your precious data again.
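    A side note on the USB debugging step above: if Dr.Fone does not detect your phone, you can confirm that USB debugging is actually working before blaming the software. The snippet below is only a minimal sanity check, not part of Dr.Fone itself, and it assumes the Android platform-tools are installed so that the adb command is on your PATH.

```python
import subprocess

def usb_debugging_ready() -> bool:
    """Return True if at least one device is attached and authorized for adb."""
    out = subprocess.run(["adb", "devices"], capture_output=True,
                         text=True, check=True).stdout
    # Typical output:
    #   List of devices attached
    #   0123456789ABCDEF    device        <- authorized, ready to use
    #   0123456789ABCDEF    unauthorized  <- accept the debugging prompt on the phone
    lines = [line for line in out.splitlines()[1:] if line.strip()]
    return any(line.split()[-1] == "device" for line in lines)

if __name__ == "__main__":
    print("USB debugging ready:", usb_debugging_ready())
```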

    Dr.Fone Toolkit for Android Data Recovery Reviews

    -

    If you are wondering whether Dr.Fone Toolkit for Android Data Recovery is worth buying, you may want to check out some of the reviews from real users who have tried it. Here are some of the pros and cons of Dr.Fone Toolkit for Android Data Recovery based on the reviews from different sources:

    -

    -

    Pros

    -
      -
    • It can recover various types of data, such as photos, contacts, messages, WhatsApp, documents, audio, video, and more.
    • -
    • It supports over 6000 Android devices and different Android versions.
    • -
    • It has a high success rate and a user-friendly interface.
    • -
    • It allows you to preview the found data before recovering them.
    • -
    • It offers a free trial version that lets you scan your device and see the recoverable data.
    • -
    -

    Cons

    -
      -
    • It requires a USB cable to connect your device to your computer.
    • -
    • It may take some time to scan your device depending on the size of your data.
    • -
    • It may not be able to recover some data due to overwriting or encryption.
    • -
    • It is not cheap compared to some other data recovery software.
    • -
    • It may not work well with some devices or situations.
    • -
    -

    In conclusion, Dr.Fone Toolkit for Android Data Recovery is a powerful and reliable tool that can help you recover deleted data on Android in most cases. However, it is not perfect and may have some limitations or drawbacks. Therefore, it is advisable to back up your data regularly and use Dr.Fone Toolkit for Android Data Recovery as a last resort.

    e93f5a0c3f
    -
    -
    \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Freedownloadarchicad10fullversion.md b/spaces/stomexserde/gpt4-ui/Examples/Freedownloadarchicad10fullversion.md deleted file mode 100644 index 02301781a5cb8d3f762edbd33078687b17a1d31b..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Freedownloadarchicad10fullversion.md +++ /dev/null @@ -1,30 +0,0 @@ -
    -

    How to Download Archicad 10 Full Version for Free

    -

    Archicad is a popular architectural design software that offers a wide range of features for both 2D and 3D design, as well as BIM modeling. If you are an architecture student, teacher, or school, you can get a free license of Archicad for educational purposes. Here is how to download Archicad 10 full version for free.

    -
      -
    1. Go to https://myarchicad.graphisoft.com/, the official website of Archicad[^1^].
    2. -
    3. Choose your role from the options: student, teacher, or school[^1^].
    4. -
    5. Register with your email address and create a password[^1^].
    6. -
    7. Verify your email address and log in to your account[^1^].
    8. -
    9. Download the 30-day version of Archicad 10 for your operating system (Windows or Mac)[^1^].
    10. -
    11. Install Archicad 10 on your computer and activate it with the serial number you received in your email[^1^].
    12. -
    13. Apply for a full year extension of your license before the 30-day period expires[^1^]. You can do this by logging in to your account and clicking on the "Extend" button next to your license[^1^].
    14. -
    15. Enjoy using Archicad 10 full version for free for one year[^1^]. You can renew your license every year until the end of your studies[^1^].
    16. -
    -

    Note: Archicad 10 is an older version of Archicad that was released in 2006. The latest version of Archicad is Archicad 25, which was released in 2021. You can also download Archicad 25 for free from the same website, but you will need a more powerful computer to run it. Archicad 25 has more features and improvements than Archicad 10, such as better rendering, collaboration, and documentation[^2^]. If you want to learn more about Archicad 25, you can check out the official website or the online tutorials[^2^].

    -

    freedownloadarchicad10fullversion


    DOWNLOADhttps://urlgoal.com/2uI6z9



    Released in 2006, Archicad 10 has many features that make it a powerful and versatile tool for architectural design.[^3^] Some of the main features of Archicad 10 are:

    -
      -
    • New wall types: Archicad 10 introduced complex profiles for walls, allowing users to create custom shapes and cross-sections for their walls[^4^]. Users can also apply different materials and composites to different parts of the wall[^4^].
    • -
    • Mesh tool: Archicad 10 added a new tool for creating and editing meshes, which are useful for modeling terrain, roofs, floors, and other irregular surfaces[^4^]. Users can manipulate the mesh points, edges, and polygons in 2D or 3D views[^4^].
    • -
    • Trimming elements to roof: Archicad 10 made it easier to trim elements such as walls, columns, beams, and slabs to the shape of a roof[^4^]. Users can also adjust the trimming distance and angle[^4^].
    • -
    • Editing in 3D: Archicad 10 improved the 3D editing capabilities of Archicad, allowing users to perform operations such as move, rotate, mirror, stretch, offset, and align on elements in 3D views[^4^]. Users can also snap to existing elements or use guides and grids in 3D[^4^].
    • -
    • New relative construction methods: Archicad 10 introduced new ways to create elements relative to other elements, such as parallel, perpendicular, tangent, concentric, and equidistant[^4^]. Users can also use these methods to modify existing elements[^4^].
    • -
    • New snap points: Archicad 10 added new snap points for elements such as arcs, circles, ellipses, splines, and hotspots[^4^]. Users can also customize the snap point settings and preferences[^4^].
    • -
    • Magic wand: Archicad 10 enhanced the magic wand function, which allows users to create elements by tracing the outline of an existing element or a drawing[^4^]. Users can also use the magic wand to fill an area with a pattern or a hatch[^4^].
    • -
    • Ellipse generation: Archicad 10 added a new option to generate ellipses by specifying three points on the ellipse curve[^4^]. Users can also edit the ellipse by moving the points or changing the axis ratio[^4^].
    • -
    • Explode: Archicad 10 added a new command to explode elements into their basic components, such as lines, arcs, polygons, fills, etc.[^4^] Users can also explode imported drawings or library parts[^4^].
    • -
    • Boolean: Archicad 10 added a new command to perform boolean operations on elements such as union, subtraction, intersection, and exclusion[^4^]. Users can also use boolean operations on imported drawings or library parts[^4^].
    • -
    -

    These are some of the main features of Archicad 10 that make it a useful software for architectural design. However, Archicad 10 is not compatible with newer operating systems and may not support newer file formats or standards. Therefore, users who want to benefit from the latest features and improvements of Archicad should consider upgrading to Archicad 25 or later versions.

    cec2833e83
    -
    -
    \ No newline at end of file diff --git a/spaces/sunshineatnoon/TextureScraping/data/___init__.py b/spaces/sunshineatnoon/TextureScraping/data/___init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/supertori/files/lycoris/kohya.py b/spaces/supertori/files/lycoris/kohya.py deleted file mode 100644 index 42c1a7dbf74253ec2364db6afc2e673c940098f2..0000000000000000000000000000000000000000 --- a/spaces/supertori/files/lycoris/kohya.py +++ /dev/null @@ -1,272 +0,0 @@ -# network module for kohya -# reference: -# https://github.com/microsoft/LoRA/blob/main/loralib/layers.py -# https://github.com/cloneofsimo/lora/blob/master/lora_diffusion/lora.py -# https://github.com/kohya-ss/sd-scripts/blob/main/networks/lora.py - -import math -from warnings import warn -import os -from typing import List -import torch - -from .kohya_utils import * -from .locon import LoConModule -from .loha import LohaModule - - -def create_network(multiplier, network_dim, network_alpha, vae, text_encoder, unet, **kwargs): - if network_dim is None: - network_dim = 4 # default - conv_dim = int(kwargs.get('conv_dim', network_dim)) - conv_alpha = float(kwargs.get('conv_alpha', network_alpha)) - dropout = float(kwargs.get('dropout', 0.)) - algo = kwargs.get('algo', 'lora') - disable_cp = kwargs.get('disable_conv_cp', False) - network_module = { - 'lora': LoConModule, - 'loha': LohaModule, - }[algo] - - print(f'Using rank adaptation algo: {algo}') - - if (algo == 'loha' - and not kwargs.get('no_dim_warn', False) - and (network_dim>64 or conv_dim>64)): - print('='*20 + 'WARNING' + '='*20) - warn( - ( - "You are not supposed to use dim>64 (64*64 = 4096, it already has enough rank)" - "in Hadamard Product representation!\n" - "Please consider use lower dim or disable this warning with --network_args no_dim_warn=True\n" - "If you just want to use high dim loha, please consider use lower lr." - ), - stacklevel=2, - ) - print('='*20 + 'WARNING' + '='*20) - - network = LycorisNetwork( - text_encoder, unet, - multiplier=multiplier, - lora_dim=network_dim, conv_lora_dim=conv_dim, - alpha=network_alpha, conv_alpha=conv_alpha, - dropout=dropout, - use_cp=(not bool(disable_cp)), - network_module=network_module - ) - - return network - - -class LycorisNetwork(torch.nn.Module): - ''' - LoRA + LoCon - ''' - # Ignore proj_in or proj_out, their channels is only a few. 
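-    # Every module whose class name appears in the list below is searched, and each
-    # Linear/Conv2d child found inside it is wrapped with a LoCon or LoHa adapter
-    # (see create_modules in __init__ below).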
- UNET_TARGET_REPLACE_MODULE = [ - "Transformer2DModel", - "Attention", - "ResnetBlock2D", - "Downsample2D", - "Upsample2D" - ] - TEXT_ENCODER_TARGET_REPLACE_MODULE = ["CLIPAttention", "CLIPMLP"] - LORA_PREFIX_UNET = 'lora_unet' - LORA_PREFIX_TEXT_ENCODER = 'lora_te' - - def __init__( - self, - text_encoder, unet, - multiplier=1.0, - lora_dim=4, conv_lora_dim=4, - alpha=1, conv_alpha=1, - use_cp = True, - dropout = 0, network_module = LoConModule, - ) -> None: - super().__init__() - self.multiplier = multiplier - self.lora_dim = lora_dim - self.conv_lora_dim = int(conv_lora_dim) - if self.conv_lora_dim != self.lora_dim: - print('Apply different lora dim for conv layer') - print(f'Conv Dim: {conv_lora_dim}, Linear Dim: {lora_dim}') - - self.alpha = alpha - self.conv_alpha = float(conv_alpha) - if self.alpha != self.conv_alpha: - print('Apply different alpha value for conv layer') - print(f'Conv alpha: {conv_alpha}, Linear alpha: {alpha}') - - if 1 >= dropout >= 0: - print(f'Use Dropout value: {dropout}') - self.dropout = dropout - - # create module instances - def create_modules(prefix, root_module: torch.nn.Module, target_replace_modules) -> List[network_module]: - print('Create LyCORIS Module') - loras = [] - for name, module in root_module.named_modules(): - if module.__class__.__name__ in target_replace_modules: - for child_name, child_module in module.named_modules(): - lora_name = prefix + '.' + name + '.' + child_name - lora_name = lora_name.replace('.', '_') - if child_module.__class__.__name__ == 'Linear' and lora_dim>0: - lora = network_module( - lora_name, child_module, self.multiplier, - self.lora_dim, self.alpha, self.dropout, use_cp - ) - elif child_module.__class__.__name__ == 'Conv2d': - k_size, *_ = child_module.kernel_size - if k_size==1 and lora_dim>0: - lora = network_module( - lora_name, child_module, self.multiplier, - self.lora_dim, self.alpha, self.dropout, use_cp - ) - elif conv_lora_dim>0: - lora = network_module( - lora_name, child_module, self.multiplier, - self.conv_lora_dim, self.conv_alpha, self.dropout, use_cp - ) - else: - continue - else: - continue - loras.append(lora) - return loras - - self.text_encoder_loras = create_modules( - LycorisNetwork.LORA_PREFIX_TEXT_ENCODER, - text_encoder, - LycorisNetwork.TEXT_ENCODER_TARGET_REPLACE_MODULE - ) - print(f"create LyCORIS for Text Encoder: {len(self.text_encoder_loras)} modules.") - - self.unet_loras = create_modules(LycorisNetwork.LORA_PREFIX_UNET, unet, LycorisNetwork.UNET_TARGET_REPLACE_MODULE) - print(f"create LyCORIS for U-Net: {len(self.unet_loras)} modules.") - - self.weights_sd = None - - # assertion - names = set() - for lora in self.text_encoder_loras + self.unet_loras: - assert lora.lora_name not in names, f"duplicated lora name: {lora.lora_name}" - names.add(lora.lora_name) - - def set_multiplier(self, multiplier): - self.multiplier = multiplier - for lora in self.text_encoder_loras + self.unet_loras: - lora.multiplier = self.multiplier - - def load_weights(self, file): - if os.path.splitext(file)[1] == '.safetensors': - from safetensors.torch import load_file, safe_open - self.weights_sd = load_file(file) - else: - self.weights_sd = torch.load(file, map_location='cpu') - - def apply_to(self, text_encoder, unet, apply_text_encoder=None, apply_unet=None): - if self.weights_sd: - weights_has_text_encoder = weights_has_unet = False - for key in self.weights_sd.keys(): - if key.startswith(LycorisNetwork.LORA_PREFIX_TEXT_ENCODER): - weights_has_text_encoder = True - elif 
key.startswith(LycorisNetwork.LORA_PREFIX_UNET): - weights_has_unet = True - - if apply_text_encoder is None: - apply_text_encoder = weights_has_text_encoder - else: - assert apply_text_encoder == weights_has_text_encoder, f"text encoder weights: {weights_has_text_encoder} but text encoder flag: {apply_text_encoder} / 重みとText Encoderのフラグが矛盾しています" - - if apply_unet is None: - apply_unet = weights_has_unet - else: - assert apply_unet == weights_has_unet, f"u-net weights: {weights_has_unet} but u-net flag: {apply_unet} / 重みとU-Netのフラグが矛盾しています" - else: - assert apply_text_encoder is not None and apply_unet is not None, f"internal error: flag not set" - - if apply_text_encoder: - print("enable LyCORIS for text encoder") - else: - self.text_encoder_loras = [] - - if apply_unet: - print("enable LyCORIS for U-Net") - else: - self.unet_loras = [] - - for lora in self.text_encoder_loras + self.unet_loras: - lora.apply_to() - self.add_module(lora.lora_name, lora) - - if self.weights_sd: - # if some weights are not in state dict, it is ok because initial LoRA does nothing (lora_up is initialized by zeros) - info = self.load_state_dict(self.weights_sd, False) - print(f"weights are loaded: {info}") - - def enable_gradient_checkpointing(self): - # not supported - def make_ckpt(module): - if isinstance(module, torch.nn.Module): - module.grad_ckpt = True - self.apply(make_ckpt) - pass - - def prepare_optimizer_params(self, text_encoder_lr, unet_lr): - def enumerate_params(loras): - params = [] - for lora in loras: - params.extend(lora.parameters()) - return params - - self.requires_grad_(True) - all_params = [] - - if self.text_encoder_loras: - param_data = {'params': enumerate_params(self.text_encoder_loras)} - if text_encoder_lr is not None: - param_data['lr'] = text_encoder_lr - all_params.append(param_data) - - if self.unet_loras: - param_data = {'params': enumerate_params(self.unet_loras)} - if unet_lr is not None: - param_data['lr'] = unet_lr - all_params.append(param_data) - - return all_params - - def prepare_grad_etc(self, text_encoder, unet): - self.requires_grad_(True) - - def on_epoch_start(self, text_encoder, unet): - self.train() - - def get_trainable_params(self): - return self.parameters() - - def save_weights(self, file, dtype, metadata): - if metadata is not None and len(metadata) == 0: - metadata = None - - state_dict = self.state_dict() - - if dtype is not None: - for key in list(state_dict.keys()): - v = state_dict[key] - v = v.detach().clone().to("cpu").to(dtype) - state_dict[key] = v - - if os.path.splitext(file)[1] == '.safetensors': - from safetensors.torch import save_file - - # Precalculate model hashes to save time on indexing - if metadata is None: - metadata = {} - model_hash, legacy_hash = precalculate_safetensors_hashes(state_dict, metadata) - metadata["sshs_model_hash"] = model_hash - metadata["sshs_legacy_hash"] = legacy_hash - - save_file(state_dict, file, metadata) - else: - torch.save(state_dict, file) diff --git a/spaces/supertori/files/stable-diffusion-webui/extensions/openpose-editor/style.css b/spaces/supertori/files/stable-diffusion-webui/extensions/openpose-editor/style.css deleted file mode 100644 index 7fb950e3699fe34038500b58fa29f78153e99947..0000000000000000000000000000000000000000 --- a/spaces/supertori/files/stable-diffusion-webui/extensions/openpose-editor/style.css +++ /dev/null @@ -1,7 +0,0 @@ -#hide_json { - display: none; -} - -#openpose_editor_input { - display: none; -} \ No newline at end of file diff --git 
a/spaces/supertori/files/stable-diffusion-webui/test/basic_features/txt2img_test.py b/spaces/supertori/files/stable-diffusion-webui/test/basic_features/txt2img_test.py deleted file mode 100644 index cb525fbb79d68f9bd9aa47995a91182f635c689d..0000000000000000000000000000000000000000 --- a/spaces/supertori/files/stable-diffusion-webui/test/basic_features/txt2img_test.py +++ /dev/null @@ -1,82 +0,0 @@ -import unittest -import requests - - -class TestTxt2ImgWorking(unittest.TestCase): - def setUp(self): - self.url_txt2img = "http://localhost:7860/sdapi/v1/txt2img" - self.simple_txt2img = { - "enable_hr": False, - "denoising_strength": 0, - "firstphase_width": 0, - "firstphase_height": 0, - "prompt": "example prompt", - "styles": [], - "seed": -1, - "subseed": -1, - "subseed_strength": 0, - "seed_resize_from_h": -1, - "seed_resize_from_w": -1, - "batch_size": 1, - "n_iter": 1, - "steps": 3, - "cfg_scale": 7, - "width": 64, - "height": 64, - "restore_faces": False, - "tiling": False, - "negative_prompt": "", - "eta": 0, - "s_churn": 0, - "s_tmax": 0, - "s_tmin": 0, - "s_noise": 1, - "sampler_index": "Euler a" - } - - def test_txt2img_simple_performed(self): - self.assertEqual(requests.post(self.url_txt2img, json=self.simple_txt2img).status_code, 200) - - def test_txt2img_with_negative_prompt_performed(self): - self.simple_txt2img["negative_prompt"] = "example negative prompt" - self.assertEqual(requests.post(self.url_txt2img, json=self.simple_txt2img).status_code, 200) - - def test_txt2img_with_complex_prompt_performed(self): - self.simple_txt2img["prompt"] = "((emphasis)), (emphasis1:1.1), [to:1], [from::2], [from:to:0.3], [alt|alt1]" - self.assertEqual(requests.post(self.url_txt2img, json=self.simple_txt2img).status_code, 200) - - def test_txt2img_not_square_image_performed(self): - self.simple_txt2img["height"] = 128 - self.assertEqual(requests.post(self.url_txt2img, json=self.simple_txt2img).status_code, 200) - - def test_txt2img_with_hrfix_performed(self): - self.simple_txt2img["enable_hr"] = True - self.assertEqual(requests.post(self.url_txt2img, json=self.simple_txt2img).status_code, 200) - - def test_txt2img_with_tiling_performed(self): - self.simple_txt2img["tiling"] = True - self.assertEqual(requests.post(self.url_txt2img, json=self.simple_txt2img).status_code, 200) - - def test_txt2img_with_restore_faces_performed(self): - self.simple_txt2img["restore_faces"] = True - self.assertEqual(requests.post(self.url_txt2img, json=self.simple_txt2img).status_code, 200) - - def test_txt2img_with_vanilla_sampler_performed(self): - self.simple_txt2img["sampler_index"] = "PLMS" - self.assertEqual(requests.post(self.url_txt2img, json=self.simple_txt2img).status_code, 200) - self.simple_txt2img["sampler_index"] = "DDIM" - self.assertEqual(requests.post(self.url_txt2img, json=self.simple_txt2img).status_code, 200) - self.simple_txt2img["sampler_index"] = "UniPC" - self.assertEqual(requests.post(self.url_txt2img, json=self.simple_txt2img).status_code, 200) - - def test_txt2img_multiple_batches_performed(self): - self.simple_txt2img["n_iter"] = 2 - self.assertEqual(requests.post(self.url_txt2img, json=self.simple_txt2img).status_code, 200) - - def test_txt2img_batch_performed(self): - self.simple_txt2img["batch_size"] = 2 - self.assertEqual(requests.post(self.url_txt2img, json=self.simple_txt2img).status_code, 200) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/datasets/ade.py 
b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/datasets/ade.py deleted file mode 100644 index 5913e43775ed4920b6934c855eb5a37c54218ebf..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/datasets/ade.py +++ /dev/null @@ -1,84 +0,0 @@ -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class ADE20KDataset(CustomDataset): - """ADE20K dataset. - - In segmentation map annotation for ADE20K, 0 stands for background, which - is not included in 150 categories. ``reduce_zero_label`` is fixed to True. - The ``img_suffix`` is fixed to '.jpg' and ``seg_map_suffix`` is fixed to - '.png'. - """ - CLASSES = ( - 'wall', 'building', 'sky', 'floor', 'tree', 'ceiling', 'road', 'bed ', - 'windowpane', 'grass', 'cabinet', 'sidewalk', 'person', 'earth', - 'door', 'table', 'mountain', 'plant', 'curtain', 'chair', 'car', - 'water', 'painting', 'sofa', 'shelf', 'house', 'sea', 'mirror', 'rug', - 'field', 'armchair', 'seat', 'fence', 'desk', 'rock', 'wardrobe', - 'lamp', 'bathtub', 'railing', 'cushion', 'base', 'box', 'column', - 'signboard', 'chest of drawers', 'counter', 'sand', 'sink', - 'skyscraper', 'fireplace', 'refrigerator', 'grandstand', 'path', - 'stairs', 'runway', 'case', 'pool table', 'pillow', 'screen door', - 'stairway', 'river', 'bridge', 'bookcase', 'blind', 'coffee table', - 'toilet', 'flower', 'book', 'hill', 'bench', 'countertop', 'stove', - 'palm', 'kitchen island', 'computer', 'swivel chair', 'boat', 'bar', - 'arcade machine', 'hovel', 'bus', 'towel', 'light', 'truck', 'tower', - 'chandelier', 'awning', 'streetlight', 'booth', 'television receiver', - 'airplane', 'dirt track', 'apparel', 'pole', 'land', 'bannister', - 'escalator', 'ottoman', 'bottle', 'buffet', 'poster', 'stage', 'van', - 'ship', 'fountain', 'conveyer belt', 'canopy', 'washer', 'plaything', - 'swimming pool', 'stool', 'barrel', 'basket', 'waterfall', 'tent', - 'bag', 'minibike', 'cradle', 'oven', 'ball', 'food', 'step', 'tank', - 'trade name', 'microwave', 'pot', 'animal', 'bicycle', 'lake', - 'dishwasher', 'screen', 'blanket', 'sculpture', 'hood', 'sconce', - 'vase', 'traffic light', 'tray', 'ashcan', 'fan', 'pier', 'crt screen', - 'plate', 'monitor', 'bulletin board', 'shower', 'radiator', 'glass', - 'clock', 'flag') - - PALETTE = [[120, 120, 120], [180, 120, 120], [6, 230, 230], [80, 50, 50], - [4, 200, 3], [120, 120, 80], [140, 140, 140], [204, 5, 255], - [230, 230, 230], [4, 250, 7], [224, 5, 255], [235, 255, 7], - [150, 5, 61], [120, 120, 70], [8, 255, 51], [255, 6, 82], - [143, 255, 140], [204, 255, 4], [255, 51, 7], [204, 70, 3], - [0, 102, 200], [61, 230, 250], [255, 6, 51], [11, 102, 255], - [255, 7, 71], [255, 9, 224], [9, 7, 230], [220, 220, 220], - [255, 9, 92], [112, 9, 255], [8, 255, 214], [7, 255, 224], - [255, 184, 6], [10, 255, 71], [255, 41, 10], [7, 255, 255], - [224, 255, 8], [102, 8, 255], [255, 61, 6], [255, 194, 7], - [255, 122, 8], [0, 255, 20], [255, 8, 41], [255, 5, 153], - [6, 51, 255], [235, 12, 255], [160, 150, 20], [0, 163, 255], - [140, 140, 140], [250, 10, 15], [20, 255, 0], [31, 255, 0], - [255, 31, 0], [255, 224, 0], [153, 255, 0], [0, 0, 255], - [255, 71, 0], [0, 235, 255], [0, 173, 255], [31, 0, 255], - [11, 200, 200], [255, 82, 0], [0, 255, 245], [0, 61, 255], - [0, 255, 112], [0, 255, 133], [255, 0, 0], [255, 163, 0], - [255, 102, 0], [194, 255, 0], [0, 143, 255], [51, 255, 0], - [0, 82, 255], [0, 255, 41], [0, 255, 173], [10, 0, 255], - [173, 255, 0], [0, 255, 
153], [255, 92, 0], [255, 0, 255], - [255, 0, 245], [255, 0, 102], [255, 173, 0], [255, 0, 20], - [255, 184, 184], [0, 31, 255], [0, 255, 61], [0, 71, 255], - [255, 0, 204], [0, 255, 194], [0, 255, 82], [0, 10, 255], - [0, 112, 255], [51, 0, 255], [0, 194, 255], [0, 122, 255], - [0, 255, 163], [255, 153, 0], [0, 255, 10], [255, 112, 0], - [143, 255, 0], [82, 0, 255], [163, 255, 0], [255, 235, 0], - [8, 184, 170], [133, 0, 255], [0, 255, 92], [184, 0, 255], - [255, 0, 31], [0, 184, 255], [0, 214, 255], [255, 0, 112], - [92, 255, 0], [0, 224, 255], [112, 224, 255], [70, 184, 160], - [163, 0, 255], [153, 0, 255], [71, 255, 0], [255, 0, 163], - [255, 204, 0], [255, 0, 143], [0, 255, 235], [133, 255, 0], - [255, 0, 235], [245, 0, 255], [255, 0, 122], [255, 245, 0], - [10, 190, 212], [214, 255, 0], [0, 204, 255], [20, 0, 255], - [255, 255, 0], [0, 153, 255], [0, 41, 255], [0, 255, 204], - [41, 0, 255], [41, 255, 0], [173, 0, 255], [0, 245, 255], - [71, 0, 255], [122, 0, 255], [0, 255, 184], [0, 92, 255], - [184, 255, 0], [0, 133, 255], [255, 214, 0], [25, 194, 194], - [102, 255, 0], [92, 0, 255]] - - def __init__(self, **kwargs): - super(ADE20KDataset, self).__init__( - img_suffix='.jpg', - seg_map_suffix='.png', - reduce_zero_label=True, - **kwargs) diff --git a/spaces/szukevin/VISOR-GPT/train/scripts/generate_lm.py b/spaces/szukevin/VISOR-GPT/train/scripts/generate_lm.py deleted file mode 100644 index a10cae005a40fe762b19f569782a7d0c6e4ee296..0000000000000000000000000000000000000000 --- a/spaces/szukevin/VISOR-GPT/train/scripts/generate_lm.py +++ /dev/null @@ -1,112 +0,0 @@ -""" - This script provides an exmaple to wrap TencentPretrain for generation. - Given the beginning of a text, language model generates the rest. -""" -import sys -import os -import argparse -import torch -import torch.nn.functional as F - -tencentpretrain_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), "..")) -sys.path.append(tencentpretrain_dir) - -from tencentpretrain.embeddings import * -from tencentpretrain.encoders import * -from tencentpretrain.targets import * -from tencentpretrain.utils.constants import * -from tencentpretrain.utils import * -from tencentpretrain.utils.config import load_hyperparam -from tencentpretrain.model_loader import load_model -from tencentpretrain.opts import infer_opts, tokenizer_opts - - -class GenerateLm(torch.nn.Module): - def __init__(self, args): - super(GenerateLm, self).__init__() - self.embedding = Embedding(args) - for embedding_name in args.embedding: - tmp_emb = str2embedding[embedding_name](args, len(args.tokenizer.vocab)) - self.embedding.update(tmp_emb, embedding_name) - self.encoder = str2encoder[args.encoder](args) - self.target = Target() - self.target.update(LmTarget(args, len(args.tokenizer.vocab)), "lm") - - def forward(self, src, seg): - emb = self.embedding(src, seg) - output = self.encoder(emb, seg) - output = self.target.lm.output_layer(output) - return output - - -def top_k_top_p_filtering(logits, top_k, top_p): - top_k = min(top_k, logits.size(-1)) # Safety check - if top_k > 0: - # Remove all tokens with a probability less than the last token of the top-k - indices_to_remove = logits < torch.topk(logits, top_k)[0][..., -1, None] - logits[indices_to_remove] = -float("Inf") - - if top_p > 0.0: - sorted_logits, sorted_indices = torch.sort(logits, descending=True) - cumulative_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1) - - # Remove tokens with cumulative probability above the threshold - sorted_indices_to_remove = cumulative_probs > 
top_p - # Shift the indices to the right to keep also the first token above the threshold - sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[..., :-1].clone() - sorted_indices_to_remove[..., 0] = 0 - - indices_to_remove = sorted_indices[sorted_indices_to_remove] - logits[indices_to_remove] = -float("Inf") - return logits - - -if __name__ == '__main__': - parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter) - - infer_opts(parser) - - parser.add_argument("--top_k", type=int, default=70) - parser.add_argument("--top_p", type=float, default=0) - parser.add_argument("--temperature", type=float, default=1.0) - - tokenizer_opts(parser) - - args = parser.parse_args() - - args.target = "lm" - args.batch_size = 1 - - args = load_hyperparam(args) - - args.tokenizer = str2tokenizer[args.tokenizer](args) - - model = GenerateLm(args) - model = load_model(model, args.load_model_path) - model.eval() - - with open(args.test_path, mode="r", encoding="utf-8") as f: - line = f.readline().strip() - src = args.tokenizer.convert_tokens_to_ids([CLS_TOKEN] + args.tokenizer.tokenize(line)) - seg = [1] * len(src) - beginning_length = len(src) - if len(src) > args.seq_length: - src = src[:args.seq_length] - seg = seg[:args.seq_length] - src_tensor, seg_tensor = torch.LongTensor([src]), torch.LongTensor([seg]) - - with open(args.prediction_path, mode="w", encoding="utf-8") as f: - for i in range(args.seq_length - beginning_length): - output = model(src_tensor, seg_tensor) - next_token_logits = output[0][-1] / args.temperature - filtered_logits = top_k_top_p_filtering(next_token_logits, args.top_k, args.top_p) - next_token = torch.multinomial(F.softmax(filtered_logits, dim=-1), num_samples=1) - - src_tensor = torch.cat([src_tensor, next_token.view(1, 1)], dim=1) - seg_tensor = torch.cat([seg_tensor, torch.tensor([[1]])], dim=1) - - f.write(line + "\n") - generated_sentence = "".join( - args.tokenizer.convert_ids_to_tokens([token_id.item() for token_id in src_tensor[0]]) - ) - f.write(generated_sentence) diff --git a/spaces/t13718236382/bingoGPT4/src/components/chat-image.tsx b/spaces/t13718236382/bingoGPT4/src/components/chat-image.tsx deleted file mode 100644 index 05ecc9771eada27a0f2d160bb01cba170d37bb09..0000000000000000000000000000000000000000 --- a/spaces/t13718236382/bingoGPT4/src/components/chat-image.tsx +++ /dev/null @@ -1,170 +0,0 @@ -import { - useEffect, - useState, - useCallback, - ChangeEvent, - ClipboardEvent, - MouseEventHandler, - FormEvent, - useRef -} from "react" -import Image from 'next/image' -import PasteIcon from '@/assets/images/paste.svg' -import UploadIcon from '@/assets/images/upload.svg' -import CameraIcon from '@/assets/images/camera.svg' -import { useBing } from '@/lib/hooks/use-bing' -import { cn } from '@/lib/utils' - -interface ChatImageProps extends Pick, 'uploadImage'> {} - -const preventDefault: MouseEventHandler = (event) => { - event.nativeEvent.stopImmediatePropagation() -} - -const toBase64 = (file: File): Promise => new Promise((resolve, reject) => { - const reader = new FileReader() - reader.readAsDataURL(file) - reader.onload = () => resolve(reader.result as string) - reader.onerror = reject -}) - -export function ChatImage({ children, uploadImage }: React.PropsWithChildren) { - const videoRef = useRef(null) - const canvasRef = useRef(null) - const mediaStream = useRef() - const [panel, setPanel] = useState('none') - - const upload = useCallback((url: string) => { - if (url) { - uploadImage(url) - } - setPanel('none') - }, 
[panel]) - - const onUpload = useCallback(async (event: ChangeEvent) => { - const file = event.target.files?.[0] - if (file) { - const fileDataUrl = await toBase64(file) - if (fileDataUrl) { - upload(fileDataUrl) - } - } - }, []) - - const onPaste = useCallback((event: ClipboardEvent) => { - const pasteUrl = event.clipboardData.getData('text') ?? '' - upload(pasteUrl) - }, []) - - const onEnter = useCallback((event: FormEvent) => { - event.preventDefault() - event.stopPropagation() - // @ts-ignore - const inputUrl = event.target.elements.image.value - if (inputUrl) { - upload(inputUrl) - } - }, []) - - const openVideo: MouseEventHandler = async (event) => { - event.stopPropagation() - setPanel('camera-mode') - } - - const onCapture = () => { - if (canvasRef.current && videoRef.current) { - const canvas = canvasRef.current - canvas.width = videoRef.current!.videoWidth - canvas.height = videoRef.current!.videoHeight - canvas.getContext('2d')?.drawImage(videoRef.current, 0, 0, canvas.width, canvas.height) - const cameraUrl = canvas.toDataURL('image/jpeg') - upload(cameraUrl) - } - } - - useEffect(() => { - const handleBlur = () => { - if (panel !== 'none') { - setPanel('none') - } - } - document.addEventListener('click', handleBlur) - return () => { - document.removeEventListener('click', handleBlur) - } - }, [panel]) - - useEffect(() => { - if (panel === 'camera-mode') { - navigator.mediaDevices.getUserMedia({ video: true, audio: false }) - .then(videoStream => { - mediaStream.current = videoStream - if (videoRef.current) { - videoRef.current.srcObject = videoStream - } - }) - } else { - if (mediaStream.current) { - mediaStream.current.getTracks().forEach(function(track) { - track.stop() - }) - mediaStream.current = undefined - } - } - }, [panel]) - - return ( -
    -
    panel === 'none' ? setPanel('normal') : setPanel('none')}>{children}
    -
    -
    -
    -

    添加图像

    -
    -
    - paste -
    - e.stopPropagation()} - /> -
    -
    -
    - - -
    -
    - {panel === 'camera-mode' &&
    -
    -
    -
    -
    -
    -
    -
    } -
    -
    - ) -} diff --git a/spaces/tabeina/bingo1/src/components/ui/input.tsx b/spaces/tabeina/bingo1/src/components/ui/input.tsx deleted file mode 100644 index 684a857f3d769b78818fb13de1abaebfb09ca79c..0000000000000000000000000000000000000000 --- a/spaces/tabeina/bingo1/src/components/ui/input.tsx +++ /dev/null @@ -1,25 +0,0 @@ -import * as React from 'react' - -import { cn } from '@/lib/utils' - -export interface InputProps - extends React.InputHTMLAttributes {} - -const Input = React.forwardRef( - ({ className, type, ...props }, ref) => { - return ( - - ) - } -) -Input.displayName = 'Input' - -export { Input } diff --git a/spaces/tarteel-ai/latest-demo/README.md b/spaces/tarteel-ai/latest-demo/README.md deleted file mode 100644 index 22619482567714018af388cc647cbdc52c48749d..0000000000000000000000000000000000000000 --- a/spaces/tarteel-ai/latest-demo/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Tarteel AI's - Latest Model Demo -emoji: 🤗 -colorFrom: blue -colorTo: green -sdk: gradio -sdk_version: 3.0.5 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/tensorflow/efficientnetv2-s/README.md b/spaces/tensorflow/efficientnetv2-s/README.md deleted file mode 100644 index a75e94e7000cddaa93ad78b98221be40f14651fc..0000000000000000000000000000000000000000 --- a/spaces/tensorflow/efficientnetv2-s/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Efficientnetv2 S -emoji: 🦀 -colorFrom: green -colorTo: gray -sdk: gradio -sdk_version: 2.9.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/terfces0erbo/CollegeProjectV2/BETTER Free PLAXIS 2D V9 Keygen.rar.md b/spaces/terfces0erbo/CollegeProjectV2/BETTER Free PLAXIS 2D V9 Keygen.rar.md deleted file mode 100644 index 6250b4d559535e3a7fcfd3212d1793500b4e4b4f..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/BETTER Free PLAXIS 2D V9 Keygen.rar.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Free PLAXIS 2D V9 Keygen.rar


    DOWNLOAD - https://bytlly.com/2uGm1t



    -
    -WinUtilities Professional Edition 11.39 Multilanguage + Keygen ... Free PLAXIS 2D V9 Keygen.rar >>> DOWNLOAD PDF Books Author Subject ... 1fdad05405
    -
    -
    -

    diff --git a/spaces/terfces0erbo/CollegeProjectV2/Cevio Creative Studio Crack Coca.md b/spaces/terfces0erbo/CollegeProjectV2/Cevio Creative Studio Crack Coca.md deleted file mode 100644 index bf56233aace1380916d638b1198759615dc27348..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Cevio Creative Studio Crack Coca.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Cevio Creative Studio Crack Coca


    Downloadhttps://bytlly.com/2uGl2c



    -
    -
    -
    -

    diff --git a/spaces/terfces0erbo/CollegeProjectV2/IGO Primo 2.2 International Edition.rar RAR 277.00M.md b/spaces/terfces0erbo/CollegeProjectV2/IGO Primo 2.2 International Edition.rar RAR 277.00M.md deleted file mode 100644 index 7185162dbc8011b9b945af08eaae6f852babed0b..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/IGO Primo 2.2 International Edition.rar RAR 277.00M.md +++ /dev/null @@ -1,8 +0,0 @@ - -

    While he is a decorated athlete, he was a force in the classroom both for his basketball and criminal justice and law. Multi-Subject Resources, Current News Sources, Arts & Humanities, Business, Education, Engineering, Health Sciences, Images, International & Area Studies. 3-6, 2018) developed the technical guide Renewal Assessment Report (RAR) on the active substance, including physical/chemical properties, toxicological information, environmental fate and behaviour, ecotoxicology, and. rar Kalluri, Keerti.

    -

    AiGO plays in the USL Division II men’s soccer league. Schilde, Tarik, 2001). bCropLife International Technical Monograph th Edition.. Consistent with the Process and Policy for the Renewal Assessment Report (RAR),1 details are included here of the physical chemical properties, environmental fate and behaviour, and environmental fate and.

    -

    iGO Primo 2.2 International Edition.rar RAR 277.00M


    Download Ziphttps://bytlly.com/2uGkM2



    -

    FYI, I have all 3 international SIM cards. I can use the TW and SEA SIM cards for local calls and calls to the INTERNATIONAL numbers for which I have a credit card with, but I can only use the US and Hong Kong SIM cards for calls to the US numbers. I guess I can use the PRC one for calls within the PRC. I havent tried calling to the rest of the world yet, though. Also, I have to pay the $2.50 to use the cell phone locally, which is why I bought the international SIMs. The international SIMs are prepaid, though, and only work locally.

    -

    Une updates unsupported version.rar Source File for Adobe After Effects CC.. A musician of international influence, he is known to be an accomplished pianist, guitarist and singer. This editor will work with all of them. If your system is on the Helios 4.2 platform, please see the Helios 4.2-2 Helios 4.2-1 Helios 4.2-0 post.

    -
    -
    \ No newline at end of file diff --git a/spaces/texantech/04-Gradio-SOTA-Seq2Seq-AutoQA/app.py b/spaces/texantech/04-Gradio-SOTA-Seq2Seq-AutoQA/app.py deleted file mode 100644 index c1cd92499cf1c7d2a91b4dc226bf2d558ff67661..0000000000000000000000000000000000000000 --- a/spaces/texantech/04-Gradio-SOTA-Seq2Seq-AutoQA/app.py +++ /dev/null @@ -1,51 +0,0 @@ -import gradio as gr -from qasrl_model_pipeline import QASRL_Pipeline - -models = ["kleinay/qanom-seq2seq-model-baseline", - "kleinay/qanom-seq2seq-model-joint"] -pipelines = {model: QASRL_Pipeline(model) for model in models} - - -description = f"""Using Seq2Seq T5 model which takes a sequence of items and outputs another sequence this model generates Questions and Answers (QA) with focus on Semantic Role Labeling (SRL)""" -title="Seq2Seq T5 Questions and Answers (QA) with Semantic Role Labeling (SRL)" -examples = [[models[0], "In March and April the patient

    had two falls. One was related to asthma, heart palpitations. The second was due to syncope and post covid vaccination dizziness during exercise. The patient is now getting an EKG. Former EKG had shown that there was a bundle branch block. Patient had some uncontrolled immune system reactions like anaphylaxis and shortness of breath.", True, "fall"], - [models[1], "In March and April the patient had two falls. One was related to asthma, heart palpitations. The second was due to syncope and post covid vaccination dizziness during exercise. The patient is now getting an EKG. Former EKG had shown that there was a bundle branch block. Patient had some uncontrolled immune system reactions

    like anaphylaxis and shortness of breath.", True, "reactions"], - [models[0], "In March and April the patient had two falls. One was related

    to asthma, heart palpitations. The second was due to syncope and post covid vaccination dizziness during exercise. The patient is now getting an EKG. Former EKG had shown that there was a bundle branch block. Patient had some uncontrolled immune system reactions like anaphylaxis and shortness of breath.", True, "relate"], - [models[1], "In March and April the patient

    had two falls. One was related to asthma, heart palpitations. The second was due to syncope and post covid vaccination dizziness during exercise. The patient is now getting an EKG. Former EKG had shown that there was a bundle branch block. Patient had some uncontrolled immune system reactions like anaphylaxis and shortness of breath.", False, "fall"]] - -input_sent_box_label = "Insert sentence here. Mark the predicate by adding the token '

    ' before it." -verb_form_inp_placeholder = "e.g. 'decide' for the nominalization 'decision', 'teach' for 'teacher', etc." -links = """

    -QASRL Website | Model Repo at Huggingface Hub -

    """ -def call(model_name, sentence, is_nominal, verb_form): - predicate_marker="

    " - if predicate_marker not in sentence: - raise ValueError("You must highlight one word of the sentence as a predicate using preceding '

    '.") - - if not verb_form: - if is_nominal: - raise ValueError("You should provide the verbal form of the nominalization") - - toks = sentence.split(" ") - pred_idx = toks.index(predicate_marker) - predicate = toks(pred_idx+1) - verb_form=predicate - pipeline = pipelines[model_name] - pipe_out = pipeline([sentence], - predicate_marker=predicate_marker, - predicate_type="nominal" if is_nominal else "verbal", - verb_form=verb_form)[0] - return pipe_out["QAs"], pipe_out["generated_text"] -iface = gr.Interface(fn=call, - inputs=[gr.inputs.Radio(choices=models, default=models[0], label="Model"), - gr.inputs.Textbox(placeholder=input_sent_box_label, label="Sentence", lines=4), - gr.inputs.Checkbox(default=True, label="Is Nominalization?"), - gr.inputs.Textbox(placeholder=verb_form_inp_placeholder, label="Verbal form (for nominalizations)", default='')], - outputs=[gr.outputs.JSON(label="Model Output - QASRL"), gr.outputs.Textbox(label="Raw output sequence")], - title=title, - description=description, - article=links, - examples=examples ) - -iface.launch() \ No newline at end of file diff --git a/spaces/thejagstudio/procom/main/migrations/0008_products_categorykey.py b/spaces/thejagstudio/procom/main/migrations/0008_products_categorykey.py deleted file mode 100644 index 24a585ac3e8ac74b155ca1a1aa809600ff83163f..0000000000000000000000000000000000000000 --- a/spaces/thejagstudio/procom/main/migrations/0008_products_categorykey.py +++ /dev/null @@ -1,23 +0,0 @@ -# Generated by Django 4.1.4 on 2023-04-08 14:54 - -from django.db import migrations, models -import django.db.models.deletion - - -class Migration(migrations.Migration): - - dependencies = [ - ("main", "0007_products_link"), - ] - - operations = [ - migrations.AddField( - model_name="products", - name="categoryKey", - field=models.ForeignKey( - null=True, - on_delete=django.db.models.deletion.CASCADE, - to="main.categories", - ), - ), - ] diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Dennis Ferrer - Hey Hey (Acapella).md b/spaces/tialenAdioni/chat-gpt-api/logs/Dennis Ferrer - Hey Hey (Acapella).md deleted file mode 100644 index 21f5e3a29b12f378e04ba8b9d2a16780b75d3f66..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Dennis Ferrer - Hey Hey (Acapella).md +++ /dev/null @@ -1,18 +0,0 @@ - -Here is a possible title and article with html formatting for the keyword "Dennis Ferrer - Hey Hey (Acapella)": - -

    How to Download and Use Dennis Ferrer's Hey Hey Acapella

    -

    Dennis Ferrer is a renowned American DJ, producer and remixer who has been making waves in the house music scene since the late 1990s. One of his most popular songs is Hey Hey, a catchy and uplifting track that features his own vocals. The song was released in 2009 and became a hit on various charts and clubs around the world.

    -

    If you are a fan of Dennis Ferrer or a music producer who wants to remix his song, you might be interested in downloading and using his acapella version of Hey Hey. An acapella is a vocal-only track that can be used for creating new mixes or mashups with other songs. Acapellas are also useful for practicing your singing or DJing skills.

    -

    Dennis Ferrer - Hey Hey (Acapella)


    Download File --->>> https://urlcod.com/2uK3A3



    -

    Fortunately, there are several websites that offer free downloads of Dennis Ferrer's Hey Hey acapella. One of them is Voclr.it, a platform that collects widely available acapellas and brings them all to one place. You can find the acapella here: https://www.voclr.it/acapella/dennis-ferrer-hey-hey-acapella/. You will need to register for a free account and agree to respect the artists and labels for originally sharing the samples.

    -

    Another website that provides the acapella is Acapellas4U, a trusted source of acapellas for superstar DJs. You can download the acapella here: https://www.acapellas4u.co.uk/download/27197-dennis_ferrer_-_hey_hey_acapella/. You will also need to sign up for a free account and follow their terms and conditions.

    -

    Once you have downloaded the acapella, you can use it in various ways. You can import it into your favorite music software or app and mix it with other tracks or effects. You can also play it on your speakers or headphones and sing along or practice your DJing skills. You can also upload your remixes or mashups to YouTube or other platforms, but make sure you have full clearance from the official owner before you publish any work containing this acapella by Dennis Ferrer.

    -

    Dennis Ferrer's Hey Hey acapella is a great resource for music lovers and producers who want to enjoy his amazing vocals or create their own versions of his song. If you decide to download and use it, please show respect and appreciation to Dennis Ferrer and his label for sharing this sample with the world.


    If you want to learn more about Dennis Ferrer and his music, you can visit his official website: https://www.dennisferrer.com/. You can also follow him on his social media accounts, such as Facebook, Twitter and Instagram. You can also listen to his songs on Spotify, Apple Music or YouTube.

    -

    -

    Dennis Ferrer is not only a talented musician, but also a generous mentor and supporter of other artists. He founded his own label, Objektivity, in 2006 and has signed and collaborated with many emerging and established names in the house music scene, such as Kerri Chandler, The Martinez Brothers, Nasser Baker and more. He also hosts a weekly radio show called Misfits Society Radio, where he showcases his latest tracks and mixes.

    -

    Dennis Ferrer is one of the most influential and respected figures in the house music industry. His song Hey Hey is a classic that showcases his unique style and voice. By downloading and using his acapella version of Hey Hey, you can experience his music in a new way and express your own creativity.

    -
    -
    \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Best ANU B.Sc 6th Sem Materials PDF for Your Degree Course.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Best ANU B.Sc 6th Sem Materials PDF for Your Degree Course.md deleted file mode 100644 index b77d58629ea82029b5795b162c977b1936216df5..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Best ANU B.Sc 6th Sem Materials PDF for Your Degree Course.md +++ /dev/null @@ -1,147 +0,0 @@ -
    -

    How to Download ANU BSc 6th Sem Materials PDF

    -

    If you are a student of Acharya Nagarjuna University (ANU) and pursuing Bachelor of Science (BSc) in any stream, you might be looking for the materials pdf for your 6th semester exams. In this article, we will guide you on how to download ANU BSc 6th sem materials pdf from the official website of the university. We will also provide you some tips on how to prepare for your exams and ace them with flying colors.

    -

    anu bsc 6th sem materials pdf download


    Download File ……… https://bltlly.com/2uOpbN



    -

    Introduction

    -

    Acharya Nagarjuna University (ANU) is one of the leading universities in Andhra Pradesh, India. It offers various undergraduate, postgraduate, and doctoral programs in arts, commerce, science, engineering, law, education, pharmacy, and management. The university conducts semester-wise exams for all its courses as per the academic calendar.

    -

    Bachelor of Science (BSc) is a three-year degree course that covers various subjects related to science such as mathematics, physics, chemistry, biology, computer science, statistics, etc. The course is divided into six semesters, each having a duration of six months. The 6th semester is the final semester of the course and it consists of core subjects, elective subjects, skill development courses, and foundation courses.

    -

    Students who are appearing for ANU BSc 6th sem exams need to study the materials pdf provided by the university for each subject. These materials pdf contain the syllabus, important questions, model papers, notes, and other useful resources that help the students to understand the concepts and topics better. By downloading these materials pdf, students can save their time and money and study at their own pace and convenience.

    -

    Some of the benefits of downloading ANU BSc 6th sem materials pdf are:

    -

    anu bsc 6th sem syllabus pdf download
    -anu bsc 6th sem question papers pdf download
    -anu bsc 6th sem previous papers pdf download
    -anu bsc 6th sem model papers pdf download
    -anu bsc 6th sem notes pdf download
    -anu bsc 6th sem study materials pdf download
    -anu bsc 6th sem textbooks pdf download
    -anu bsc 6th sem online materials pdf download
    -anu bsc 6th sem important questions pdf download
    -anu bsc 6th sem assignments pdf download
    -anu bsc 6th sem practicals pdf download
    -anu bsc 6th sem projects pdf download
    -anu bsc 6th sem results pdf download
    -anu bsc 6th sem revaluation pdf download
    -anu bsc 6th sem timetable pdf download
    -anu bsc 6th sem exam date pdf download
    -anu bsc 6th sem hall ticket pdf download
    -anu bsc 6th sem fee notification pdf download
    -anu bsc 6th sem admission pdf download
    -anu bsc 6th sem application form pdf download
    -anu bsc 6th sem eligibility criteria pdf download
    -anu bsc 6th sem course structure pdf download
    -anu bsc 6th sem curriculum pdf download
    -anu bsc 6th sem regulations pdf download
    -anu bsc 6th sem scheme of examination pdf download
    -anu bsc 6th sem marks distribution pdf download
    -anu bsc 6th sem grading system pdf download
    -anu bsc 6th sem credit system pdf download
    -anu bsc 6th sem evaluation process pdf download
    -anu bsc 6th sem learning outcomes pdf download
    -anu bsc 6th sem skill development courses pdf download
    -anu bsc 6th sem life skill courses pdf download
    -anu bsc 6th sem environmental education course pdf download
    -anu bsc 6th sem human values and professional ethics course pdf download
    -anu bsc 6th sem mathematics materials pdf download
    -anu bsc 6th sem physics materials pdf download
    -anu bsc 6th sem chemistry materials pdf download
    -anu bsc 6th sem botany materials pdf download
    -anu bsc 6th sem zoology materials pdf download
    -anu bsc 6th sem statistics materials pdf download
    -anu bsc 6th sem computer science materials pdf download
    -anu bsc 6th sem electronics materials pdf download
    -anu bsc 6th sem microbiology materials pdf download
    -anu bsc 6th sem biotechnology materials pdf download
    -anu bsc 6th sem biochemistry materials pdf download
    -anu bsc 6th sem geology materials pdf download

    -
      -
    • They are free of cost and easily accessible.
    • -
    • They are prepared by well-experienced and qualified faculty members of ANU.
    • -
    • They are updated according to the latest syllabus and exam pattern.
    • -
    • They cover all the topics and subtopics in detail.
    • -
    • They help in improving the conceptual clarity and problem-solving skills of the students.
    • -
    • They boost the confidence and performance of the students in the exams.
    • -
    -

    Steps to Download ANU BSc 6th Sem Materials PDF

    -

    To download ANU BSc 6th sem materials pdf from the official website of ANU, you need to follow these simple steps:

    -
      -
    1. Visit the official website of ANU at (^1^).
    2. -
3. Navigate to the academics section and click on the UG syllabus and question papers link under it. Select your course and semester from the list of options. For example, if you are a BSc Mathematics student, select BSc Mathematics 6th Semester.
    4. -
    5. Download the materials pdf for your subjects by clicking on the download icon or the subject name. For example, if you want to download the materials pdf for Algebra and Number Theory, click on the icon or the name of the subject.
    6. -
    7. Save the files on your device or print them out for future reference.
    8. -
    -

    Tips to Prepare for ANU BSc 6th Sem Exams

    -

    Now that you have downloaded ANU BSc 6th sem materials pdf, you might be wondering how to use them effectively to prepare for your exams. Here are some tips that can help you in your preparation:

    -
      -
    • Review the syllabus and exam pattern: Before you start studying, make sure you are familiar with the syllabus and exam pattern of ANU BSc 6th sem exams. The syllabus will tell you what topics and subtopics you need to cover and the exam pattern will tell you how many questions, marks, and time duration are there for each subject. You can find the syllabus and exam pattern in the materials pdf or on the official website of ANU.
    • -
    • Study the materials pdf and solve previous question papers: The materials pdf are your best source of study material as they contain everything you need to know for your exams. Read them carefully and make notes of the important points, formulas, definitions, examples, etc. Solve the previous question papers given in the materials pdf or on the official website of ANU to get an idea of the type and level of questions asked in the exams. This will also help you to improve your speed and accuracy.
    • -
    • Revise the important topics and concepts: Revision is the key to success in any exam. Make sure you revise the important topics and concepts regularly and thoroughly. Use flashcards, mnemonics, diagrams, charts, etc. to memorize them better. You can also make a list of the topics that you find difficult or confusing and revise them more often.
    • -
    • Practice time management and mock tests: Time management is another crucial skill that you need to master for your exams. You should be able to complete your paper within the given time limit without compromising on quality. To practice time management, you can take mock tests online or offline using a timer and a stopwatch. Try to finish your paper at least 10 minutes before the deadline so that you have some time to check your answers and correct any mistakes.
    • -
    • Stay healthy and confident: Last but not least, you should take care of your health and confidence while preparing for your exams. Eat well, sleep well, exercise regularly, and avoid stress and distractions. Believe in yourself and your abilities and don't let any negative thoughts or doubts affect your performance. You have worked hard and you deserve to succeed.
    • -
    -

    Conclusion

    -

    In this article, we have shown you how to download ANU BSc 6th sem materials pdf from the official website of ANU. We have also given you some tips on how to prepare for your exams using these materials pdf. We hope that this article has been helpful and informative for you. If you follow these steps and tips, we are sure that you will be able to ace your exams with flying colors.

    -

    If you have any questions or feedback regarding this article, please feel free to leave a comment below or contact us through our email or social media channels. We would love to hear from you and assist you in any way possible.

    -

    Thank you for reading this article and best of luck for your exams!

    -

    FAQs

    -

    Here are some frequently asked questions (FAQs) related to ANU BSc 6th sem materials pdf:

    -
      -
    1. When will ANU BSc 6th sem exams be held?
      The exact dates of ANU BSc 6th sem exams are not yet announced by the university. However, they are expected to be held in June or July 2023. You can check the official website of ANU or contact your college for more details.
    2. -
    3. How can I check ANU BSc 6th sem results?
      You can check ANU BSc 6th sem results online on the official website of ANU or on . You will need to enter your hall ticket number and captcha code to view your results. You can also download or print your results for future reference.
    4. -
    5. What are the best books for ANU BSc 6th sem?
      The best books for ANU BSc 6th sem are those that are recommended by the university or your faculty members. You can also refer to the materials pdf that are provided by the university for each subject. Some of the popular books for ANU BSc 6th sem are:

      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      SubjectBookAuthor
      Algebra and Number TheoryA Textbook of Algebra and Number TheoryN. S. Gopalakrishnan and P. K. Srinivasan
      Real AnalysisA Course in Real AnalysisH. L. Royden and P. M. Fitzpatrick
      Differential EquationsOrdinary and Partial Differential EquationsM. D. Raisinghania
      Mathematical StatisticsFundamentals of Mathematical StatisticsS. C. Gupta and V. K. Kapoor
      Operations ResearchOperations Research: An IntroductionH. A. Taha
      Computer Programming in C++Object Oriented Programming with C++E. Balagurusamy
      -
    6. How can I contact ANU for any queries or issues?
      You can contact ANU for any queries or issues related to ANU BSc 6th sem materials pdf, exams, results, etc. by using the following modes:

      -
        -
      • Email: registrar@anu.ac.in or info@anu.ac.in.
      • -
      • Phone: 0863-2346114 or 0863-2346138.
      • -
      • Fax: 0863-2293320.
      • -
      • Address: Acharya Nagarjuna University, Nagarjuna Nagar, Guntur - 522510, Andhra Pradesh, India.
      • -
      • Social media: Facebook, Twitter, Instagram, YouTube, etc.
      • -
      -
    7. What are the career options after completing BSc from ANU?
      After completing BSc from ANU, you have a wide range of career options to choose from depending on your interest, aptitude, and skills. Some of the common career options are:

      -
        -
      • Pursue higher studies such as MSc, MBA, MCA, etc.
      • -
      • Appear for competitive exams such as UPSC, SSC, Bank PO, etc.
      • -
      • Join the government or private sector as a teacher, scientist, researcher, analyst, manager, etc.
      • -
      • Start your own business or consultancy in your field of expertise.
      • -
      • Work as a freelancer or online tutor in your domain.
      • - -

      -
      -
      \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Blackky Songs MP3 Download The Best of Blackkys Music Collection.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Blackky Songs MP3 Download The Best of Blackkys Music Collection.md deleted file mode 100644 index b066e233f58d439ac2aaab78ff3310f2f21c834a..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Blackky Songs MP3 Download The Best of Blackkys Music Collection.md +++ /dev/null @@ -1,117 +0,0 @@ -
      -

      Blackky Songs MP3 Download: How to Enjoy the Best of Nigerian Reggae Music

      -

      If you are a fan of reggae music, you might have heard of Blackky, one of the pioneers of Nigerian reggae in the late 1980s and early 1990s. Blackky is known for his catchy songs, energetic performances, and unique style that blends reggae, rap, and Afrobeat. In this article, we will tell you more about who Blackky is, why you should listen to his songs, and how to download them for free.

      -

      blackky songs mp3 download


      Downloadhttps://bltlly.com/2uOh2p



      -

      Who is Blackky and Why You Should Listen to His Songs

      -

      Blackky's Biography and Music Career

      -

      Blackky, whose real name is Nya Edward Inyang, was born in Lagos, Nigeria. He started his music career as a dancer and backup singer for various artists, such as Majek Fashek, Ras Kimono, and Orits Williki. He later joined a group called The Stormers, which also included Daddy Showkey and Baba Fryo. He released his debut album, Black & Krazzy, in 1989, which was a huge success and earned him several awards. He followed it up with two more albums, About Tyme in 1991 and Blackout in 1997. He also collaborated with other artists, such as Esse Agesse, Mani Eke, Alby, and Rapta. He is regarded as one of the most influential Nigerian reggae artists of all time.

      -

      Blackky's Musical Style and Influences

      -

      Blackky's musical style is a fusion of reggae, rap, and Afrobeat. He sings in English, Pidgin English, and Efik, a language spoken by his ethnic group in southern Nigeria. He is influenced by Jamaican reggae legends, such as Bob Marley, Peter Tosh, and Burning Spear, as well as Nigerian Afrobeat icons, such as Fela Kuti and King Sunny Ade. He also incorporates elements of funk, soul, pop, and rock into his music. His songs are often upbeat, danceable, and socially conscious. He addresses issues such as love, poverty, corruption, violence, racism, and AIDS in his lyrics. He also uses humor, slang, and metaphors to make his songs more appealing and relatable.

      -

      Blackky's Best Songs and Albums

      -

      Black & Krazzy (1989)

      -

      This is Blackky's debut album and his most popular one. It features some of his biggest hits, such as Rosie, Bang Belly, Sugar Stick, Wonderful Mum, and Digun. The album showcases Blackky's versatility and creativity as a singer, rapper, songwriter, and producer. It also introduces his signature dance moves, such as the shank and the skank. The album is a classic of Nigerian reggae music and a must-listen for any fan.

      -

      About Tyme (1991)

      -

      This is Blackky's second album and his most experimental one. It features some of his most diverse songs, such as Campus Girl, Cecilia, Killer Disease, Dem Criticise, and P.S.I. The album explores different genres and sounds, such as hip hop, R&B, soul, funk, rock, and pop. It also reflects Blackky's maturity and growth as an artist. The album is a testament of Blackky's versatility and creativity as a singer, rapper, songwriter, and producer. The album is a gem of Nigerian reggae music and a must-listen for any fan.

      -

      Blackout (1997)

      -

      This is Blackky's third and last album to date. It features some of his most mature and sophisticated songs, such as Blackky's Skank, Shake Your Body, Don't Stop, Love Me Jeje, and Blackky's Anthem. The album showcases Blackky's evolution and refinement as an artist. It also features guest appearances from other Nigerian artists, such as Esse Agesse, Mani Eke, Alby, and Rapta. The album is a masterpiece of Nigerian reggae music and a must-listen for any fan.

      -

      blackky rosie mp3 download
      -blackky albums free download
      -blackky sugar stick mp3 download
      -blackky campus girl mp3 download
      -blackky wonderful mum mp3 download
      -blackky arrested mp3 download
      -blackky bang belly mp3 download
      -blackky cecilia mp3 download
      -blackky di gun mp3 download
      -blackky divorce mp3 download
      -blackky doppy mp3 download
      -blackky hardcore mp3 download
      -blackky jumbo mp3 download
      -blackky killer disease mp3 download
      -blackky okada mp3 download
      -blackky p.s.i mp3 download
      -blackky slaves mp3 download
      -blackky tribute mp3 download
      -blackky yansh man mp3 download
      -blackky sunday morning mp3 download
      -best of blackky songs mp3 download
      -latest blackky songs mp3 download
      -old school blackky songs mp3 download
      -nigerian reggae artist blackky songs mp3 download
      -boomplay music blackky songs mp3 download
      -wynk music blackky songs mp3 download
      -naijagreen music blackky songs mp3 download
      -tooxclusive music blackky songs mp3 download
      -naijaloaded music blackky songs mp3 download
      -9jaflaver music blackky songs mp3 download
      -notjustok music blackky songs mp3 download
      -360nobs music blackky songs mp3 download
      -jaguda music blackky songs mp3 download
      -pulse music blackky songs mp3 download
      -netnaija music blackky songs mp3 download
      -waploaded music blackky songs mp3 download
      -naijavibes music blackky songs mp3 download
      -afrobeat9ja music blackky songs mp3 download
      -afrohits music blackky songs mp3 download
      -afrofire music blackky songs mp3 download
      -afrobeat360 music blackky songs mp3 download
      -afrohouseking music blackky songs mp3 download
      -afrobeatnaija music blackky songs mp3 download
      -afrobeatglobal music blackky songs mp3 download
      -afrofusionmusic music blackky songs mp3 download

      -

      How to Download Blackky Songs MP3 for Free

      -

      If you want to enjoy the best of Blackky's songs, you might be wondering how to download them for free. Well, there are several websites and apps that offer free MP3 downloads of Blackky's songs. Here are some of them:

      -

      Boomplay

      -

      Boomplay is a music streaming and download app that offers millions of songs from African and international artists. You can download Blackky's songs for free on Boomplay by following these steps:

      -
        -
      • Download the Boomplay app from the Google Play Store or the App Store.
      • -
      • Create an account or log in with your Facebook or Google account.
      • -
      • Search for Blackky in the app and select his profile.
      • -
      • Browse through his albums and songs and tap on the download icon next to the ones you want.
      • -
      • Enjoy listening to Blackky's songs offline on your device.
      • -
      -

      Wynk

      -

      Wynk is another music streaming and download app that offers millions of songs from various genres and languages. You can download Blackky's songs for free on Wynk by following these steps:

      -
        -
      • Download the Wynk app from the Google Play Store or the App Store.
      • -
      • Create an account or log in with your phone number or email address.
      • -
      • Search for Blackky in the app and select his profile.
      • -
      • Browse through his albums and songs and tap on the download icon next to the ones you want.
      • -
      • Enjoy listening to Blackky's songs offline on your device.
      • -
      -

      NaijaGreen

      -

      NaijaGreen is a website that offers free MP3 downloads of Nigerian music from various artists and genres. You can download Blackky's songs for free on NaijaGreen by following these steps:

      -
        -
      • Go to the NaijaGreen website at (https://naijagreen.com.ng/).
      • -
      • Search for Blackky in the website and select his profile.
      • -
      • Browse through his albums and songs and click on the download button next to the ones you want.
      • -
      • Save the MP3 files on your device or computer.
      • -
      • Enjoy listening to Blackky's songs offline on your device or computer.
      • -
      -

      Conclusion

      -

      In conclusion, Blackky is one of the pioneers of Nigerian reggae music who has made a lasting impact on the industry with his catchy songs, energetic performances, and unique style. He has released three albums, Black & Krazzy, About Tyme, and Blackout, which are all classics of Nigerian reggae music. You can download his songs for free on various websites and apps, such as Boomplay, Wynk, and NaijaGreen. If you are a fan of reggae music, you should definitely check out Blackky's songs and enjoy the best of Nigerian reggae music.

      -

      FAQs

      -

      Here are some frequently asked questions about Blackky and his songs:

      -

      Q: Is Blackky still alive?

      -

      A: Yes, Blackky is still alive and well. He is currently based in Lagos, Nigeria, where he runs his own record label, BKS Records. He also performs occasionally at various events and shows.

      -

      Q: Is Blackky married?

      -

      A: Yes, Blackky is married to a woman named Ngozi Inyang. They have two children together, a son named Edward Jr. and a daughter named Nya.

      -

      Q: What is Blackky's net worth?

      -

      A: According to some sources, Blackky's net worth is estimated to be around $5 million as of 2023. He has made most of his money from his music career, as well as from his record label, BKS Records. He also owns some properties and assets in Nigeria and abroad.

      -

      Q: What are some of Blackky's awards and achievements?

      -

      A: Blackky has won several awards and accolades for his music and contributions to the Nigerian entertainment industry. Some of them are:

      -
        -
      • Best Reggae Album for Black & Krazzy at the Nigerian Music Awards (1990)
      • -
      • Best New Artiste at the Fame Music Awards (1990)
      • -
      • Best Reggae Artiste at the Fame Music Awards (1991)
      • -
      • Best Reggae Album for About Tyme at the Fame Music Awards (1992)
      • -
      • Best Reggae Artiste at the Nigerian Music Awards (1992)
      • -
      • Best Reggae Album for Blackout at the Nigerian Music Awards (1998)
      • -
      • Reggae Legend Award at the City People Entertainment Awards (2017)
      • -
      -

      Q: Where can I watch Blackky's videos and performances?

      -

      A: You can watch Blackky's videos and performances on various platforms, such as YouTube, Vimeo, Facebook, Instagram, and Twitter. You can also visit his official website at (https://www.blackky.com.ng/) to get more information about his music, events, and news.

      -

      Q: How can I contact Blackky or book him for a show?

      -

      A: You can contact Blackky or book him for a show by sending an email to bksrecords@yahoo.com or calling +234 803 333 3333. You can also follow him on his social media accounts to get updates on his activities and projects.

      -
      -
      \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Instagram Pro 2 APK for Android - Enjoy Extra Features.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Instagram Pro 2 APK for Android - Enjoy Extra Features.md deleted file mode 100644 index 636ba0f52e8afdab27858ec96654197f5152aadd..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Instagram Pro 2 APK for Android - Enjoy Extra Features.md +++ /dev/null @@ -1,133 +0,0 @@ -
      -

      Instagram Pro 2 APK Download New Version: What You Need to Know

      -

      Instagram is one of the most popular social media platforms in the world, with over a billion users. It allows you to share photos and videos, connect with people, find relevant news, and more. But what if you want to enjoy Instagram without any limitations or restrictions? What if you want to access more features and benefits that are not available in the official app? That's where Instagram Pro 2 APK comes in.

      -

      instagram pro 2 apk download new version


      Download Zip ››››› https://bltlly.com/2uOnDs



      -

      Instagram Pro 2 APK is a modified version of the official Instagram app that offers you more control and customization over your Instagram experience. It lets you download photos and videos, hide your online status, view stories anonymously, disable ads, and much more. In this article, we will tell you everything you need to know about Instagram Pro 2 APK, including its features, benefits, how to download and install it, how to use it, and some FAQs.

      -

      What is Instagram Pro 2 APK?

      -

      Instagram Pro 2 APK is a third-party app that is developed by a team of independent developers who are not affiliated with Instagram or Facebook. It is based on the latest version of the official Instagram app, but with some added features and enhancements that make it more user-friendly and convenient. Some of the features and benefits of Instagram Pro 2 APK are:

      -

      Features of Instagram Pro 2 APK

      -
        -
      • You can download any photo or video from any account, even if it is private or archived.
      • -
      • You can hide your online status and activity from others, such as when you are typing, seen, or last seen.
      • -
      • You can view stories anonymously, without letting the story owner know that you have seen their story.
      • -
      • You can disable ads and sponsored posts that clutter your feed and stories.
      • -
      • You can copy captions, comments, bios, and hashtags from any post or profile.
      • -
      • You can zoom in and out on any photo or video.
      • -
      • You can change the theme and color of the app according to your preference.
      • -
      • You can enable dark mode for a better viewing experience at night.
      • -
      • You can translate comments and captions into any language.
      • -
      -

      Benefits of Instagram Pro 2 APK

      -
        -
      • You can save your favorite photos and videos to your device without using any external app or tool.
      • -
      • You can protect your privacy and avoid unwanted interactions by hiding your online status and activity.
      • -
      • You can stalk anyone's stories without them knowing.
      • -
      • You can enjoy a cleaner and smoother Instagram experience without any annoying ads or sponsored posts.
      • -
      • You can copy and paste any text from any post or profile without any hassle.
      • -
      • You can view any photo or video in full detail by zooming in and out.
      • -
      • You can customize the look and feel of the app according to your mood and taste.
      • -
      • You can reduce eye strain and battery consumption by enabling dark mode.
      • -
      • You can understand any comment or caption in any language by translating it.
      • -
      -

      How to Download and Install Instagram Pro 2 APK?

      -

      If you are interested in downloading and installing Instagram Pro 2 APK on your device, you need to follow some simple steps. But before that, you need to make sure that your device meets some requirements for running the app smoothly. Here are the requirements for Instagram Pro 2 APK:

      -

      InstaPro 2 latest version free download
      -How to install instagram pro 2 apk on android
      -Instagram pro 2 mod apk with extra features
      -Download instagram pro 2 for android from uptodown[^1^]
      -Instagram pro 2 apk update 2023
      -Instagram pro 2 premium apk unlocked
      -Instagram pro 2 app download for android phone
      -Instagram pro 2 apk download link
      -Instagram pro 2 new version download apk
      -Instagram pro 2 apk free download full version
      -Instagram pro 2 apk mod download latest version
      -Instagram pro 2 app for android free download
      -Download instagram pro 2 mod apk 2023
      -Instagram pro 2 latest apk download for android
      -Instagram pro 2 apk download new update
      -Instagram pro 2 app apk download latest version
      -Instagram pro 2 modded apk free download
      -Download instagram pro 2 apk for android device
      -Instagram pro 2 new update apk download
      -Instagram pro 2 apk download latest version 2023
      -Instagram pro 2 app free download for android
      -Download instagram pro 2 latest version apk
      -Instagram pro 2 mod apk download for android
      -Instagram pro 2 apk free download new version
      -Instagram pro 2 app download latest version apk
      -Download instagram pro 2 new version mod apk
      -Instagram pro 2 apk download for android phone
      -Instagram pro 2 app mod apk free download
      -Download instagram pro 2 latest update apk
      -Instagram pro 2 apk download latest mod version

      -

      Requirements for Instagram Pro 2 APK

      -
        -
      • Your device should have Android version 4.4 or higher.
      • -
      • Your device should have at least 1 GB of RAM and 100 MB of free storage space.
      • -
      • Your device should have a stable internet connection.
      • -
      • Your device should allow installation of apps from unknown sources. You can enable this option by going to Settings > Security > Unknown Sources and toggling it on.
      • -
      -

      Once you have checked these requirements, you can proceed to download and install Instagram Pro 2 APK by following these steps:

      -

      Steps to Download and Install Instagram Pro 2 APK

      -
        -
      1. Go to the official website of Instagram Pro 2 APK and click on the download button. You can also use this link to download the app: .
      2. -
      3. Wait for the download to complete and then open the downloaded file. You may get a warning message that says "This type of file can harm your device". Ignore it and tap on OK.
      4. -
      5. Tap on Install and wait for the installation to finish. You may get another warning message that says "Do you want to install this application? It does not require any special access". Tap on Install anyway.
      6. -
      7. Once the installation is done, tap on Open to launch the app. You may also find the app icon on your home screen or app drawer.
      8. -
      9. Log in with your Instagram account or create a new one if you don't have one. You can also use your Facebook account to log in.
      10. -
      11. Enjoy using Instagram Pro 2 APK with all its features and benefits.
      12. -
      -

      How to Use Instagram Pro 2 APK?

      -

      Using Instagram Pro 2 APK is very easy and similar to using the official Instagram app. You can access all the features and benefits of the app by tapping on the menu icon at the top right corner of the app. Here are some tips and tricks for using Instagram Pro 2 APK:

      -

      Tips and Tricks for Instagram Pro 2 APK

      -
        -
      • To download any photo or video from any account, tap on the three dots icon at the top right corner of the post and select Download.
      • -
      • To hide your online status and activity, tap on the menu icon at the top right corner of the app and select Privacy. Then, toggle off Show Activity Status and Typing Status.
      • -
      • To view stories anonymously, tap on the menu icon at the top right corner of the app and select Privacy. Then, toggle on View Stories Anonymously.
      • -
      • To disable ads and sponsored posts, tap on the menu icon at the top right corner of the app and select Ad Settings. Then, toggle off Show Ads and Show Sponsored Posts.
      • -
      • To copy captions, comments, bios, and hashtags from any post or profile, tap and hold on the text you want to copy and select Copy.
      • -
      • To zoom in and out on any photo or video, pinch in and out with your fingers on the screen.
      • -
      • To change the theme and color of the app, tap on the menu icon at the top right corner of the app and select Theme. Then, choose from various themes and colors available.
      • -
      • To enable dark mode, tap on the menu icon at the top right corner of the app and select Theme. Then, toggle on Dark Mode.
      • -
      • To translate comments and captions into any language, tap on the comment or caption you want to translate and select Translate.
      • -
      -

      Precautions and Warnings for Instagram Pro 2 APK

      -
        -
      • Instagram Pro 2 APK is not an official app from Instagram or Facebook. It is a third-party app that is developed by independent developers who are not associated with Instagram or Facebook. Therefore, use it at your own risk.
      • -
      • Instagram Pro 2 APK may not be compatible with some devices or Android versions. It may also have some bugs or errors that may affect its performance or functionality. Therefore, update it regularly to get the latest version with bug fixes and improvements.
      • -
      • Instagram Pro 2 APK may violate some terms and conditions of Instagram or Facebook. It may also be detected by Instagram or Facebook as a suspicious activity or a violation of their policies. Therefore, use it responsibly and moderately to avoid getting banned or suspended from your account.
      • -
      -

      Conclusion

      -

      Instagram Pro 2 APK is a great app for anyone who wants to enjoy Instagram without any limitations or restrictions. It offers you more features and benefits than the official Instagram app, such as downloading photos and videos, hiding your online status, viewing stories anonymously, disabling ads, copying captions, comments, bios, and hashtags, zooming in and out on photos and videos, changing the theme and color of the app, enabling dark mode, translating comments and captions into any language, and more. It is easy to download and install it on your device, and use it with some simple tips and tricks. However, you also need to be careful and cautious while using it, as it may not be an official app from Instagram or Facebook, and it may have some compatibility or security issues. Therefore, use it at your own risk and discretion.

      -

      We hope this article has helped you to learn more about Instagram Pro 2 APK and how to download and use it. If you have any questions or feedback, feel free to leave a comment below. We would love to hear from you.

      -

      FAQs

      -

      Here are some frequently asked questions about Instagram Pro 2 APK:

          Question: Is Instagram Pro 2 APK safe to use?
          Answer: Instagram Pro 2 APK is not an official app from Instagram or Facebook; it is a third-party app built by independent developers. It may therefore not be safe to use, as it could contain malware or viruses that harm your device or data. It may also violate the terms and conditions of Instagram or Facebook and get your account banned or suspended. Use it at your own risk and discretion.
          Question: Is Instagram Pro 2 APK free to use?
          Answer: Yes, Instagram Pro 2 APK is free to download and use. However, some premium features or services offered by the app may require payment.
          Question: Can I use Instagram Pro 2 APK with my existing Instagram account?
          Answer: Yes. Just log in with your username and password, or use your Facebook account to log in. Be careful and moderate while using the app, as Instagram or Facebook may flag it as suspicious activity or a violation of their policies.
          Question: Can I use Instagram Pro 2 APK on my PC or laptop?
          Answer: No. It is only compatible with Android devices running Android 4.4 or higher. If you want to use Instagram on your PC or laptop, use the official Instagram website or an emulator that can run Android apps.
          Question: Where can I download the latest version of Instagram Pro 2 APK?
          Answer: You can download the latest version from the official website of the app or from this link: . Always download the app from a trusted and reliable source, and avoid any unknown or suspicious source.
    

    
      -
      -
      \ No newline at end of file diff --git a/spaces/timmy0x-eth/Testspace/app.py b/spaces/timmy0x-eth/Testspace/app.py deleted file mode 100644 index a699bc5b3c2e987102ca93e0ee28d601e0a93d02..0000000000000000000000000000000000000000 --- a/spaces/timmy0x-eth/Testspace/app.py +++ /dev/null @@ -1,7 +0,0 @@ -import gradio as gr - -def greet(name): - return "Hello " + name + "!!" - -iface = gr.Interface(fn=greet, inputs="text", outputs="text") -iface.launch() \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Best Accounting Software For Home On Mac WORK.md b/spaces/tioseFevbu/cartoon-converter/scripts/Best Accounting Software For Home On Mac WORK.md deleted file mode 100644 index f08c53b2de09f8893427c617b03e648401ee3a57..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Best Accounting Software For Home On Mac WORK.md +++ /dev/null @@ -1,14 +0,0 @@ -
      -

      Best Accounting Software For Home On Mac

      -

          If you are looking for reliable, easy-to-use accounting software for your home or small business on a Mac, you might be overwhelmed by the number of options available. However, not all accounting software is created equal, and some of it might not be compatible with your Mac or meet your specific needs. To help you narrow down your choices, we have compiled a list of the best accounting software for home use on a Mac, based on features, pricing, user reviews, and compatibility.
    

      -

      Best Accounting Software For Home On Mac


      DOWNLOAD ››› https://urlcod.com/2uHwwU



      -
        -
      • QuickBooks Online: QuickBooks Online is one of the most popular and widely used accounting software for small businesses and freelancers. It offers a cloud-based solution that lets you access your data from any device, including your Mac. You can easily track your income and expenses, create invoices and estimates, manage taxes and payroll, and connect with your bank accounts and other apps. QuickBooks Online has four plans to choose from, starting from $25 per month for the Simple Start plan. You can also try it free for 30 days.
      • -
      • FreshBooks: FreshBooks is another cloud-based accounting software that is designed for small businesses and self-employed professionals. It allows you to create professional-looking invoices, accept online payments, track your time and expenses, generate reports, and collaborate with your team and clients. FreshBooks has a simple and intuitive interface that works seamlessly on your Mac. It has three plans to choose from, starting from $15 per month for the Lite plan. You can also try it free for 30 days.
      • -
      • Wave: Wave is a free accounting software that is ideal for home users and small businesses with simple accounting needs. It lets you manage your income and expenses, create invoices and receipts, scan receipts with your phone, and connect with your bank accounts and PayPal. Wave also offers paid services such as online payments, payroll, and bookkeeping. Wave is compatible with Mac and other devices through its web browser app.
      • -
      • Xero: Xero is a cloud-based accounting software that is suitable for small to medium-sized businesses. It enables you to manage your cash flow, invoicing, inventory, taxes, payroll, and more. Xero also integrates with over 800 apps to help you streamline your business processes. Xero has a user-friendly interface that works well on your Mac. It has three plans to choose from, starting from $11 per month for the Starter plan. You can also try it free for 30 days.
      • -
      • Zoho Books: Zoho Books is a cloud-based accounting software that is part of the Zoho suite of business applications. It helps you manage your finances, invoicing, inventory, projects, taxes, and more. Zoho Books also connects with other Zoho apps and third-party apps to enhance your productivity. Zoho Books has a sleek and simple interface that is compatible with your Mac. It has four plans to choose from, starting from $9 per month for the Basic plan. You can also try it free for 14 days.
      • -
      -

          These are some of the best accounting software options for home use on a Mac that you can consider for your personal or business needs. They all offer different features and pricing options to suit your preferences and budget. Compare them and choose the one that works best for you.
    

    
      -
      -
      \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Download Film Tai Chi Zero 2 Ganool ((FULL)).md b/spaces/tioseFevbu/cartoon-converter/scripts/Download Film Tai Chi Zero 2 Ganool ((FULL)).md deleted file mode 100644 index 79eb2dba90ff4edd70a179db654269b1e8f6f782..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Download Film Tai Chi Zero 2 Ganool ((FULL)).md +++ /dev/null @@ -1,23 +0,0 @@ - -Here is a possible title and article with html formatting for the keyword "Download Film Tai Chi Zero 2 Ganool": - -

      How to Download Film Tai Chi Zero 2 Ganool

      -

      If you are a fan of martial arts movies, you might be interested in downloading the film Tai Chi Zero 2 Ganool. This is the sequel to the 2012 Chinese steampunk action comedy Tai Chi Zero, which tells the story of Yang Luchan, a young genius who travels to Chen Village to learn the art of Tai Chi. In this film, Yang Luchan continues his journey and faces new challenges and enemies as he tries to protect the village from a powerful army of steampunk soldiers.

      -

      Download Film Tai Chi Zero 2 Ganool


      DOWNLOADhttps://urlcod.com/2uHxCW



      -

          But how can you download this film? Many websites offer free movie downloads, but not all of them are safe or legal. Some may contain viruses, malware, or spyware that can harm your computer or device, and some may violate copyright law and infringe on the rights of the filmmakers and distributors. You should therefore be careful and choose a reliable, trustworthy source for downloading movies.
    

      -

      One of the best options for downloading film Tai Chi Zero 2 Ganool is Google Play. Google Play is a digital distribution service that offers movies, music, books, apps, and games for various devices. You can rent or buy movies from Google Play and watch them online or offline on your computer, smartphone, tablet, or smart TV. Google Play also has a large collection of Asian movies, including Tai Chi Zero 2 Ganool.

      -

      To download film Tai Chi Zero 2 Ganool from Google Play, you need to follow these simple steps:

      -
        -
      1. Go to this link to access the movie page on Google Play.
      2. -
      3. Click on the "Rent" or "Buy" button and choose your preferred option. You will need to sign in with your Google account and provide your payment information if you haven't done so before.
      4. -
      5. After completing the purchase or rental process, you can start watching the movie online by clicking on the "Watch Now" button. You can also download it to your device by clicking on the "Download" button next to it.
      6. -
      7. To watch the movie offline, you need to have the Google Play Movies & TV app installed on your device. You can find it on this link. Open the app and go to "My Library" section. You will see the movie under "Movies" tab. Tap on it and enjoy!
      8. -
      -

      That's it! You have successfully downloaded film Tai Chi Zero 2 Ganool from Google Play. You can now watch it anytime and anywhere you want. If you liked this movie, you might also want to check out its predecessor Tai Chi Zero and its successor Tai Chi Hero, which are also available on Google Play.

          -
    

      Tai Chi Zero 2 Ganool is a fun and entertaining movie that combines martial arts, comedy, and steampunk elements. The movie features a talented cast of actors and martial artists, including Yuan Xiaochao, Angelababy, Tony Leung Ka-fai, Eddie Peng, Feng Shaofeng, and Shu Qi. The movie also has some special appearances by famous martial arts stars, such as Sammo Hung, Yuen Biao, Bruce Leung, and Chen Kuan-tai.

      -

      -

      The movie is directed by Stephen Fung, who is also a martial artist and actor. He has previously directed movies such as House of Fury, Jump, and Tai Chi Hero. He is also known for his roles in movies such as Gen-X Cops, The Avenging Fist, and Enter the Phoenix. He is married to Shu Qi, who plays the role of Yang Luchan's wife in the Tai Chi trilogy.

      -

      The movie is based on the life of Yang Luchan, who is considered to be the founder of Yang-style Tai Chi. He was born in 1799 in Hebei province and learned Tai Chi from Chen Changxing, a master of Chen-style Tai Chi. He later moved to Beijing and taught Tai Chi to many people, including members of the imperial family and the military. He died in 1872 and was buried in Yongnian county.

    
      -
      -
      \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/How To Get A Cracked Minecraft Account Mac.md b/spaces/tioseFevbu/cartoon-converter/scripts/How To Get A Cracked Minecraft Account Mac.md deleted file mode 100644 index 88c1517473d0776eb92c09ce40adb6f97c4fc3f5..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/How To Get A Cracked Minecraft Account Mac.md +++ /dev/null @@ -1,27 +0,0 @@ -
      -

      How to Get a Cracked Minecraft Account for Mac

      -

          If you want to play Minecraft on your Mac without paying for a premium account, you might be tempted to look for a cracked version online. However, this is neither a safe nor a legal option: you could expose your computer to malware and viruses, and yourself to legal trouble. In this article, we will show you how to get a free Minecraft account for Mac without breaking any rules or risking your security.
    

      -

      The best way to get a free Minecraft account for Mac is to use an alternative service called MCLeaks. MCLeaks is a website that provides free alts or alternative accounts for Minecraft players who don't want to buy the game. These alts are randomly generated from unused or donated accounts and can be used to log in to the official Minecraft launcher.

      -

      how to get a cracked minecraft account mac


      Download === https://urlcod.com/2uHxED



      -

      Here are the steps to get a free Minecraft account for Mac using MCLeaks:

      -
        -
      1. Go to https://mcleaks.net/authenticator/ and download the MCLeaks Authenticator for Mac.
      2. -
      3. Open the downloaded file and drag the MCLeaks Authenticator app to your Applications folder.
      4. -
      5. Launch the MCLeaks Authenticator app and click on "MCLeaks" to switch your authentication mode.
      6. -
      7. Go back to https://mcleaks.net/ and click on "Get an alt" to generate a free alt account.
      8. -
      9. Copy the ALT-Token that appears on the screen.
      10. -
          11. Open the Minecraft Launcher and press “MOJANG LOGIN”[^1^].
    
      12. -
      13. Paste the ALT-Token into the Username textfield and some random letters into the Password textfield and press login.
      14. -
          15. The launcher will automatically log you in with the MCLeaks alt. Have fun 🙂
    
      16. -
      -

      Note: To switch back to your original account, you need to launch the MCLeaks Authenticator app again and click on "Mojang" to change your authentication mode. Then, you can log in with your email or username and password as usual.

      -

      Conclusion: Getting a cracked Minecraft account for Mac is not worth the hassle and risk. Instead, you can use MCLeaks to get a free alt account that lets you play Minecraft legally and safely. However, keep in mind that these alts are temporary and may not work at all times. If you want to enjoy the full features and benefits of Minecraft, we recommend buying a premium account from https://www.minecraft.net/en-us/login.

      Here are some additional tips and tricks to make the most of your free Minecraft account for Mac:

      - -

      We hope this article has helped you get a free Minecraft account for Mac and enjoy the game without any hassle or risk. Remember to always be respectful and responsible when playing online and have fun exploring the infinite possibilities of Minecraft.

    
      -
      -
      \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Libncurses5 Dev Rpm Download Windows [EXCLUSIVE].md b/spaces/tioseFevbu/cartoon-converter/scripts/Libncurses5 Dev Rpm Download Windows [EXCLUSIVE].md deleted file mode 100644 index c67c191106a2471823ddb5f85a60f1f4ff828f66..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Libncurses5 Dev Rpm Download Windows [EXCLUSIVE].md +++ /dev/null @@ -1,22 +0,0 @@ - -

      How to Install libncurses5-dev on Windows

      -

      libncurses5-dev is a development package for the ncurses library, which provides a terminal-independent way of manipulating the screen. ncurses is widely used for creating text-based user interfaces and applications. If you want to install libncurses5-dev on Windows, you have a few options:

      -

      libncurses5 dev rpm download windows


      Download ✸✸✸ https://urlcod.com/2uHxNP



      -
        -
          • Use a Linux subsystem or virtual machine: You can run a Linux distribution on Windows with Windows Subsystem for Linux (WSL) or a virtual machine such as VirtualBox, or use a Unix-like environment such as Cygwin or MinGW. You can then use the distribution's package manager to install libncurses5-dev; for example, on Debian or Ubuntu, run sudo apt-get install libncurses5-dev. This option gives you the most complete ncurses compatibility and functionality on Windows.
    
      • -
      • Use a precompiled binary: You can download a precompiled binary of ncurses for Windows from various sources, such as the official ncurses website [^1^] or pkgs.org [^2^]. You will need to extract the files and copy them to the appropriate directories on your system. You may also need to set some environment variables or modify some configuration files to make ncurses work properly on Windows.
      • -
      • Compile from source: You can download the source code of ncurses from the official ncurses website [^1^] and compile it yourself using a C compiler and a make tool. You will need to follow the instructions in the README file and configure the build options according to your system and preferences. This option gives you the most control and customization of ncurses on Windows.
      • -
      -

          In this article, we have discussed how to install libncurses5-dev on Windows using different methods. We hope this helps you use ncurses in your development projects on Windows.
    

      Some examples of ncurses applications

      -

      ncurses is a powerful library that allows developers to create text-based user interfaces and applications that can run on various terminal types. ncurses provides functions for manipulating the screen, handling keyboard and mouse input, creating menus, windows, panels, forms, and more. ncurses is widely used for creating interactive programs that run in the terminal, such as editors, games, utilities, and system tools. Here are some examples of ncurses applications:

      -
        -
      • nano: a simple and easy-to-use text editor that supports syntax highlighting, search and replace, multiple buffers, and more. nano is a good example of a ncurses application that uses windows, menus, and keyboard shortcuts [^2^].
      • -
      • htop: an interactive process viewer that shows system information and resource usage. htop is a good example of a ncurses application that uses colors, bars, graphs, and mouse support [^2^].
      • -
      • emacs: a powerful and extensible text editor that can do almost anything. emacs is a good example of a ncurses application that uses multiple windows, modes, commands, and customization [^3^].
      • -
      • DevDungeon: a text-based adventure game that teaches Python programming. DevDungeon is a good example of a ncurses application that uses graphics, sound, animation, and user input [^4^].
      • -
          • RDerik: a text-based application that shows how to use Swift and ncurses to create a simple calculator. RDerik is a good example of an ncurses application that uses forms, buttons, labels, and fields.
    
      • -
      -

      In this article, we have seen some examples of ncurses applications that demonstrate the capabilities and features of the ncurses library. We hope this inspires you to use ncurses for your own text-based projects.
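          To make the screen-handling ideas above more concrete, here is a minimal sketch of an ncurses program written with Python's standard curses module, which is a thin wrapper around the same ncurses library. The message text is only a placeholder, and note that the curses module is not bundled with Python on native Windows, so a sketch like this would normally run under WSL, Cygwin, or another Unix-like environment mentioned earlier.

    ```python
    import curses


    def main(stdscr):
        # curses.wrapper() has already initialized the terminal and will
        # restore it when this function returns, even after an exception.
        curses.curs_set(0)                      # hide the cursor
        stdscr.clear()
        height, width = stdscr.getmaxyx()

        message = "Hello from ncurses (via Python's curses module)"
        y = height // 2
        x = max(0, (width - len(message)) // 2)
        stdscr.addstr(y, x, message, curses.A_REVERSE)   # centered, highlighted
        stdscr.addstr(height - 1, 0, "Press any key to quit")

        stdscr.refresh()
        stdscr.getch()                          # block until a key is pressed


    if __name__ == "__main__":
        curses.wrapper(main)
    ```

          Full-screen applications such as nano and htop are built from exactly these primitives (windows, text attributes, and keyboard input), just on a much larger scale.
    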

      -

    
      -
      -
      \ No newline at end of file diff --git a/spaces/tommy24/chatGPT2/app.py b/spaces/tommy24/chatGPT2/app.py deleted file mode 100644 index cea35f0f23e4843c4e98b05783bc6d36b8c48599..0000000000000000000000000000000000000000 --- a/spaces/tommy24/chatGPT2/app.py +++ /dev/null @@ -1,318 +0,0 @@ -from pyChatGPT import ChatGPT -import gradio as gr -import os, sys, json -from loguru import logger -import paddlehub as hub -import random - -language_translation_model = hub.Module(directory=f'./baidu_translate') -def getTextTrans(text, source='zh', target='en'): - try: - text_translation = language_translation_model.translate(text, source, target) - return text_translation - except Exception as e: - return text - -session_token = os.environ.get('SessionToken') -# logger.info(f"session_token_: {session_token}") - -def get_api(): - api = None - # try: - # api = ChatGPT(session_token) - # # api.refresh_auth() - # except: - # api = None - return api - -def get_response_from_chatbot(api, text): - if api is None: - # return "Sorry, I'm busy. Try again later.(1)" - return "Openai said: I'm too tired. Let me lie down for a few days. If you like, you can visit my home." - try: - resp = api.send_message(text) - api.refresh_auth() - # api.reset_conversation() - response = resp['message'] - conversation_id = resp['conversation_id'] - parent_id = resp['parent_id'] - # logger.info(f"response_: {response}") - logger.info(f"conversation_id_: [{conversation_id}] / parent_id: [{parent_id}]") - except: - # response = "Sorry, I'm busy. Try again later.(2)" - response = "Openai said: I'm so tired. Let me lie down for a few days. If you like, you can visit my home." - return response - -model_ids = { - # "models/stabilityai/stable-diffusion-2-1":"sd-v2-1", - # "models/stabilityai/stable-diffusion-2":"sd-v2-0", - # "models/runwayml/stable-diffusion-v1-5":"sd-v1-5", - # "models/CompVis/stable-diffusion-v1-4":"sd-v1-4", - # "models/prompthero/openjourney":"openjourney", - # "models/ShadoWxShinigamI/Midjourney-Rangoli":"midjourney", - "models/hakurei/waifu-diffusion":"waifu-diffusion", - # "models/Linaqruf/anything-v3.0":"anything-v3.0", - } - -tab_actions = [] -tab_titles = [] -for model_id in model_ids.keys(): - print(model_id, model_ids[model_id]) - try: - tab = gr.Interface.load(model_id) - tab_actions.append(tab) - tab_titles.append(model_ids[model_id]) - except: - logger.info(f"load_fail__{model_id}_") - -def chat(api, input0, input1, chat_radio, chat_history): - out_chat = [] - if chat_history != '': - out_chat = json.loads(chat_history) - logger.info(f"out_chat_: {len(out_chat)} / {chat_radio}") - if chat_radio == "Talk to chatGPT": - response = get_response_from_chatbot(api, input0) - out_chat.append((input0, response)) - chat_history = json.dumps(out_chat) - return api, out_chat, input1, chat_history - else: - prompt_en = getTextTrans(input0, source='zh', target='en') + f',{random.randint(0,sys.maxsize)}' - return api, out_chat, prompt_en, chat_history - -start_work = """async() => { - function isMobile() { - try { - document.createEvent("TouchEvent"); return true; - } catch(e) { - return false; - } - } - function getClientHeight() - { - var clientHeight=0; - if(document.body.clientHeight&&document.documentElement.clientHeight) { - var clientHeight = (document.body.clientHeightdocument.documentElement.clientHeight)?document.body.clientHeight:document.documentElement.clientHeight; - } - return clientHeight; - } - - function setNativeValue(element, value) { - const valueSetter = 
Object.getOwnPropertyDescriptor(element.__proto__, 'value').set; - const prototype = Object.getPrototypeOf(element); - const prototypeValueSetter = Object.getOwnPropertyDescriptor(prototype, 'value').set; - - if (valueSetter && valueSetter !== prototypeValueSetter) { - prototypeValueSetter.call(element, value); - } else { - valueSetter.call(element, value); - } - } - function save_conversation(chatbot) { - var conversations = new Array(); - for (var i = 0; i < chatbot.children.length; i++) { - conversations[i] = chatbot.children[i].innerHTML; - } - var json_str = JSON.stringify(conversations); - localStorage.setItem('chatgpt_conversations', json_str); - } - function load_conversation(chatbot) { - var json_str = localStorage.getItem('chatgpt_conversations'); - if (json_str) { - conversations = JSON.parse(json_str); - for (var i = 0; i < conversations.length; i++) { - var new_div = document.createElement("div"); - if((i%2)===0){ - new_div.className = "px-3 py-2 rounded-[22px] rounded-br-none text-white text-sm chat-message svelte-rct66g"; - new_div.style.backgroundColor = "#16a34a"; - } else { - new_div.className = "px-3 py-2 rounded-[22px] rounded-bl-none place-self-start text-white text-sm chat-message svelte-rct66g"; - new_div.style.backgroundColor = "#2563eb"; - if (conversations[i].indexOf(" gradio-app').shadowRoot; - if (!gradioEl) { - gradioEl = document.querySelector('body > gradio-app'); - } - - if (typeof window['gradioEl'] === 'undefined') { - window['gradioEl'] = gradioEl; - - const page1 = window['gradioEl'].querySelectorAll('#page_1')[0]; - const page2 = window['gradioEl'].querySelectorAll('#page_2')[0]; - - page1.style.display = "none"; - page2.style.display = "block"; - window['div_count'] = 0; - window['chat_bot'] = window['gradioEl'].querySelectorAll('#chat_bot')[0]; - window['chat_bot1'] = window['gradioEl'].querySelectorAll('#chat_bot1')[0]; - chat_row = window['gradioEl'].querySelectorAll('#chat_row')[0]; - prompt_row = window['gradioEl'].querySelectorAll('#prompt_row')[0]; - window['chat_bot1'].children[1].textContent = ''; - - clientHeight = getClientHeight(); - if (isMobile()) { - output_htmls = window['gradioEl'].querySelectorAll('.output-html'); - for (var i = 0; i < output_htmls.length; i++) { - output_htmls[i].style.display = "none"; - } - new_height = (clientHeight - 250) + 'px'; - } else { - new_height = (clientHeight - 350) + 'px'; - } - chat_row.style.height = new_height; - window['chat_bot'].style.height = new_height; - window['chat_bot'].children[2].style.height = new_height; - window['chat_bot1'].style.height = new_height; - window['chat_bot1'].children[2].style.height = new_height; - prompt_row.children[0].style.flex = 'auto'; - prompt_row.children[0].style.width = '100%'; - window['gradioEl'].querySelectorAll('#chat_radio')[0].style.flex = 'auto'; - window['gradioEl'].querySelectorAll('#chat_radio')[0].style.width = '100%'; - prompt_row.children[0].setAttribute('style','flex-direction: inherit; flex: 1 1 auto; width: 100%;border-color: green;border-width: 1px !important;') - window['chat_bot1'].children[1].setAttribute('style', 'border-bottom-right-radius:0;top:unset;bottom:0;padding-left:0.1rem'); - window['gradioEl'].querySelectorAll('#btns_row')[0].children[0].setAttribute('style', 'min-width: min(10px, 100%); flex-grow: 1'); - window['gradioEl'].querySelectorAll('#btns_row')[0].children[1].setAttribute('style', 'min-width: min(10px, 100%); flex-grow: 1'); - - load_conversation(window['chat_bot1'].children[2].children[0]); - 
window['chat_bot1'].children[2].scrollTop = window['chat_bot1'].children[2].scrollHeight; - - window['gradioEl'].querySelectorAll('#clear-btn')[0].onclick = function(e){ - if (confirm('Clear all outputs?')==true) { - window['chat_bot1'].children[2].children[0].innerHTML = ''; - save_conversation(window['chat_bot1'].children[2].children[0]); - } - } - - window['prevPrompt'] = ''; - window['doCheckPrompt'] = 0; - window['prevImgSrc'] = ''; - window['checkChange'] = function checkChange() { - try { - if (window['gradioEl'].querySelectorAll('.gr-radio')[0].checked) { - if (window['chat_bot'].children[2].children[0].children.length > window['div_count']) { - new_len = window['chat_bot'].children[2].children[0].children.length - window['div_count']; - for (var i = 0; i < new_len; i++) { - new_div = window['chat_bot'].children[2].children[0].children[window['div_count'] + i].cloneNode(true); - window['chat_bot1'].children[2].children[0].appendChild(new_div); - } - window['div_count'] = chat_bot.children[2].children[0].children.length; - window['chat_bot1'].children[2].scrollTop = window['chat_bot1'].children[2].scrollHeight; - save_conversation(window['chat_bot1'].children[2].children[0]); - } - if (window['chat_bot'].children[0].children.length > 1) { - window['chat_bot1'].children[1].textContent = window['chat_bot'].children[0].children[1].textContent; - } else { - window['chat_bot1'].children[1].textContent = ''; - } - } else { - texts = window['gradioEl'].querySelectorAll('textarea'); - text0 = texts[0]; - text1 = texts[1]; - img_index = 0; - if (window['doCheckPrompt'] === 0 && window['prevPrompt'] !== text1.value) { - console.log('_____new prompt___[' + text1.value + ']_'); - window['doCheckPrompt'] = 1; - window['prevPrompt'] = text1.value; - for (var i = 3; i < texts.length; i++) { - setNativeValue(texts[i], text1.value); - texts[i].dispatchEvent(new Event('input', { bubbles: true })); - } - setTimeout(function() { - img_submit_btns = window['gradioEl'].querySelectorAll('#tab_img')[0].querySelectorAll("button"); - for (var i = 0; i < img_submit_btns.length; i++) { - if (img_submit_btns[i].innerText == 'Submit') { - img_submit_btns[i].click(); - } - } - window['doCheckPrompt'] = 0; - }, 10); - } - tabitems = window['gradioEl'].querySelectorAll('.tabitem'); - imgs = tabitems[img_index].children[0].children[1].children[1].children[0].querySelectorAll("img"); - if (imgs.length > 0) { - if (window['prevImgSrc'] !== imgs[0].src) { - var user_div = document.createElement("div"); - user_div.className = "px-3 py-2 rounded-[22px] rounded-br-none text-white text-sm chat-message svelte-rct66g"; - user_div.style.backgroundColor = "#16a34a"; - user_div.innerHTML = "

      " + text0.value + "

      "; - window['chat_bot1'].children[2].children[0].appendChild(user_div); - var bot_div = document.createElement("div"); - bot_div.className = "px-3 py-2 rounded-[22px] rounded-bl-none place-self-start text-white text-sm chat-message svelte-rct66g"; - bot_div.style.backgroundColor = "#2563eb"; - bot_div.style.width = "80%"; - bot_div.style.padding = "0.2rem"; - bot_div.appendChild(imgs[0].cloneNode(true)); - window['chat_bot1'].children[2].children[0].appendChild(bot_div); - - window['chat_bot1'].children[2].scrollTop = window['chat_bot1'].children[2].scrollHeight; - window['prevImgSrc'] = imgs[0].src; - save_conversation(window['chat_bot1'].children[2].children[0]); - } - } - if (tabitems[img_index].children[0].children[1].children[1].children[0].children[0].children.length > 1) { - window['chat_bot1'].children[1].textContent = tabitems[img_index].children[0].children[1].children[1].children[0].children[0].children[1].textContent; - } else { - window['chat_bot1'].children[1].textContent = ''; - } - } - - } catch(e) { - } - } - window['checkChange_interval'] = window.setInterval("window.checkChange()", 500); - } - - return false; -}""" - - -with gr.Blocks(title='Talk to chatGPT') as demo: - gr.HTML("

      You can duplicating this space and use your own session token: Duplicate Space

      ") - gr.HTML("

      Instruction on how to get session token can be seen in video here. Add your session token by going to settings and add under secrets.

      ") - with gr.Group(elem_id="page_1", visible=True) as page_1: - with gr.Box(): - with gr.Row(): - start_button = gr.Button("Let's talk to chatGPT!", elem_id="start-btn", visible=True) - start_button.click(fn=None, inputs=[], outputs=[], _js=start_work) - - with gr.Group(elem_id="page_2", visible=False) as page_2: - with gr.Row(elem_id="chat_row"): - chatbot = gr.Chatbot(elem_id="chat_bot", visible=False).style(color_map=("green", "blue")) - chatbot1 = gr.Chatbot(elem_id="chat_bot1").style(color_map=("green", "blue")) - with gr.Row(elem_id="prompt_row"): - prompt_input0 = gr.Textbox(lines=2, label="prompt",show_label=False) - prompt_input1 = gr.Textbox(lines=4, label="prompt", visible=False) - chat_history = gr.Textbox(lines=4, label="prompt", visible=False) - chat_radio = gr.Radio(["Talk to chatGPT", "Text to Image"], elem_id="chat_radio",value="Talk to chatGPT", show_label=False) - with gr.Row(elem_id="btns_row"): - with gr.Column(id="submit_col"): - submit_btn = gr.Button(value = "submit",elem_id="submit-btn").style( - margin=True, - rounded=(True, True, True, True), - width=100 - ) - with gr.Column(id="clear_col"): - clear_btn = gr.Button(value = "clear outputs", elem_id="clear-btn").style( - margin=True, - rounded=(True, True, True, True), - width=100 - ) - api = gr.State(value=get_api()) - submit_btn.click(fn=chat, - inputs=[api, prompt_input0, prompt_input1, chat_radio, chat_history], - outputs=[api, chatbot, prompt_input1, chat_history], - ) - with gr.Row(elem_id='tab_img', visible=False).style(height=5): - tab_img = gr.TabbedInterface(tab_actions, tab_titles) - -demo.launch(debug = True) diff --git a/spaces/tomofi/MaskTextSpotterV3-OCR/maskrcnn_benchmark/data/datasets/utils.py b/spaces/tomofi/MaskTextSpotterV3-OCR/maskrcnn_benchmark/data/datasets/utils.py deleted file mode 100644 index fd89e059945399c81deff84b5e57522cca78d81c..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MaskTextSpotterV3-OCR/maskrcnn_benchmark/data/datasets/utils.py +++ /dev/null @@ -1,38 +0,0 @@ -#!/usr/bin/env python3 - -import os -import shlex -import shutil -import subprocess - - -def extract_archive(dataset_archive, tmp_data_path): - if not os.path.isfile(dataset_archive): - return False - - dataset_ext = os.path.splitext(dataset_archive)[1] - if dataset_ext != ".gz" and dataset_ext != ".tar": - return False - - if os.path.isdir(tmp_data_path): - shutil.rmtree(tmp_data_path, ignore_errors=True) - os.makedirs(tmp_data_path) - - if dataset_ext == ".gz": - tar_opt = "-xzf" - else: - tar_opt = "-xf" - - extract_cmd = ("tar {} {} -C {}").format(tar_opt, dataset_archive, tmp_data_path) - - subprocess.call(shlex.split(extract_cmd)) - - return True - - -def tar_file(tar_path, tmp_path): - tar_name = tar_path.split('/')[-1] - if extract_archive(tar_path, tmp_path): - print('extract ' + tar_name + 'successfully!') - else: - print("fail to extract " + tar_name) diff --git a/spaces/tomofi/MaskTextSpotterV3-OCR/maskrcnn_benchmark/modeling/roi_heads/mask_head/loss.py b/spaces/tomofi/MaskTextSpotterV3-OCR/maskrcnn_benchmark/modeling/roi_heads/mask_head/loss.py deleted file mode 100644 index dc77cd3106127af8b8fbfb458fbca1967dcaf0d1..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MaskTextSpotterV3-OCR/maskrcnn_benchmark/modeling/roi_heads/mask_head/loss.py +++ /dev/null @@ -1,237 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. 
-import torch - -# from maskrcnn_benchmark.layers import smooth_l1_loss -from maskrcnn_benchmark.modeling.matcher import Matcher -from maskrcnn_benchmark.modeling.utils import cat -from maskrcnn_benchmark.structures.boxlist_ops import boxlist_iou -from torch.nn import functional as F - - -def project_masks_on_boxes(segmentation_masks, proposals, discretization_size): - """ - Given segmentation masks and the bounding boxes corresponding - to the location of the masks in the image, this function - crops and resizes the masks in the position defined by the - boxes. This prepares the masks for them to be fed to the - loss computation as the targets. - - Arguments: - segmentation_masks: an instance of SegmentationMask - proposals: an instance of BoxList - """ - masks = [] - M = discretization_size - device = proposals.bbox.device - proposals = proposals.convert("xyxy") - assert segmentation_masks.size == proposals.size, "{}, {}".format( - segmentation_masks, proposals - ) - # TODO put the proposals on the CPU, as the representation for the - # masks is not efficient GPU-wise (possibly several small tensors for - # representing a single instance mask) - proposals = proposals.bbox.to(torch.device("cpu")) - for segmentation_mask, proposal in zip(segmentation_masks, proposals): - # crop the masks, resize them to the desired resolution and - # then convert them to the tensor representation, - # instead of the list representation that was used - cropped_mask = segmentation_mask.crop(proposal) - scaled_mask = cropped_mask.resize((M, M)) - mask = scaled_mask.convert(mode="mask") - masks.append(mask) - if len(masks) == 0: - return torch.empty(0, dtype=torch.float32, device=device) - return torch.stack(masks, dim=0).to(device, dtype=torch.float32) - - -class MaskRCNNLossComputation(object): - def __init__(self, proposal_matcher, discretization_size): - """ - Arguments: - proposal_matcher (Matcher) - discretization_size (int) - """ - self.proposal_matcher = proposal_matcher - self.discretization_size = discretization_size - - def match_targets_to_proposals(self, proposal, target): - match_quality_matrix = boxlist_iou(target, proposal) - matched_idxs = self.proposal_matcher(match_quality_matrix) - # Mask RCNN needs "labels" and "masks "fields for creating the targets - target = target.copy_with_fields(["labels", "masks"]) - # get the targets corresponding GT for each proposal - # NB: need to clamp the indices because we can have a single - # GT in the image, and matched_idxs can be -2, which goes - # out of bounds - matched_targets = target[matched_idxs.clamp(min=0)] - matched_targets.add_field("matched_idxs", matched_idxs) - return matched_targets - - def prepare_targets(self, proposals, targets): - labels = [] - masks = [] - for proposals_per_image, targets_per_image in zip(proposals, targets): - matched_targets = self.match_targets_to_proposals( - proposals_per_image, targets_per_image - ) - matched_idxs = matched_targets.get_field("matched_idxs") - - labels_per_image = matched_targets.get_field("labels") - labels_per_image = labels_per_image.to(dtype=torch.int64) - - # this can probably be removed, but is left here for clarity - # and completeness - neg_inds = matched_idxs == Matcher.BELOW_LOW_THRESHOLD - labels_per_image[neg_inds] = 0 - - # mask scores are only computed on positive samples - positive_inds = torch.nonzero(labels_per_image > 0).squeeze(1) - - segmentation_masks = matched_targets.get_field("masks") - segmentation_masks = segmentation_masks[positive_inds] - - positive_proposals = 
proposals_per_image[positive_inds] - - masks_per_image = project_masks_on_boxes( - segmentation_masks, positive_proposals, self.discretization_size - ) - - labels.append(labels_per_image) - masks.append(masks_per_image) - - return labels, masks - - def __call__(self, proposals, mask_logits, targets): - """ - Arguments: - proposals (list[BoxList]) - mask_logits (Tensor) - targets (list[BoxList]) - - Return: - mask_loss (Tensor): scalar tensor containing the loss - """ - labels, mask_targets = self.prepare_targets(proposals, targets) - - labels = cat(labels, dim=0) - mask_targets = cat(mask_targets, dim=0) - - positive_inds = torch.nonzero(labels > 0).squeeze(1) - labels_pos = labels[positive_inds] - - # torch.mean (in binary_cross_entropy_with_logits) doesn't - # accept empty tensors, so handle it separately - if mask_targets.numel() == 0: - return mask_logits.sum() * 0 - - mask_loss = F.binary_cross_entropy_with_logits( - mask_logits[positive_inds, labels_pos], mask_targets - ) - return mask_loss - - -class CharMaskRCNNLossComputation(object): - def __init__(self, use_weighted_loss=False): - """ - Arguments: - proposal_matcher (Matcher) - discretization_size (int) - """ - self.use_weighted_loss = use_weighted_loss - - def __call__( - self, - proposals, - mask_logits, - char_mask_logits, - mask_targets, - char_mask_targets, - char_mask_weights, - ): - """ - Arguments: - proposals (list[BoxList]) - mask_logits (Tensor) - targets (list[BoxList]) - - Return: - mask_loss (Tensor): scalar tensor containing the loss - """ - mask_targets = cat(mask_targets, dim=0) - char_mask_targets = cat(char_mask_targets, dim=0) - char_mask_weights = cat(char_mask_weights, dim=0) - char_mask_weights = char_mask_weights.mean(dim=0) - - # torch.mean (in binary_cross_entropy_with_logits) doesn't - # accept empty tensors, so handle it separately - if mask_targets.numel() == 0 or char_mask_targets.numel() == 0: - return mask_logits.sum() * 0, char_mask_targets.sum() * 0 - - mask_loss = F.binary_cross_entropy_with_logits( - mask_logits.squeeze(dim=1), mask_targets - ) - if self.use_weighted_loss: - char_mask_loss = F.cross_entropy( - char_mask_logits, char_mask_targets, char_mask_weights, ignore_index=-1 - ) - else: - char_mask_loss = F.cross_entropy( - char_mask_logits, char_mask_targets, ignore_index=-1 - ) - return mask_loss, char_mask_loss - -class SeqMaskRCNNLossComputation(object): - def __init__(self): - """ - Arguments: - proposal_matcher (Matcher) - discretization_size (int) - """ - - def __call__( - self, - proposals, - mask_logits, - mask_targets, - ): - """ - Arguments: - proposals (list[BoxList]) - mask_logits (Tensor) - targets (list[BoxList]) - - Return: - mask_loss (Tensor): scalar tensor containing the loss - """ - mask_targets = cat(mask_targets, dim=0) - - # torch.mean (in binary_cross_entropy_with_logits) doesn't - # accept empty tensors, so handle it separately - if mask_targets.numel() == 0: - return mask_logits.sum() * 0 - - mask_loss = F.binary_cross_entropy_with_logits( - mask_logits.squeeze(dim=1), mask_targets - ) - return mask_loss - - -def make_roi_mask_loss_evaluator(cfg): - matcher = Matcher( - cfg.MODEL.ROI_HEADS.FG_IOU_THRESHOLD, - cfg.MODEL.ROI_HEADS.BG_IOU_THRESHOLD, - allow_low_quality_matches=False, - ) - if cfg.MODEL.CHAR_MASK_ON: - loss_evaluator = CharMaskRCNNLossComputation( - use_weighted_loss=cfg.MODEL.ROI_MASK_HEAD.USE_WEIGHTED_CHAR_MASK - ) - else: - if cfg.SEQUENCE.SEQ_ON: - loss_evaluator = SeqMaskRCNNLossComputation() - else: - loss_evaluator = MaskRCNNLossComputation( 
- matcher, cfg.MODEL.ROI_MASK_HEAD.RESOLUTION - ) - - return loss_evaluator diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/dcn/cascade_mask_rcnn_r50_fpn_dconv_c3-c5_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/dcn/cascade_mask_rcnn_r50_fpn_dconv_c3-c5_1x_coco.py deleted file mode 100644 index 3b3683af235f46df36d8793e52c2b9c52e0defeb..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/dcn/cascade_mask_rcnn_r50_fpn_dconv_c3-c5_1x_coco.py +++ /dev/null @@ -1,5 +0,0 @@ -_base_ = '../cascade_rcnn/cascade_mask_rcnn_r50_fpn_1x_coco.py' -model = dict( - backbone=dict( - dcn=dict(type='DCN', deform_groups=1, fallback_on_stride=False), - stage_with_dcn=(False, True, True, True))) diff --git a/spaces/tonyassi/video-face-swap/tests/__init__.py b/spaces/tonyassi/video-face-swap/tests/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/trl-internal-testing/rlhf_dialog_experiment/README.md b/spaces/trl-internal-testing/rlhf_dialog_experiment/README.md deleted file mode 100644 index 2ff358dc04e37c9dbea65a487e280c49d33330df..0000000000000000000000000000000000000000 --- a/spaces/trl-internal-testing/rlhf_dialog_experiment/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Rlhf Dialog Experiment -emoji: 🚀 -colorFrom: blue -colorTo: yellow -sdk: gradio -sdk_version: 3.16.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/umichVision/virtex-redcaps/virtex/factories.py b/spaces/umichVision/virtex-redcaps/virtex/factories.py deleted file mode 100644 index 711e483e15bd8dc48e0ce565e190772fb8b8e981..0000000000000000000000000000000000000000 --- a/spaces/umichVision/virtex-redcaps/virtex/factories.py +++ /dev/null @@ -1,636 +0,0 @@ -r""" -This module is a collection of *factories* for creating objects of datasets, -models, optimizers and other useful components. For example, a ResNet-50 -visual backbone can be created as: - - .. code-block:: python - - >>> # Explicitly by name, args and kwargs: - >>> backbone = VisualBackboneFactory.create( - ... "torchvision::resnet50", pretrained=False - ... ) - >>> # Directly from a config object: - >>> _C = Config(override_list=["MODEL.VISUAL.NAME", "torchvision::resnet50"]) - >>> backbone = VisualBackboneFactory.from_config(_C) - -Creating directly from :class:`~virtex.config.Config` is fast and simple, and -ensures minimal changes throughout the codebase upon any change in the call -signature of underlying class; or config hierarchy. Refer description of -specific factories for more details. -""" -import re -from functools import partial -from typing import Any, Callable, Dict, Iterable, List - -import albumentations as alb -from torch import nn, optim - -import virtex.data as vdata -import virtex.models as vmodels -import virtex.utils.distributed as dist -from virtex.config import Config -from virtex.data import transforms as T -from virtex.data.tokenizers import SentencePieceBPETokenizer -from virtex.modules import visual_backbones, textual_heads -from virtex.optim import Lookahead, lr_scheduler -from virtex.utils.beam_search import AutoRegressiveBeamSearch -from virtex.utils.nucleus_sampling import AutoRegressiveNucleusSampling - -class Factory(object): - r""" - Base class for all factories. 
All factories must inherit this base class - and follow these guidelines for a consistent behavior: - - * Factory objects cannot be instantiated, doing ``factory = SomeFactory()`` - is illegal. Child classes should not implement ``__init__`` methods. - * All factories must have an attribute named ``PRODUCTS`` of type - ``Dict[str, Callable]``, which associates each class with a unique string - name which can be used to create it. - * All factories must implement one classmethod, :meth:`from_config` which - contains logic for creating an object directly by taking name and other - arguments directly from :class:`~virtex.config.Config`. They can use - :meth:`create` already implemented in this base class. - * :meth:`from_config` should not use too many extra arguments than the - config itself, unless necessary (such as model parameters for optimizer). - """ - - PRODUCTS: Dict[str, Callable] = {} - - def __init__(self): - raise ValueError( - f"""Cannot instantiate {self.__class__.__name__} object, use - `create` classmethod to create a product from this factory. - """ - ) - - @classmethod - def create(cls, name: str, *args, **kwargs) -> Any: - r"""Create an object by its name, args and kwargs.""" - if name not in cls.PRODUCTS: - raise KeyError(f"{cls.__class__.__name__} cannot create {name}.") - - return cls.PRODUCTS[name](*args, **kwargs) - - @classmethod - def from_config(cls, config: Config) -> Any: - r"""Create an object directly from config.""" - raise NotImplementedError - - -class TokenizerFactory(Factory): - r""" - Factory to create text tokenizers. This codebase ony supports one tokenizer - for now, but having a dedicated factory makes it easy to add more if needed. - - Possible choices: ``{"SentencePieceBPETokenizer"}``. - """ - - PRODUCTS: Dict[str, Callable] = { - "SentencePieceBPETokenizer": SentencePieceBPETokenizer - } - - @classmethod - def from_config(cls, config: Config) -> SentencePieceBPETokenizer: - r""" - Create a tokenizer directly from config. - - Parameters - ---------- - config: virtex.config.Config - Config object with all the parameters. - """ - - _C = config - - tokenizer = cls.create( - "SentencePieceBPETokenizer", model_path=_C.DATA.TOKENIZER_MODEL - ) - return tokenizer - - -class ImageTransformsFactory(Factory): - r""" - Factory to create image transformations for common preprocessing and data - augmentations. These are a mix of default transformations from - `albumentations `_ and - some extended ones defined in :mod:`virtex.data.transforms`. - - It uses sensible default values, however they can be provided with the name - in dict syntax. Example: ``random_resized_crop::{'scale': (0.08, 1.0)}`` - - .. note:: - - This factory does not implement :meth:`from_config` method. It is only - used by :class:`PretrainingDatasetFactory` and - :class:`DownstreamDatasetFactory`. - - Possible choices: ``{"center_crop", "horizontal_flip", "random_resized_crop", - "normalize", "global_resize", "color_jitter", "smallest_resize"}``. - """ - - # fmt: off - PRODUCTS: Dict[str, Callable] = { - # Input resize transforms: whenever selected, these are always applied. - # These transforms require one position argument: image dimension. 
- "random_resized_crop": partial( - T.RandomResizedSquareCrop, scale=(0.2, 1.0), ratio=(0.75, 1.333), p=1.0 - ), - "center_crop": partial(T.CenterSquareCrop, p=1.0), - "smallest_resize": partial(alb.SmallestMaxSize, p=1.0), - "global_resize": partial(T.SquareResize, p=1.0), - - # Keep hue limits small in color jitter because it changes color drastically - # and captions often mention colors. Apply with higher probability. - "color_jitter": partial( - alb.ColorJitter, brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1, p=0.8 - ), - "horizontal_flip": partial(T.HorizontalFlip, p=0.5), - - # Color normalization: whenever selected, always applied. This accepts images - # in [0, 255], requires mean and std in [0, 1] and normalizes to `N(0, 1)`. - "normalize": partial( - alb.Normalize, mean=T.IMAGENET_COLOR_MEAN, std=T.IMAGENET_COLOR_STD, p=1.0 - ), - } - # fmt: on - - @classmethod - def create(cls, name: str, *args, **kwargs) -> Any: - r"""Create an object by its name, args and kwargs.""" - - if "::" in name: - name, __kwargs = name.split("::") - _kwargs = eval(__kwargs) - else: - _kwargs = {} - - _kwargs.update(kwargs) - return super().create(name, *args, **_kwargs) - - @classmethod - def from_config(cls, config: Config): - r"""Augmentations cannot be created from config, only :meth:`create`.""" - raise NotImplementedError - - -class PretrainingDatasetFactory(Factory): - r""" - Factory to create :class:`~torch.utils.data.Dataset` s for pretraining - VirTex models. Datasets are created depending on pretraining task used. - Typically these datasets either provide image-caption pairs, or only images - from COCO Captions dataset (serialized to an LMDB file). - - As an exception, the dataset for ``multilabel_classification`` provides - COCO images and labels of their bounding box annotations. - - Possible choices: ``{"bicaptioning", "captioning", "masked_lm", - "token_classification", "multilabel_classification"}``. - """ - - PRODUCTS: Dict[str, Callable] = { - "virtex": vdata.CaptioningDataset, - "bicaptioning": vdata.CaptioningDataset, - "captioning": vdata.CaptioningDataset, - "masked_lm": vdata.MaskedLmDataset, - "token_classification": vdata.TokenClassificationDataset, - "multilabel_classification": vdata.MultiLabelClassificationDataset, - } - - @classmethod - def from_config(cls, config: Config, split: str = "train"): - r""" - Create a dataset directly from config. Names in this factory match with - names in :class:`PretrainingModelFactory` because both use same config - parameter ``MODEL.NAME`` to create objects. - - Parameters - ---------- - config: virtex.config.Config - Config object with all the parameters. - split: str, optional (default = "train") - Which split to load for the dataset. One of ``{"train", "val"}``. - """ - - _C = config - # Every dataset needs these two args. - kwargs = {"data_root": _C.DATA.ROOT, "split": split} - - # Create a list of image transformations based on transform names. - image_transform_list: List[Callable] = [] - - for name in getattr(_C.DATA, f"IMAGE_TRANSFORM_{split.upper()}"): - # Pass dimensions if cropping / resizing, else rely on the defaults - # as per `ImageTransformsFactory`. 
- if "resize" in name or "crop" in name: - image_transform_list.append( - ImageTransformsFactory.create(name, _C.DATA.IMAGE_CROP_SIZE) - ) - else: - image_transform_list.append(ImageTransformsFactory.create(name)) - - kwargs["image_transform"] = alb.Compose(image_transform_list) - - tokenizer = TokenizerFactory.from_config(_C) - - if _C.MODEL.NAME in {"virtex", "bicaptioning", "captioning"}: - kwargs.update( - tokenizer=tokenizer, - max_caption_length=_C.DATA.MAX_CAPTION_LENGTH, - use_single_caption=_C.DATA.USE_SINGLE_CAPTION, - percentage=_C.DATA.USE_PERCENTAGE if split == "train" else 100.0, - ) - - elif _C.MODEL.NAME == "token_classification": - kwargs.update( - tokenizer=tokenizer, max_caption_length=_C.DATA.MAX_CAPTION_LENGTH - ) - - elif _C.MODEL.NAME == "masked_lm": - kwargs.update( - tokenizer=tokenizer, - max_caption_length=_C.DATA.MAX_CAPTION_LENGTH, - use_single_caption=_C.DATA.USE_SINGLE_CAPTION, - percentage=_C.DATA.USE_PERCENTAGE if split == "train" else 100.0, - mask_proportion=_C.DATA.MASKED_LM.MASK_PROPORTION, - mask_probability=_C.DATA.MASKED_LM.MASK_PROBABILITY, - replace_probability=_C.DATA.MASKED_LM.REPLACE_PROBABILITY, - ) - - elif _C.MODEL.NAME in {"virtex_web", "miniclip_web"}: - # Remove "split" argument, not necessary. - _ = kwargs.pop("split") - kwargs.update( - batch_size=_C.OPTIM.BATCH_SIZE // dist.get_world_size(), - tokenizer=tokenizer, - max_caption_length=_C.DATA.MAX_CAPTION_LENGTH, - ) - - # Dataset names match with model names (and ofcourse pretext names). - return cls.create(_C.MODEL.NAME, **kwargs) - - -class DownstreamDatasetFactory(Factory): - r""" - Factory to create :class:`~torch.utils.data.Dataset` s for evaluating - VirTex models on downstream tasks. - - Possible choices: ``{"datasets/VOC2007", "datasets/imagenet"}``. - """ - - PRODUCTS: Dict[str, Callable] = { - "datasets/VOC2007": vdata.VOC07ClassificationDataset, - "datasets/imagenet": vdata.ImageNetDataset, - "datasets/inaturalist": vdata.INaturalist2018Dataset, - } - - @classmethod - def from_config(cls, config: Config, split: str = "train"): - r""" - Create a dataset directly from config. Names in this factory are paths - of dataset directories (relative to the project directory), because - config parameter ``DATA.ROOT`` is used to create objects. - - Parameters - ---------- - config: virtex.config.Config - Config object with all the parameters. - split: str, optional (default = "train") - Which split to load for the dataset. One of ``{"trainval", "test"}`` - for VOC2007, or one of ``{"train", "val"}`` for ImageNet. - """ - - _C = config - # Every dataset needs these two args. - kwargs = {"data_root": _C.DATA.ROOT, "split": split} - - # For VOC2007, `IMAGE_TRANSFORM_TRAIN` is used for "trainval" split and - # `IMAGE_TRANSFORM_VAL` is used fo "test" split. - image_transform_names: List[str] = list( - _C.DATA.IMAGE_TRANSFORM_TRAIN - if "train" in split - else _C.DATA.IMAGE_TRANSFORM_VAL - ) - # Create a list of image transformations based on names. - image_transform_list: List[Callable] = [] - - for name in image_transform_names: - # Pass dimensions for resize/crop, else rely on the defaults. 
- if name.split("::")[0] in { - "random_resized_crop", - "center_crop", - "global_resize", - }: - transform = ImageTransformsFactory.create(name, 224) - elif name.split("::")[0] in {"smallest_resize"}: - transform = ImageTransformsFactory.create(name, 256) - else: - transform = ImageTransformsFactory.create(name) - - image_transform_list.append(transform) - - kwargs["image_transform"] = alb.Compose(image_transform_list) - - return cls.create(_C.DATA.ROOT, **kwargs) - - -class VisualBackboneFactory(Factory): - r""" - Factory to create :mod:`~virtex.modules.visual_backbones`. This factory - supports any ResNet-like model from - `Torchvision `_. - Use the method name for model as in torchvision, for example, - ``torchvision::resnet50``, ``torchvision::wide_resnet50_2`` etc. - - Possible choices: ``{"torchvision", "timm"}``. - """ - - PRODUCTS: Dict[str, Callable] = { - "torchvision": visual_backbones.TorchvisionVisualBackbone, - "timm": visual_backbones.TimmVisualBackbone, - } - - @classmethod - def from_config(cls, config: Config) -> visual_backbones.VisualBackbone: - r""" - Create a visual backbone directly from config. - - Parameters - ---------- - config: virtex.config.Config - Config object with all the parameters. - """ - - _C = config - kwargs = {"visual_feature_size": _C.MODEL.VISUAL.FEATURE_SIZE} - - # Check the name for models from torchvision or timm. - package_name, cnn_name = _C.MODEL.VISUAL.NAME.split("::") - kwargs["pretrained"] = _C.MODEL.VISUAL.PRETRAINED - kwargs["frozen"] = _C.MODEL.VISUAL.FROZEN - - return cls.create(package_name, cnn_name, **kwargs) - - -class TextualHeadFactory(Factory): - r""" - Factory to create :mod:`~virtex.modules.textual_heads`. Architectural - hyperparameters for transformers can be specified as ``name::*``. - For example, ``transdec_postnorm::L1_H1024_A16_F4096`` would create a - transformer textual head with ``L = 1`` layers, ``H = 1024`` hidden size, - ``A = 16`` attention heads, and ``F = 4096`` size of feedforward layers. - - Textual head should be ``"none"`` for pretraining tasks which do not - involve language modeling, such as ``"token_classification"``. - - Possible choices: ``{"transdec_postnorm", "transdec_prenorm", "none"}``. - """ - - PRODUCTS: Dict[str, Callable] = { - "transdec_prenorm": partial( - textual_heads.TransformerDecoderTextualHead, norm_type="pre" - ), - "transdec_postnorm": partial( - textual_heads.TransformerDecoderTextualHead, norm_type="post" - ), - "transenc_postnorm": partial( - textual_heads.TransformerEncoderTextualHead, norm_type="post" - ), - "transenc_prenorm": partial( - textual_heads.TransformerEncoderTextualHead, norm_type="pre" - ), - "none": textual_heads.LinearTextualHead, - } - - @classmethod - def from_config(cls, config: Config) -> nn.Module: - r""" - Create a textual head directly from config. - - Parameters - ---------- - config: virtex.config.Config - Config object with all the parameters. - """ - - _C = config - name = _C.MODEL.TEXTUAL.NAME - kwargs = { - "visual_feature_size": _C.MODEL.VISUAL.FEATURE_SIZE, - "vocab_size": _C.DATA.VOCAB_SIZE, - } - - if "trans" in _C.MODEL.TEXTUAL.NAME: - # Get architectural hyper-params as per name by matching regex. - name, architecture = name.split("::") - architecture = re.match(r"L(\d+)_H(\d+)_A(\d+)_F(\d+)", architecture) - - num_layers = int(architecture.group(1)) - hidden_size = int(architecture.group(2)) - attention_heads = int(architecture.group(3)) - feedforward_size = int(architecture.group(4)) - - # Mask the future tokens for autoregressive captioning. 
- mask_future = _C.MODEL.NAME in {"virtex", "virtex_web", "captioning", "bicaptioning"} - - kwargs.update( - hidden_size=hidden_size, - num_layers=num_layers, - attention_heads=attention_heads, - feedforward_size=feedforward_size, - dropout=_C.MODEL.TEXTUAL.DROPOUT, - mask_future_positions=mask_future, - max_caption_length=_C.DATA.MAX_CAPTION_LENGTH, - padding_idx=_C.DATA.UNK_INDEX, - ) - return cls.create(name, **kwargs) - - -class PretrainingModelFactory(Factory): - r""" - Factory to create :mod:`~virtex.models` for different pretraining tasks. - - Possible choices: ``{"bicaptioning", "captioning", "masked_lm", - "token_classification", "multilabel_classification"}``. - """ - - PRODUCTS: Dict[str, Callable] = { - # First two are basically the same. Added for shorthand notation. - "virtex": vmodels.VirTexModel, - "bicaptioning": vmodels.BidirectionalCaptioningModel, - "captioning": vmodels.ForwardCaptioningModel, - "masked_lm": vmodels.MaskedLMModel, - "token_classification": vmodels.TokenClassificationModel, - "multilabel_classification": vmodels.MultiLabelClassificationModel, - "virtex_web": vmodels.VirTexModel, - "miniclip_web": vmodels.ImageTextContrastiveModel, - } - - @classmethod - def from_config(cls, config: Config) -> nn.Module: - r""" - Create a model directly from config. - - Parameters - ---------- - config: virtex.config.Config - Config object with all the parameters. - """ - - _C = config - - # Build visual and textual streams based on config. - visual = VisualBackboneFactory.from_config(_C) - textual = TextualHeadFactory.from_config(_C) - - # Add model specific kwargs. Refer call signatures of specific models - # for matching kwargs here. - if _C.MODEL.NAME in {"virtex", "captioning", "bicaptioning", "virtex_web"}: - kwargs = { - "sos_index": _C.DATA.SOS_INDEX, - "eos_index": _C.DATA.EOS_INDEX, - "label_smoothing": _C.MODEL.LABEL_SMOOTHING, - "decoder": CaptionDecoderFactory.from_config(_C), - } - elif _C.MODEL.NAME in {"miniclip_web"}: - kwargs = {"label_smoothing": _C.MODEL.LABEL_SMOOTHING} - - elif _C.MODEL.NAME == "token_classification": - kwargs = { - "ignore_indices": [ - _C.DATA.UNK_INDEX, - _C.DATA.SOS_INDEX, - _C.DATA.EOS_INDEX, - _C.DATA.MASK_INDEX, - ] - } - elif _C.MODEL.NAME == "multilabel_classification": - kwargs = {"ignore_indices": [0]} # background index - else: - kwargs = {} - - return cls.create(_C.MODEL.NAME, visual, textual, **kwargs) - - -class CaptionDecoderFactory(Factory): - r""" - Factory to create decoders from predicting captions from VirTex model. - - Possible choices: ``{"beam_search", "nucleus_sampling"}``. - """ - - PRODUCTS: Dict[str, Callable] = { - "beam_search": AutoRegressiveBeamSearch, - "nucleus_sampling": AutoRegressiveNucleusSampling, - } - - @classmethod - def from_config(cls, config: Config) -> nn.Module: - r""" - Create a model directly from config. - - Parameters - ---------- - config: virtex.config.Config - Config object with all the parameters. - """ - - _C = config - kwargs = { - "eos_index": _C.DATA.EOS_INDEX, - "max_steps": _C.MODEL.DECODER.MAX_DECODING_STEPS, - } - if _C.MODEL.DECODER.NAME == "beam_search": - kwargs["beam_size"] = _C.MODEL.DECODER.BEAM_SIZE - elif _C.MODEL.DECODER.NAME == "nucleus_sampling": - kwargs["nucleus_size"] = _C.MODEL.DECODER.NUCLEUS_SIZE - - return cls.create(_C.MODEL.DECODER.NAME, **kwargs) - - -class OptimizerFactory(Factory): - r"""Factory to create optimizers. 
Possible choices: ``{"sgd", "adamw"}``.""" - - PRODUCTS: Dict[str, Callable] = {"sgd": optim.SGD, "adamw": optim.AdamW} - - @classmethod - def from_config( - cls, config: Config, named_parameters: Iterable[Any] - ) -> optim.Optimizer: - r""" - Create an optimizer directly from config. - - Parameters - ---------- - config: virtex.config.Config - Config object with all the parameters. - named_parameters: Iterable - Named parameters of model (retrieved by ``model.named_parameters()``) - for the optimizer. We use named parameters to set different LR and - turn off weight decay for certain parameters based on their names. - """ - - _C = config - - # Set different learning rate for CNN and rest of the model during - # pretraining. This doesn't matter for downstream evaluation because - # there are no modules with "cnn" in their name. - # Also turn off weight decay for layer norm and bias in textual stream. - param_groups = [] - for name, param in named_parameters: - wd = 0.0 if re.match(_C.OPTIM.NO_DECAY, name) else _C.OPTIM.WEIGHT_DECAY - lr = _C.OPTIM.CNN_LR if "cnn" in name else _C.OPTIM.LR - param_groups.append({"params": [param], "lr": lr, "weight_decay": wd}) - - if _C.OPTIM.OPTIMIZER_NAME == "sgd": - kwargs = {"momentum": _C.OPTIM.SGD_MOMENTUM} - else: - kwargs = {} - - optimizer = cls.create(_C.OPTIM.OPTIMIZER_NAME, param_groups, **kwargs) - if _C.OPTIM.LOOKAHEAD.USE: - optimizer = Lookahead( - optimizer, k=_C.OPTIM.LOOKAHEAD.STEPS, alpha=_C.OPTIM.LOOKAHEAD.ALPHA - ) - return optimizer - - -class LRSchedulerFactory(Factory): - r""" - Factory to create LR schedulers. All schedulers have a built-in LR warmup - schedule before actual LR scheduling (decay) starts. - - Possible choices: ``{"none", "multistep", "linear", "cosine"}``. - """ - - PRODUCTS: Dict[str, Callable] = { - "none": lr_scheduler.LinearWarmupNoDecayLR, - "multistep": lr_scheduler.LinearWarmupMultiStepLR, - "linear": lr_scheduler.LinearWarmupLinearDecayLR, - "cosine": lr_scheduler.LinearWarmupCosineAnnealingLR, - } - - @classmethod - def from_config( - cls, config: Config, optimizer: optim.Optimizer - ) -> optim.lr_scheduler.LambdaLR: - r""" - Create an LR scheduler directly from config. - - Parameters - ---------- - config: virtex.config.Config - Config object with all the parameters. - optimizer: torch.optim.Optimizer - Optimizer on which LR scheduling would be performed. - """ - - _C = config - kwargs = { - "total_steps": _C.OPTIM.NUM_ITERATIONS, - "warmup_steps": _C.OPTIM.WARMUP_STEPS, - } - # Multistep LR requires multiplicative factor and milestones. - if _C.OPTIM.LR_DECAY_NAME == "multistep": - kwargs.update(gamma=_C.OPTIM.LR_GAMMA, milestones=_C.OPTIM.LR_STEPS) - - return cls.create(_C.OPTIM.LR_DECAY_NAME, optimizer, **kwargs) diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/AutoCAD LT For Mac 2017 Activation Code Keygen Free Download __HOT__.md b/spaces/usbethFlerru/sovits-modelsV2/example/AutoCAD LT For Mac 2017 Activation Code Keygen Free Download __HOT__.md deleted file mode 100644 index 20524bb36e5e72726bb52ca3bc6b455b377e9371..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/AutoCAD LT For Mac 2017 Activation Code Keygen Free Download __HOT__.md +++ /dev/null @@ -1,35 +0,0 @@ - -```html -

      How to Get AutoCAD LT for Mac 2017 Activation Code Keygen Free Download

      -

      If you are looking for a reliable and easy way to get AutoCAD LT for Mac 2017 activation code keygen free download, you have come to the right place. In this article, we will show you how to download and install AutoCAD LT for Mac 2017, a professional 2D drafting and documentation software that lets you create stunning designs and drawings on your Mac device. We will also provide you with a working activation code keygen that will help you activate the software for free and enjoy its full features.

      -

      AutoCAD LT for Mac 2017 Activation Code Keygen Free Download


      Download Zip ……… https://urlcod.com/2uyVV0



      -AutoCAD LT for Mac 2017 -

      What is AutoCAD LT for Mac 2017?

      -

      AutoCAD LT for Mac 2017 is a version of AutoCAD LT, popular 2D CAD software used by millions of engineers, architects, designers, and drafters worldwide. AutoCAD LT for Mac 2017 is designed specifically for Mac users who want to create precise and detailed drawings on their devices. With AutoCAD LT for Mac 2017, you can:

      -
        -
      • Create and edit 2D drawings with powerful tools and commands
      • -
      • Work with layers, blocks, dimensions, annotations, and more
      • -
      • Share your drawings with other users and collaborate online
      • -
      • Access online libraries of symbols, parts, and templates
      • -
      • Customize your workspace and preferences
      • -
      • Integrate with other Autodesk products and cloud services
      • -
      -

      AutoCAD LT for Mac 2017 is compatible with macOS Sierra (10.12) or later. It requires a minimum of 3 GB of RAM, 3 GB of free disk space, and a 1280 x 800 display resolution.

      -

      How to Download AutoCAD LT for Mac 2017?

      -

      To download AutoCAD LT for Mac 2017, you need to follow these steps:

      -
        -
      1. Go to the official website of Autodesk here and click on the "Download Free Trial" button.
      2. -
      3. Select your operating system (Mac) and fill in the required information. You will need to create an Autodesk account or sign in with an existing one.
      4. -
      5. Click on the "Download Now" button and wait for the installer file to be downloaded.
      6. -
      7. Open the installer file and follow the instructions to install AutoCAD LT for Mac 2017 on your device.
      8. -
      -

      Congratulations! You have successfully downloaded AutoCAD LT for Mac 2017. However, this is only a free trial version that will expire after 30 days. To activate the software and use it without any limitations, you will need an activation code keygen.

      -

      How to Get AutoCAD LT for Mac 2017 Activation Code Keygen?

      -

      An activation code keygen is a program that generates valid activation codes for AutoCAD LT for Mac 2017. By using an activation code keygen, you can bypass the registration process and activate the software for free. However, not all activation code keygens are safe and reliable. Some of them may contain viruses, malware, or spyware that can harm your device or steal your personal information.

      -

      -

      That's why we recommend you to use our trusted activation code keygen that we have tested and verified. Our activation code keygen is clean, secure, and easy to use. It can generate unlimited activation codes for AutoCAD LT for Mac 2017 in seconds. Here's how to use it:

      -
        -
      1. Download our activation code keygen from here. It is a zip file that contains the keygen.exe file.
      2. -
      3. Extract the zip file and run

        d5da3c52bf
        -
        -
        \ No newline at end of file diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Baixar Pegadinhas do Mucao Pra Gratis Conhea as Histrias e os Personagens das Pegadinhas.md b/spaces/usbethFlerru/sovits-modelsV2/example/Baixar Pegadinhas do Mucao Pra Gratis Conhea as Histrias e os Personagens das Pegadinhas.md deleted file mode 100644 index e43a8b5472acfe885b2369bb914846464f58a019..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Baixar Pegadinhas do Mucao Pra Gratis Conhea as Histrias e os Personagens das Pegadinhas.md +++ /dev/null @@ -1,6 +0,0 @@ - -

        Here on Musiky you can listen to and download music easily using our MP3 search engine; it is free and simple. No registration or login is needed: just type the name of the song or artist and search, it's that simple, enjoy. You don't need to download any programs or apps; you can download pegadinha do mucao dona denise a morais moreira songs from anywhere on your phone, computer, or any other device connected to the internet. Did you enjoy downloading pegadinha do mucao dona denise a morais moreira or using our music search engine? Share it with your friends and on social media and help us grow. The latest pegadinha do mucao dona denise a morais moreira songs of 2023 can be found here.

        -

        The Musiky music search site is an incredible platform that offers its users an easy and fast way to find the songs they are looking for. With the search tool you can find pegadinha do mucao dona denise a morais moreira and much more; you can search for artists, songs, or albums in a simple and intuitive way. You don't need to register or download an app to access the music library: all you have to do is visit the site and start searching for Pegadinha Do Mucao Dona Denise A Morais Moreira. The variety of musical genres available on the site is wide, which means you can find songs of every kind, from pop to rock, as well as hip hop, R&B, and much more. In addition, the site offers music in high quality, so you can enjoy an incredible audio experience. The Musiky music search engine is also constantly updated, which means you always have access to the latest releases and musical hits. This is great, because you never miss the chance to hear the newest and most popular songs. In short, if you are looking for an easy, convenient, and free way to find and download music, the music search site is the right choice. With the search tool, you can find the songs you are looking for in a matter of seconds, including Pegadinha Do Mucao Dona Denise A Morais Moreira, without having to worry about registering or downloading an app. Moreover, with the wide variety of genres and artists available, you will always find music that matches your taste. So don't waste any more time, try the music search site today!

        -

        Baixar Pegadinhas do Mucao Pra Gratis


        Download Zip –––––>>> https://urlcod.com/2uyU86



        aaccfb2cb3
        -
        -
        \ No newline at end of file diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Dingoo A320 Complete ROMS Pack - ROM GBA GBC SNES SMD NeoGeo Serial Key The Ultimate Collection of Classic Games for Your Portable Console.md b/spaces/usbethFlerru/sovits-modelsV2/example/Dingoo A320 Complete ROMS Pack - ROM GBA GBC SNES SMD NeoGeo Serial Key The Ultimate Collection of Classic Games for Your Portable Console.md deleted file mode 100644 index 9589579fc5fe8fae1a2ef062c19358d0c0820ab9..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Dingoo A320 Complete ROMS Pack - ROM GBA GBC SNES SMD NeoGeo Serial Key The Ultimate Collection of Classic Games for Your Portable Console.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Dingoo A320 Complete ROMS Pack - ROM GBA, GBC, SNES, SMD, NeoGeo Serial Key


        Download ✯✯✯ https://urlcod.com/2uyUEH



        -
        - aaccfb2cb3
        -
        -
        -

        diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/yolo/data/dataset.py b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/yolo/data/dataset.py deleted file mode 100644 index 17e6d47c109f9f9088ce01602dc9b06e0a89fde9..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/yolo/data/dataset.py +++ /dev/null @@ -1,274 +0,0 @@ -# Ultralytics YOLO 🚀, AGPL-3.0 license - -from itertools import repeat -from multiprocessing.pool import ThreadPool -from pathlib import Path - -import cv2 -import numpy as np -import torch -import torchvision -from tqdm import tqdm - -from ..utils import LOCAL_RANK, NUM_THREADS, TQDM_BAR_FORMAT, is_dir_writeable -from .augment import Compose, Format, Instances, LetterBox, classify_albumentations, classify_transforms, v8_transforms -from .base import BaseDataset -from .utils import HELP_URL, LOGGER, get_hash, img2label_paths, verify_image_label - - -class YOLODataset(BaseDataset): - """ - Dataset class for loading object detection and/or segmentation labels in YOLO format. - - Args: - data (dict, optional): A dataset YAML dictionary. Defaults to None. - use_segments (bool, optional): If True, segmentation masks are used as labels. Defaults to False. - use_keypoints (bool, optional): If True, keypoints are used as labels. Defaults to False. - - Returns: - (torch.utils.data.Dataset): A PyTorch dataset object that can be used for training an object detection model. - """ - cache_version = '1.0.2' # dataset labels *.cache version, >= 1.0.0 for YOLOv8 - rand_interp_methods = [cv2.INTER_NEAREST, cv2.INTER_LINEAR, cv2.INTER_CUBIC, cv2.INTER_AREA, cv2.INTER_LANCZOS4] - - def __init__(self, *args, data=None, use_segments=False, use_keypoints=False, **kwargs): - self.use_segments = use_segments - self.use_keypoints = use_keypoints - self.data = data - assert not (self.use_segments and self.use_keypoints), 'Can not use both segments and keypoints.' - super().__init__(*args, **kwargs) - - def cache_labels(self, path=Path('./labels.cache')): - """Cache dataset labels, check images and read shapes. - Args: - path (Path): path where to save the cache file (default: Path('./labels.cache')). - Returns: - (dict): labels. - """ - x = {'labels': []} - nm, nf, ne, nc, msgs = 0, 0, 0, 0, [] # number missing, found, empty, corrupt, messages - desc = f'{self.prefix}Scanning {path.parent / path.stem}...' - total = len(self.im_files) - nkpt, ndim = self.data.get('kpt_shape', (0, 0)) - if self.use_keypoints and (nkpt <= 0 or ndim not in (2, 3)): - raise ValueError("'kpt_shape' in data.yaml missing or incorrect. Should be a list with [number of " - "keypoints, number of dims (2 for x,y or 3 for x,y,visible)], i.e. 
'kpt_shape: [17, 3]'") - with ThreadPool(NUM_THREADS) as pool: - results = pool.imap(func=verify_image_label, - iterable=zip(self.im_files, self.label_files, repeat(self.prefix), - repeat(self.use_keypoints), repeat(len(self.data['names'])), repeat(nkpt), - repeat(ndim))) - pbar = tqdm(results, desc=desc, total=total, bar_format=TQDM_BAR_FORMAT) - for im_file, lb, shape, segments, keypoint, nm_f, nf_f, ne_f, nc_f, msg in pbar: - nm += nm_f - nf += nf_f - ne += ne_f - nc += nc_f - if im_file: - x['labels'].append( - dict( - im_file=im_file, - shape=shape, - cls=lb[:, 0:1], # n, 1 - bboxes=lb[:, 1:], # n, 4 - segments=segments, - keypoints=keypoint, - normalized=True, - bbox_format='xywh')) - if msg: - msgs.append(msg) - pbar.desc = f'{desc} {nf} images, {nm + ne} backgrounds, {nc} corrupt' - pbar.close() - - if msgs: - LOGGER.info('\n'.join(msgs)) - if nf == 0: - LOGGER.warning(f'{self.prefix}WARNING ⚠️ No labels found in {path}. {HELP_URL}') - x['hash'] = get_hash(self.label_files + self.im_files) - x['results'] = nf, nm, ne, nc, len(self.im_files) - x['msgs'] = msgs # warnings - x['version'] = self.cache_version # cache version - if is_dir_writeable(path.parent): - if path.exists(): - path.unlink() # remove *.cache file if exists - np.save(str(path), x) # save cache for next time - path.with_suffix('.cache.npy').rename(path) # remove .npy suffix - LOGGER.info(f'{self.prefix}New cache created: {path}') - else: - LOGGER.warning(f'{self.prefix}WARNING ⚠️ Cache directory {path.parent} is not writeable, cache not saved.') - return x - - def get_labels(self): - """Returns dictionary of labels for YOLO training.""" - self.label_files = img2label_paths(self.im_files) - cache_path = Path(self.label_files[0]).parent.with_suffix('.cache') - try: - import gc - gc.disable() # reduce pickle load time https://github.com/ultralytics/ultralytics/pull/1585 - cache, exists = np.load(str(cache_path), allow_pickle=True).item(), True # load dict - gc.enable() - assert cache['version'] == self.cache_version # matches current version - assert cache['hash'] == get_hash(self.label_files + self.im_files) # identical hash - except (FileNotFoundError, AssertionError, AttributeError): - cache, exists = self.cache_labels(cache_path), False # run cache ops - - # Display cache - nf, nm, ne, nc, n = cache.pop('results') # found, missing, empty, corrupt, total - if exists and LOCAL_RANK in (-1, 0): - d = f'Scanning {cache_path}... {nf} images, {nm + ne} backgrounds, {nc} corrupt' - tqdm(None, desc=self.prefix + d, total=n, initial=n, bar_format=TQDM_BAR_FORMAT) # display cache results - if cache['msgs']: - LOGGER.info('\n'.join(cache['msgs'])) # display warnings - if nf == 0: # number of labels found - raise FileNotFoundError(f'{self.prefix}No labels found in {cache_path}, can not start training. {HELP_URL}') - - # Read cache - [cache.pop(k) for k in ('hash', 'version', 'msgs')] # remove items - labels = cache['labels'] - self.im_files = [lb['im_file'] for lb in labels] # update im_files - - # Check if the dataset is all boxes or all segments - lengths = ((len(lb['cls']), len(lb['bboxes']), len(lb['segments'])) for lb in labels) - len_cls, len_boxes, len_segments = (sum(x) for x in zip(*lengths)) - if len_segments and len_boxes != len_segments: - LOGGER.warning( - f'WARNING ⚠️ Box and segment counts should be equal, but got len(segments) = {len_segments}, ' - f'len(boxes) = {len_boxes}. To resolve this only boxes will be used and all segments will be removed. 
' - 'To avoid this please supply either a detect or segment dataset, not a detect-segment mixed dataset.') - for lb in labels: - lb['segments'] = [] - if len_cls == 0: - raise ValueError(f'All labels empty in {cache_path}, can not start training without labels. {HELP_URL}') - return labels - - # TODO: use hyp config to set all these augmentations - def build_transforms(self, hyp=None): - """Builds and appends transforms to the list.""" - if self.augment: - hyp.mosaic = hyp.mosaic if self.augment and not self.rect else 0.0 - hyp.mixup = hyp.mixup if self.augment and not self.rect else 0.0 - transforms = v8_transforms(self, self.imgsz, hyp) - else: - transforms = Compose([LetterBox(new_shape=(self.imgsz, self.imgsz), scaleup=False)]) - transforms.append( - Format(bbox_format='xywh', - normalize=True, - return_mask=self.use_segments, - return_keypoint=self.use_keypoints, - batch_idx=True, - mask_ratio=hyp.mask_ratio, - mask_overlap=hyp.overlap_mask)) - return transforms - - def close_mosaic(self, hyp): - """Sets mosaic, copy_paste and mixup options to 0.0 and builds transformations.""" - hyp.mosaic = 0.0 # set mosaic ratio=0.0 - hyp.copy_paste = 0.0 # keep the same behavior as previous v8 close-mosaic - hyp.mixup = 0.0 # keep the same behavior as previous v8 close-mosaic - self.transforms = self.build_transforms(hyp) - - def update_labels_info(self, label): - """custom your label format here.""" - # NOTE: cls is not with bboxes now, classification and semantic segmentation need an independent cls label - # we can make it also support classification and semantic segmentation by add or remove some dict keys there. - bboxes = label.pop('bboxes') - segments = label.pop('segments') - keypoints = label.pop('keypoints', None) - bbox_format = label.pop('bbox_format') - normalized = label.pop('normalized') - label['instances'] = Instances(bboxes, segments, keypoints, bbox_format=bbox_format, normalized=normalized) - return label - - @staticmethod - def collate_fn(batch): - """Collates data samples into batches.""" - new_batch = {} - keys = batch[0].keys() - values = list(zip(*[list(b.values()) for b in batch])) - for i, k in enumerate(keys): - value = values[i] - if k == 'img': - value = torch.stack(value, 0) - if k in ['masks', 'keypoints', 'bboxes', 'cls']: - value = torch.cat(value, 0) - new_batch[k] = value - new_batch['batch_idx'] = list(new_batch['batch_idx']) - for i in range(len(new_batch['batch_idx'])): - new_batch['batch_idx'][i] += i # add target image index for build_targets() - new_batch['batch_idx'] = torch.cat(new_batch['batch_idx'], 0) - return new_batch - - -# Classification dataloaders ------------------------------------------------------------------------------------------- -class ClassificationDataset(torchvision.datasets.ImageFolder): - """ - YOLO Classification Dataset. - - Args: - root (str): Dataset path. - - Attributes: - cache_ram (bool): True if images should be cached in RAM, False otherwise. - cache_disk (bool): True if images should be cached on disk, False otherwise. - samples (list): List of samples containing file, index, npy, and im. - torch_transforms (callable): torchvision transforms applied to the dataset. - album_transforms (callable, optional): Albumentations transforms applied to the dataset if augment is True. - """ - - def __init__(self, root, args, augment=False, cache=False): - """ - Initialize YOLO object with root, image size, augmentations, and cache settings. - - Args: - root (str): Dataset path. 
- args (Namespace): Argument parser containing dataset related settings. - augment (bool, optional): True if dataset should be augmented, False otherwise. Defaults to False. - cache (bool | str | optional): Cache setting, can be True, False, 'ram' or 'disk'. Defaults to False. - """ - super().__init__(root=root) - if augment and args.fraction < 1.0: # reduce training fraction - self.samples = self.samples[:round(len(self.samples) * args.fraction)] - self.cache_ram = cache is True or cache == 'ram' - self.cache_disk = cache == 'disk' - self.samples = [list(x) + [Path(x[0]).with_suffix('.npy'), None] for x in self.samples] # file, index, npy, im - self.torch_transforms = classify_transforms(args.imgsz) - self.album_transforms = classify_albumentations( - augment=augment, - size=args.imgsz, - scale=(1.0 - args.scale, 1.0), # (0.08, 1.0) - hflip=args.fliplr, - vflip=args.flipud, - hsv_h=args.hsv_h, # HSV-Hue augmentation (fraction) - hsv_s=args.hsv_s, # HSV-Saturation augmentation (fraction) - hsv_v=args.hsv_v, # HSV-Value augmentation (fraction) - mean=(0.0, 0.0, 0.0), # IMAGENET_MEAN - std=(1.0, 1.0, 1.0), # IMAGENET_STD - auto_aug=False) if augment else None - - def __getitem__(self, i): - """Returns subset of data and targets corresponding to given indices.""" - f, j, fn, im = self.samples[i] # filename, index, filename.with_suffix('.npy'), image - if self.cache_ram and im is None: - im = self.samples[i][3] = cv2.imread(f) - elif self.cache_disk: - if not fn.exists(): # load npy - np.save(fn.as_posix(), cv2.imread(f)) - im = np.load(fn) - else: # read image - im = cv2.imread(f) # BGR - if self.album_transforms: - sample = self.album_transforms(image=cv2.cvtColor(im, cv2.COLOR_BGR2RGB))['image'] - else: - sample = self.torch_transforms(im) - return {'img': sample, 'cls': j} - - def __len__(self) -> int: - return len(self.samples) - - -# TODO: support semantic segmentation -class SemanticDataset(BaseDataset): - - def __init__(self): - """Initialize a SemanticDataset object.""" - super().__init__() diff --git a/spaces/vanderbilt-dsi/free-speech-app/README.md b/spaces/vanderbilt-dsi/free-speech-app/README.md deleted file mode 100644 index a3fba8ade52fcb481b9924b4eb3232381e4332db..0000000000000000000000000000000000000000 --- a/spaces/vanderbilt-dsi/free-speech-app/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Free Speech App -emoji: 🏆 -colorFrom: purple -colorTo: pink -sdk: streamlit -sdk_version: 1.27.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/vobecant/DaS/segmenter_model/decoder.py b/spaces/vobecant/DaS/segmenter_model/decoder.py deleted file mode 100644 index c62a9ba0f5d544e3bfd278d15a474af469e5381f..0000000000000000000000000000000000000000 --- a/spaces/vobecant/DaS/segmenter_model/decoder.py +++ /dev/null @@ -1,214 +0,0 @@ -import math -import torch -import torch.nn as nn -import torch.nn.functional as F -from einops import rearrange - -from timm.models.layers import trunc_normal_ - -from segmenter_model.blocks import Block, FeedForward -from segmenter_model.utils import init_weights - - -class DecoderLinear(nn.Module): - def __init__(self, n_cls, patch_size, d_encoder): - super().__init__() - - self.d_encoder = d_encoder - self.patch_size = patch_size - self.n_cls = n_cls - - self.head = nn.Linear(self.d_encoder, n_cls) - self.apply(init_weights) - - @torch.jit.ignore - def no_weight_decay(self): - return set() - - def forward(self, x, im_size): - H, W = im_size - GS 
= H // self.patch_size - x = self.head(x) - x = rearrange(x, "b (h w) c -> b c h w", h=GS) - - return x - - -class MaskTransformer(nn.Module): - def __init__( - self, - n_cls, - patch_size, - d_encoder, - n_layers, - n_heads, - d_model, - d_ff, - drop_path_rate, - dropout, - ): - super().__init__() - self.d_encoder = d_encoder - self.patch_size = patch_size - self.n_layers = n_layers - self.n_cls = n_cls - self.d_model = d_model - self.d_ff = d_ff - self.scale = d_model ** -0.5 - - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, n_layers)] - self.blocks = nn.ModuleList( - [Block(d_model, n_heads, d_ff, dropout, dpr[i]) for i in range(n_layers)] - ) - - self.cls_emb = nn.Parameter(torch.randn(1, n_cls, d_model)) - self.proj_dec = nn.Linear(d_encoder, d_model) - - self.proj_patch = nn.Parameter(self.scale * torch.randn(d_model, d_model)) - self.proj_classes = nn.Parameter(self.scale * torch.randn(d_model, d_model)) - - self.decoder_norm = nn.LayerNorm(d_model) - self.mask_norm = nn.LayerNorm(n_cls) - - self.apply(init_weights) - trunc_normal_(self.cls_emb, std=0.02) - - @torch.jit.ignore - def no_weight_decay(self): - return {"cls_emb"} - - def forward(self, x, im_size, features_only=False, no_rearrange=False): - H, W = im_size - GS = H // self.patch_size - - # project from the encoder dimensionality to the decoder dimensionality (usually the same) - x = self.proj_dec(x) - # reshape the class embedding token - cls_emb = self.cls_emb.expand(x.size(0), -1, -1) - # concatenate the class embedding token to the input - x = torch.cat((x, cls_emb), 1) - # forward the concatenated tokens through decoder blocks - for blk in self.blocks: - x = blk(x) - # perform normalization - x = self.decoder_norm(x) - - # split to patch features and class-segmentation features - patches, cls_seg_feat = x[:, : -self.n_cls], x[:, -self.n_cls:] - - # project the patch features - patches = patches @ self.proj_patch - - if features_only: - if not no_rearrange: - features = rearrange(patches, "b (h w) n -> b n h w", h=int(GS)) - else: - features = patches - return features - - # project the class-segmentation features - cls_seg_feat = cls_seg_feat @ self.proj_classes - - # scalar product between L2-normalized patch embeddings and class embeddings -> masks - patches = patches / patches.norm(dim=-1, keepdim=True) - cls_seg_feat = cls_seg_feat / cls_seg_feat.norm(dim=-1, keepdim=True) - masks = patches @ cls_seg_feat.transpose(1, 2) - - masks = self.mask_norm(masks) - if not no_rearrange: - masks = rearrange(masks, "b (h w) n -> b n h w", h=int(GS)) - - return masks - - def get_attention_map(self, x, layer_id): - if layer_id >= self.n_layers or layer_id < 0: - raise ValueError( - f"Provided layer_id: {layer_id} is not valid. 0 <= {layer_id} < {self.n_layers}." 
- ) - x = self.proj_dec(x) - cls_emb = self.cls_emb.expand(x.size(0), -1, -1) - x = torch.cat((x, cls_emb), 1) - for i, blk in enumerate(self.blocks): - if i < layer_id: - x = blk(x) - else: - return blk(x, return_attention=True) - - -class DeepLabHead(nn.Sequential): - def __init__(self, in_channels, num_classes, patch_size=None): - super(DeepLabHead, self).__init__( - ASPP(in_channels, [12, 24, 36]), - nn.Conv2d(256, 256, 3, padding=1, bias=False), - nn.BatchNorm2d(256), - nn.ReLU(), - nn.Conv2d(256, num_classes, 1) - ) - self.patch_size = patch_size - - def forward(self, x, im_size=None): - if len(x.shape) == 3: - # features from ViT - assert im_size is not None and self.patch_size is not None - H, W = im_size - GS = H // self.patch_size - x = rearrange(x, "b (h w) n -> b n h w", h=int(GS)).contiguous() - for module in self: - x = module(x) - return x - - -class ASPPConv(nn.Sequential): - def __init__(self, in_channels, out_channels, dilation): - modules = [ - nn.Conv2d(in_channels, out_channels, 3, padding=dilation, dilation=dilation, bias=False), - nn.BatchNorm2d(out_channels), - nn.ReLU() - ] - super(ASPPConv, self).__init__(*modules) - - -class ASPPPooling(nn.Sequential): - def __init__(self, in_channels, out_channels): - super(ASPPPooling, self).__init__( - nn.AdaptiveAvgPool2d(1), - nn.Conv2d(in_channels, out_channels, 1, bias=False), - nn.BatchNorm2d(out_channels), - nn.ReLU()) - - def forward(self, x): - size = x.shape[-2:] - for mod in self: - x = mod(x) - return F.interpolate(x, size=size, mode='bilinear', align_corners=False) - - -class ASPP(nn.Module): - def __init__(self, in_channels, atrous_rates, out_channels=256): - super(ASPP, self).__init__() - modules = [] - modules.append(nn.Sequential( - nn.Conv2d(in_channels, out_channels, 1, bias=False), - nn.BatchNorm2d(out_channels), - nn.ReLU())) - - rates = tuple(atrous_rates) - for rate in rates: - modules.append(ASPPConv(in_channels, out_channels, rate)) - - modules.append(ASPPPooling(in_channels, out_channels)) - - self.convs = nn.ModuleList(modules) - - self.project = nn.Sequential( - nn.Conv2d(5 * out_channels, out_channels, 1, bias=False), - nn.BatchNorm2d(out_channels), - nn.ReLU(), - nn.Dropout(0.5)) - - def forward(self, x): - res = [] - for conv in self.convs: - res.append(conv(x)) - res = torch.cat(res, dim=1) - return self.project(res) diff --git a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/runner/hooks/sync_buffer.py b/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/runner/hooks/sync_buffer.py deleted file mode 100644 index 6376b7ff894280cb2782243b25e8973650591577..0000000000000000000000000000000000000000 --- a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/runner/hooks/sync_buffer.py +++ /dev/null @@ -1,22 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..dist_utils import allreduce_params -from .hook import HOOKS, Hook - - -@HOOKS.register_module() -class SyncBuffersHook(Hook): - """Synchronize model buffers such as running_mean and running_var in BN at - the end of each epoch. - - Args: - distributed (bool): Whether distributed training is used. It is - effective only for distributed training. Defaults to True. 
- """ - - def __init__(self, distributed=True): - self.distributed = distributed - - def after_epoch(self, runner): - """All-reduce model buffers at the end of each epoch.""" - if self.distributed: - allreduce_params(runner.model.buffers()) diff --git a/spaces/wffcyrus/MetaGPT-v1/tests/metagpt/document_store/test_chromadb_store.py b/spaces/wffcyrus/MetaGPT-v1/tests/metagpt/document_store/test_chromadb_store.py deleted file mode 100644 index f8c11e1ca5090b096d6e08b37844e04bfb516de1..0000000000000000000000000000000000000000 --- a/spaces/wffcyrus/MetaGPT-v1/tests/metagpt/document_store/test_chromadb_store.py +++ /dev/null @@ -1,27 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/6/6 00:41 -@Author : alexanderwu -@File : test_chromadb_store.py -""" -from metagpt.document_store.chromadb_store import ChromaStore - - -# @pytest.mark.skip() -def test_chroma_store(): - """FIXME:chroma使用感觉很诡异,一用Python就挂,测试用例里也是""" - # 创建 ChromaStore 实例,使用 'sample_collection' 集合 - document_store = ChromaStore('sample_collection_1') - - # 使用 write 方法添加多个文档 - document_store.write(["This is document1", "This is document2"], - [{"source": "google-docs"}, {"source": "notion"}], - ["doc1", "doc2"]) - - # 使用 add 方法添加一个文档 - document_store.add("This is document3", {"source": "notion"}, "doc3") - - # 搜索文档 - results = document_store.search("This is a query document", n_results=3) - assert len(results) > 0 diff --git a/spaces/wffcyrus/MetaGPT-v1/tests/metagpt/roles/test_architect.py b/spaces/wffcyrus/MetaGPT-v1/tests/metagpt/roles/test_architect.py deleted file mode 100644 index d44e0d923287fec90c747c7c04961bfbcb5a6a0f..0000000000000000000000000000000000000000 --- a/spaces/wffcyrus/MetaGPT-v1/tests/metagpt/roles/test_architect.py +++ /dev/null @@ -1,21 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/20 14:37 -@Author : alexanderwu -@File : test_architect.py -""" -import pytest - -from metagpt.logs import logger -from metagpt.roles import Architect -from tests.metagpt.roles.mock import MockMessages - - -@pytest.mark.asyncio -async def test_architect(): - role = Architect() - role.recv(MockMessages.req) - rsp = await role.handle(MockMessages.prd) - logger.info(rsp) - assert len(rsp.content) > 0 diff --git a/spaces/wong26/faster-whisper-webui/app.py b/spaces/wong26/faster-whisper-webui/app.py deleted file mode 100644 index 52022cfb7b3d216bf26a3890f8fae6c7239636df..0000000000000000000000000000000000000000 --- a/spaces/wong26/faster-whisper-webui/app.py +++ /dev/null @@ -1,627 +0,0 @@ -from datetime import datetime -import json -import math -from typing import Iterator, Union -import argparse - -from io import StringIO -import os -import pathlib -import tempfile -import zipfile -import numpy as np - -import torch - -from src.config import VAD_INITIAL_PROMPT_MODE_VALUES, ApplicationConfig, VadInitialPromptMode -from src.hooks.progressListener import ProgressListener -from src.hooks.subTaskProgressListener import SubTaskProgressListener -from src.hooks.whisperProgressHook import create_progress_listener_handle -from src.languages import get_language_names -from src.modelCache import ModelCache -from src.prompts.jsonPromptStrategy import JsonPromptStrategy -from src.prompts.prependPromptStrategy import PrependPromptStrategy -from src.source import get_audio_source_collection -from src.vadParallel import ParallelContext, ParallelTranscription - -# External programs -import ffmpeg - -# UI -import gradio as gr - -from src.download import ExceededMaximumDuration, download_url -from 
src.utils import optional_int, slugify, write_srt, write_vtt -from src.vad import AbstractTranscription, NonSpeechStrategy, PeriodicTranscriptionConfig, TranscriptionConfig, VadPeriodicTranscription, VadSileroTranscription -from src.whisper.abstractWhisperContainer import AbstractWhisperContainer -from src.whisper.whisperFactory import create_whisper_container - -# Configure more application defaults in config.json5 - -# Gradio seems to truncate files without keeping the extension, so we need to truncate the file prefix ourself -MAX_FILE_PREFIX_LENGTH = 17 - -# Limit auto_parallel to a certain number of CPUs (specify vad_cpu_cores to get a higher number) -MAX_AUTO_CPU_CORES = 8 - -WHISPER_MODELS = ["tiny", "base", "small", "medium", "large", "large-v1", "large-v2"] - -class VadOptions: - def __init__(self, vad: str = None, vadMergeWindow: float = 5, vadMaxMergeSize: float = 150, vadPadding: float = 1, vadPromptWindow: float = 1, - vadInitialPromptMode: Union[VadInitialPromptMode, str] = VadInitialPromptMode.PREPREND_FIRST_SEGMENT): - self.vad = vad - self.vadMergeWindow = vadMergeWindow - self.vadMaxMergeSize = vadMaxMergeSize - self.vadPadding = vadPadding - self.vadPromptWindow = vadPromptWindow - self.vadInitialPromptMode = vadInitialPromptMode if isinstance(vadInitialPromptMode, VadInitialPromptMode) \ - else VadInitialPromptMode.from_string(vadInitialPromptMode) - -class WhisperTranscriber: - def __init__(self, input_audio_max_duration: float = None, vad_process_timeout: float = None, - vad_cpu_cores: int = 1, delete_uploaded_files: bool = False, output_dir: str = None, - app_config: ApplicationConfig = None): - self.model_cache = ModelCache() - self.parallel_device_list = None - self.gpu_parallel_context = None - self.cpu_parallel_context = None - self.vad_process_timeout = vad_process_timeout - self.vad_cpu_cores = vad_cpu_cores - - self.vad_model = None - self.inputAudioMaxDuration = input_audio_max_duration - self.deleteUploadedFiles = delete_uploaded_files - self.output_dir = output_dir - - self.app_config = app_config - - def set_parallel_devices(self, vad_parallel_devices: str): - self.parallel_device_list = [ device.strip() for device in vad_parallel_devices.split(",") ] if vad_parallel_devices else None - - def set_auto_parallel(self, auto_parallel: bool): - if auto_parallel: - if torch.cuda.is_available(): - self.parallel_device_list = [ str(gpu_id) for gpu_id in range(torch.cuda.device_count())] - - self.vad_cpu_cores = min(os.cpu_count(), MAX_AUTO_CPU_CORES) - print("[Auto parallel] Using GPU devices " + str(self.parallel_device_list) + " and " + str(self.vad_cpu_cores) + " CPU cores for VAD/transcription.") - - # Entry function for the simple tab - def transcribe_webui_simple(self, modelName, languageName, urlData, multipleFiles, microphoneData, task, - vad, vadMergeWindow, vadMaxMergeSize, - word_timestamps: bool = False, highlight_words: bool = False): - return self.transcribe_webui_simple_progress(modelName, languageName, urlData, multipleFiles, microphoneData, task, - vad, vadMergeWindow, vadMaxMergeSize, - word_timestamps, highlight_words) - - # Entry function for the simple tab progress - def transcribe_webui_simple_progress(self, modelName, languageName, urlData, multipleFiles, microphoneData, task, - vad, vadMergeWindow, vadMaxMergeSize, - word_timestamps: bool = False, highlight_words: bool = False, - progress=gr.Progress()): - - vadOptions = VadOptions(vad, vadMergeWindow, vadMaxMergeSize, self.app_config.vad_padding, self.app_config.vad_prompt_window, 
self.app_config.vad_initial_prompt_mode) - - return self.transcribe_webui(modelName, languageName, urlData, multipleFiles, microphoneData, task, vadOptions, - word_timestamps=word_timestamps, highlight_words=highlight_words, progress=progress) - - # Entry function for the full tab - def transcribe_webui_full(self, modelName, languageName, urlData, multipleFiles, microphoneData, task, - vad, vadMergeWindow, vadMaxMergeSize, vadPadding, vadPromptWindow, vadInitialPromptMode, - # Word timestamps - word_timestamps: bool, highlight_words: bool, prepend_punctuations: str, append_punctuations: str, - initial_prompt: str, temperature: float, best_of: int, beam_size: int, patience: float, length_penalty: float, suppress_tokens: str, - condition_on_previous_text: bool, fp16: bool, temperature_increment_on_fallback: float, - compression_ratio_threshold: float, logprob_threshold: float, no_speech_threshold: float): - - return self.transcribe_webui_full_progress(modelName, languageName, urlData, multipleFiles, microphoneData, task, - vad, vadMergeWindow, vadMaxMergeSize, vadPadding, vadPromptWindow, vadInitialPromptMode, - word_timestamps, highlight_words, prepend_punctuations, append_punctuations, - initial_prompt, temperature, best_of, beam_size, patience, length_penalty, suppress_tokens, - condition_on_previous_text, fp16, temperature_increment_on_fallback, - compression_ratio_threshold, logprob_threshold, no_speech_threshold) - - # Entry function for the full tab with progress - def transcribe_webui_full_progress(self, modelName, languageName, urlData, multipleFiles, microphoneData, task, - vad, vadMergeWindow, vadMaxMergeSize, vadPadding, vadPromptWindow, vadInitialPromptMode, - # Word timestamps - word_timestamps: bool, highlight_words: bool, prepend_punctuations: str, append_punctuations: str, - initial_prompt: str, temperature: float, best_of: int, beam_size: int, patience: float, length_penalty: float, suppress_tokens: str, - condition_on_previous_text: bool, fp16: bool, temperature_increment_on_fallback: float, - compression_ratio_threshold: float, logprob_threshold: float, no_speech_threshold: float, - progress=gr.Progress()): - - # Handle temperature_increment_on_fallback - if temperature_increment_on_fallback is not None: - temperature = tuple(np.arange(temperature, 1.0 + 1e-6, temperature_increment_on_fallback)) - else: - temperature = [temperature] - - vadOptions = VadOptions(vad, vadMergeWindow, vadMaxMergeSize, vadPadding, vadPromptWindow, vadInitialPromptMode) - - return self.transcribe_webui(modelName, languageName, urlData, multipleFiles, microphoneData, task, vadOptions, - initial_prompt=initial_prompt, temperature=temperature, best_of=best_of, beam_size=beam_size, patience=patience, length_penalty=length_penalty, suppress_tokens=suppress_tokens, - condition_on_previous_text=condition_on_previous_text, fp16=fp16, - compression_ratio_threshold=compression_ratio_threshold, logprob_threshold=logprob_threshold, no_speech_threshold=no_speech_threshold, - word_timestamps=word_timestamps, prepend_punctuations=prepend_punctuations, append_punctuations=append_punctuations, highlight_words=highlight_words, - progress=progress) - - def transcribe_webui(self, modelName, languageName, urlData, multipleFiles, microphoneData, task, - vadOptions: VadOptions, progress: gr.Progress = None, highlight_words: bool = False, - **decodeOptions: dict): - try: - sources = self.__get_source(urlData, multipleFiles, microphoneData) - - try: - selectedLanguage = languageName.lower() if len(languageName) > 0 
else None - selectedModel = modelName if modelName is not None else "base" - - model = create_whisper_container(whisper_implementation=self.app_config.whisper_implementation, - model_name=selectedModel, compute_type=self.app_config.compute_type, - cache=self.model_cache, models=self.app_config.models) - - # Result - download = [] - zip_file_lookup = {} - text = "" - vtt = "" - - # Write result - downloadDirectory = tempfile.mkdtemp() - source_index = 0 - - outputDirectory = self.output_dir if self.output_dir is not None else downloadDirectory - - # Progress - total_duration = sum([source.get_audio_duration() for source in sources]) - current_progress = 0 - - # A listener that will report progress to Gradio - root_progress_listener = self._create_progress_listener(progress) - - # Execute whisper - for source in sources: - source_prefix = "" - source_audio_duration = source.get_audio_duration() - - if (len(sources) > 1): - # Prefix (minimum 2 digits) - source_index += 1 - source_prefix = str(source_index).zfill(2) + "_" - print("Transcribing ", source.source_path) - - scaled_progress_listener = SubTaskProgressListener(root_progress_listener, - base_task_total=total_duration, - sub_task_start=current_progress, - sub_task_total=source_audio_duration) - - # Transcribe - result = self.transcribe_file(model, source.source_path, selectedLanguage, task, vadOptions, scaled_progress_listener, **decodeOptions) - filePrefix = slugify(source_prefix + source.get_short_name(), allow_unicode=True) - - # Update progress - current_progress += source_audio_duration - - source_download, source_text, source_vtt = self.write_result(result, filePrefix, outputDirectory, highlight_words) - - if len(sources) > 1: - # Add new line separators - if (len(source_text) > 0): - source_text += os.linesep + os.linesep - if (len(source_vtt) > 0): - source_vtt += os.linesep + os.linesep - - # Append file name to source text too - source_text = source.get_full_name() + ":" + os.linesep + source_text - source_vtt = source.get_full_name() + ":" + os.linesep + source_vtt - - # Add to result - download.extend(source_download) - text += source_text - vtt += source_vtt - - if (len(sources) > 1): - # Zip files support at least 260 characters, but we'll play it safe and use 200 - zipFilePrefix = slugify(source_prefix + source.get_short_name(max_length=200), allow_unicode=True) - - # File names in ZIP file can be longer - for source_download_file in source_download: - # Get file postfix (after last -) - filePostfix = os.path.basename(source_download_file).split("-")[-1] - zip_file_name = zipFilePrefix + "-" + filePostfix - zip_file_lookup[source_download_file] = zip_file_name - - # Create zip file from all sources - if len(sources) > 1: - downloadAllPath = os.path.join(downloadDirectory, "All_Output-" + datetime.now().strftime("%Y%m%d-%H%M%S") + ".zip") - - with zipfile.ZipFile(downloadAllPath, 'w', zipfile.ZIP_DEFLATED) as zip: - for download_file in download: - # Get file name from lookup - zip_file_name = zip_file_lookup.get(download_file, os.path.basename(download_file)) - zip.write(download_file, arcname=zip_file_name) - - download.insert(0, downloadAllPath) - - return download, text, vtt - - finally: - # Cleanup source - if self.deleteUploadedFiles: - for source in sources: - print("Deleting source file " + source.source_path) - - try: - os.remove(source.source_path) - except Exception as e: - # Ignore error - it's just a cleanup - print("Error deleting source file " + source.source_path + ": " + str(e)) - - except 
ExceededMaximumDuration as e: - return [], ("[ERROR]: Maximum remote video length is " + str(e.maxDuration) + "s, file was " + str(e.videoDuration) + "s"), "[ERROR]" - - def transcribe_file(self, model: AbstractWhisperContainer, audio_path: str, language: str, task: str = None, - vadOptions: VadOptions = VadOptions(), - progressListener: ProgressListener = None, **decodeOptions: dict): - - initial_prompt = decodeOptions.pop('initial_prompt', None) - - if progressListener is None: - # Default progress listener - progressListener = ProgressListener() - - if ('task' in decodeOptions): - task = decodeOptions.pop('task') - - initial_prompt_mode = vadOptions.vadInitialPromptMode - - # Set default initial prompt mode - if (initial_prompt_mode is None): - initial_prompt_mode = VadInitialPromptMode.PREPREND_FIRST_SEGMENT - - if (initial_prompt_mode == VadInitialPromptMode.PREPEND_ALL_SEGMENTS or - initial_prompt_mode == VadInitialPromptMode.PREPREND_FIRST_SEGMENT): - # Prepend initial prompt - prompt_strategy = PrependPromptStrategy(initial_prompt, initial_prompt_mode) - elif (vadOptions.vadInitialPromptMode == VadInitialPromptMode.JSON_PROMPT_MODE): - # Use a JSON format to specify the prompt for each segment - prompt_strategy = JsonPromptStrategy(initial_prompt) - else: - raise ValueError("Invalid vadInitialPromptMode: " + initial_prompt_mode) - - # Callable for processing an audio file - whisperCallable = model.create_callback(language, task, prompt_strategy=prompt_strategy, **decodeOptions) - - # The results - if (vadOptions.vad == 'silero-vad'): - # Silero VAD where non-speech gaps are transcribed - process_gaps = self._create_silero_config(NonSpeechStrategy.CREATE_SEGMENT, vadOptions) - result = self.process_vad(audio_path, whisperCallable, self.vad_model, process_gaps, progressListener=progressListener) - elif (vadOptions.vad == 'silero-vad-skip-gaps'): - # Silero VAD where non-speech gaps are simply ignored - skip_gaps = self._create_silero_config(NonSpeechStrategy.SKIP, vadOptions) - result = self.process_vad(audio_path, whisperCallable, self.vad_model, skip_gaps, progressListener=progressListener) - elif (vadOptions.vad == 'silero-vad-expand-into-gaps'): - # Use Silero VAD where speech-segments are expanded into non-speech gaps - expand_gaps = self._create_silero_config(NonSpeechStrategy.EXPAND_SEGMENT, vadOptions) - result = self.process_vad(audio_path, whisperCallable, self.vad_model, expand_gaps, progressListener=progressListener) - elif (vadOptions.vad == 'periodic-vad'): - # Very simple VAD - mark every 5 minutes as speech. This makes it less likely that Whisper enters an infinite loop, but - # it may create a break in the middle of a sentence, causing some artifacts. 
- periodic_vad = VadPeriodicTranscription() - period_config = PeriodicTranscriptionConfig(periodic_duration=vadOptions.vadMaxMergeSize, max_prompt_window=vadOptions.vadPromptWindow) - result = self.process_vad(audio_path, whisperCallable, periodic_vad, period_config, progressListener=progressListener) - - else: - if (self._has_parallel_devices()): - # Use a simple period transcription instead, as we need to use the parallel context - periodic_vad = VadPeriodicTranscription() - period_config = PeriodicTranscriptionConfig(periodic_duration=math.inf, max_prompt_window=1) - - result = self.process_vad(audio_path, whisperCallable, periodic_vad, period_config, progressListener=progressListener) - else: - # Default VAD - result = whisperCallable.invoke(audio_path, 0, None, None, progress_listener=progressListener) - - return result - - def _create_progress_listener(self, progress: gr.Progress): - if (progress is None): - # Dummy progress listener - return ProgressListener() - - class ForwardingProgressListener(ProgressListener): - def __init__(self, progress: gr.Progress): - self.progress = progress - - def on_progress(self, current: Union[int, float], total: Union[int, float]): - # From 0 to 1 - self.progress(current / total) - - def on_finished(self): - self.progress(1) - - return ForwardingProgressListener(progress) - - def process_vad(self, audio_path, whisperCallable, vadModel: AbstractTranscription, vadConfig: TranscriptionConfig, - progressListener: ProgressListener = None): - if (not self._has_parallel_devices()): - # No parallel devices, so just run the VAD and Whisper in sequence - return vadModel.transcribe(audio_path, whisperCallable, vadConfig, progressListener=progressListener) - - gpu_devices = self.parallel_device_list - - if (gpu_devices is None or len(gpu_devices) == 0): - # No GPU devices specified, pass the current environment variable to the first GPU process. This may be NULL. 
- gpu_devices = [os.environ.get("CUDA_VISIBLE_DEVICES", None)] - - # Create parallel context if needed - if (self.gpu_parallel_context is None): - # Create a context wih processes and automatically clear the pool after 1 hour of inactivity - self.gpu_parallel_context = ParallelContext(num_processes=len(gpu_devices), auto_cleanup_timeout_seconds=self.vad_process_timeout) - # We also need a CPU context for the VAD - if (self.cpu_parallel_context is None): - self.cpu_parallel_context = ParallelContext(num_processes=self.vad_cpu_cores, auto_cleanup_timeout_seconds=self.vad_process_timeout) - - parallel_vad = ParallelTranscription() - return parallel_vad.transcribe_parallel(transcription=vadModel, audio=audio_path, whisperCallable=whisperCallable, - config=vadConfig, cpu_device_count=self.vad_cpu_cores, gpu_devices=gpu_devices, - cpu_parallel_context=self.cpu_parallel_context, gpu_parallel_context=self.gpu_parallel_context, - progress_listener=progressListener) - - def _has_parallel_devices(self): - return (self.parallel_device_list is not None and len(self.parallel_device_list) > 0) or self.vad_cpu_cores > 1 - - def _concat_prompt(self, prompt1, prompt2): - if (prompt1 is None): - return prompt2 - elif (prompt2 is None): - return prompt1 - else: - return prompt1 + " " + prompt2 - - def _create_silero_config(self, non_speech_strategy: NonSpeechStrategy, vadOptions: VadOptions): - # Use Silero VAD - if (self.vad_model is None): - self.vad_model = VadSileroTranscription() - - config = TranscriptionConfig(non_speech_strategy = non_speech_strategy, - max_silent_period=vadOptions.vadMergeWindow, max_merge_size=vadOptions.vadMaxMergeSize, - segment_padding_left=vadOptions.vadPadding, segment_padding_right=vadOptions.vadPadding, - max_prompt_window=vadOptions.vadPromptWindow) - - return config - - def write_result(self, result: dict, source_name: str, output_dir: str, highlight_words: bool = False): - if not os.path.exists(output_dir): - os.makedirs(output_dir) - - text = result["text"] - language = result["language"] - languageMaxLineWidth = self.__get_max_line_width(language) - - print("Max line width " + str(languageMaxLineWidth)) - vtt = self.__get_subs(result["segments"], "vtt", languageMaxLineWidth, highlight_words=highlight_words) - srt = self.__get_subs(result["segments"], "srt", languageMaxLineWidth, highlight_words=highlight_words) - json_result = json.dumps(result, indent=4, ensure_ascii=False) - - output_files = [] - output_files.append(self.__create_file(srt, output_dir, source_name + "-subs.srt")); - output_files.append(self.__create_file(vtt, output_dir, source_name + "-subs.vtt")); - output_files.append(self.__create_file(text, output_dir, source_name + "-transcript.txt")); - output_files.append(self.__create_file(json_result, output_dir, source_name + "-result.json")); - - return output_files, text, vtt - - def clear_cache(self): - self.model_cache.clear() - self.vad_model = None - - def __get_source(self, urlData, multipleFiles, microphoneData): - return get_audio_source_collection(urlData, multipleFiles, microphoneData, self.inputAudioMaxDuration) - - def __get_max_line_width(self, language: str) -> int: - if (language and language.lower() in ["japanese", "ja", "chinese", "zh"]): - # Chinese characters and kana are wider, so limit line length to 40 characters - return 40 - else: - # TODO: Add more languages - # 80 latin characters should fit on a 1080p/720p screen - return 80 - - def __get_subs(self, segments: Iterator[dict], format: str, maxLineWidth: int, highlight_words: bool = 
False) -> str: - segmentStream = StringIO() - - if format == 'vtt': - write_vtt(segments, file=segmentStream, maxLineWidth=maxLineWidth, highlight_words=highlight_words) - elif format == 'srt': - write_srt(segments, file=segmentStream, maxLineWidth=maxLineWidth, highlight_words=highlight_words) - else: - raise Exception("Unknown format " + format) - - segmentStream.seek(0) - return segmentStream.read() - - def __create_file(self, text: str, directory: str, fileName: str) -> str: - # Write the text to a file - with open(os.path.join(directory, fileName), 'w+', encoding="utf-8") as file: - file.write(text) - - return file.name - - def close(self): - print("Closing parallel contexts") - self.clear_cache() - - if (self.gpu_parallel_context is not None): - self.gpu_parallel_context.close() - if (self.cpu_parallel_context is not None): - self.cpu_parallel_context.close() - - -def create_ui(app_config: ApplicationConfig): - ui = WhisperTranscriber(app_config.input_audio_max_duration, app_config.vad_process_timeout, app_config.vad_cpu_cores, - app_config.delete_uploaded_files, app_config.output_dir, app_config) - - # Specify a list of devices to use for parallel processing - ui.set_parallel_devices(app_config.vad_parallel_devices) - ui.set_auto_parallel(app_config.auto_parallel) - - is_whisper = False - - if app_config.whisper_implementation == "whisper": - implementation_name = "Whisper" - is_whisper = True - elif app_config.whisper_implementation in ["faster-whisper", "faster_whisper"]: - implementation_name = "Faster Whisper" - else: - # Try to convert from camel-case to title-case - implementation_name = app_config.whisper_implementation.title().replace("_", " ").replace("-", " ") - - ui_description = implementation_name + " is a general-purpose speech recognition model. It is trained on a large dataset of diverse " - ui_description += " audio and is also a multi-task model that can perform multilingual speech recognition " - ui_description += " as well as speech translation and language identification. " - - ui_description += "\n\n\n\nFor longer audio files (>10 minutes) not in English, it is recommended that you select Silero VAD (Voice Activity Detector) in the VAD option." - - # Recommend faster-whisper - if is_whisper: - ui_description += "\n\n\n\nFor faster inference on GPU, try [faster-whisper](https://huggingface.co/spaces/aadnk/faster-whisper-webui)." - - if app_config.input_audio_max_duration > 0: - ui_description += "\n\n" + "Max audio file length: " + str(app_config.input_audio_max_duration) + " s" - - ui_article = "Read the [documentation here](https://gitlab.com/aadnk/whisper-webui/-/blob/main/docs/options.md)." 
- - whisper_models = app_config.get_model_names() - - common_inputs = lambda : [ - gr.Dropdown(choices=whisper_models, value=app_config.default_model_name, label="Model"), - gr.Dropdown(choices=sorted(get_language_names()), label="Language", value=app_config.language), - gr.Text(label="URL (YouTube, etc.)"), - gr.File(label="Upload Files", file_count="multiple"), - gr.Audio(source="microphone", type="filepath", label="Microphone Input"), - gr.Dropdown(choices=["transcribe", "translate"], label="Task", value=app_config.task), - ] - - common_vad_inputs = lambda : [ - gr.Dropdown(choices=["none", "silero-vad", "silero-vad-skip-gaps", "silero-vad-expand-into-gaps", "periodic-vad"], value=app_config.default_vad, label="VAD"), - gr.Number(label="VAD - Merge Window (s)", precision=0, value=app_config.vad_merge_window), - gr.Number(label="VAD - Max Merge Size (s)", precision=0, value=app_config.vad_max_merge_size), - ] - - common_word_timestamps_inputs = lambda : [ - gr.Checkbox(label="Word Timestamps", value=app_config.word_timestamps), - gr.Checkbox(label="Word Timestamps - Highlight Words", value=app_config.highlight_words), - ] - - is_queue_mode = app_config.queue_concurrency_count is not None and app_config.queue_concurrency_count > 0 - - simple_transcribe = gr.Interface(fn=ui.transcribe_webui_simple_progress if is_queue_mode else ui.transcribe_webui_simple, - description=ui_description, article=ui_article, inputs=[ - *common_inputs(), - *common_vad_inputs(), - *common_word_timestamps_inputs(), - ], outputs=[ - gr.File(label="Download"), - gr.Text(label="Transcription"), - gr.Text(label="Segments") - ]) - - full_description = ui_description + "\n\n\n\n" + "Be careful when changing some of the options in the full interface - this can cause the model to crash." 
- - full_transcribe = gr.Interface(fn=ui.transcribe_webui_full_progress if is_queue_mode else ui.transcribe_webui_full, - description=full_description, article=ui_article, inputs=[ - *common_inputs(), - - *common_vad_inputs(), - gr.Number(label="VAD - Padding (s)", precision=None, value=app_config.vad_padding), - gr.Number(label="VAD - Prompt Window (s)", precision=None, value=app_config.vad_prompt_window), - gr.Dropdown(choices=VAD_INITIAL_PROMPT_MODE_VALUES, label="VAD - Initial Prompt Mode"), - - *common_word_timestamps_inputs(), - gr.Text(label="Word Timestamps - Prepend Punctuations", value=app_config.prepend_punctuations), - gr.Text(label="Word Timestamps - Append Punctuations", value=app_config.append_punctuations), - - gr.TextArea(label="Initial Prompt"), - gr.Number(label="Temperature", value=app_config.temperature), - gr.Number(label="Best Of - Non-zero temperature", value=app_config.best_of, precision=0), - gr.Number(label="Beam Size - Zero temperature", value=app_config.beam_size, precision=0), - gr.Number(label="Patience - Zero temperature", value=app_config.patience), - gr.Number(label="Length Penalty - Any temperature", value=app_config.length_penalty), - gr.Text(label="Suppress Tokens - Comma-separated list of token IDs", value=app_config.suppress_tokens), - gr.Checkbox(label="Condition on previous text", value=app_config.condition_on_previous_text), - gr.Checkbox(label="FP16", value=app_config.fp16), - gr.Number(label="Temperature increment on fallback", value=app_config.temperature_increment_on_fallback), - gr.Number(label="Compression ratio threshold", value=app_config.compression_ratio_threshold), - gr.Number(label="Logprob threshold", value=app_config.logprob_threshold), - gr.Number(label="No speech threshold", value=app_config.no_speech_threshold), - ], outputs=[ - gr.File(label="Download"), - gr.Text(label="Transcription"), - gr.Text(label="Segments") - ]) - - demo = gr.TabbedInterface([simple_transcribe, full_transcribe], tab_names=["Simple", "Full"]) - - # Queue up the demo - if is_queue_mode: - demo.queue(concurrency_count=app_config.queue_concurrency_count) - print("Queue mode enabled (concurrency count: " + str(app_config.queue_concurrency_count) + ")") - else: - print("Queue mode disabled - progress bars will not be shown.") - - demo.launch(share=app_config.share, server_name=app_config.server_name, server_port=app_config.server_port) - - # Clean up - ui.close() - -if __name__ == '__main__': - default_app_config = ApplicationConfig.create_default() - whisper_models = default_app_config.get_model_names() - - # Environment variable overrides - default_whisper_implementation = os.environ.get("WHISPER_IMPLEMENTATION", default_app_config.whisper_implementation) - - parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter) - parser.add_argument("--input_audio_max_duration", type=int, default=default_app_config.input_audio_max_duration, \ - help="Maximum audio file length in seconds, or -1 for no limit.") # 600 - parser.add_argument("--share", type=bool, default=default_app_config.share, \ - help="True to share the app on HuggingFace.") # False - parser.add_argument("--server_name", type=str, default=default_app_config.server_name, \ - help="The host or IP to bind to. 
If None, bind to localhost.") # None - parser.add_argument("--server_port", type=int, default=default_app_config.server_port, \ - help="The port to bind to.") # 7860 - parser.add_argument("--queue_concurrency_count", type=int, default=default_app_config.queue_concurrency_count, \ - help="The number of concurrent requests to process.") # 1 - parser.add_argument("--default_model_name", type=str, choices=whisper_models, default=default_app_config.default_model_name, \ - help="The default model name.") # medium - parser.add_argument("--default_vad", type=str, default=default_app_config.default_vad, \ - help="The default VAD.") # silero-vad - parser.add_argument("--vad_initial_prompt_mode", type=str, default=default_app_config.vad_initial_prompt_mode, choices=VAD_INITIAL_PROMPT_MODE_VALUES, \ - help="Whether or not to prepend the initial prompt to each VAD segment (prepend_all_segments), or just the first segment (prepend_first_segment)") # prepend_first_segment - parser.add_argument("--vad_parallel_devices", type=str, default=default_app_config.vad_parallel_devices, \ - help="A comma-delimited list of CUDA devices to use for parallel processing. If None, disable parallel processing.") # "" - parser.add_argument("--vad_cpu_cores", type=int, default=default_app_config.vad_cpu_cores, \ - help="The number of CPU cores to use for VAD pre-processing.") # 1 - parser.add_argument("--vad_process_timeout", type=float, default=default_app_config.vad_process_timeout, \ - help="The number of seconds before inactive processes are terminated. Use 0 to close processes immediately, or None for no timeout.") # 1800 - parser.add_argument("--auto_parallel", type=bool, default=default_app_config.auto_parallel, \ - help="True to use all available GPUs and CPU cores for processing. 
Use vad_cpu_cores/vad_parallel_devices to specify the number of CPU cores/GPUs to use.") # False - parser.add_argument("--output_dir", "-o", type=str, default=default_app_config.output_dir, \ - help="directory to save the outputs") - parser.add_argument("--whisper_implementation", type=str, default=default_whisper_implementation, choices=["whisper", "faster-whisper"],\ - help="the Whisper implementation to use") - parser.add_argument("--compute_type", type=str, default=default_app_config.compute_type, choices=["default", "auto", "int8", "int8_float16", "int16", "float16", "float32"], \ - help="the compute type to use for inference") - parser.add_argument("--threads", type=optional_int, default=0, - help="number of threads used by torch for CPU inference; supercedes MKL_NUM_THREADS/OMP_NUM_THREADS") - - args = parser.parse_args().__dict__ - - updated_config = default_app_config.update(**args) - - if (threads := args.pop("threads")) > 0: - torch.set_num_threads(threads) - - create_ui(app_config=updated_config) \ No newline at end of file diff --git a/spaces/xcocogoatx/WaifuCreatorAi/utils.py b/spaces/xcocogoatx/WaifuCreatorAi/utils.py deleted file mode 100644 index ff1c065d186347ca51b47d010a697dbe1814695c..0000000000000000000000000000000000000000 --- a/spaces/xcocogoatx/WaifuCreatorAi/utils.py +++ /dev/null @@ -1,6 +0,0 @@ -def is_google_colab(): - try: - import google.colab - return True - except: - return False \ No newline at end of file diff --git a/spaces/xiang-wuu/yolov5/utils/flask_rest_api/example_request.py b/spaces/xiang-wuu/yolov5/utils/flask_rest_api/example_request.py deleted file mode 100644 index 773ad893296750992789a77a59e0f5ad657d0e35..0000000000000000000000000000000000000000 --- a/spaces/xiang-wuu/yolov5/utils/flask_rest_api/example_request.py +++ /dev/null @@ -1,19 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Perform test request -""" - -import pprint - -import requests - -DETECTION_URL = "http://localhost:5000/v1/object-detection/yolov5s" -IMAGE = "zidane.jpg" - -# Read image -with open(IMAGE, "rb") as f: - image_data = f.read() - -response = requests.post(DETECTION_URL, files={"image": image_data}).json() - -pprint.pprint(response) diff --git a/spaces/xiang-wuu/yolov5/val.py b/spaces/xiang-wuu/yolov5/val.py deleted file mode 100644 index 006ade37d03ec646c4a463edc7cc10707cdc471b..0000000000000000000000000000000000000000 --- a/spaces/xiang-wuu/yolov5/val.py +++ /dev/null @@ -1,398 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Validate a trained YOLOv5 model accuracy on a custom dataset - -Usage: - $ python path/to/val.py --weights yolov5s.pt --data coco128.yaml --img 640 - -Usage - formats: - $ python path/to/val.py --weights yolov5s.pt # PyTorch - yolov5s.torchscript # TorchScript - yolov5s.onnx # ONNX Runtime or OpenCV DNN with --dnn - yolov5s.xml # OpenVINO - yolov5s.engine # TensorRT - yolov5s.mlmodel # CoreML (macOS-only) - yolov5s_saved_model # TensorFlow SavedModel - yolov5s.pb # TensorFlow GraphDef - yolov5s.tflite # TensorFlow Lite - yolov5s_edgetpu.tflite # TensorFlow Edge TPU -""" - -import argparse -import json -import os -import sys -from pathlib import Path - -import numpy as np -import torch -from tqdm import tqdm - -FILE = Path(__file__).resolve() -ROOT = FILE.parents[0] # YOLOv5 root directory -if str(ROOT) not in sys.path: - sys.path.append(str(ROOT)) # add ROOT to PATH -ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative - -from models.common import DetectMultiBackend -from utils.callbacks import Callbacks -from 
utils.dataloaders import create_dataloader -from utils.general import (LOGGER, check_dataset, check_img_size, check_requirements, check_yaml, - coco80_to_coco91_class, colorstr, emojis, increment_path, non_max_suppression, print_args, - scale_coords, xywh2xyxy, xyxy2xywh) -from utils.metrics import ConfusionMatrix, ap_per_class, box_iou -from utils.plots import output_to_target, plot_images, plot_val_study -from utils.torch_utils import select_device, time_sync - - -def save_one_txt(predn, save_conf, shape, file): - # Save one txt result - gn = torch.tensor(shape)[[1, 0, 1, 0]] # normalization gain whwh - for *xyxy, conf, cls in predn.tolist(): - xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh - line = (cls, *xywh, conf) if save_conf else (cls, *xywh) # label format - with open(file, 'a') as f: - f.write(('%g ' * len(line)).rstrip() % line + '\n') - - -def save_one_json(predn, jdict, path, class_map): - # Save one JSON result {"image_id": 42, "category_id": 18, "bbox": [258.15, 41.29, 348.26, 243.78], "score": 0.236} - image_id = int(path.stem) if path.stem.isnumeric() else path.stem - box = xyxy2xywh(predn[:, :4]) # xywh - box[:, :2] -= box[:, 2:] / 2 # xy center to top-left corner - for p, b in zip(predn.tolist(), box.tolist()): - jdict.append({ - 'image_id': image_id, - 'category_id': class_map[int(p[5])], - 'bbox': [round(x, 3) for x in b], - 'score': round(p[4], 5)}) - - -def process_batch(detections, labels, iouv): - """ - Return correct predictions matrix. Both sets of boxes are in (x1, y1, x2, y2) format. - Arguments: - detections (Array[N, 6]), x1, y1, x2, y2, conf, class - labels (Array[M, 5]), class, x1, y1, x2, y2 - Returns: - correct (Array[N, 10]), for 10 IoU levels - """ - correct = np.zeros((detections.shape[0], iouv.shape[0])).astype(bool) - iou = box_iou(labels[:, 1:], detections[:, :4]) - correct_class = labels[:, 0:1] == detections[:, 5] - for i in range(len(iouv)): - x = torch.where((iou >= iouv[i]) & correct_class) # IoU > threshold and classes match - if x[0].shape[0]: - matches = torch.cat((torch.stack(x, 1), iou[x[0], x[1]][:, None]), 1).cpu().numpy() # [label, detect, iou] - if x[0].shape[0] > 1: - matches = matches[matches[:, 2].argsort()[::-1]] - matches = matches[np.unique(matches[:, 1], return_index=True)[1]] - # matches = matches[matches[:, 2].argsort()[::-1]] - matches = matches[np.unique(matches[:, 0], return_index=True)[1]] - correct[matches[:, 1].astype(int), i] = True - return torch.tensor(correct, dtype=torch.bool, device=iouv.device) - - -@torch.no_grad() -def run( - data, - weights=None, # model.pt path(s) - batch_size=32, # batch size - imgsz=640, # inference size (pixels) - conf_thres=0.001, # confidence threshold - iou_thres=0.6, # NMS IoU threshold - task='val', # train, val, test, speed or study - device='', # cuda device, i.e. 
0 or 0,1,2,3 or cpu - workers=8, # max dataloader workers (per RANK in DDP mode) - single_cls=False, # treat as single-class dataset - augment=False, # augmented inference - verbose=False, # verbose output - save_txt=False, # save results to *.txt - save_hybrid=False, # save label+prediction hybrid results to *.txt - save_conf=False, # save confidences in --save-txt labels - save_json=False, # save a COCO-JSON results file - project=ROOT / 'runs/val', # save to project/name - name='exp', # save to project/name - exist_ok=False, # existing project/name ok, do not increment - half=True, # use FP16 half-precision inference - dnn=False, # use OpenCV DNN for ONNX inference - model=None, - dataloader=None, - save_dir=Path(''), - plots=True, - callbacks=Callbacks(), - compute_loss=None, -): - # Initialize/load model and set device - training = model is not None - if training: # called by train.py - device, pt, jit, engine = next(model.parameters()).device, True, False, False # get model device, PyTorch model - half &= device.type != 'cpu' # half precision only supported on CUDA - model.half() if half else model.float() - else: # called directly - device = select_device(device, batch_size=batch_size) - - # Directories - save_dir = increment_path(Path(project) / name, exist_ok=exist_ok) # increment run - (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True) # make dir - - # Load model - model = DetectMultiBackend(weights, device=device, dnn=dnn, data=data, fp16=half) - stride, pt, jit, engine = model.stride, model.pt, model.jit, model.engine - imgsz = check_img_size(imgsz, s=stride) # check image size - half = model.fp16 # FP16 supported on limited backends with CUDA - if engine: - batch_size = model.batch_size - else: - device = model.device - if not (pt or jit): - batch_size = 1 # export.py models default to batch-size 1 - LOGGER.info(f'Forcing --batch-size 1 square inference (1,3,{imgsz},{imgsz}) for non-PyTorch models') - - # Data - data = check_dataset(data) # check - - # Configure - model.eval() - cuda = device.type != 'cpu' - is_coco = isinstance(data.get('val'), str) and data['val'].endswith(f'coco{os.sep}val2017.txt') # COCO dataset - nc = 1 if single_cls else int(data['nc']) # number of classes - iouv = torch.linspace(0.5, 0.95, 10, device=device) # iou vector for mAP@0.5:0.95 - niou = iouv.numel() - - # Dataloader - if not training: - if pt and not single_cls: # check --weights are trained on --data - ncm = model.model.nc - assert ncm == nc, f'{weights} ({ncm} classes) trained on different --data than what you passed ({nc} ' \ - f'classes). Pass correct combination of --weights and --data that are trained together.' 
- model.warmup(imgsz=(1 if pt else batch_size, 3, imgsz, imgsz)) # warmup - pad = 0.0 if task in ('speed', 'benchmark') else 0.5 - rect = False if task == 'benchmark' else pt # square inference for benchmarks - task = task if task in ('train', 'val', 'test') else 'val' # path to train/val/test images - dataloader = create_dataloader(data[task], - imgsz, - batch_size, - stride, - single_cls, - pad=pad, - rect=rect, - workers=workers, - prefix=colorstr(f'{task}: '))[0] - - seen = 0 - confusion_matrix = ConfusionMatrix(nc=nc) - names = {k: v for k, v in enumerate(model.names if hasattr(model, 'names') else model.module.names)} - class_map = coco80_to_coco91_class() if is_coco else list(range(1000)) - s = ('%20s' + '%11s' * 6) % ('Class', 'Images', 'Labels', 'P', 'R', 'mAP@.5', 'mAP@.5:.95') - dt, p, r, f1, mp, mr, map50, map = [0.0, 0.0, 0.0], 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0 - loss = torch.zeros(3, device=device) - jdict, stats, ap, ap_class = [], [], [], [] - callbacks.run('on_val_start') - pbar = tqdm(dataloader, desc=s, bar_format='{l_bar}{bar:10}{r_bar}{bar:-10b}') # progress bar - for batch_i, (im, targets, paths, shapes) in enumerate(pbar): - callbacks.run('on_val_batch_start') - t1 = time_sync() - if cuda: - im = im.to(device, non_blocking=True) - targets = targets.to(device) - im = im.half() if half else im.float() # uint8 to fp16/32 - im /= 255 # 0 - 255 to 0.0 - 1.0 - nb, _, height, width = im.shape # batch size, channels, height, width - t2 = time_sync() - dt[0] += t2 - t1 - - # Inference - out, train_out = model(im) if training else model(im, augment=augment, val=True) # inference, loss outputs - dt[1] += time_sync() - t2 - - # Loss - if compute_loss: - loss += compute_loss([x.float() for x in train_out], targets)[1] # box, obj, cls - - # NMS - targets[:, 2:] *= torch.tensor((width, height, width, height), device=device) # to pixels - lb = [targets[targets[:, 0] == i, 1:] for i in range(nb)] if save_hybrid else [] # for autolabelling - t3 = time_sync() - out = non_max_suppression(out, conf_thres, iou_thres, labels=lb, multi_label=True, agnostic=single_cls) - dt[2] += time_sync() - t3 - - # Metrics - for si, pred in enumerate(out): - labels = targets[targets[:, 0] == si, 1:] - nl, npr = labels.shape[0], pred.shape[0] # number of labels, predictions - path, shape = Path(paths[si]), shapes[si][0] - correct = torch.zeros(npr, niou, dtype=torch.bool, device=device) # init - seen += 1 - - if npr == 0: - if nl: - stats.append((correct, *torch.zeros((2, 0), device=device), labels[:, 0])) - if plots: - confusion_matrix.process_batch(detections=None, labels=labels[:, 0]) - continue - - # Predictions - if single_cls: - pred[:, 5] = 0 - predn = pred.clone() - scale_coords(im[si].shape[1:], predn[:, :4], shape, shapes[si][1]) # native-space pred - - # Evaluate - if nl: - tbox = xywh2xyxy(labels[:, 1:5]) # target boxes - scale_coords(im[si].shape[1:], tbox, shape, shapes[si][1]) # native-space labels - labelsn = torch.cat((labels[:, 0:1], tbox), 1) # native-space labels - correct = process_batch(predn, labelsn, iouv) - if plots: - confusion_matrix.process_batch(predn, labelsn) - stats.append((correct, pred[:, 4], pred[:, 5], labels[:, 0])) # (correct, conf, pcls, tcls) - - # Save/log - if save_txt: - save_one_txt(predn, save_conf, shape, file=save_dir / 'labels' / (path.stem + '.txt')) - if save_json: - save_one_json(predn, jdict, path, class_map) # append to COCO-JSON dictionary - callbacks.run('on_val_image_end', pred, predn, path, names, im[si]) - - # Plot images - if plots and batch_i < 
3: - plot_images(im, targets, paths, save_dir / f'val_batch{batch_i}_labels.jpg', names) # labels - plot_images(im, output_to_target(out), paths, save_dir / f'val_batch{batch_i}_pred.jpg', names) # pred - - callbacks.run('on_val_batch_end') - - # Compute metrics - stats = [torch.cat(x, 0).cpu().numpy() for x in zip(*stats)] # to numpy - if len(stats) and stats[0].any(): - tp, fp, p, r, f1, ap, ap_class = ap_per_class(*stats, plot=plots, save_dir=save_dir, names=names) - ap50, ap = ap[:, 0], ap.mean(1) # AP@0.5, AP@0.5:0.95 - mp, mr, map50, map = p.mean(), r.mean(), ap50.mean(), ap.mean() - nt = np.bincount(stats[3].astype(int), minlength=nc) # number of targets per class - else: - nt = torch.zeros(1) - - # Print results - pf = '%20s' + '%11i' * 2 + '%11.3g' * 4 # print format - LOGGER.info(pf % ('all', seen, nt.sum(), mp, mr, map50, map)) - if nt.sum() == 0: - LOGGER.warning(emojis(f'WARNING: no labels found in {task} set, can not compute metrics without labels ⚠️')) - - # Print results per class - if (verbose or (nc < 50 and not training)) and nc > 1 and len(stats): - for i, c in enumerate(ap_class): - LOGGER.info(pf % (names[c], seen, nt[c], p[i], r[i], ap50[i], ap[i])) - - # Print speeds - t = tuple(x / seen * 1E3 for x in dt) # speeds per image - if not training: - shape = (batch_size, 3, imgsz, imgsz) - LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {shape}' % t) - - # Plots - if plots: - confusion_matrix.plot(save_dir=save_dir, names=list(names.values())) - callbacks.run('on_val_end') - - # Save JSON - if save_json and len(jdict): - w = Path(weights[0] if isinstance(weights, list) else weights).stem if weights is not None else '' # weights - anno_json = str(Path(data.get('path', '../coco')) / 'annotations/instances_val2017.json') # annotations json - pred_json = str(save_dir / f"{w}_predictions.json") # predictions json - LOGGER.info(f'\nEvaluating pycocotools mAP... 
saving {pred_json}...') - with open(pred_json, 'w') as f: - json.dump(jdict, f) - - try: # https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocoEvalDemo.ipynb - check_requirements(['pycocotools']) - from pycocotools.coco import COCO - from pycocotools.cocoeval import COCOeval - - anno = COCO(anno_json) # init annotations api - pred = anno.loadRes(pred_json) # init predictions api - eval = COCOeval(anno, pred, 'bbox') - if is_coco: - eval.params.imgIds = [int(Path(x).stem) for x in dataloader.dataset.im_files] # image IDs to evaluate - eval.evaluate() - eval.accumulate() - eval.summarize() - map, map50 = eval.stats[:2] # update results (mAP@0.5:0.95, mAP@0.5) - except Exception as e: - LOGGER.info(f'pycocotools unable to run: {e}') - - # Return results - model.float() # for training - if not training: - s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else '' - LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}{s}") - maps = np.zeros(nc) + map - for i, c in enumerate(ap_class): - maps[c] = ap[i] - return (mp, mr, map50, map, *(loss.cpu() / len(dataloader)).tolist()), maps, t - - -def parse_opt(): - parser = argparse.ArgumentParser() - parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='dataset.yaml path') - parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov5s.pt', help='model.pt path(s)') - parser.add_argument('--batch-size', type=int, default=32, help='batch size') - parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='inference size (pixels)') - parser.add_argument('--conf-thres', type=float, default=0.001, help='confidence threshold') - parser.add_argument('--iou-thres', type=float, default=0.6, help='NMS IoU threshold') - parser.add_argument('--task', default='val', help='train, val, test, speed or study') - parser.add_argument('--device', default='', help='cuda device, i.e. 
0 or 0,1,2,3 or cpu') - parser.add_argument('--workers', type=int, default=8, help='max dataloader workers (per RANK in DDP mode)') - parser.add_argument('--single-cls', action='store_true', help='treat as single-class dataset') - parser.add_argument('--augment', action='store_true', help='augmented inference') - parser.add_argument('--verbose', action='store_true', help='report mAP by class') - parser.add_argument('--save-txt', action='store_true', help='save results to *.txt') - parser.add_argument('--save-hybrid', action='store_true', help='save label+prediction hybrid results to *.txt') - parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels') - parser.add_argument('--save-json', action='store_true', help='save a COCO-JSON results file') - parser.add_argument('--project', default=ROOT / 'runs/val', help='save to project/name') - parser.add_argument('--name', default='exp', help='save to project/name') - parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment') - parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference') - parser.add_argument('--dnn', action='store_true', help='use OpenCV DNN for ONNX inference') - opt = parser.parse_args() - opt.data = check_yaml(opt.data) # check YAML - opt.save_json |= opt.data.endswith('coco.yaml') - opt.save_txt |= opt.save_hybrid - print_args(vars(opt)) - return opt - - -def main(opt): - check_requirements(requirements=ROOT / 'requirements.txt', exclude=('tensorboard', 'thop')) - - if opt.task in ('train', 'val', 'test'): # run normally - if opt.conf_thres > 0.001: # https://github.com/ultralytics/yolov5/issues/1466 - LOGGER.info(emojis(f'WARNING: confidence threshold {opt.conf_thres} > 0.001 produces invalid results ⚠️')) - run(**vars(opt)) - - else: - weights = opt.weights if isinstance(opt.weights, list) else [opt.weights] - opt.half = True # FP16 for fastest results - if opt.task == 'speed': # speed benchmarks - # python val.py --task speed --data coco.yaml --batch 1 --weights yolov5n.pt yolov5s.pt... - opt.conf_thres, opt.iou_thres, opt.save_json = 0.25, 0.45, False - for opt.weights in weights: - run(**vars(opt), plots=False) - - elif opt.task == 'study': # speed vs mAP benchmarks - # python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n.pt yolov5s.pt... 
- for opt.weights in weights: - f = f'study_{Path(opt.data).stem}_{Path(opt.weights).stem}.txt' # filename to save to - x, y = list(range(256, 1536 + 128, 128)), [] # x axis (image sizes), y axis - for opt.imgsz in x: # img-size - LOGGER.info(f'\nRunning {f} --imgsz {opt.imgsz}...') - r, _, t = run(**vars(opt), plots=False) - y.append(r + t) # results and times - np.savetxt(f, y, fmt='%10.4g') # save - os.system('zip -r study.zip study_*.txt') - plot_val_study(x=x) # plot - - -if __name__ == "__main__": - opt = parse_opt() - main(opt) diff --git a/spaces/xiang2811/ChatGPT/assets/custom.js b/spaces/xiang2811/ChatGPT/assets/custom.js deleted file mode 100644 index b8071034f3618c541e3f4169c7fc6d6593d56f44..0000000000000000000000000000000000000000 --- a/spaces/xiang2811/ChatGPT/assets/custom.js +++ /dev/null @@ -1,224 +0,0 @@ - -// custom javascript here - -const MAX_HISTORY_LENGTH = 32; - -var key_down_history = []; -var currentIndex = -1; -var user_input_ta; - -var gradioContainer = null; -var user_input_ta = null; -var user_input_tb = null; -var userInfoDiv = null; -var appTitleDiv = null; -var chatbot = null; -var apSwitch = null; - -var ga = document.getElementsByTagName("gradio-app"); -var targetNode = ga[0]; -var isInIframe = (window.self !== window.top); - -// Has the gradio page finished loading? Can I touch its elements yet? -function gradioLoaded(mutations) { - for (var i = 0; i < mutations.length; i++) { - if (mutations[i].addedNodes.length) { - gradioContainer = document.querySelector(".gradio-container"); - user_input_tb = document.getElementById('user_input_tb'); - userInfoDiv = document.getElementById("user_info"); - appTitleDiv = document.getElementById("app_title"); - chatbot = document.querySelector('#chuanhu_chatbot'); - apSwitch = document.querySelector('.apSwitch input[type="checkbox"]'); - - if (gradioContainer && apSwitch) { // has gradioContainer loaded yet? - adjustDarkMode(); - } - if (user_input_tb) { // has user_input_tb loaded yet? - selectHistory(); - } - if (userInfoDiv && appTitleDiv) { // have userInfoDiv and appTitleDiv loaded yet? - setTimeout(showOrHideUserInfo, 2000); - } - if (chatbot) { // has chatbot loaded yet? 
- setChatbotHeight() - } - } - } -} - -function selectHistory() { - user_input_ta = user_input_tb.querySelector("textarea"); - if (user_input_ta) { - observer.disconnect(); // stop observing - // listen for keydown events on the textarea - user_input_ta.addEventListener("keydown", function (event) { - var value = user_input_ta.value.trim(); - // check whether an arrow key was pressed - if (event.code === 'ArrowUp' || event.code === 'ArrowDown') { - // if an arrow key was pressed, the input box has content, and that content is not in the history, do nothing - if (value && key_down_history.indexOf(value) === -1) - return; - // for actions we do handle, prevent the default behavior - event.preventDefault(); - var length = key_down_history.length; - if (length === 0) { - currentIndex = -1; // if the history is empty, just reset the current selection - return; - } - if (currentIndex === -1) { - currentIndex = length; - } - if (event.code === 'ArrowUp' && currentIndex > 0) { - currentIndex--; - user_input_ta.value = key_down_history[currentIndex]; - } else if (event.code === 'ArrowDown' && currentIndex < length - 1) { - currentIndex++; - user_input_ta.value = key_down_history[currentIndex]; - } - user_input_ta.selectionStart = user_input_ta.value.length; - user_input_ta.selectionEnd = user_input_ta.value.length; - const input_event = new InputEvent("input", { bubbles: true, cancelable: true }); - user_input_ta.dispatchEvent(input_event); - } else if (event.code === "Enter") { - if (value) { - currentIndex = -1; - if (key_down_history.indexOf(value) === -1) { - key_down_history.push(value); - if (key_down_history.length > MAX_HISTORY_LENGTH) { - key_down_history.shift(); - } - } - } - } - }); - } -} - -function toggleUserInfoVisibility(shouldHide) { - if (userInfoDiv) { - if (shouldHide) { - userInfoDiv.classList.add("hideK"); - } else { - userInfoDiv.classList.remove("hideK"); - } - } -} -function showOrHideUserInfo() { - var sendBtn = document.getElementById("submit_btn"); - - // Bind mouse/touch events to show/hide user info - appTitleDiv.addEventListener("mouseenter", function () { - toggleUserInfoVisibility(false); - }); - userInfoDiv.addEventListener("mouseenter", function () { - toggleUserInfoVisibility(false); - }); - sendBtn.addEventListener("mouseenter", function () { - toggleUserInfoVisibility(false); - }); - - appTitleDiv.addEventListener("mouseleave", function () { - toggleUserInfoVisibility(true); - }); - userInfoDiv.addEventListener("mouseleave", function () { - toggleUserInfoVisibility(true); - }); - sendBtn.addEventListener("mouseleave", function () { - toggleUserInfoVisibility(true); - }); - - appTitleDiv.ontouchstart = function () { - toggleUserInfoVisibility(false); - }; - userInfoDiv.ontouchstart = function () { - toggleUserInfoVisibility(false); - }; - sendBtn.ontouchstart = function () { - toggleUserInfoVisibility(false); - }; - - appTitleDiv.ontouchend = function () { - setTimeout(function () { - toggleUserInfoVisibility(true); - }, 3000); - }; - userInfoDiv.ontouchend = function () { - setTimeout(function () { - toggleUserInfoVisibility(true); - }, 3000); - }; - sendBtn.ontouchend = function () { - setTimeout(function () { - toggleUserInfoVisibility(true); - }, 3000); // Delay 3 seconds before hiding user info - }; - - // Hide user info after 2 seconds - setTimeout(function () { - toggleUserInfoVisibility(true); - }, 2000); -} - -function toggleDarkMode(isEnabled) { - if (isEnabled) { - gradioContainer.classList.add("dark"); - document.body.style.setProperty("background-color", "var(--neutral-950)", "important"); - } else { - gradioContainer.classList.remove("dark"); - document.body.style.backgroundColor = ""; - } -} -function adjustDarkMode() { - const darkModeQuery = 
window.matchMedia("(prefers-color-scheme: dark)"); - - // set the initial state based on the current color scheme - apSwitch.checked = darkModeQuery.matches; - toggleDarkMode(darkModeQuery.matches); - // listen for color scheme changes - darkModeQuery.addEventListener("change", (e) => { - apSwitch.checked = e.matches; - toggleDarkMode(e.matches); - }); - // apSwitch = document.querySelector('.apSwitch input[type="checkbox"]'); - apSwitch.addEventListener("change", (e) => { - toggleDarkMode(e.target.checked); - }); -} - -function setChatbotHeight() { - const screenWidth = window.innerWidth; - const statusDisplay = document.querySelector('#status_display'); - const statusDisplayHeight = statusDisplay ? statusDisplay.offsetHeight : 0; - const wrap = chatbot.querySelector('.wrap'); - const vh = window.innerHeight * 0.01; - document.documentElement.style.setProperty('--vh', `${vh}px`); - if (isInIframe) { - chatbot.style.height = `700px`; - wrap.style.maxHeight = `calc(700px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))` - } else { - if (screenWidth <= 320) { - chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 150}px)`; - wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 150}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`; - } else if (screenWidth <= 499) { - chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 100}px)`; - wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 100}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`; - } else { - chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 160}px)`; - wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 160}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`; - } - } -} - -// watch for DOM changes inside the page -var observer = new MutationObserver(function (mutations) { - gradioLoaded(mutations); -}); -observer.observe(targetNode, { childList: true, subtree: true }); - -// watch for page-level changes -window.addEventListener("DOMContentLoaded", function () { - isInIframe = (window.self !== window.top); -}); -window.addEventListener('resize', setChatbotHeight); -window.addEventListener('scroll', setChatbotHeight); -window.matchMedia("(prefers-color-scheme: dark)").addEventListener("change", adjustDarkMode); \ No newline at end of file diff --git a/spaces/xwsm/gpt/crazy_functions/test_project/cpp/cppipc/policy.h b/spaces/xwsm/gpt/crazy_functions/test_project/cpp/cppipc/policy.h deleted file mode 100644 index f88ab5d8cb343f97026966b402eaeed8831e356a..0000000000000000000000000000000000000000 --- a/spaces/xwsm/gpt/crazy_functions/test_project/cpp/cppipc/policy.h +++ /dev/null @@ -1,25 +0,0 @@ -#pragma once - -#include - -#include "libipc/def.h" -#include "libipc/prod_cons.h" - -#include "libipc/circ/elem_array.h" - -namespace ipc { -namespace policy { - -template