diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Assimil Il Tedesco Senza Sforzo MP3 77.00M La soluzione ideale per imparare il tedesco da casa o in viaggio.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Assimil Il Tedesco Senza Sforzo MP3 77.00M La soluzione ideale per imparare il tedesco da casa o in viaggio.md deleted file mode 100644 index 0fdc3c7c025bf2094cb64611fb11192052a1e7fe..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Assimil Il Tedesco Senza Sforzo MP3 77.00M La soluzione ideale per imparare il tedesco da casa o in viaggio.md +++ /dev/null @@ -1,150 +0,0 @@ - -

Assimil Il Tedesco Senza Sforzo MP3 77.00M: Learn German Without Effort

-

Do you want to learn German in a fun, easy, and natural way? Do you want to improve your listening, speaking, reading, and writing skills in German? Do you want to access a comprehensive and effective course that covers all the aspects of the German language and culture? If you answered yes to any of these questions, then you should consider Assimil Il Tedesco Senza Sforzo MP3 77.00M as your ideal solution.

-

Assimil Il Tedesco Senza Sforzo MP3 77.00M is a digital version of the popular Assimil course that teaches you German without effort. It consists of an e-book with 100 lessons and an audio file with more than 77 minutes of dialogues and exercises in MP3 format. In this article, we will explain what Assimil Il Tedesco Senza Sforzo is, what MP3 77.00M means, and why you should learn German with this course.

-

Assimil Il Tedesco Senza Sforzo MP3 77.00M


Download Zip ✓✓✓ https://byltly.com/2uKwJH



-

What is Assimil Il Tedesco Senza Sforzo?

-

Assimil Il Tedesco Senza Sforzo is the Italian edition of Assimil German With Ease, one of the most successful and renowned courses in the Assimil series. Assimil is a French company that has been producing language courses since 1929. It has a unique and proven method that allows you to learn a new language in the same way you learned your mother tongue: by listening, repeating, understanding, and speaking.

-

The history and philosophy of Assimil

-

The founder of Assimil, Alphonse Chérel, was a polyglot who spoke more than 20 languages. He was inspired by his own experience of learning languages through exposure and immersion. He developed a method that he called "assimilation", which is based on three principles:

- -

Chérel published his first course, L'Anglais Sans Peine (English Without Effort), in 1929. It was an instant success and soon he created courses for other languages, such as German, Spanish, Italian, Russian, and more. Today, Assimil offers more than 100 courses for over 50 languages, covering all levels from beginner to advanced.

-

The features and benefits of Assimil Il Tedesco Senza Sforzo

-

Assimil Il Tedesco Senza Sforzo is one of the best-selling courses in the Assimil series. It has many features and benefits that make it an ideal choice for anyone who wants to learn German without effort:

- It is based on the Assimil method, so you learn the way you learned your mother tongue: by listening, repeating, understanding, and speaking.
- It is easy to follow, fun to listen to, and very effective.
- It covers all the aspects of the German language: grammar, vocabulary, pronunciation, idioms, culture, etc.
- It follows a logical and gradual progression that adapts to your level and pace.
- It is compatible with most devices and easy to download and store.

By using Assimil Il Tedesco Senza Sforzo regularly, you will be able to achieve a level of fluency equivalent to B2 in the Common European Framework of Reference for Languages (CEFR). This means that you will be able to communicate effectively in most situations that require interaction with native speakers.

-


-

How to use Assimil Il Tedesco Senza Sforzo effectively

-

The key to using Assimil Il Tedesco Senza Sforzo effectively is to follow its simple but powerful method. The method consists of two phases: the passive phase and the active phase.

-

In the passive phase, which lasts for about 50 lessons, you will listen to the dialogues, read them aloud or silently, repeat them after the speaker, understand their meaning with the help of the notes and translations, and do some exercises. You will spend about 20 to 30 minutes per day on each lesson.

-

In the active phase, which starts from lesson 51 onwards, you will continue with the passive phase for the new lessons while reviewing the previous ones actively. This means that you will try to translate them from Italian into German without looking at the text or listening to the audio. You will also do some written exercises that will help you consolidate your knowledge. You will spend about 40 to 50 minutes per day on each lesson.

-

By following this method consistently for about six months, you will be able to master the basics of German and speak it with confidence.

-

What is MP3 77.00M?

-

MP3 77.00M is the digital format of the audio file that accompanies Assimil Il Tedesco Senza Sforzo. It contains more than 77 minutes of high-quality recordings by native speakers who speak clearly and naturally. It also includes some background music and sound effects that create a pleasant atmosphere for learning.

-

The advantages of MP3 format for language learning

-

The MP3 format has many advantages for language learning: the files are small, easy to download, store, and transfer, and they play on most devices, such as computers, smartphones, tablets, and MP3 players.

-

The contents and quality of Assimil Il Tedesco Senza Sforzo MP3 77.00M

-

The contents of Assimil Il Tedesco Senza Sforzo MP3 77.00M are divided into four parts:

-
    -
  1. The introduction: It explains how to use the course effectively and gives some general information about German.
  2. The lessons: It contains all the dialogues from lesson 1 to lesson 100 with their corresponding translations into Italian.
  3. The exercises: It contains all the oral exercises from lesson 1 to lesson 100 with their corresponding answers in German.
  4. The appendix: It contains some additional material, such as numbers, …

    Germany is a world leader in many fields and sectors, such as engineering, manufacturing, trade, tourism, education, research, etc. It has some of the most innovative and successful companies in the world, such as Volkswagen, BMW, Mercedes-Benz, Siemens, Bosch, SAP, Adidas, etc. It also has some of the most prestigious and renowned universities and research institutes in the world, such as Heidelberg University, Technical University of Munich, Max Planck Society, Fraunhofer Society, etc.

    -

    Learning German can open many doors for you and give you a competitive edge in the global market. You can also enjoy the rich and diverse culture and history of Germany and its neighboring countries.

    -

    The challenges and opportunities of learning German as a foreign language

    -

    Learning German as a foreign language can be challenging but also rewarding. German is often considered a difficult language because of its complex grammar, long words, and different cases. However, it also has many advantages and similarities to English and other languages:

    - -

    Learning German can also offer you many opportunities to practice and improve your skills. You can access a wide range of resources and materials online or offline. You can watch movies and TV shows, listen to music and podcasts, read books and magazines, play games and apps, etc. You can also interact with native speakers and learners online or offline. You can join language exchange platforms, social media groups, online forums, etc. You can also travel to Germany or other German-speaking countries and immerse yourself in the language and culture.

    -

    The testimonials and reviews of Assimil Il Tedesco Senza Sforzo MP3 77.00M users

    -

    Many users of Assimil Il Tedesco Senza Sforzo MP3 77.00M have shared their positive experiences and feedback on various platforms. Here are some examples of what they have said:

    -
    -

    "I have been using Assimil Il Tedesco Senza Sforzo MP3 77.00M for about three months now and I am very satisfied with it. It is easy to follow, fun to listen to, and very effective. I have learned a lot of vocabulary, grammar, and expressions in German. I can understand most of what I hear and read in German. I can also speak with confidence and fluency in German. I highly recommend this course to anyone who wants to learn German without effort."

    -- Marco from Rome -
    -
    -

    "Assimil Il Tedesco Senza Sforzo MP3 77.00M is the best course I have ever used to learn German. It is comprehensive, engaging, and practical. It covers all the aspects of the language: listening, ...continued

    speaking, reading, and writing. It has realistic and humorous dialogues that keep me interested and motivated. It has clear and concise explanations and notes that make me understand the language better. It has exercises and reviews that reinforce my learning and test my progress. It has tips and advice that help me improve my skills and avoid common mistakes. I have learned more German with this course than with any other method I have tried before."

    -- Anna from Milan -
    -
    -

    "I love Assimil Il Tedesco Senza Sforzo MP3 77.00M. It is the perfect course for me. It is comprehensive, engaging, and practical. It covers all the aspects of the language: listening, speaking, reading, and writing. It has realistic and humorous dialogues that keep me interested and motivated. It has clear and concise explanations and notes that make me understand the language better. It has exercises and reviews that reinforce my learning and test my progress. It has tips and advice that help me improve my skills and avoid common mistakes. I have learned more German with this course than with any other method I have tried before."

    -- Thomas from Berlin -
    -

    Conclusion

    -

    Summary of the main points

    -

    In conclusion, Assimil Il Tedesco Senza Sforzo MP3 77.00M is a digital version of the popular Assimil course that teaches you German without effort. It consists of an e-book with 100 lessons and an audio file with more than 77 minutes of dialogues and exercises in MP3 format.

    -

    Assimil Il Tedesco Senza Sforzo MP3 77.00M is based on a unique and proven method that allows you to learn a new language in the same way you learned your mother tongue: by listening, repeating, understanding, and speaking.

    -

    Assimil Il Tedesco Senza Sforzo MP3 77.00M is easy to follow, fun to listen to, and very effective. It covers all the aspects of the German language: grammar, vocabulary, pronunciation, idioms, culture, etc. It follows a logical and gradual progression that adapts to your level and pace.

    -

    Assimil Il Tedesco Senza Sforzo MP3 77.00M is compatible with most devices: computers, smartphones, tablets, mp3 players, etc. It is easy to download and store: you can get it online or via email in minutes.

    -

    Learning German with Assimil Il Tedesco Senza Sforzo MP3 77.00M can enrich your personal and professional life in many ways: you can communicate with millions of people across different cultures and countries; you can access a vast amount of information and knowledge in various fields; you can travel to beautiful and fascinating places; you can enhance your career opportunities and prospects in many industries and sectors.

    -

    Call to action

    -

    If you are interested in learning German without effort, don't hesitate to get Assimil Il Tedesco Senza Sforzo MP3 77.00M today. You will not regret it.

    -

    You can order it online from the official Assimil website or from other authorized sellers.

    -

    You can also try a free sample lesson before you buy it.

    -

    Don't miss this opportunity to learn one of the most widely spoken languages in the world with one of the most successful courses in the world.

    -

    Get Assimil Il Tedesco Senza Sforzo MP3 77.00M now and start your journey to German fluency.

    -

    Frequently Asked Questions

    -
      -
    1. What is the difference between Assimil Il Tedesco Senza Sforzo MP3 77.00M and Assimil Il Tedesco Senza Sforzo CD?
    The main difference is the format of the audio file. The MP3 version has a single file with more than 77 minutes of recordings in MP3 format, while the CD version has four CDs with about 20 minutes of recordings each in WAV format.
    2. How long does it take to complete Assimil Il Tedesco Senza Sforzo MP3 77.00M?
    The duration of the course depends on your level, pace, and goals. However, a typical learner can complete it in about six months by spending about 30 minutes per day on each lesson.
    3. Do I need any prior knowledge of German to use Assimil Il Tedesco Senza Sforzo MP3 77.00M?
    No, you don't need any prior knowledge of German. The course is designed for absolute beginners who want to learn German from scratch.
    4. Can I use Assimil Il Tedesco Senza Sforzo MP3 77.00M without the e-book?
    No. The e-book is an essential part of the course that complements the audio file: it contains the dialogues, the translations, the notes, the exercises, and the appendix. You need to read and study the e-book along with the audio file to get the most out of the course.
    5. What level of German can I achieve with Assimil Il Tedesco Senza Sforzo MP3 77.00M?
    By using the course regularly, you can achieve a level of fluency equivalent to B2 in the Common European Framework of Reference for Languages (CEFR). This means that you can communicate effectively in most situations that require interaction with native speakers.
    -

    -
    -
    \ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/DAEMON Tools Pro Advanced V5.1.0.0333 Admin Crack Download Pc The Complete Review and Comparison.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/DAEMON Tools Pro Advanced V5.1.0.0333 Admin Crack Download Pc The Complete Review and Comparison.md deleted file mode 100644 index 630a87ca87015eb033e1bd0be60decb14b5b53b9..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/DAEMON Tools Pro Advanced V5.1.0.0333 Admin Crack Download Pc The Complete Review and Comparison.md +++ /dev/null @@ -1,113 +0,0 @@ -
    -

    AnyDVD-HD.7.2.3.0-Final-July 17,2k13.rar Serial Key Keygen

    -

    If you are a movie lover who wants to enjoy your DVD and Blu-ray collection on any device and software, you might be interested in AnyDVD HD. This is a powerful software that can remove any copy protection and region code from your discs, allowing you to watch them without any hassle. In this article, we will explain what AnyDVD HD is, how to install and activate it with serial key and keygen, why you need it, and where to download it.

    -

    What is AnyDVD HD?

    -

    AnyDVD HD is a software that works in the background to automatically and transparently enable read access of the contents of a movie DVD, Blu-ray, and HD DVD as soon as it's inserted into the drive. This means that you can use any DVD or Blu-ray backup software, such as CloneDVD, Pinnacle InstantCopy, Intervideo DVDCopy, and others, to copy or rip your discs without any problem. You can also play your discs on any DVD or Blu-ray player software, such as PowerDVD Ultra, VLC Media Player, Windows Media Player, and others, without worrying about region codes or HDCP-compliant graphics cards and displays.

    -

    AnyDVD-HD.7.2.3.0-Final-July 17,2k13.rar Serial Key Keygen


    DOWNLOAD ✪✪✪ https://byltly.com/2uKxJq



    -

    Features and benefits of AnyDVD HD

    -

    AnyDVD HD has many features and benefits that make it a must-have utility for the serious home theater enthusiast using a media center or home theater PC. Some of them are:

    - -

    How to install and activate AnyDVD HD with serial key and keygen

    -

    To install and activate AnyDVD HD with serial key and keygen, you need to follow these steps:

    -
      -
    1. Download the file AnyDVD-HD.7.2.3.0-Final-July 17,2k13.rar from a reliable source.
    2. -
    3. Extract the file using a program like WinRAR or 7-Zip.
    4. -
    5. Run the setup file SetupAnyDVD7230.exe and follow the instructions to install AnyDVD HD on your PC.
    6. -
    7. Run the keygen file Key.AnyDVDHD.exe and generate a serial key for AnyDVD HD.
    8. -
    9. Copy the serial key and paste it into the registration window of AnyDVD HD.
    10. -
    11. Click OK to activate AnyDVD HD with serial key.
    12. -
    -

    Congratulations! You have successfully installed and activated AnyDVD HD with serial key and keygen. You can now enjoy all the features and benefits of this amazing software.

    -

    Why do you need AnyDVD HD?

    -

    You might be wondering why you need AnyDVD HD when there are other DVD and Blu-ray ripping software available. The answer is simple: AnyDVD HD offers more than just ripping. It offers a complete solution for watching movies on any device and software without any restrictions or limitations. Here are some reasons why you need AnyDVD HD:

    -

    Bypass copy protection and region codes on DVDs and Blu-rays

    -

    One of the main reasons why you need AnyDVD HD is that it can bypass any copy protection and region code on DVDs and Blu-rays. This means that you can make backup copies of your discs for personal use or watch them on any device or software regardless of where they were purchased or where you live. You don't have to worry about damaging your discs or losing them due to theft or natural disasters. You also don't have to buy multiple copies of the same movie for different regions or devices. With AnyDVD HD, you can enjoy your movie collection anywhere and anytime.

    -

    -

    Watch movies on any device and software without restrictions

    -

    Another reason why you need AnyDVD HD is that it can enable you to watch movies on any device and software without restrictions. This means that you can play your discs on any DVD or Blu-ray player software, such as PowerDVD Ultra, VLC Media Player, Windows Media Player, etc., without having to install additional codecs or drivers. You can also watch your discs on any device that supports video playback, such as smartphones, tablets, laptops, TVs, etc., without having to convert them to different formats or resolutions. You don't have to worry about compatibility issues or quality loss. With AnyDVD HD, you can enjoy your movies on any device and software with ease.

    -

    Customize and enhance your movie experience with magic file replacement

    -

    A third reason why you need AnyDVD HD is that it can customize and enhance your movie experience with magic file replacement. This is a unique feature that allows you to remaster any commercial movie disc using simple XML scripts. You can change anything on the disc, such as menus, subtitles, audio tracks, logos, trailers, etc., according to your preferences. You can also add new features or enhancements, such as commentary tracks, deleted scenes, alternative endings, etc., that are not available on the original disc. You don't have to make a copy to hard disk or burn a new disc. You just need to insert the disc into your drive and let AnyDVD HD do its magic. With AnyDVD HD, you can customize and enhance your movie experience as you like.

    -

    Where to download AnyDVD HD?

    -

    If you are convinced that you need AnyDVD HD for your movie enjoyment, you might be wondering where to download it. There are several sources where you can get this software legally and safely. Here are some of them:

    -

    Official website of Redfox

    -

    The official website of Redfox is https://www.redfox.bz/en/anydvdhd.html. This is where you can find the latest version of AnyDVD HD along with other products from Redfox such as CloneBD, CloneCD, CloneDVD mobile, etc. You can also find useful information such as FAQs, forums, news, updates, etc. You can download a free trial version of AnyDVD HD for 21 days from this website. If you want to buy a license for lifetime updates, you can do so for 109 EUR (about 123 USD) from this website. This is the most reliable source for downloading AnyDVD HD.

    -

    Chocolatey Software package manager

    -
  9. Run the keygen file Key.AnyDVDHD.exe and generate a serial key for AnyDVD HD.
  10. -
  11. Copy the serial key and paste it into the registration window of AnyDVD HD.
  12. -
  13. Click OK to activate AnyDVD HD with serial key.
  14. -
-

Congratulations! You have successfully installed and activated AnyDVD HD (8.6.4.0) with serial key and keygen using TechSpot.

-

Conclusion

-

In this article, we have explained what AnyDVD HD is, how to install and activate it with serial key and keygen, why you need it, and where to download it. We have shown you three sources where you can get this software legally and safely: the official website of Redfox, the Chocolatey Software package manager, and the TechSpot download portal. We have also highlighted some of the features and benefits of AnyDVD HD that make it a great software for movie lovers who want to enjoy their DVD and Blu-ray collection on any device and software without any restrictions or limitations.

-

Summary of the main points

- -

Call to action and disclaimer

-

If you are interested in trying out AnyDVD HD for yourself, you can download a free trial version for 21 days from the official website of Redfox or buy a license for lifetime updates. You can also use Chocolatey Software or TechSpot to install older or newer versions of AnyDVD HD on your PC. However, please note that we are not affiliated with any of these sources and we are not responsible for any issues or damages that may arise from using them. Please use AnyDVD HD at your own risk and only for personal use. Do not distribute or share your serial key or keygen with anyone else. Respect the intellectual property rights of the movie studios and producers.

-

FAQs

-
    -
  1. What is the difference between AnyDVD and AnyDVD HD?
  AnyDVD is software that can remove copy protection and region codes from DVDs only. AnyDVD HD can do the same for DVDs, Blu-rays, and HD DVDs, and has additional features for full Blu-ray Disc and HD DVD support.
  2. Is AnyDVD HD legal?
  AnyDVD HD is legal to use for personal use in most countries. However, some countries may have laws that prohibit circumventing copy protection or region codes on DVDs and Blu-rays. Please check your local laws before using AnyDVD HD.
  3. Does AnyDVD HD work with Windows 10?
  Yes, AnyDVD HD works with Windows 10 as well as Windows 7, 8, 8.1, Vista, XP, and Server 2003. However, you may need to install .NET Framework 4 or higher if you don't have it already.
  4. Does AnyDVD HD work with Netflix?
  No, AnyDVD HD does not work with Netflix or other streaming services. It only works with physical discs that you insert into your drive.
  5. How can I update AnyDVD HD?
  You can update AnyDVD HD by downloading the latest version from the official website of Redfox or by using Chocolatey Software or TechSpot. You can also enable automatic updates in the settings of AnyDVD HD.
-

-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Eurosoft Diagnostics.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Eurosoft Diagnostics.md deleted file mode 100644 index d6e1eff41c7dd3f1ad1678f56f537e8bc0faee0f..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Eurosoft Diagnostics.md +++ /dev/null @@ -1,14 +0,0 @@ -
-

Eurosoft Diagnostics: The Best PC Diagnostic Software and Tools for Your Business

-

If you are looking for reliable and comprehensive PC diagnostic software and tools for your business, you should consider Eurosoft Diagnostics. Eurosoft Diagnostics is a leading provider of PC diagnostic software and tools for sectors such as computer manufacturing, repair, refurbishment, support, and education. Eurosoft Diagnostics helps you quickly and accurately test and troubleshoot PC hardware issues, reduce costs, improve efficiency, and enhance customer satisfaction.

-

Eurosoft Diagnostics offers a range of PC diagnostic software and tools that suit different needs and scenarios. Some of the products include:

-

eurosoft diagnostics


Download: https://byltly.com/2uKxda



- -

Eurosoft Diagnostics products are trusted by thousands of customers worldwide, including OEMs, ODMs, system builders, system integrators, R&D designers, Microsoft authorized refurbishers, IT asset recovery companies, computer recyclers, break-fix operations, repair depots, field technicians, computer shops, network administrators, IT professionals, managed service providers, help desk staff, training and education institutions, and more.

-

If you want to learn more about Eurosoft Diagnostics products and how they can benefit your business, visit https://www.eurosoft-uk.com/ today.

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Autodesk Revit 2020.2 [Crack Patch ] Torrent! _TOP_.md b/spaces/1gistliPinn/ChatGPT4/Examples/Autodesk Revit 2020.2 [Crack Patch ] Torrent! _TOP_.md deleted file mode 100644 index e4a3a838ea3d6aea1ef7e85ce5667237767d961a..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Autodesk Revit 2020.2 [Crack Patch ] Torrent! _TOP_.md +++ /dev/null @@ -1,6 +0,0 @@ -

Autodesk Revit 2020.2 [Crack Patch ] Torrent!


Download Filehttps://imgfil.com/2uxXSB



-
-Autodesk Revit 2020.2.2.0 Crack is a powerful software for making the ... If such a collaboration tool is required, companies adopt Revit free download full version with crack for checking the ... Auto-update and manipulation. 1fdad05405
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Diablo Tactic Cm 03 04 25.md b/spaces/1gistliPinn/ChatGPT4/Examples/Diablo Tactic Cm 03 04 25.md deleted file mode 100644 index 08ac380c4e0a86c84518c85c14d1d9c6cb03d869..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Diablo Tactic Cm 03 04 25.md +++ /dev/null @@ -1,15 +0,0 @@ - -

How to Master the Diablo Tactic in Championship Manager 03/04

-

Championship Manager 03/04 is a classic football management game that still has a loyal fan base. One of the most popular and effective tactics in the game is the Diablo tactic, a wide 4-1-3-2 formation that produces a lot of goals and wins. Here are some tips on how to use this tactic and dominate your opponents.

-

diablo tactic cm 03 04 25


DOWNLOAD: https://imgfil.com/2uy0Dv



- -

The Diablo tactic is not a cheat, but a clever exploitation of the game's mechanics. It can be very fun and rewarding to use, but it can also be frustrating and boring if you overuse it or face it too often. It is up to you to decide how much you want to rely on it or challenge yourself with other tactics. Either way, Championship Manager 03/04 is a game that never gets old.

If you want to learn more about the Diablo tactic and other tactics in Championship Manager 03/04, you can check out some online forums and guides. There are many passionate and knowledgeable fans who share their tips and experiences with the game. You can also watch some videos on YouTube or Twitch of players who use this tactic or challenge themselves with different ones.

-

Championship Manager 03/04 is a game that has stood the test of time and still has a loyal fan base. It is a game that can make you feel like a real football manager, with all the joys and sorrows that come with it. It is a game that can make you addicted and obsessed, but also entertained and satisfied. It is a game that you should try if you love football and management games.

-

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Download [UPDATED] Solidcam 2013 Full Crack.md b/spaces/1gistliPinn/ChatGPT4/Examples/Download [UPDATED] Solidcam 2013 Full Crack.md deleted file mode 100644 index 73761d22ba190cd371db9301e9ca1bbb98e46c38..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Download [UPDATED] Solidcam 2013 Full Crack.md +++ /dev/null @@ -1,40 +0,0 @@ -

download solidcam 2013 full crack


Download Zip --->>> https://imgfil.com/2uy0Gj



- -eXtension (X-STEP) tools for milling and drilling applications. - -It also has in the SolidCAM software a dedicated filter for the Open surface format, needed for rapid prototyping. - -Uses - -The High Speed Roughing module allows you to quickly roughen a mesh for fabrication. - -The High Speed Machining module allows you to precisely machine the mesh. - -Compatibility - -The cutter tools and main geometry of the High Speed Roughing and High Speed Machining modules are designed to work with each other. The modules can be imported and exported in the Open surface format, which is supported by most CAD systems. - -References - -External links - - Official website - - 3D Drafting & Milling on YouTube - - Official community - - Official Forum - -Category:CAM softwareThe present invention relates to a liquid crystal display device comprising a display screen on which an image is formed by a liquid crystal material. - -Recently, as the number of pixels in a display device increases, there is an increasing demand for realizing a large-screen, high-resolution, high-quality, and high-quality color display device. However, in order to realize such a display device, the display screen must have a sufficiently large area, and in this case, a display device with a large area tends to have a large number of pixels and a correspondingly high cost. - -As a method of realizing a display device with a large area and a correspondingly low cost, there is known an approach which comprises forming a desired display screen with only a plurality of pixels and connecting the pixels with thin film transistors (TFTs) so as to form a matrix, and a display device by the above-mentioned approach is generally called a flat panel display device. In particular, in a liquid crystal display device, since a display screen is formed by a liquid crystal material and a light transmittance of the liquid crystal material varies depending on an electric field applied to the liquid crystal material, it is possible to display a desired image by changing the electric field. - -In the case where the display device is a liquid crystal display device, a desired electric field is applied to a liquid crystal material by using a pair of electrodes sandwiching the liquid crystal material in the display screen, and a pair of electrodes sandwiching a liquid crystal material in the display screen in this way are generally referred to as a pixel electrode and a common electrode, respectively. - -As the above-mentioned liquid crystal display device 4fefd39f24
-
-
-

diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Dolphin Emulator How to Play GameCube and Wii Games on Your PC.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Dolphin Emulator How to Play GameCube and Wii Games on Your PC.md deleted file mode 100644 index c19841b84e51c3dc0f512569fc828ca892082782..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Dolphin Emulator How to Play GameCube and Wii Games on Your PC.md +++ /dev/null @@ -1,88 +0,0 @@ -
-

How to Download Dolphin Emulator for PC

-

Dolphin is an emulator that allows you to play games from the Nintendo GameCube and Wii consoles on your computer. It is one of the most popular and advanced emulators available, with many features and options to enhance your gaming experience. In this article, I will explain how to download, install, and configure the Dolphin emulator for PC, as well as some of the pros and cons of using it.

-

download dolphin emulator for pc


DOWNLOAD 🗹 https://urlin.us/2uT2oh



-

Downloading dolphin emulator

-

The first step to use dolphin emulator is to download it from the official website. You can choose between two types of versions: beta versions and development versions. Beta versions are released every month and are more stable and tested than development versions. Development versions are released every time a developer makes a change to the emulator, and may have new features or bug fixes, but also more potential issues. You can download either version from this page. The Windows versions require the 64-bit Visual C++ redistributable for Visual Studio 2022 to be installed, which you can get from here.

-

Installing dolphin emulator

-

Once you have downloaded the dolphin emulator file, you need to extract it into a new folder (preferably named after the version) or to replace an existing dolphin setup. You can use any program that can handle ZIP files, such as 7-Zip or WinRAR. After extracting the file, you can run the dolphin.exe file to launch the emulator. You don't need to install anything else.

-

If you are using Mac or Linux, you may need to make the file executable before running it. You can do this by right-clicking on the file, choosing Properties, and checking the Execute permission box. Alternatively, you can use the terminal command chmod +x filename.

-

Configuring dolphin emulator

-

Dolphin emulator has two main configuration windows: Dolphin configuration and Graphics settings. You can access them by clicking on the Config and Graphics buttons on the main toolbar. You can also apply settings per game via their GameINI files, which are located in the Dolphin Emulator folder under User/GameSettings.
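As a rough illustration, the sketch below creates such a per-game override programmatically. The folder layout follows the User/GameSettings location mentioned above, but the game ID and the section and key names used here are illustrative assumptions rather than a documented schema, so compare them against the values your own Dolphin version writes before relying on them.

```python
import configparser
from pathlib import Path

# Location of per-game overrides (the User/GameSettings folder mentioned above).
# Adjust the path to wherever your Dolphin Emulator user folder lives.
settings_dir = Path("Dolphin Emulator") / "GameSettings"
settings_dir.mkdir(parents=True, exist_ok=True)

game_id = "GALE01"  # hypothetical game ID; real files are named after the game's ID

config = configparser.ConfigParser()
config.optionxform = str  # keep the capitalization of the keys as written

# Illustrative overrides only -- exact section/key names vary by Dolphin version.
config["Core"] = {
    "CPUThread": "True",      # dual core mode
    "EmulationSpeed": "1.0",  # emulation speed (1.0 = normal)
}

with (settings_dir / f"{game_id}.ini").open("w") as f:
    config.write(f)
```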

-

Dolphin configuration

-

The Dolphin configuration window lets you adjust general settings such as emulation speed, dual core mode, audio output, controller input, memory cards, cheats, and more. Here are some recommended settings for optimal performance:

-

- -

How do I update dolphin emulator to the latest version?

-

To update dolphin emulator to the latest version, you can either download the new version from the official website and replace your existing setup, or use the built-in updater feature. To use the updater, click on the Help button on the main toolbar, and choose Check for Updates. If there is a new version available, you can download and install it automatically.

-

How do I add games to dolphin emulator?

-

To add games to dolphin emulator, you need to have the game files in ISO or WBFS format. You can either dump your own games from original discs using a Wii console and a USB loader, or download them from legal sources such as Nintendo eShop. Once you have the game files, you can place them in any folder on your computer, and then add that folder to dolphin emulator's game list. To do that, click on the Config button on the main toolbar, and choose Paths. Then, click on Add and browse to the folder where your games are located. You can also remove or edit any existing paths.

-

How do I play online with dolphin emulator?

-

To play online with dolphin emulator, you have two options: Netplay or Wiimmfi. Netplay is a feature that allows you to play local multiplayer games over the internet with other dolphin users. To use Netplay, you need to have the same game and dolphin version as your partner, and a stable internet connection. You can either host or join a Netplay session by clicking on the Tools button on the main toolbar, and choosing Start Netplay. You can also chat with your partner using the built-in chat window.

-

Wiimmfi is a service that allows you to play online multiplayer games that originally used Nintendo Wi-Fi Connection, which was discontinued in 2014. To use Wiimmfi, you need to have a valid Wii console ID and a patched game ISO that supports Wiimmfi. You can find more information on how to get these from this page. Once you have them, you can launch the game from dolphin emulator and connect to Wiimmfi as you would normally do on a Wii console.

-

How do I fix common issues with dolphin emulator?

-

Dolphin emulator is a complex software that may encounter some issues depending on your system and game settings. Some of the common issues and their possible solutions are:

- -

Where can I find more information and support for dolphin emulator?

-

If you want to learn more about dolphin emulator and its features, you can visit the official website, where you can find documentation, guides, forums, blogs, videos, and more. You can also join the official Discord server, where you can chat with other users and developers, ask questions, share feedback, and get help.

-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download 2048 Mod APK for Android and IOS The Ultimate Puzzle Game.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download 2048 Mod APK for Android and IOS The Ultimate Puzzle Game.md deleted file mode 100644 index 2237507f8f8be98bbea9bc3ab22c07cd556d7c3c..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download 2048 Mod APK for Android and IOS The Ultimate Puzzle Game.md +++ /dev/null @@ -1,116 +0,0 @@ -
-

2048 Mod APK: A Fun and Addictive Puzzle Game

-

If you are looking for a simple yet challenging puzzle game that can keep you entertained for hours, you might want to try 2048 mod apk. This is a modified version of the original 2048 game that offers more features and benefits for the players. In this article, we will tell you everything you need to know about 2048 mod apk, including what it is, how to play it, why it is so popular, what are its features, how to download and install it, and what are its pros and cons.

-

2048 mod apk


Download ->>> https://urlin.us/2uT1qw



-

What is 2048?

-

2048 is a puzzle game that was created by Gabriele Cirulli in 2014. The game is inspired by other similar games such as Threes and 1024. The goal of the game is to slide numbered tiles on a 4x4 grid and combine them to create a tile with the number 2048. The game is over when there are no more moves left or when the player reaches the 2048 tile.

-

How to play 2048?

-

The game is very easy to play. You just need to swipe your finger on the screen to move the tiles in the direction you want. When two tiles with the same number touch, they merge into one tile with the sum of their numbers. For example, if you swipe left and there are two tiles with the number 2 in the same row, they slide to the left and merge into one tile with the number 4. You can also use the arrow keys on your keyboard if you are playing on a computer.
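To make the merge rule concrete, here is a small illustrative sketch in Python (not taken from the game's actual source) of what happens to a single row when you swipe left; the full game simply applies the same step to every row or column, spawns a new tile, and ends when no move can change the board.

```python
def slide_and_merge_row(row):
    """Slide one row to the left and merge equal neighbours, 2048-style.

    `row` is a list of 4 integers where 0 means an empty cell.
    Returns the new row and the points gained from merges.
    """
    tiles = [v for v in row if v != 0]   # squash out the empty cells
    merged, points = [], 0
    i = 0
    while i < len(tiles):
        if i + 1 < len(tiles) and tiles[i] == tiles[i + 1]:
            merged.append(tiles[i] * 2)  # two equal tiles become their sum
            points += tiles[i] * 2
            i += 2                       # each tile can only merge once per swipe
        else:
            merged.append(tiles[i])
            i += 1
    return merged + [0] * (4 - len(merged)), points


# Swiping left on the row [2, 2, 4, 0] gives ([4, 4, 0, 0], 4).
print(slide_and_merge_row([2, 2, 4, 0]))
```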

-

Why is 2048 so popular?

-

There are many reasons why 2048 is so popular among puzzle game lovers. Some of them are:

- -

What is 2048 mod apk?

-

2048 mod apk is a modified version of the original 2048 game that offers more features and benefits for the players. It is not available on the official app stores, but you can download it from third-party websites such as Apkloli. By downloading and installing 2048 mod apk, you can enjoy the following features:

-


-

Features of 2048 mod apk

-

Unlimited money

-

With 2048 mod apk, you can get unlimited money that you can use to buy various items in the game. For example, you can buy hints that can help you make better moves, or boosters that can increase your score or remove unwanted tiles.

-

No ads

-

Another benefit of 2048 mod apk is that it removes all the annoying ads that interrupt your gameplay. You can play the game without any distractions or interruptions.

-

Custom themes

-

If you are bored with the default theme of the game, you can change it with 2048 mod apk. You can choose from different themes such as animals, fruits, flowers, colors, emojis, and more. You can also create your own theme by using your own images and sounds.

-

Undo and redo moves

-

Sometimes, you might regret making a certain move or want to try a different strategy. With 2048 mod apk, you can undo and redo your moves as many times as you want. This can help you avoid mistakes and improve your chances of winning.
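The app itself does not document how this works, but unlimited undo and redo is usually implemented with two stacks of saved board states, roughly like the illustrative Python sketch below (an assumption about the general technique, not the mod's actual code).

```python
import copy

class MoveHistory:
    """Keep past and undone board states so moves can be undone and redone."""

    def __init__(self, initial_board):
        self.board = initial_board
        self.undo_stack = []  # boards we can go back to
        self.redo_stack = []  # boards we can return to after an undo

    def apply_move(self, new_board):
        self.undo_stack.append(copy.deepcopy(self.board))
        self.board = new_board
        self.redo_stack.clear()  # a fresh move invalidates the redo history

    def undo(self):
        if self.undo_stack:
            self.redo_stack.append(copy.deepcopy(self.board))
            self.board = self.undo_stack.pop()
        return self.board

    def redo(self):
        if self.redo_stack:
            self.undo_stack.append(copy.deepcopy(self.board))
            self.board = self.redo_stack.pop()
        return self.board
```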

-

How to download and install 2048 mod apk?

-

If you want to download and install 2048 mod apk, you need to follow these simple steps:

-
    -
  1. Go to the website where you can download 2048 mod apk, such as Apkloli. Make sure you choose a reliable and safe source.
  2. Click on the download button and wait for the file to be downloaded on your device.
  3. Go to your device settings and enable the installation of apps from unknown sources. This is necessary because 2048 mod apk is not from the official app stores.
  4. Locate the downloaded file and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to be completed.
  5. Launch the game and enjoy playing 2048 mod apk with all its features.
-

Pros and cons of 2048 mod apk

-

Like any other app, 2048 mod apk has its pros and cons. Here are some of them:

-

Pros

- Unlimited money to spend on hints and boosters.
- No ads to interrupt your gameplay.
- Custom themes to change the look and sound of the game.
- Unlimited undo and redo of your moves.

Cons

- It is not available on the official app stores, so you have to download it from third-party websites.
- Fake or unsafe download sources can harm your device or steal your data.
- Its legality is unclear, and it may go against the rights of the original developer.

Conclusion

-

In conclusion, 2048 mod apk is a fun and addictive puzzle game that offers more features and benefits than the original 2048 game. It allows you to play the game with unlimited money, no ads, custom themes, undo and redo moves, and more. However, it also has some drawbacks, such as being unavailable on the official app stores, causing some technical issues, and violating some rules. Therefore, you should weigh the pros and cons before downloading and installing 2048 mod apk on your device. If you decide to try it, make sure you download it from a reliable and safe source, such as Apkloli. We hope this article has been helpful and informative for you. Thank you for reading!

-

Frequently Asked Questions

-

Here are some of the most common questions that people ask about 2048 mod apk:

-

Q: Is 2048 mod apk free?

-

A: Yes, 2048 mod apk is free to download and play. You do not need to pay any money to enjoy its features and benefits.

-

Q: Is 2048 mod apk safe?

-

A: It depends on where you download it from. Some websites may offer fake or malicious files that can harm your device or steal your data. Therefore, you should always download 2048 mod apk from a reputable and trusted source, such as Apkloli. You should also scan the file with an antivirus program before installing it.

-

Q: Is 2048 mod apk legal?

-

A: It is not clear whether 2048 mod apk is legal or not. It may depend on the laws and regulations of your country or region. Some countries may allow modifying or hacking apps for personal use, while others may prohibit or penalize such activities. You should also consider the rights and interests of the original game developer or publisher, who may not approve of modifying or distributing their app without their permission or consent. Therefore, you should use 2048 mod apk at your own risk and responsibility.

-

Q: How can I update 2048 mod apk?

-

A: Since 2048 mod apk is not from the official app stores, you cannot update it automatically or manually through them. You need to download the latest version of 2048 mod apk from the same website where you downloaded the previous version. You should also check the website regularly for any updates or news about 2048 mod apk.

-

Q: How can I uninstall 2048 mod apk?

-

A: If you want to uninstall 2048 mod apk from your device, you can follow these steps:

-
    -
  1. Go to your device settings and find the apps or applications section.
  2. Find and tap on 2048 mod apk from the list of installed apps.
  3. Tap on the uninstall button and confirm your action.
  4. Wait for the app to be uninstalled from your device.

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/1v1 Battle Challenge Your Friends and Enemies in Epic Duels.md b/spaces/1phancelerku/anime-remove-background/1v1 Battle Challenge Your Friends and Enemies in Epic Duels.md deleted file mode 100644 index 8f18db55cfbd8a8856baf9ad6a39ab7552ce1412..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/1v1 Battle Challenge Your Friends and Enemies in Epic Duels.md +++ /dev/null @@ -1,111 +0,0 @@ - -

What is a 1v1 Battle?

-

A 1v1 battle is a type of multiplayer video game that pits two players against each other in a virtual arena. The goal is to eliminate the opponent or score more points than them before the time runs out. 1v1 battles can be played in different genres, such as shooting, fighting, racing, or strategy games.

-

Some of the benefits of playing 1v1 battle games are:

-

1v1 battle


Download Zip ……… https://jinyurl.com/2uNQry



- They test your skills and reflexes against a single focused opponent.
- They let you showcase your creativity and style.
- They give you clear feedback on your performance and motivation to improve.

Some of the challenges of playing 1v1 battle games are:

- They can be frustrating and stressful when you keep losing.
- They can become repetitive and boring over time.
- They can be addictive and harmful if you neglect your time and health.

Some of the popular 1v1 battle games are:

GameDescriptionPlatform
FortniteA battle royale game that features building, editing, and shooting mechanics.PC, console, mobile
Call of DutyA first-person shooter game that features various weapons, maps, and modes.PC, console, mobile
Mortal KombatA fighting game that features brutal combat, fatalities, and characters.PC, console, mobile
Mario KartA racing game that features items, tracks, and characters from the Mario franchise.Console, mobile
ChessA strategy game that features pieces, moves, and rules based on medieval warfare.PC, mobile, board
-

How to Play 1v1 Battle Games?

-

The basic controls and mechanics of 1v1 battle games vary depending on the genre and the game. However, some common elements are:

- -

Some tips and tricks for winning 1v1 battles are:

- Practice your skills regularly.
- Study your opponent's behavior and adapt your strategy.
- Use your environment and resources wisely.
- Communicate and cooperate with your teammate when the mode allows it.

Some resources for learning and improving your 1v1 battle skills are:

- Videos or streams of professional or popular players.
- Articles or guides that explain the rules, strategies, and tips of different games and modes.
- Online communities or forums that discuss, share, or review 1v1 battle games.
- Tournaments or events that challenge your skills and reward your achievements.
- Feedback or advice from more experienced players or coaches.

How to Enjoy 1v1 Battle Games?

-

Playing 1v1 battle games can be fun and exciting, but it can also be stressful and boring if you don't know how to enjoy them. Here are some ways to make your 1v1 battle gaming experience more enjoyable:

- Have fun and enjoy the game instead of focusing only on winning.
- Try different game modes and features.
- Customize and personalize your game settings and appearance.
- Socialize and compete with other players.

Conclusion

-

1v1 battle games are a type of multiplayer video game that pits two players against each other in a virtual arena. They can be played in different genres, such as shooting, fighting, racing, or strategy games. They can test your skills and reflexes, allow you to showcase your creativity and style, and provide you with feedback and motivation. However, they can also be frustrating and stressful, repetitive and boring, and addictive and harmful. Therefore, you need to know how to play and enjoy them properly. You need to practice your skills regularly, study your opponent's behavior, use your environment and resources wisely, communicate and cooperate with your teammate, have fun and enjoy the game, try different game modes and features, customize and personalize your game settings and appearance, and socialize and compete with other players.

-

If you are interested in playing 1v1 battle games, you can check out some of the popular ones mentioned in this article. You can also watch videos or streams of professional or popular players, read articles or guides that explain the rules, strategies, and tips of different games and modes, join online communities or forums that discuss, share, or review 1v1 battle games and content, participate in tournaments or events that challenge your skills and reward your achievements, or ask for feedback or advice from other players or coaches who have more experience or knowledge.

-


-

Are you ready to enter the 1v1 battle arena? Let us know what you think about 1v1 battle games in the comments below!

-

FAQs

-

Here are some of the frequently asked questions about 1v1 battle games:

-
1. What are the best 1v1 battle games?
   The answer depends on your personal preference and taste. Some factors to consider when choosing a 1v1 battle game are the genre, the graphics, the gameplay, the difficulty level, the replay value, the popularity, the reviews, the price, and the availability.
2. How do I get better at 1v1 battle games?
   The best way is to practice regularly and learn from your mistakes. You can also watch videos or streams of professional or popular players, read articles or guides that explain the rules, strategies, and tips of different games and modes, join online communities or forums that discuss and review 1v1 battle games, participate in tournaments or events that challenge your skills and reward your achievements, and ask for feedback or advice from more experienced players or coaches.
3. How do I find opponents for 1v1 battle games?
   You can play online matches against random players matched to your skill level or region, invite your friends or contacts to play privately or publicly, join a clan, guild, or team whose members play the same game, or use a third-party platform or service that connects you with other players looking for 1v1 battles.
4. How do I deal with toxic players in 1v1 battle games?
   Toxic players behave in a rude, abusive, or unsportsmanlike manner: they may insult, harass, troll, cheat, or rage quit, or they may ruin the game for others by spamming, griefing, hacking, or teaming. To deal with them, you can ignore, mute, block, report, or simply avoid them.
5. How do I balance my time and health when playing 1v1 battle games?
   Playing 1v1 battle games can be fun and rewarding, but it can also become harmful if you neglect your time and health. Set a limit on how long and how often you play, take breaks and stretch regularly, drink water and eat healthy snacks, sleep well and rest enough, exercise and stay active, socialize and interact with other people, and pursue other hobbies and interests.

-
-
\ No newline at end of file diff --git a/spaces/A00001/bingothoo/src/app/page.tsx b/spaces/A00001/bingothoo/src/app/page.tsx deleted file mode 100644 index 0dff3431b098ce4fe282cc83fc87a93a28a43090..0000000000000000000000000000000000000000 --- a/spaces/A00001/bingothoo/src/app/page.tsx +++ /dev/null @@ -1,15 +0,0 @@ -import dynamic from 'next/dynamic' - -const DynamicComponentWithNoSSR = dynamic( - () => import('../components/chat'), - { ssr: false } -) - -export default function IndexPage() { - return ( - <> -
- - - ) -} diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/infer_pack/modules/F0Predictor/HarvestF0Predictor.py b/spaces/AI-Hobbyist/Hoyo-RVC/infer_pack/modules/F0Predictor/HarvestF0Predictor.py deleted file mode 100644 index 98d4e98b353008f81bde2c37e7da818763a992c9..0000000000000000000000000000000000000000 --- a/spaces/AI-Hobbyist/Hoyo-RVC/infer_pack/modules/F0Predictor/HarvestF0Predictor.py +++ /dev/null @@ -1,86 +0,0 @@ -from infer_pack.modules.F0Predictor.F0Predictor import F0Predictor -import pyworld -import numpy as np - - -class HarvestF0Predictor(F0Predictor): - def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100): - self.hop_length = hop_length - self.f0_min = f0_min - self.f0_max = f0_max - self.sampling_rate = sampling_rate - - def interpolate_f0(self, f0): - """ - 对F0进行插值处理 - """ - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] # 这里可能存在一个没有必要的拷贝 - last_value = data[i] - - return ip_data[:, 0], vuv_vector[:, 0] - - def resize_f0(self, x, target_len): - source = np.array(x) - source[source < 0.001] = np.nan - target = np.interp( - np.arange(0, len(source) * target_len, len(source)) / target_len, - np.arange(0, len(source)), - source, - ) - res = np.nan_to_num(target) - return res - - def compute_f0(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.harvest( - wav.astype(np.double), - fs=self.hop_length, - f0_ceil=self.f0_max, - f0_floor=self.f0_min, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.fs) - return self.interpolate_f0(self.resize_f0(f0, p_len))[0] - - def compute_f0_uv(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.harvest( - wav.astype(np.double), - fs=self.sampling_rate, - f0_floor=self.f0_min, - f0_ceil=self.f0_max, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - return self.interpolate_f0(self.resize_f0(f0, p_len)) diff --git a/spaces/AIConsultant/MusicGen/tests/adversarial/test_discriminators.py b/spaces/AIConsultant/MusicGen/tests/adversarial/test_discriminators.py deleted file mode 100644 index fad89a0ae4534dc7967b6ccda194b9fd1dedbffe..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/tests/adversarial/test_discriminators.py +++ /dev/null @@ -1,67 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -import random - -import torch - -from audiocraft.adversarial.discriminators import ( - MultiPeriodDiscriminator, - MultiScaleDiscriminator, - MultiScaleSTFTDiscriminator -) - - -class TestMultiPeriodDiscriminator: - - def test_mpd_discriminator(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - periods = [1, 2, 3] - mpd = MultiPeriodDiscriminator(periods=periods, in_channels=C) - logits, fmaps = mpd(t0) - - assert len(logits) == len(periods) - assert len(fmaps) == len(periods) - assert all([logit.shape[0] == N and len(logit.shape) == 4 for logit in logits]) - assert all([feature.shape[0] == N for fmap in fmaps for feature in fmap]) - - -class TestMultiScaleDiscriminator: - - def test_msd_discriminator(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - - scale_norms = ['weight_norm', 'weight_norm'] - msd = MultiScaleDiscriminator(scale_norms=scale_norms, in_channels=C) - logits, fmaps = msd(t0) - - assert len(logits) == len(scale_norms) - assert len(fmaps) == len(scale_norms) - assert all([logit.shape[0] == N and len(logit.shape) == 3 for logit in logits]) - assert all([feature.shape[0] == N for fmap in fmaps for feature in fmap]) - - -class TestMultiScaleStftDiscriminator: - - def test_msstftd_discriminator(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - - n_filters = 4 - n_ffts = [128, 256, 64] - hop_lengths = [32, 64, 16] - win_lengths = [128, 256, 64] - - msstftd = MultiScaleSTFTDiscriminator(filters=n_filters, n_ffts=n_ffts, hop_lengths=hop_lengths, - win_lengths=win_lengths, in_channels=C) - logits, fmaps = msstftd(t0) - - assert len(logits) == len(n_ffts) - assert len(fmaps) == len(n_ffts) - assert all([logit.shape[0] == N and len(logit.shape) == 4 for logit in logits]) - assert all([feature.shape[0] == N for fmap in fmaps for feature in fmap]) diff --git a/spaces/AIGC-Audio/AudioGPT/audio_to_text/captioning/utils/model_eval_diff.py b/spaces/AIGC-Audio/AudioGPT/audio_to_text/captioning/utils/model_eval_diff.py deleted file mode 100644 index 2c29ef8fde2451d3f84e842d0d6a72754f0d4603..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/audio_to_text/captioning/utils/model_eval_diff.py +++ /dev/null @@ -1,110 +0,0 @@ -import os -import sys -import copy -import pickle - -import numpy as np -import pandas as pd -import fire - -sys.path.append(os.getcwd()) - - -def coco_score(refs, pred, scorer): - if scorer.method() == "Bleu": - scores = np.array([ 0.0 for n in range(4) ]) - else: - scores = 0 - num_cap_per_audio = len(refs[list(refs.keys())[0]]) - - for i in range(num_cap_per_audio): - if i > 0: - for key in refs: - refs[key].insert(0, res[key][0]) - res = {key: [refs[key].pop(),] for key in refs} - score, _ = scorer.compute_score(refs, pred) - - if scorer.method() == "Bleu": - scores += np.array(score) - else: - scores += score - - score = scores / num_cap_per_audio - - for key in refs: - refs[key].insert(0, res[key][0]) - score_allref, _ = scorer.compute_score(refs, pred) - diff = score_allref - score - return diff - -def embedding_score(refs, pred, scorer): - - num_cap_per_audio = len(refs[list(refs.keys())[0]]) - scores = 0 - - for i in range(num_cap_per_audio): - res = {key: [refs[key][i],] for key in refs.keys() if len(refs[key]) == num_cap_per_audio} - refs_i = {key: np.concatenate([refs[key][:i], refs[key][i+1:]]) for key in refs.keys() if len(refs[key]) == num_cap_per_audio} - score, _ = scorer.compute_score(refs_i, pred) - - scores += score - - score = 
scores / num_cap_per_audio - - score_allref, _ = scorer.compute_score(refs, pred) - diff = score_allref - score - return diff - -def main(output_file, eval_caption_file, eval_embedding_file, output, zh=False): - output_df = pd.read_json(output_file) - output_df["key"] = output_df["filename"].apply(lambda x: os.path.splitext(os.path.basename(x))[0]) - pred = output_df.groupby("key")["tokens"].apply(list).to_dict() - - label_df = pd.read_json(eval_caption_file) - if zh: - refs = label_df.groupby("key")["tokens"].apply(list).to_dict() - else: - refs = label_df.groupby("key")["caption"].apply(list).to_dict() - - from pycocoevalcap.bleu.bleu import Bleu - from pycocoevalcap.cider.cider import Cider - from pycocoevalcap.rouge.rouge import Rouge - - scorer = Bleu(zh=zh) - bleu_scores = coco_score(copy.deepcopy(refs), pred, scorer) - scorer = Cider(zh=zh) - cider_score = coco_score(copy.deepcopy(refs), pred, scorer) - scorer = Rouge(zh=zh) - rouge_score = coco_score(copy.deepcopy(refs), pred, scorer) - - if not zh: - from pycocoevalcap.meteor.meteor import Meteor - scorer = Meteor() - meteor_score = coco_score(copy.deepcopy(refs), pred, scorer) - - from pycocoevalcap.spice.spice import Spice - scorer = Spice() - spice_score = coco_score(copy.deepcopy(refs), pred, scorer) - - # from audiocaptioneval.sentbert.sentencebert import SentenceBert - # scorer = SentenceBert(zh=zh) - # with open(eval_embedding_file, "rb") as f: - # ref_embeddings = pickle.load(f) - - # sent_bert = embedding_score(ref_embeddings, pred, scorer) - - with open(output, "w") as f: - f.write("Diff:\n") - for n in range(4): - f.write("BLEU-{}: {:6.3f}\n".format(n+1, bleu_scores[n])) - f.write("CIDEr: {:6.3f}\n".format(cider_score)) - f.write("ROUGE: {:6.3f}\n".format(rouge_score)) - if not zh: - f.write("Meteor: {:6.3f}\n".format(meteor_score)) - f.write("SPICE: {:6.3f}\n".format(spice_score)) - # f.write("SentenceBert: {:6.3f}\n".format(sent_bert)) - - - -if __name__ == "__main__": - fire.Fire(main) diff --git a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/lr_scheduler.py b/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/lr_scheduler.py deleted file mode 100644 index be39da9ca6dacc22bf3df9c7389bbb403a4a3ade..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/lr_scheduler.py +++ /dev/null @@ -1,98 +0,0 @@ -import numpy as np - - -class LambdaWarmUpCosineScheduler: - """ - note: use with a base_lr of 1.0 - """ - def __init__(self, warm_up_steps, lr_min, lr_max, lr_start, max_decay_steps, verbosity_interval=0): - self.lr_warm_up_steps = warm_up_steps - self.lr_start = lr_start - self.lr_min = lr_min - self.lr_max = lr_max - self.lr_max_decay_steps = max_decay_steps - self.last_lr = 0. - self.verbosity_interval = verbosity_interval - - def schedule(self, n, **kwargs): - if self.verbosity_interval > 0: - if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_lr}") - if n < self.lr_warm_up_steps: - lr = (self.lr_max - self.lr_start) / self.lr_warm_up_steps * n + self.lr_start - self.last_lr = lr - return lr - else: - t = (n - self.lr_warm_up_steps) / (self.lr_max_decay_steps - self.lr_warm_up_steps) - t = min(t, 1.0) - lr = self.lr_min + 0.5 * (self.lr_max - self.lr_min) * ( - 1 + np.cos(t * np.pi)) - self.last_lr = lr - return lr - - def __call__(self, n, **kwargs): - return self.schedule(n,**kwargs) - - -class LambdaWarmUpCosineScheduler2: - """ - supports repeated iterations, configurable via lists - note: use with a base_lr of 1.0. 
- """ - def __init__(self, warm_up_steps, f_min, f_max, f_start, cycle_lengths, verbosity_interval=0): - assert len(warm_up_steps) == len(f_min) == len(f_max) == len(f_start) == len(cycle_lengths) - self.lr_warm_up_steps = warm_up_steps - self.f_start = f_start - self.f_min = f_min - self.f_max = f_max - self.cycle_lengths = cycle_lengths - self.cum_cycles = np.cumsum([0] + list(self.cycle_lengths)) - self.last_f = 0. - self.verbosity_interval = verbosity_interval - - def find_in_interval(self, n): - interval = 0 - for cl in self.cum_cycles[1:]: - if n <= cl: - return interval - interval += 1 - - def schedule(self, n, **kwargs): - cycle = self.find_in_interval(n) - n = n - self.cum_cycles[cycle] - if self.verbosity_interval > 0: - if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_f}, " - f"current cycle {cycle}") - if n < self.lr_warm_up_steps[cycle]: - f = (self.f_max[cycle] - self.f_start[cycle]) / self.lr_warm_up_steps[cycle] * n + self.f_start[cycle] - self.last_f = f - return f - else: - t = (n - self.lr_warm_up_steps[cycle]) / (self.cycle_lengths[cycle] - self.lr_warm_up_steps[cycle]) - t = min(t, 1.0) - f = self.f_min[cycle] + 0.5 * (self.f_max[cycle] - self.f_min[cycle]) * ( - 1 + np.cos(t * np.pi)) - self.last_f = f - return f - - def __call__(self, n, **kwargs): - return self.schedule(n, **kwargs) - - -class LambdaLinearScheduler(LambdaWarmUpCosineScheduler2): - - def schedule(self, n, **kwargs): - cycle = self.find_in_interval(n) - n = n - self.cum_cycles[cycle] - if self.verbosity_interval > 0: - if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_f}, " - f"current cycle {cycle}") - - if n < self.lr_warm_up_steps[cycle]: - f = (self.f_max[cycle] - self.f_start[cycle]) / self.lr_warm_up_steps[cycle] * n + self.f_start[cycle] - self.last_f = f - return f - else: - f = self.f_min[cycle] + (self.f_max[cycle] - self.f_min[cycle]) * (self.cycle_lengths[cycle] - n) / (self.cycle_lengths[cycle]) - self.last_f = f - return f - diff --git a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/losses_audio/vggishish/model.py b/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/losses_audio/vggishish/model.py deleted file mode 100644 index d5069bad0d9311e6e2c082a63eca165f7a908675..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/losses_audio/vggishish/model.py +++ /dev/null @@ -1,77 +0,0 @@ -import torch -import torch.nn as nn - - -class VGGishish(nn.Module): - - def __init__(self, conv_layers, use_bn, num_classes): - ''' - Mostly from - https://pytorch.org/vision/0.8/_modules/torchvision/models/vgg.html - ''' - super().__init__() - layers = [] - in_channels = 1 - - # a list of channels with 'MP' (maxpool) from config - for v in conv_layers: - if v == 'MP': - layers += [nn.MaxPool2d(kernel_size=2, stride=2)] - else: - conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1, stride=1) - if use_bn: - layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)] - else: - layers += [conv2d, nn.ReLU(inplace=True)] - in_channels = v - self.features = nn.Sequential(*layers) - - self.avgpool = nn.AdaptiveAvgPool2d((5, 10)) - - self.flatten = nn.Flatten() - self.classifier = nn.Sequential( - nn.Linear(512 * 5 * 10, 4096), - nn.ReLU(True), - nn.Linear(4096, 4096), - nn.ReLU(True), - nn.Linear(4096, num_classes) - ) - - # weight init - self.reset_parameters() - - def forward(self, x): - # adding channel dim for conv2d (B, 1, F, T) <- - x = x.unsqueeze(1) - 
# backbone (B, 1, 5, 53) <- (B, 1, 80, 860) - x = self.features(x) - # adaptive avg pooling (B, 1, 5, 10) <- (B, 1, 5, 53) – if no MP is used as the end of VGG - x = self.avgpool(x) - # flatten - x = self.flatten(x) - # classify - x = self.classifier(x) - return x - - def reset_parameters(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - if m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.BatchNorm2d): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.Linear): - nn.init.normal_(m.weight, 0, 0.01) - nn.init.constant_(m.bias, 0) - - -if __name__ == '__main__': - num_classes = 309 - inputs = torch.rand(3, 80, 848) - conv_layers = [64, 64, 'MP', 128, 128, 'MP', 256, 256, 256, 'MP', 512, 512, 512, 'MP', 512, 512, 512] - # conv_layers = [64, 'MP', 128, 'MP', 256, 256, 'MP', 512, 512, 'MP'] - model = VGGishish(conv_layers, use_bn=False, num_classes=num_classes) - outputs = model(inputs) - print(outputs.shape) diff --git a/spaces/AIZero2HeroBootcamp/MultiPDF-QA-ChatGPT-Langchain/README.md b/spaces/AIZero2HeroBootcamp/MultiPDF-QA-ChatGPT-Langchain/README.md deleted file mode 100644 index 3cf50f86a7888fbf02aa4eb54ebdd698b5d9533f..0000000000000000000000000000000000000000 --- a/spaces/AIZero2HeroBootcamp/MultiPDF-QA-ChatGPT-Langchain/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: MultiPDF QA ChatGPT Langchain -emoji: 🏃 -colorFrom: pink -colorTo: indigo -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ASJMO/freegpt/g4f/Provider/__init__.py b/spaces/ASJMO/freegpt/g4f/Provider/__init__.py deleted file mode 100644 index 6ed51982755367e47c59199975be2c3539bfbee0..0000000000000000000000000000000000000000 --- a/spaces/ASJMO/freegpt/g4f/Provider/__init__.py +++ /dev/null @@ -1,36 +0,0 @@ -from . 
import Provider -from .Providers import ( - Aichat, - Ails, - AiService, - Bard, - Better, - Bing, - ChatFree, - ChatgptAi, - ChatgptLogin, - ChatgptLogin, - DeepAi, - Easychat, - Ezcht, - Fakeopen, - Forefront, - GetGpt, - Gravityengine, - H2o, - hteyun, - Liaobots, - Lockchat, - Mishalsgpt, - Phind, - Theb, - Vercel, - Weuseing, - Xiaor, - Yqcloud, - You, - Zeabur, - Wewordle -) - -Palm = Bard diff --git a/spaces/Accel/media-converter/styles.css b/spaces/Accel/media-converter/styles.css deleted file mode 100644 index 6386dbf86134ca6be4a9cea99e5cd7d927cd491f..0000000000000000000000000000000000000000 --- a/spaces/Accel/media-converter/styles.css +++ /dev/null @@ -1,9 +0,0 @@ -#outputtext { - color: green; -} -#acontrast { - width: 50%; -} -#button{ - width: 30% -} \ No newline at end of file diff --git a/spaces/Adapter/T2I-Adapter/ldm/util.py b/spaces/Adapter/T2I-Adapter/ldm/util.py deleted file mode 100644 index dc9e3c48b1924fbc1ac3ecdf7a2192e1a46d9228..0000000000000000000000000000000000000000 --- a/spaces/Adapter/T2I-Adapter/ldm/util.py +++ /dev/null @@ -1,200 +0,0 @@ -import importlib -import math - -import cv2 -import torch -import numpy as np - -import os -from safetensors.torch import load_file - -from inspect import isfunction -from PIL import Image, ImageDraw, ImageFont - - -def log_txt_as_img(wh, xc, size=10): - # wh a tuple of (width, height) - # xc a list of captions to plot - b = len(xc) - txts = list() - for bi in range(b): - txt = Image.new("RGB", wh, color="white") - draw = ImageDraw.Draw(txt) - font = ImageFont.truetype('assets/DejaVuSans.ttf', size=size) - nc = int(40 * (wh[0] / 256)) - lines = "\n".join(xc[bi][start:start + nc] for start in range(0, len(xc[bi]), nc)) - - try: - draw.text((0, 0), lines, fill="black", font=font) - except UnicodeEncodeError: - print("Cant encode string for logging. Skipping.") - - txt = np.array(txt).transpose(2, 0, 1) / 127.5 - 1.0 - txts.append(txt) - txts = np.stack(txts) - txts = torch.tensor(txts) - return txts - - -def ismap(x): - if not isinstance(x, torch.Tensor): - return False - return (len(x.shape) == 4) and (x.shape[1] > 3) - - -def isimage(x): - if not isinstance(x, torch.Tensor): - return False - return (len(x.shape) == 4) and (x.shape[1] == 3 or x.shape[1] == 1) - - -def exists(x): - return x is not None - - -def default(val, d): - if exists(val): - return val - return d() if isfunction(d) else d - - -def mean_flat(tensor): - """ - https://github.com/openai/guided-diffusion/blob/27c20a8fab9cb472df5d6bdd6c8d11c8f430b924/guided_diffusion/nn.py#L86 - Take the mean over all non-batch dimensions. 
- """ - return tensor.mean(dim=list(range(1, len(tensor.shape)))) - - -def count_params(model, verbose=False): - total_params = sum(p.numel() for p in model.parameters()) - if verbose: - print(f"{model.__class__.__name__} has {total_params * 1.e-6:.2f} M params.") - return total_params - - -def instantiate_from_config(config): - if not "target" in config: - if config == '__is_first_stage__': - return None - elif config == "__is_unconditional__": - return None - raise KeyError("Expected key `target` to instantiate.") - return get_obj_from_str(config["target"])(**config.get("params", dict())) - - -def get_obj_from_str(string, reload=False): - module, cls = string.rsplit(".", 1) - if reload: - module_imp = importlib.import_module(module) - importlib.reload(module_imp) - return getattr(importlib.import_module(module, package=None), cls) - - -checkpoint_dict_replacements = { - 'cond_stage_model.transformer.text_model.embeddings.': 'cond_stage_model.transformer.embeddings.', - 'cond_stage_model.transformer.text_model.encoder.': 'cond_stage_model.transformer.encoder.', - 'cond_stage_model.transformer.text_model.final_layer_norm.': 'cond_stage_model.transformer.final_layer_norm.', -} - - -def transform_checkpoint_dict_key(k): - for text, replacement in checkpoint_dict_replacements.items(): - if k.startswith(text): - k = replacement + k[len(text):] - - return k - - -def get_state_dict_from_checkpoint(pl_sd): - pl_sd = pl_sd.pop("state_dict", pl_sd) - pl_sd.pop("state_dict", None) - - sd = {} - for k, v in pl_sd.items(): - new_key = transform_checkpoint_dict_key(k) - - if new_key is not None: - sd[new_key] = v - - pl_sd.clear() - pl_sd.update(sd) - - return pl_sd - - -def read_state_dict(checkpoint_file, print_global_state=False): - _, extension = os.path.splitext(checkpoint_file) - if extension.lower() == ".safetensors": - pl_sd = load_file(checkpoint_file, device='cpu') - else: - pl_sd = torch.load(checkpoint_file, map_location='cpu') - - if print_global_state and "global_step" in pl_sd: - print(f"Global Step: {pl_sd['global_step']}") - - sd = get_state_dict_from_checkpoint(pl_sd) - return sd - - -def load_model_from_config(config, ckpt, vae_ckpt=None, verbose=False): - print(f"Loading model from {ckpt}") - sd = read_state_dict(ckpt) - model = instantiate_from_config(config.model) - m, u = model.load_state_dict(sd, strict=False) - if len(m) > 0 and verbose: - print("missing keys:") - print(m) - if len(u) > 0 and verbose: - print("unexpected keys:") - print(u) - - if 'anything' in ckpt.lower() and vae_ckpt is None: - vae_ckpt = 'models/anything-v4.0.vae.pt' - - if vae_ckpt is not None and vae_ckpt != 'None': - print(f"Loading vae model from {vae_ckpt}") - vae_sd = torch.load(vae_ckpt, map_location="cpu") - if "global_step" in vae_sd: - print(f"Global Step: {vae_sd['global_step']}") - sd = vae_sd["state_dict"] - m, u = model.first_stage_model.load_state_dict(sd, strict=False) - if len(m) > 0 and verbose: - print("missing keys:") - print(m) - if len(u) > 0 and verbose: - print("unexpected keys:") - print(u) - - model.cuda() - model.eval() - return model - - -def resize_numpy_image(image, max_resolution=512 * 512, resize_short_edge=None): - h, w = image.shape[:2] - if resize_short_edge is not None: - k = resize_short_edge / min(h, w) - else: - k = max_resolution / (h * w) - k = k**0.5 - h = int(np.round(h * k / 64)) * 64 - w = int(np.round(w * k / 64)) * 64 - image = cv2.resize(image, (w, h), interpolation=cv2.INTER_LANCZOS4) - return image - - -# make uc and prompt shapes match via padding for long 
prompts -null_cond = None - -def fix_cond_shapes(model, prompt_condition, uc): - if uc is None: - return prompt_condition, uc - global null_cond - if null_cond is None: - null_cond = model.get_learned_conditioning([""]) - while prompt_condition.shape[1] > uc.shape[1]: - uc = torch.cat((uc, null_cond.repeat((uc.shape[0], 1, 1))), axis=1) - while prompt_condition.shape[1] < uc.shape[1]: - prompt_condition = torch.cat((prompt_condition, null_cond.repeat((prompt_condition.shape[0], 1, 1))), axis=1) - return prompt_condition, uc diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/ball/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/ball/Factory.js deleted file mode 100644 index 7f85ebb1283bc434998ea7ce8a49d42b2623d67a..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/ball/Factory.js +++ /dev/null @@ -1,13 +0,0 @@ -import Ball from './Ball.js'; -import ObjectFactory from '../ObjectFactory.js'; -import SetValue from '../../../plugins/utils/object/SetValue.js'; - -ObjectFactory.register('ball', function (config) { - var gameObject = new Ball(this.scene, config); - this.scene.add.existing(gameObject); - return gameObject; -}); - -SetValue(window, 'RexPlugins.Spinner.Ball', Ball); - -export default Ball; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sides/childbehaviors/Fade.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sides/childbehaviors/Fade.js deleted file mode 100644 index 32566fc8ce0f8cc950e24ac06771041c0d56d781..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sides/childbehaviors/Fade.js +++ /dev/null @@ -1,36 +0,0 @@ -import IndexOf from '../../../../plugins/utils/object/IndexOf.js'; -import { WaitComplete } from '../../utils/WaitEvent.js'; - -export default { - fadeChild(child, duration, alpha) { - var key; - if (typeof (child) === 'string') { - key = child; - child = this.sizerChildren[key]; - } else { - key = IndexOf(this.sizerChildren, child); - } - if (duration === undefined) { - duration = 500; - } - if (alpha === undefined) { - alpha = (this.currentChildKey === key) ? 1 : 0; - } - - child.fadeIn(duration, { start: child.alpha, end: alpha }); - return this; - }, - - fadeChildPromise(child, duration, alpha) { - if (typeof (child) === 'string') { - child = this.sizerChildren[key]; - } - this.fadeChild(child, duration, alpha); - - if (child._fade) { - return WaitComplete(child._fade); - } else { - return Promise.resolve(); - } - } -} \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/instruct_pix2pix/README_sdxl.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/instruct_pix2pix/README_sdxl.md deleted file mode 100644 index 8e3e6c881235dc20436966b7f59c773eadda297b..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/instruct_pix2pix/README_sdxl.md +++ /dev/null @@ -1,148 +0,0 @@ -# InstructPix2Pix SDXL training example - -***This is based on the original InstructPix2Pix training example.*** - -[Stable Diffusion XL](https://huggingface.co/papers/2307.01952) (or SDXL) is the latest image generation model that is tailored towards more photorealistic outputs with more detailed imagery and composition compared to previous SD models. 
It leverages a three times larger UNet backbone. The increase of model parameters is mainly due to more attention blocks and a larger cross-attention context as SDXL uses a second text encoder. - -The `train_instruct_pix2pix_xl.py` script shows how to implement the training procedure and adapt it for Stable Diffusion XL. - -***Disclaimer: Even though `train_instruct_pix2pix_xl.py` implements the InstructPix2Pix -training procedure while being faithful to the [original implementation](https://github.com/timothybrooks/instruct-pix2pix) we have only tested it on a [small-scale dataset](https://huggingface.co/datasets/fusing/instructpix2pix-1000-samples). This can impact the end results. For better results, we recommend longer training runs with a larger dataset. [Here](https://huggingface.co/datasets/timbrooks/instructpix2pix-clip-filtered) you can find a large dataset for InstructPix2Pix training.*** - -## Running locally with PyTorch - -### Installing the dependencies - -Refer to the original InstructPix2Pix training example for installing the dependencies. - -You will also need to get access of SDXL by filling the [form](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0). - -### Toy example - -As mentioned before, we'll use a [small toy dataset](https://huggingface.co/datasets/fusing/instructpix2pix-1000-samples) for training. The dataset -is a smaller version of the [original dataset](https://huggingface.co/datasets/timbrooks/instructpix2pix-clip-filtered) used in the InstructPix2Pix paper. - -Configure environment variables such as the dataset identifier and the Stable Diffusion -checkpoint: - -```bash -export MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0" -export DATASET_ID="fusing/instructpix2pix-1000-samples" -``` - -Now, we can launch training: - -```bash -python train_instruct_pix2pix_xl.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --dataset_name=$DATASET_ID \ - --enable_xformers_memory_efficient_attention \ - --resolution=256 --random_flip \ - --train_batch_size=4 --gradient_accumulation_steps=4 --gradient_checkpointing \ - --max_train_steps=15000 \ - --checkpointing_steps=5000 --checkpoints_total_limit=1 \ - --learning_rate=5e-05 --max_grad_norm=1 --lr_warmup_steps=0 \ - --conditioning_dropout_prob=0.05 \ - --seed=42 -``` - -Additionally, we support performing validation inference to monitor training progress -with Weights and Biases. You can enable this feature with `report_to="wandb"`: - -```bash -python train_instruct_pix2pix_xl.py \ - --pretrained_model_name_or_path=stabilityai/stable-diffusion-xl-base-1.0 \ - --dataset_name=$DATASET_ID \ - --use_ema \ - --enable_xformers_memory_efficient_attention \ - --resolution=512 --random_flip \ - --train_batch_size=4 --gradient_accumulation_steps=4 --gradient_checkpointing \ - --max_train_steps=15000 \ - --checkpointing_steps=5000 --checkpoints_total_limit=1 \ - --learning_rate=5e-05 --lr_warmup_steps=0 \ - --conditioning_dropout_prob=0.05 \ - --seed=42 \ - --val_image_url_or_path="https://datasets-server.huggingface.co/assets/fusing/instructpix2pix-1000-samples/--/fusing--instructpix2pix-1000-samples/train/23/input_image/image.jpg" \ - --validation_prompt="make it in japan" \ - --report_to=wandb - ``` - - We recommend this type of validation as it can be useful for model debugging. Note that you need `wandb` installed to use this. You can install `wandb` by running `pip install wandb`. 
- - [Here](https://wandb.ai/sayakpaul/instruct-pix2pix/runs/ctr3kovq), you can find an example training run that includes some validation samples and the training hyperparameters. - - ***Note: In the original paper, the authors observed that even when the model is trained with an image resolution of 256x256, it generalizes well to bigger resolutions such as 512x512. This is likely because of the larger dataset they used during training.*** - - ## Training with multiple GPUs - -`accelerate` allows for seamless multi-GPU training. Follow the instructions [here](https://huggingface.co/docs/accelerate/basic_tutorials/launch) -for running distributed training with `accelerate`. Here is an example command: - -```bash -accelerate launch --mixed_precision="fp16" --multi_gpu train_instruct_pix2pix.py \ - --pretrained_model_name_or_path=stabilityai/stable-diffusion-xl-base-1.0 \ - --dataset_name=$DATASET_ID \ - --use_ema \ - --enable_xformers_memory_efficient_attention \ - --resolution=512 --random_flip \ - --train_batch_size=4 --gradient_accumulation_steps=4 --gradient_checkpointing \ - --max_train_steps=15000 \ - --checkpointing_steps=5000 --checkpoints_total_limit=1 \ - --learning_rate=5e-05 --lr_warmup_steps=0 \ - --conditioning_dropout_prob=0.05 \ - --seed=42 \ - --val_image_url_or_path="https://datasets-server.huggingface.co/assets/fusing/instructpix2pix-1000-samples/--/fusing--instructpix2pix-1000-samples/train/23/input_image/image.jpg" \ - --validation_prompt="make it in japan" \ - --report_to=wandb -``` - - ## Inference - - Once training is complete, we can perform inference: - - ```python -import PIL -import requests -import torch -from diffusers import StableDiffusionXLInstructPix2PixPipeline - -model_id = "your_model_id" # <- replace this -pipe = StableDiffusionXLInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") -generator = torch.Generator("cuda").manual_seed(0) - -url = "https://datasets-server.huggingface.co/assets/fusing/instructpix2pix-1000-samples/--/fusing--instructpix2pix-1000-samples/train/23/input_image/image.jpg" - - -def download_image(url): - image = PIL.Image.open(requests.get(url, stream=True).raw) - image = PIL.ImageOps.exif_transpose(image) - image = image.convert("RGB") - return image - -image = download_image(url) -prompt = "make it Japan" -num_inference_steps = 20 -image_guidance_scale = 1.5 -guidance_scale = 10 - -edited_image = pipe(prompt, - image=image, - num_inference_steps=num_inference_steps, - image_guidance_scale=image_guidance_scale, - guidance_scale=guidance_scale, - generator=generator, -).images[0] -edited_image.save("edited_image.png") -``` - -We encourage you to play with the following three parameters to control -speed and quality during performance: - -* `num_inference_steps` -* `image_guidance_scale` -* `guidance_scale` - -Particularly, `image_guidance_scale` and `guidance_scale` can have a profound impact -on the generated ("edited") image (see [here](https://twitter.com/RisingSayak/status/1628392199196151808?s=20) for an example). - -If you're looking for some interesting ways to use the InstructPix2Pix training methodology, we welcome you to check out this blog post: [Instruction-tuning Stable Diffusion with InstructPix2Pix](https://huggingface.co/blog/instruction-tuning-sd). 
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/testing_utils.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/testing_utils.py deleted file mode 100644 index 3976be0fd7d5366210c61d5e3f949a864dff2eb2..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/testing_utils.py +++ /dev/null @@ -1,684 +0,0 @@ -import inspect -import io -import logging -import multiprocessing -import os -import random -import re -import struct -import tempfile -import unittest -import urllib.parse -from contextlib import contextmanager -from distutils.util import strtobool -from io import BytesIO, StringIO -from pathlib import Path -from typing import List, Optional, Union - -import numpy as np -import PIL.Image -import PIL.ImageOps -import requests -from packaging import version - -from .import_utils import ( - BACKENDS_MAPPING, - is_compel_available, - is_flax_available, - is_note_seq_available, - is_onnx_available, - is_opencv_available, - is_torch_available, - is_torch_version, - is_torchsde_available, -) -from .logging import get_logger - - -global_rng = random.Random() - -logger = get_logger(__name__) - -if is_torch_available(): - import torch - - if "DIFFUSERS_TEST_DEVICE" in os.environ: - torch_device = os.environ["DIFFUSERS_TEST_DEVICE"] - - available_backends = ["cuda", "cpu", "mps"] - if torch_device not in available_backends: - raise ValueError( - f"unknown torch backend for diffusers tests: {torch_device}. Available backends are:" - f" {available_backends}" - ) - logger.info(f"torch_device overrode to {torch_device}") - else: - torch_device = "cuda" if torch.cuda.is_available() else "cpu" - is_torch_higher_equal_than_1_12 = version.parse( - version.parse(torch.__version__).base_version - ) >= version.parse("1.12") - - if is_torch_higher_equal_than_1_12: - # Some builds of torch 1.12 don't have the mps backend registered. See #892 for more details - mps_backend_registered = hasattr(torch.backends, "mps") - torch_device = "mps" if (mps_backend_registered and torch.backends.mps.is_available()) else torch_device - - -def torch_all_close(a, b, *args, **kwargs): - if not is_torch_available(): - raise ValueError("PyTorch needs to be installed to use this function.") - if not torch.allclose(a, b, *args, **kwargs): - assert False, f"Max diff is absolute {(a - b).abs().max()}. Diff tensor is {(a - b).abs()}." - return True - - -def print_tensor_test(tensor, filename="test_corrections.txt", expected_tensor_name="expected_slice"): - test_name = os.environ.get("PYTEST_CURRENT_TEST") - if not torch.is_tensor(tensor): - tensor = torch.from_numpy(tensor) - - tensor_str = str(tensor.detach().cpu().flatten().to(torch.float32)).replace("\n", "") - # format is usually: - # expected_slice = np.array([-0.5713, -0.3018, -0.9814, 0.04663, -0.879, 0.76, -1.734, 0.1044, 1.161]) - output_str = tensor_str.replace("tensor", f"{expected_tensor_name} = np.array") - test_file, test_class, test_fn = test_name.split("::") - test_fn = test_fn.split()[0] - with open(filename, "a") as f: - print(";".join([test_file, test_class, test_fn, output_str]), file=f) - - -def get_tests_dir(append_path=None): - """ - Args: - append_path: optional path to append to the tests dir path - Return: - The full path to the `tests` dir, so that the tests can be invoked from anywhere. Optionally `append_path` is - joined after the `tests` dir the former is provided. 
- """ - # this function caller's __file__ - caller__file__ = inspect.stack()[1][1] - tests_dir = os.path.abspath(os.path.dirname(caller__file__)) - - while not tests_dir.endswith("tests"): - tests_dir = os.path.dirname(tests_dir) - - if append_path: - return os.path.join(tests_dir, append_path) - else: - return tests_dir - - -def parse_flag_from_env(key, default=False): - try: - value = os.environ[key] - except KeyError: - # KEY isn't set, default to `default`. - _value = default - else: - # KEY is set, convert it to True or False. - try: - _value = strtobool(value) - except ValueError: - # More values are supported, but let's keep the message simple. - raise ValueError(f"If set, {key} must be yes or no.") - return _value - - -_run_slow_tests = parse_flag_from_env("RUN_SLOW", default=False) -_run_nightly_tests = parse_flag_from_env("RUN_NIGHTLY", default=False) - - -def floats_tensor(shape, scale=1.0, rng=None, name=None): - """Creates a random float32 tensor""" - if rng is None: - rng = global_rng - - total_dims = 1 - for dim in shape: - total_dims *= dim - - values = [] - for _ in range(total_dims): - values.append(rng.random() * scale) - - return torch.tensor(data=values, dtype=torch.float).view(shape).contiguous() - - -def slow(test_case): - """ - Decorator marking a test as slow. - - Slow tests are skipped by default. Set the RUN_SLOW environment variable to a truthy value to run them. - - """ - return unittest.skipUnless(_run_slow_tests, "test is slow")(test_case) - - -def nightly(test_case): - """ - Decorator marking a test that runs nightly in the diffusers CI. - - Slow tests are skipped by default. Set the RUN_NIGHTLY environment variable to a truthy value to run them. - - """ - return unittest.skipUnless(_run_nightly_tests, "test is nightly")(test_case) - - -def require_torch(test_case): - """ - Decorator marking a test that requires PyTorch. These tests are skipped when PyTorch isn't installed. - """ - return unittest.skipUnless(is_torch_available(), "test requires PyTorch")(test_case) - - -def require_torch_2(test_case): - """ - Decorator marking a test that requires PyTorch 2. These tests are skipped when it isn't installed. - """ - return unittest.skipUnless(is_torch_available() and is_torch_version(">=", "2.0.0"), "test requires PyTorch 2")( - test_case - ) - - -def require_torch_gpu(test_case): - """Decorator marking a test that requires CUDA and PyTorch.""" - return unittest.skipUnless(is_torch_available() and torch_device == "cuda", "test requires PyTorch+CUDA")( - test_case - ) - - -def skip_mps(test_case): - """Decorator marking a test to skip if torch_device is 'mps'""" - return unittest.skipUnless(torch_device != "mps", "test requires non 'mps' device")(test_case) - - -def require_flax(test_case): - """ - Decorator marking a test that requires JAX & Flax. These tests are skipped when one / both are not installed - """ - return unittest.skipUnless(is_flax_available(), "test requires JAX & Flax")(test_case) - - -def require_compel(test_case): - """ - Decorator marking a test that requires compel: https://github.com/damian0815/compel. These tests are skipped when - the library is not installed. - """ - return unittest.skipUnless(is_compel_available(), "test requires compel")(test_case) - - -def require_onnxruntime(test_case): - """ - Decorator marking a test that requires onnxruntime. These tests are skipped when onnxruntime isn't installed. 
- """ - return unittest.skipUnless(is_onnx_available(), "test requires onnxruntime")(test_case) - - -def require_note_seq(test_case): - """ - Decorator marking a test that requires note_seq. These tests are skipped when note_seq isn't installed. - """ - return unittest.skipUnless(is_note_seq_available(), "test requires note_seq")(test_case) - - -def require_torchsde(test_case): - """ - Decorator marking a test that requires torchsde. These tests are skipped when torchsde isn't installed. - """ - return unittest.skipUnless(is_torchsde_available(), "test requires torchsde")(test_case) - - -def load_numpy(arry: Union[str, np.ndarray], local_path: Optional[str] = None) -> np.ndarray: - if isinstance(arry, str): - # local_path = "/home/patrick_huggingface_co/" - if local_path is not None: - # local_path can be passed to correct images of tests - return os.path.join(local_path, "/".join([arry.split("/")[-5], arry.split("/")[-2], arry.split("/")[-1]])) - elif arry.startswith("http://") or arry.startswith("https://"): - response = requests.get(arry) - response.raise_for_status() - arry = np.load(BytesIO(response.content)) - elif os.path.isfile(arry): - arry = np.load(arry) - else: - raise ValueError( - f"Incorrect path or url, URLs must start with `http://` or `https://`, and {arry} is not a valid path" - ) - elif isinstance(arry, np.ndarray): - pass - else: - raise ValueError( - "Incorrect format used for numpy ndarray. Should be an url linking to an image, a local path, or a" - " ndarray." - ) - - return arry - - -def load_pt(url: str): - response = requests.get(url) - response.raise_for_status() - arry = torch.load(BytesIO(response.content)) - return arry - - -def load_image(image: Union[str, PIL.Image.Image]) -> PIL.Image.Image: - """ - Loads `image` to a PIL Image. - - Args: - image (`str` or `PIL.Image.Image`): - The image to convert to the PIL Image format. - Returns: - `PIL.Image.Image`: - A PIL Image. - """ - if isinstance(image, str): - if image.startswith("http://") or image.startswith("https://"): - image = PIL.Image.open(requests.get(image, stream=True).raw) - elif os.path.isfile(image): - image = PIL.Image.open(image) - else: - raise ValueError( - f"Incorrect path or url, URLs must start with `http://` or `https://`, and {image} is not a valid path" - ) - elif isinstance(image, PIL.Image.Image): - image = image - else: - raise ValueError( - "Incorrect format used for image. Should be an url linking to an image, a local path, or a PIL image." - ) - image = PIL.ImageOps.exif_transpose(image) - image = image.convert("RGB") - return image - - -def preprocess_image(image: PIL.Image, batch_size: int): - w, h = image.size - w, h = (x - x % 8 for x in (w, h)) # resize to integer multiple of 8 - image = image.resize((w, h), resample=PIL.Image.LANCZOS) - image = np.array(image).astype(np.float32) / 255.0 - image = np.vstack([image[None].transpose(0, 3, 1, 2)] * batch_size) - image = torch.from_numpy(image) - return 2.0 * image - 1.0 - - -def export_to_gif(image: List[PIL.Image.Image], output_gif_path: str = None) -> str: - if output_gif_path is None: - output_gif_path = tempfile.NamedTemporaryFile(suffix=".gif").name - - image[0].save( - output_gif_path, - save_all=True, - append_images=image[1:], - optimize=False, - duration=100, - loop=0, - ) - return output_gif_path - - -@contextmanager -def buffered_writer(raw_f): - f = io.BufferedWriter(raw_f) - yield f - f.flush() - - -def export_to_ply(mesh, output_ply_path: str = None): - """ - Write a PLY file for a mesh. 
- """ - if output_ply_path is None: - output_ply_path = tempfile.NamedTemporaryFile(suffix=".ply").name - - coords = mesh.verts.detach().cpu().numpy() - faces = mesh.faces.cpu().numpy() - rgb = np.stack([mesh.vertex_channels[x].detach().cpu().numpy() for x in "RGB"], axis=1) - - with buffered_writer(open(output_ply_path, "wb")) as f: - f.write(b"ply\n") - f.write(b"format binary_little_endian 1.0\n") - f.write(bytes(f"element vertex {len(coords)}\n", "ascii")) - f.write(b"property float x\n") - f.write(b"property float y\n") - f.write(b"property float z\n") - if rgb is not None: - f.write(b"property uchar red\n") - f.write(b"property uchar green\n") - f.write(b"property uchar blue\n") - if faces is not None: - f.write(bytes(f"element face {len(faces)}\n", "ascii")) - f.write(b"property list uchar int vertex_index\n") - f.write(b"end_header\n") - - if rgb is not None: - rgb = (rgb * 255.499).round().astype(int) - vertices = [ - (*coord, *rgb) - for coord, rgb in zip( - coords.tolist(), - rgb.tolist(), - ) - ] - format = struct.Struct("<3f3B") - for item in vertices: - f.write(format.pack(*item)) - else: - format = struct.Struct("<3f") - for vertex in coords.tolist(): - f.write(format.pack(*vertex)) - - if faces is not None: - format = struct.Struct(" str: - if is_opencv_available(): - import cv2 - else: - raise ImportError(BACKENDS_MAPPING["opencv"][1].format("export_to_video")) - if output_video_path is None: - output_video_path = tempfile.NamedTemporaryFile(suffix=".mp4").name - - fourcc = cv2.VideoWriter_fourcc(*"mp4v") - h, w, c = video_frames[0].shape - video_writer = cv2.VideoWriter(output_video_path, fourcc, fps=8, frameSize=(w, h)) - for i in range(len(video_frames)): - img = cv2.cvtColor(video_frames[i], cv2.COLOR_RGB2BGR) - video_writer.write(img) - return output_video_path - - -def load_hf_numpy(path) -> np.ndarray: - if not path.startswith("http://") or path.startswith("https://"): - path = os.path.join( - "https://huggingface.co/datasets/fusing/diffusers-testing/resolve/main", urllib.parse.quote(path) - ) - - return load_numpy(path) - - -# --- pytest conf functions --- # - -# to avoid multiple invocation from tests/conftest.py and examples/conftest.py - make sure it's called only once -pytest_opt_registered = {} - - -def pytest_addoption_shared(parser): - """ - This function is to be called from `conftest.py` via `pytest_addoption` wrapper that has to be defined there. - - It allows loading both `conftest.py` files at once without causing a failure due to adding the same `pytest` - option. - - """ - option = "--make-reports" - if option not in pytest_opt_registered: - parser.addoption( - option, - action="store", - default=False, - help="generate report files. The value of this option is used as a prefix to report names", - ) - pytest_opt_registered[option] = 1 - - -def pytest_terminal_summary_main(tr, id): - """ - Generate multiple reports at the end of test suite run - each report goes into a dedicated file in the current - directory. The report files are prefixed with the test suite name. - - This function emulates --duration and -rA pytest arguments. - - This function is to be called from `conftest.py` via `pytest_terminal_summary` wrapper that has to be defined - there. - - Args: - - tr: `terminalreporter` passed from `conftest.py` - - id: unique id like `tests` or `examples` that will be incorporated into the final reports filenames - this is - needed as some jobs have multiple runs of pytest, so we can't have them overwrite each other. 
- - NB: this functions taps into a private _pytest API and while unlikely, it could break should - pytest do internal changes - also it calls default internal methods of terminalreporter which - can be hijacked by various `pytest-` plugins and interfere. - - """ - from _pytest.config import create_terminal_writer - - if not len(id): - id = "tests" - - config = tr.config - orig_writer = config.get_terminal_writer() - orig_tbstyle = config.option.tbstyle - orig_reportchars = tr.reportchars - - dir = "reports" - Path(dir).mkdir(parents=True, exist_ok=True) - report_files = { - k: f"{dir}/{id}_{k}.txt" - for k in [ - "durations", - "errors", - "failures_long", - "failures_short", - "failures_line", - "passes", - "stats", - "summary_short", - "warnings", - ] - } - - # custom durations report - # note: there is no need to call pytest --durations=XX to get this separate report - # adapted from https://github.com/pytest-dev/pytest/blob/897f151e/src/_pytest/runner.py#L66 - dlist = [] - for replist in tr.stats.values(): - for rep in replist: - if hasattr(rep, "duration"): - dlist.append(rep) - if dlist: - dlist.sort(key=lambda x: x.duration, reverse=True) - with open(report_files["durations"], "w") as f: - durations_min = 0.05 # sec - f.write("slowest durations\n") - for i, rep in enumerate(dlist): - if rep.duration < durations_min: - f.write(f"{len(dlist)-i} durations < {durations_min} secs were omitted") - break - f.write(f"{rep.duration:02.2f}s {rep.when:<8} {rep.nodeid}\n") - - def summary_failures_short(tr): - # expecting that the reports were --tb=long (default) so we chop them off here to the last frame - reports = tr.getreports("failed") - if not reports: - return - tr.write_sep("=", "FAILURES SHORT STACK") - for rep in reports: - msg = tr._getfailureheadline(rep) - tr.write_sep("_", msg, red=True, bold=True) - # chop off the optional leading extra frames, leaving only the last one - longrepr = re.sub(r".*_ _ _ (_ ){10,}_ _ ", "", rep.longreprtext, 0, re.M | re.S) - tr._tw.line(longrepr) - # note: not printing out any rep.sections to keep the report short - - # use ready-made report funcs, we are just hijacking the filehandle to log to a dedicated file each - # adapted from https://github.com/pytest-dev/pytest/blob/897f151e/src/_pytest/terminal.py#L814 - # note: some pytest plugins may interfere by hijacking the default `terminalreporter` (e.g. 
- # pytest-instafail does that) - - # report failures with line/short/long styles - config.option.tbstyle = "auto" # full tb - with open(report_files["failures_long"], "w") as f: - tr._tw = create_terminal_writer(config, f) - tr.summary_failures() - - # config.option.tbstyle = "short" # short tb - with open(report_files["failures_short"], "w") as f: - tr._tw = create_terminal_writer(config, f) - summary_failures_short(tr) - - config.option.tbstyle = "line" # one line per error - with open(report_files["failures_line"], "w") as f: - tr._tw = create_terminal_writer(config, f) - tr.summary_failures() - - with open(report_files["errors"], "w") as f: - tr._tw = create_terminal_writer(config, f) - tr.summary_errors() - - with open(report_files["warnings"], "w") as f: - tr._tw = create_terminal_writer(config, f) - tr.summary_warnings() # normal warnings - tr.summary_warnings() # final warnings - - tr.reportchars = "wPpsxXEf" # emulate -rA (used in summary_passes() and short_test_summary()) - with open(report_files["passes"], "w") as f: - tr._tw = create_terminal_writer(config, f) - tr.summary_passes() - - with open(report_files["summary_short"], "w") as f: - tr._tw = create_terminal_writer(config, f) - tr.short_test_summary() - - with open(report_files["stats"], "w") as f: - tr._tw = create_terminal_writer(config, f) - tr.summary_stats() - - # restore: - tr._tw = orig_writer - tr.reportchars = orig_reportchars - config.option.tbstyle = orig_tbstyle - - -# Taken from: https://github.com/huggingface/transformers/blob/3658488ff77ff8d45101293e749263acf437f4d5/src/transformers/testing_utils.py#L1787 -def run_test_in_subprocess(test_case, target_func, inputs=None, timeout=None): - """ - To run a test in a subprocess. In particular, this can avoid (GPU) memory issue. - - Args: - test_case (`unittest.TestCase`): - The test that will run `target_func`. - target_func (`Callable`): - The function implementing the actual testing logic. - inputs (`dict`, *optional*, defaults to `None`): - The inputs that will be passed to `target_func` through an (input) queue. - timeout (`int`, *optional*, defaults to `None`): - The timeout (in seconds) that will be passed to the input and output queues. If not specified, the env. - variable `PYTEST_TIMEOUT` will be checked. If still `None`, its value will be set to `600`. - """ - if timeout is None: - timeout = int(os.environ.get("PYTEST_TIMEOUT", 600)) - - start_methohd = "spawn" - ctx = multiprocessing.get_context(start_methohd) - - input_queue = ctx.Queue(1) - output_queue = ctx.JoinableQueue(1) - - # We can't send `unittest.TestCase` to the child, otherwise we get issues regarding pickle. - input_queue.put(inputs, timeout=timeout) - - process = ctx.Process(target=target_func, args=(input_queue, output_queue, timeout)) - process.start() - # Kill the child process if we can't get outputs from it in time: otherwise, the hanging subprocess prevents - # the test to exit properly. 
- try: - results = output_queue.get(timeout=timeout) - output_queue.task_done() - except Exception as e: - process.terminate() - test_case.fail(e) - process.join(timeout=timeout) - - if results["error"] is not None: - test_case.fail(f'{results["error"]}') - - -class CaptureLogger: - """ - Args: - Context manager to capture `logging` streams - logger: 'logging` logger object - Returns: - The captured output is available via `self.out` - Example: - ```python - >>> from diffusers import logging - >>> from diffusers.testing_utils import CaptureLogger - - >>> msg = "Testing 1, 2, 3" - >>> logging.set_verbosity_info() - >>> logger = logging.get_logger("diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.py") - >>> with CaptureLogger(logger) as cl: - ... logger.info(msg) - >>> assert cl.out, msg + "\n" - ``` - """ - - def __init__(self, logger): - self.logger = logger - self.io = StringIO() - self.sh = logging.StreamHandler(self.io) - self.out = "" - - def __enter__(self): - self.logger.addHandler(self.sh) - return self - - def __exit__(self, *exc): - self.logger.removeHandler(self.sh) - self.out = self.io.getvalue() - - def __repr__(self): - return f"captured: {self.out}\n" - - -def enable_full_determinism(): - """ - Helper function for reproducible behavior during distributed training. See - - https://pytorch.org/docs/stable/notes/randomness.html for pytorch - """ - # Enable PyTorch deterministic mode. This potentially requires either the environment - # variable 'CUDA_LAUNCH_BLOCKING' or 'CUBLAS_WORKSPACE_CONFIG' to be set, - # depending on the CUDA version, so we set them both here - os.environ["CUDA_LAUNCH_BLOCKING"] = "1" - os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":16:8" - torch.use_deterministic_algorithms(True) - - # Enable CUDNN deterministic mode - torch.backends.cudnn.deterministic = True - torch.backends.cudnn.benchmark = False - torch.backends.cuda.matmul.allow_tf32 = False - - -def disable_full_determinism(): - os.environ["CUDA_LAUNCH_BLOCKING"] = "0" - os.environ["CUBLAS_WORKSPACE_CONFIG"] = "" - torch.use_deterministic_algorithms(False) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/sparse_rcnn/README.md b/spaces/Andy1621/uniformer_image_detection/configs/sparse_rcnn/README.md deleted file mode 100644 index 60cc8a93ed4cc88a14bf9294d671674d032a63a8..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/sparse_rcnn/README.md +++ /dev/null @@ -1,28 +0,0 @@ -# Sparse R-CNN: End-to-End Object Detection with Learnable Proposals - -## Introduction - -[ALGORITHM] - -``` -@article{peize2020sparse, - title = {{SparseR-CNN}: End-to-End Object Detection with Learnable Proposals}, - author = {Peize Sun and Rufeng Zhang and Yi Jiang and Tao Kong and Chenfeng Xu and Wei Zhan and Masayoshi Tomizuka and Lei Li and Zehuan Yuan and Changhu Wang and Ping Luo}, - journal = {arXiv preprint arXiv:2011.12450}, - year = {2020} -} -``` - -## Results and Models - -| Model | Backbone | Style | Lr schd | Number of Proposals |Multi-Scale| RandomCrop | box AP | Config | Download | -|:------------:|:---------:|:-------:|:-------:|:-------: |:-------: |:---------:|:------:|:------:|:--------:| -| Sparse R-CNN | R-50-FPN | pytorch | 1x | 100 | False | False | 37.9 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/sparse_rcnn/sparse_rcnn_r50_fpn_1x_coco.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/sparse_rcnn/sparse_rcnn_r50_fpn_1x_coco/sparse_rcnn_r50_fpn_1x_coco_20201222_214453-dc79b137.pth) | 
[log](https://download.openmmlab.com/mmdetection/v2.0/sparse_rcnn/sparse_rcnn_r50_fpn_1x_coco/sparse_rcnn_r50_fpn_1x_coco_20201222_214453-dc79b137.log.json) | -| Sparse R-CNN | R-50-FPN | pytorch | 3x | 100 | True | False | 42.8 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/sparse_rcnn/sparse_rcnn_r50_fpn_mstrain_480-800_3x_coco.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/sparse_rcnn/sparse_rcnn_r50_fpn_mstrain_480-800_3x_coco/sparse_rcnn_r50_fpn_mstrain_480-800_3x_coco_20201218_154234-7bc5c054.pth) | [log](https://download.openmmlab.com/mmdetection/v2.0/sparse_rcnn/sparse_rcnn_r50_fpn_mstrain_480-800_3x_coco/sparse_rcnn_r50_fpn_mstrain_480-800_3x_coco_20201218_154234-7bc5c054.log.json) | -| Sparse R-CNN | R-50-FPN | pytorch | 3x | 300 | True | True | 45.0 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/sparse_rcnn/sparse_rcnn_r50_fpn_300_proposals_crop_mstrain_480-800_3x_coco.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/sparse_rcnn/sparse_rcnn_r50_fpn_300_proposals_crop_mstrain_480-800_3x_coco/sparse_rcnn_r50_fpn_300_proposals_crop_mstrain_480-800_3x_coco_20201223_024605-9fe92701.pth) | [log](https://download.openmmlab.com/mmdetection/v2.0/sparse_rcnn/sparse_rcnn_r50_fpn_300_proposals_crop_mstrain_480-800_3x_coco/sparse_rcnn_r50_fpn_300_proposals_crop_mstrain_480-800_3x_coco_20201223_024605-9fe92701.log.json) | -| Sparse R-CNN | R-101-FPN | pytorch | 3x | 100 | True | False | 44.2 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/sparse_rcnn/sparse_rcnn_r101_fpn_mstrain_480-800_3x_coco.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/sparse_rcnn/sparse_rcnn_r101_fpn_mstrain_480-800_3x_coco/sparse_rcnn_r101_fpn_mstrain_480-800_3x_coco_20201223_121552-6c46c9d6.pth) | [log](https://download.openmmlab.com/mmdetection/v2.0/sparse_rcnn/sparse_rcnn_r101_fpn_mstrain_480-800_3x_coco/sparse_rcnn_r101_fpn_mstrain_480-800_3x_coco_20201223_121552-6c46c9d6.log.json) | -| Sparse R-CNN | R-101-FPN | pytorch | 3x | 300 | True | True | 46.2 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/sparse_rcnn/sparse_rcnn_r101_fpn_300_proposals_crop_mstrain_480-800_3x_coco.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/sparse_rcnn/sparse_rcnn_r101_fpn_300_proposals_crop_mstrain_480-800_3x_coco/sparse_rcnn_r101_fpn_300_proposals_crop_mstrain_480-800_3x_coco_20201223_023452-c23c3564.pth) | [log](https://download.openmmlab.com/mmdetection/v2.0/sparse_rcnn/sparse_rcnn_r101_fpn_300_proposals_crop_mstrain_480-800_3x_coco/sparse_rcnn_r101_fpn_300_proposals_crop_mstrain_480-800_3x_coco_20201223_023452-c23c3564.log.json) | - -### Notes - -We observe about 0.3 AP noise especially when using ResNet-101 as the backbone. 
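
The configs and checkpoints listed in the table lend themselves to quick single-image inference. Below is a minimal sketch, assuming MMDetection 2.x's high-level `init_detector` / `inference_detector` API and that one config/checkpoint pair from the table has been downloaded locally; the file paths and `demo.jpg` are placeholders, not files shipped with this README.

```python
from mmdet.apis import init_detector, inference_detector

# Placeholder paths: substitute any config/checkpoint pair from the table above.
config_file = 'configs/sparse_rcnn/sparse_rcnn_r50_fpn_1x_coco.py'
checkpoint_file = 'sparse_rcnn_r50_fpn_1x_coco_20201222_214453-dc79b137.pth'

# Build the Sparse R-CNN detector and load the pretrained weights.
model = init_detector(config_file, checkpoint_file, device='cuda:0')

# Run inference on a single image (path is a placeholder).
result = inference_detector(model, 'demo.jpg')

# Draw boxes above a score threshold and save the visualization.
model.show_result('demo.jpg', result, score_thr=0.3, out_file='result.jpg')
```
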
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/core/anchor/builder.py b/spaces/Andy1621/uniformer_image_detection/mmdet/core/anchor/builder.py deleted file mode 100644 index d79b448ebca9f2b21d455046623172c48c5c3ef0..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/core/anchor/builder.py +++ /dev/null @@ -1,7 +0,0 @@ -from mmcv.utils import Registry, build_from_cfg - -ANCHOR_GENERATORS = Registry('Anchor generator') - - -def build_anchor_generator(cfg, default_args=None): - return build_from_cfg(cfg, ANCHOR_GENERATORS, default_args) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/dense_heads/reppoints_head.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/dense_heads/reppoints_head.py deleted file mode 100644 index 499cc4f71c968704a40ab2bb7a6b22dd079d82de..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/dense_heads/reppoints_head.py +++ /dev/null @@ -1,763 +0,0 @@ -import numpy as np -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, bias_init_with_prob, normal_init -from mmcv.ops import DeformConv2d - -from mmdet.core import (PointGenerator, build_assigner, build_sampler, - images_to_levels, multi_apply, multiclass_nms, unmap) -from ..builder import HEADS, build_loss -from .anchor_free_head import AnchorFreeHead - - -@HEADS.register_module() -class RepPointsHead(AnchorFreeHead): - """RepPoint head. - - Args: - point_feat_channels (int): Number of channels of points features. - gradient_mul (float): The multiplier to gradients from - points refinement and recognition. - point_strides (Iterable): points strides. - point_base_scale (int): bbox scale for assigning labels. - loss_cls (dict): Config of classification loss. - loss_bbox_init (dict): Config of initial points loss. - loss_bbox_refine (dict): Config of points loss in refinement. - use_grid_points (bool): If we use bounding box representation, the - reppoints is represented as grid points on the bounding box. - center_init (bool): Whether to use center point assignment. - transform_method (str): The methods to transform RepPoints to bbox. - """ # noqa: W605 - - def __init__(self, - num_classes, - in_channels, - point_feat_channels=256, - num_points=9, - gradient_mul=0.1, - point_strides=[8, 16, 32, 64, 128], - point_base_scale=4, - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox_init=dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=0.5), - loss_bbox_refine=dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - use_grid_points=False, - center_init=True, - transform_method='moment', - moment_mul=0.01, - **kwargs): - self.num_points = num_points - self.point_feat_channels = point_feat_channels - self.use_grid_points = use_grid_points - self.center_init = center_init - - # we use deform conv to extract points features - self.dcn_kernel = int(np.sqrt(num_points)) - self.dcn_pad = int((self.dcn_kernel - 1) / 2) - assert self.dcn_kernel * self.dcn_kernel == num_points, \ - 'The points number should be a square number.' - assert self.dcn_kernel % 2 == 1, \ - 'The points number should be an odd square number.' 
- dcn_base = np.arange(-self.dcn_pad, - self.dcn_pad + 1).astype(np.float64) - dcn_base_y = np.repeat(dcn_base, self.dcn_kernel) - dcn_base_x = np.tile(dcn_base, self.dcn_kernel) - dcn_base_offset = np.stack([dcn_base_y, dcn_base_x], axis=1).reshape( - (-1)) - self.dcn_base_offset = torch.tensor(dcn_base_offset).view(1, -1, 1, 1) - - super().__init__(num_classes, in_channels, loss_cls=loss_cls, **kwargs) - - self.gradient_mul = gradient_mul - self.point_base_scale = point_base_scale - self.point_strides = point_strides - self.point_generators = [PointGenerator() for _ in self.point_strides] - - self.sampling = loss_cls['type'] not in ['FocalLoss'] - if self.train_cfg: - self.init_assigner = build_assigner(self.train_cfg.init.assigner) - self.refine_assigner = build_assigner( - self.train_cfg.refine.assigner) - # use PseudoSampler when sampling is False - if self.sampling and hasattr(self.train_cfg, 'sampler'): - sampler_cfg = self.train_cfg.sampler - else: - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - self.transform_method = transform_method - if self.transform_method == 'moment': - self.moment_transfer = nn.Parameter( - data=torch.zeros(2), requires_grad=True) - self.moment_mul = moment_mul - - self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False) - if self.use_sigmoid_cls: - self.cls_out_channels = self.num_classes - else: - self.cls_out_channels = self.num_classes + 1 - self.loss_bbox_init = build_loss(loss_bbox_init) - self.loss_bbox_refine = build_loss(loss_bbox_refine) - - def _init_layers(self): - """Initialize layers of the head.""" - self.relu = nn.ReLU(inplace=True) - self.cls_convs = nn.ModuleList() - self.reg_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.reg_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - pts_out_dim = 4 if self.use_grid_points else 2 * self.num_points - self.reppoints_cls_conv = DeformConv2d(self.feat_channels, - self.point_feat_channels, - self.dcn_kernel, 1, - self.dcn_pad) - self.reppoints_cls_out = nn.Conv2d(self.point_feat_channels, - self.cls_out_channels, 1, 1, 0) - self.reppoints_pts_init_conv = nn.Conv2d(self.feat_channels, - self.point_feat_channels, 3, - 1, 1) - self.reppoints_pts_init_out = nn.Conv2d(self.point_feat_channels, - pts_out_dim, 1, 1, 0) - self.reppoints_pts_refine_conv = DeformConv2d(self.feat_channels, - self.point_feat_channels, - self.dcn_kernel, 1, - self.dcn_pad) - self.reppoints_pts_refine_out = nn.Conv2d(self.point_feat_channels, - pts_out_dim, 1, 1, 0) - - def init_weights(self): - """Initialize weights of the head.""" - for m in self.cls_convs: - normal_init(m.conv, std=0.01) - for m in self.reg_convs: - normal_init(m.conv, std=0.01) - bias_cls = bias_init_with_prob(0.01) - normal_init(self.reppoints_cls_conv, std=0.01) - normal_init(self.reppoints_cls_out, std=0.01, bias=bias_cls) - normal_init(self.reppoints_pts_init_conv, std=0.01) - normal_init(self.reppoints_pts_init_out, std=0.01) - normal_init(self.reppoints_pts_refine_conv, std=0.01) - normal_init(self.reppoints_pts_refine_out, std=0.01) - - def points2bbox(self, pts, y_first=True): - """Converting the points set into bounding box. 
- - :param pts: the input points sets (fields), each points - set (fields) is represented as 2n scalar. - :param y_first: if y_first=True, the point set is represented as - [y1, x1, y2, x2 ... yn, xn], otherwise the point set is - represented as [x1, y1, x2, y2 ... xn, yn]. - :return: each points set is converting to a bbox [x1, y1, x2, y2]. - """ - pts_reshape = pts.view(pts.shape[0], -1, 2, *pts.shape[2:]) - pts_y = pts_reshape[:, :, 0, ...] if y_first else pts_reshape[:, :, 1, - ...] - pts_x = pts_reshape[:, :, 1, ...] if y_first else pts_reshape[:, :, 0, - ...] - if self.transform_method == 'minmax': - bbox_left = pts_x.min(dim=1, keepdim=True)[0] - bbox_right = pts_x.max(dim=1, keepdim=True)[0] - bbox_up = pts_y.min(dim=1, keepdim=True)[0] - bbox_bottom = pts_y.max(dim=1, keepdim=True)[0] - bbox = torch.cat([bbox_left, bbox_up, bbox_right, bbox_bottom], - dim=1) - elif self.transform_method == 'partial_minmax': - pts_y = pts_y[:, :4, ...] - pts_x = pts_x[:, :4, ...] - bbox_left = pts_x.min(dim=1, keepdim=True)[0] - bbox_right = pts_x.max(dim=1, keepdim=True)[0] - bbox_up = pts_y.min(dim=1, keepdim=True)[0] - bbox_bottom = pts_y.max(dim=1, keepdim=True)[0] - bbox = torch.cat([bbox_left, bbox_up, bbox_right, bbox_bottom], - dim=1) - elif self.transform_method == 'moment': - pts_y_mean = pts_y.mean(dim=1, keepdim=True) - pts_x_mean = pts_x.mean(dim=1, keepdim=True) - pts_y_std = torch.std(pts_y - pts_y_mean, dim=1, keepdim=True) - pts_x_std = torch.std(pts_x - pts_x_mean, dim=1, keepdim=True) - moment_transfer = (self.moment_transfer * self.moment_mul) + ( - self.moment_transfer.detach() * (1 - self.moment_mul)) - moment_width_transfer = moment_transfer[0] - moment_height_transfer = moment_transfer[1] - half_width = pts_x_std * torch.exp(moment_width_transfer) - half_height = pts_y_std * torch.exp(moment_height_transfer) - bbox = torch.cat([ - pts_x_mean - half_width, pts_y_mean - half_height, - pts_x_mean + half_width, pts_y_mean + half_height - ], - dim=1) - else: - raise NotImplementedError - return bbox - - def gen_grid_from_reg(self, reg, previous_boxes): - """Base on the previous bboxes and regression values, we compute the - regressed bboxes and generate the grids on the bboxes. - - :param reg: the regression value to previous bboxes. - :param previous_boxes: previous bboxes. - :return: generate grids on the regressed bboxes. - """ - b, _, h, w = reg.shape - bxy = (previous_boxes[:, :2, ...] + previous_boxes[:, 2:, ...]) / 2. - bwh = (previous_boxes[:, 2:, ...] - - previous_boxes[:, :2, ...]).clamp(min=1e-6) - grid_topleft = bxy + bwh * reg[:, :2, ...] - 0.5 * bwh * torch.exp( - reg[:, 2:, ...]) - grid_wh = bwh * torch.exp(reg[:, 2:, ...]) - grid_left = grid_topleft[:, [0], ...] - grid_top = grid_topleft[:, [1], ...] - grid_width = grid_wh[:, [0], ...] - grid_height = grid_wh[:, [1], ...] 
- intervel = torch.linspace(0., 1., self.dcn_kernel).view( - 1, self.dcn_kernel, 1, 1).type_as(reg) - grid_x = grid_left + grid_width * intervel - grid_x = grid_x.unsqueeze(1).repeat(1, self.dcn_kernel, 1, 1, 1) - grid_x = grid_x.view(b, -1, h, w) - grid_y = grid_top + grid_height * intervel - grid_y = grid_y.unsqueeze(2).repeat(1, 1, self.dcn_kernel, 1, 1) - grid_y = grid_y.view(b, -1, h, w) - grid_yx = torch.stack([grid_y, grid_x], dim=2) - grid_yx = grid_yx.view(b, -1, h, w) - regressed_bbox = torch.cat([ - grid_left, grid_top, grid_left + grid_width, grid_top + grid_height - ], 1) - return grid_yx, regressed_bbox - - def forward(self, feats): - return multi_apply(self.forward_single, feats) - - def forward_single(self, x): - """Forward feature map of a single FPN level.""" - dcn_base_offset = self.dcn_base_offset.type_as(x) - # If we use center_init, the initial reppoints is from center points. - # If we use bounding bbox representation, the initial reppoints is - # from regular grid placed on a pre-defined bbox. - if self.use_grid_points or not self.center_init: - scale = self.point_base_scale / 2 - points_init = dcn_base_offset / dcn_base_offset.max() * scale - bbox_init = x.new_tensor([-scale, -scale, scale, - scale]).view(1, 4, 1, 1) - else: - points_init = 0 - cls_feat = x - pts_feat = x - for cls_conv in self.cls_convs: - cls_feat = cls_conv(cls_feat) - for reg_conv in self.reg_convs: - pts_feat = reg_conv(pts_feat) - # initialize reppoints - pts_out_init = self.reppoints_pts_init_out( - self.relu(self.reppoints_pts_init_conv(pts_feat))) - if self.use_grid_points: - pts_out_init, bbox_out_init = self.gen_grid_from_reg( - pts_out_init, bbox_init.detach()) - else: - pts_out_init = pts_out_init + points_init - # refine and classify reppoints - pts_out_init_grad_mul = (1 - self.gradient_mul) * pts_out_init.detach( - ) + self.gradient_mul * pts_out_init - dcn_offset = pts_out_init_grad_mul - dcn_base_offset - cls_out = self.reppoints_cls_out( - self.relu(self.reppoints_cls_conv(cls_feat, dcn_offset))) - pts_out_refine = self.reppoints_pts_refine_out( - self.relu(self.reppoints_pts_refine_conv(pts_feat, dcn_offset))) - if self.use_grid_points: - pts_out_refine, bbox_out_refine = self.gen_grid_from_reg( - pts_out_refine, bbox_out_init.detach()) - else: - pts_out_refine = pts_out_refine + pts_out_init.detach() - return cls_out, pts_out_init, pts_out_refine - - def get_points(self, featmap_sizes, img_metas, device): - """Get points according to feature map sizes. - - Args: - featmap_sizes (list[tuple]): Multi-level feature map sizes. - img_metas (list[dict]): Image meta info. 
- - Returns: - tuple: points of each image, valid flags of each image - """ - num_imgs = len(img_metas) - num_levels = len(featmap_sizes) - - # since feature map sizes of all images are the same, we only compute - # points center for one time - multi_level_points = [] - for i in range(num_levels): - points = self.point_generators[i].grid_points( - featmap_sizes[i], self.point_strides[i], device) - multi_level_points.append(points) - points_list = [[point.clone() for point in multi_level_points] - for _ in range(num_imgs)] - - # for each image, we compute valid flags of multi level grids - valid_flag_list = [] - for img_id, img_meta in enumerate(img_metas): - multi_level_flags = [] - for i in range(num_levels): - point_stride = self.point_strides[i] - feat_h, feat_w = featmap_sizes[i] - h, w = img_meta['pad_shape'][:2] - valid_feat_h = min(int(np.ceil(h / point_stride)), feat_h) - valid_feat_w = min(int(np.ceil(w / point_stride)), feat_w) - flags = self.point_generators[i].valid_flags( - (feat_h, feat_w), (valid_feat_h, valid_feat_w), device) - multi_level_flags.append(flags) - valid_flag_list.append(multi_level_flags) - - return points_list, valid_flag_list - - def centers_to_bboxes(self, point_list): - """Get bboxes according to center points. - - Only used in :class:`MaxIoUAssigner`. - """ - bbox_list = [] - for i_img, point in enumerate(point_list): - bbox = [] - for i_lvl in range(len(self.point_strides)): - scale = self.point_base_scale * self.point_strides[i_lvl] * 0.5 - bbox_shift = torch.Tensor([-scale, -scale, scale, - scale]).view(1, 4).type_as(point[0]) - bbox_center = torch.cat( - [point[i_lvl][:, :2], point[i_lvl][:, :2]], dim=1) - bbox.append(bbox_center + bbox_shift) - bbox_list.append(bbox) - return bbox_list - - def offset_to_pts(self, center_list, pred_list): - """Change from point offset to point coordinate.""" - pts_list = [] - for i_lvl in range(len(self.point_strides)): - pts_lvl = [] - for i_img in range(len(center_list)): - pts_center = center_list[i_img][i_lvl][:, :2].repeat( - 1, self.num_points) - pts_shift = pred_list[i_lvl][i_img] - yx_pts_shift = pts_shift.permute(1, 2, 0).view( - -1, 2 * self.num_points) - y_pts_shift = yx_pts_shift[..., 0::2] - x_pts_shift = yx_pts_shift[..., 1::2] - xy_pts_shift = torch.stack([x_pts_shift, y_pts_shift], -1) - xy_pts_shift = xy_pts_shift.view(*yx_pts_shift.shape[:-1], -1) - pts = xy_pts_shift * self.point_strides[i_lvl] + pts_center - pts_lvl.append(pts) - pts_lvl = torch.stack(pts_lvl, 0) - pts_list.append(pts_lvl) - return pts_list - - def _point_target_single(self, - flat_proposals, - valid_flags, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - label_channels=1, - stage='init', - unmap_outputs=True): - inside_flags = valid_flags - if not inside_flags.any(): - return (None, ) * 7 - # assign gt and sample proposals - proposals = flat_proposals[inside_flags, :] - - if stage == 'init': - assigner = self.init_assigner - pos_weight = self.train_cfg.init.pos_weight - else: - assigner = self.refine_assigner - pos_weight = self.train_cfg.refine.pos_weight - assign_result = assigner.assign(proposals, gt_bboxes, gt_bboxes_ignore, - None if self.sampling else gt_labels) - sampling_result = self.sampler.sample(assign_result, proposals, - gt_bboxes) - - num_valid_proposals = proposals.shape[0] - bbox_gt = proposals.new_zeros([num_valid_proposals, 4]) - pos_proposals = torch.zeros_like(proposals) - proposals_weights = proposals.new_zeros([num_valid_proposals, 4]) - labels = proposals.new_full((num_valid_proposals, ), - self.num_classes, 
- dtype=torch.long) - label_weights = proposals.new_zeros( - num_valid_proposals, dtype=torch.float) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - pos_gt_bboxes = sampling_result.pos_gt_bboxes - bbox_gt[pos_inds, :] = pos_gt_bboxes - pos_proposals[pos_inds, :] = proposals[pos_inds, :] - proposals_weights[pos_inds, :] = 1.0 - if gt_labels is None: - # Only rpn gives gt_labels as None - # Foreground is the first class - labels[pos_inds] = 0 - else: - labels[pos_inds] = gt_labels[ - sampling_result.pos_assigned_gt_inds] - if pos_weight <= 0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = pos_weight - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - - # map up to original set of proposals - if unmap_outputs: - num_total_proposals = flat_proposals.size(0) - labels = unmap(labels, num_total_proposals, inside_flags) - label_weights = unmap(label_weights, num_total_proposals, - inside_flags) - bbox_gt = unmap(bbox_gt, num_total_proposals, inside_flags) - pos_proposals = unmap(pos_proposals, num_total_proposals, - inside_flags) - proposals_weights = unmap(proposals_weights, num_total_proposals, - inside_flags) - - return (labels, label_weights, bbox_gt, pos_proposals, - proposals_weights, pos_inds, neg_inds) - - def get_targets(self, - proposals_list, - valid_flag_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - stage='init', - label_channels=1, - unmap_outputs=True): - """Compute corresponding GT box and classification targets for - proposals. - - Args: - proposals_list (list[list]): Multi level points/bboxes of each - image. - valid_flag_list (list[list]): Multi level valid flags of each - image. - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image. - img_metas (list[dict]): Meta info of each image. - gt_bboxes_ignore_list (list[Tensor]): Ground truth bboxes to be - ignored. - gt_bboxes_list (list[Tensor]): Ground truth labels of each box. - stage (str): `init` or `refine`. Generate target for init stage or - refine stage - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: - - labels_list (list[Tensor]): Labels of each level. - - label_weights_list (list[Tensor]): Label weights of each level. # noqa: E501 - - bbox_gt_list (list[Tensor]): Ground truth bbox of each level. - - proposal_list (list[Tensor]): Proposals(points/bboxes) of each level. # noqa: E501 - - proposal_weights_list (list[Tensor]): Proposal weights of each level. # noqa: E501 - - num_total_pos (int): Number of positive samples in all images. # noqa: E501 - - num_total_neg (int): Number of negative samples in all images. 
# noqa: E501 - """ - assert stage in ['init', 'refine'] - num_imgs = len(img_metas) - assert len(proposals_list) == len(valid_flag_list) == num_imgs - - # points number of multi levels - num_level_proposals = [points.size(0) for points in proposals_list[0]] - - # concat all level points and flags to a single tensor - for i in range(num_imgs): - assert len(proposals_list[i]) == len(valid_flag_list[i]) - proposals_list[i] = torch.cat(proposals_list[i]) - valid_flag_list[i] = torch.cat(valid_flag_list[i]) - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - (all_labels, all_label_weights, all_bbox_gt, all_proposals, - all_proposal_weights, pos_inds_list, neg_inds_list) = multi_apply( - self._point_target_single, - proposals_list, - valid_flag_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - stage=stage, - label_channels=label_channels, - unmap_outputs=unmap_outputs) - # no valid points - if any([labels is None for labels in all_labels]): - return None - # sampled points of all images - num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list]) - num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list]) - labels_list = images_to_levels(all_labels, num_level_proposals) - label_weights_list = images_to_levels(all_label_weights, - num_level_proposals) - bbox_gt_list = images_to_levels(all_bbox_gt, num_level_proposals) - proposals_list = images_to_levels(all_proposals, num_level_proposals) - proposal_weights_list = images_to_levels(all_proposal_weights, - num_level_proposals) - return (labels_list, label_weights_list, bbox_gt_list, proposals_list, - proposal_weights_list, num_total_pos, num_total_neg) - - def loss_single(self, cls_score, pts_pred_init, pts_pred_refine, labels, - label_weights, bbox_gt_init, bbox_weights_init, - bbox_gt_refine, bbox_weights_refine, stride, - num_total_samples_init, num_total_samples_refine): - # classification loss - labels = labels.reshape(-1) - label_weights = label_weights.reshape(-1) - cls_score = cls_score.permute(0, 2, 3, - 1).reshape(-1, self.cls_out_channels) - cls_score = cls_score.contiguous() - loss_cls = self.loss_cls( - cls_score, - labels, - label_weights, - avg_factor=num_total_samples_refine) - - # points loss - bbox_gt_init = bbox_gt_init.reshape(-1, 4) - bbox_weights_init = bbox_weights_init.reshape(-1, 4) - bbox_pred_init = self.points2bbox( - pts_pred_init.reshape(-1, 2 * self.num_points), y_first=False) - bbox_gt_refine = bbox_gt_refine.reshape(-1, 4) - bbox_weights_refine = bbox_weights_refine.reshape(-1, 4) - bbox_pred_refine = self.points2bbox( - pts_pred_refine.reshape(-1, 2 * self.num_points), y_first=False) - normalize_term = self.point_base_scale * stride - loss_pts_init = self.loss_bbox_init( - bbox_pred_init / normalize_term, - bbox_gt_init / normalize_term, - bbox_weights_init, - avg_factor=num_total_samples_init) - loss_pts_refine = self.loss_bbox_refine( - bbox_pred_refine / normalize_term, - bbox_gt_refine / normalize_term, - bbox_weights_refine, - avg_factor=num_total_samples_refine) - return loss_cls, loss_pts_init, loss_pts_refine - - def loss(self, - cls_scores, - pts_preds_init, - pts_preds_refine, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == len(self.point_generators) - device = cls_scores[0].device - 
label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - - # target for initial stage - center_list, valid_flag_list = self.get_points(featmap_sizes, - img_metas, device) - pts_coordinate_preds_init = self.offset_to_pts(center_list, - pts_preds_init) - if self.train_cfg.init.assigner['type'] == 'PointAssigner': - # Assign target for center list - candidate_list = center_list - else: - # transform center list to bbox list and - # assign target for bbox list - bbox_list = self.centers_to_bboxes(center_list) - candidate_list = bbox_list - cls_reg_targets_init = self.get_targets( - candidate_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - stage='init', - label_channels=label_channels) - (*_, bbox_gt_list_init, candidate_list_init, bbox_weights_list_init, - num_total_pos_init, num_total_neg_init) = cls_reg_targets_init - num_total_samples_init = ( - num_total_pos_init + - num_total_neg_init if self.sampling else num_total_pos_init) - - # target for refinement stage - center_list, valid_flag_list = self.get_points(featmap_sizes, - img_metas, device) - pts_coordinate_preds_refine = self.offset_to_pts( - center_list, pts_preds_refine) - bbox_list = [] - for i_img, center in enumerate(center_list): - bbox = [] - for i_lvl in range(len(pts_preds_refine)): - bbox_preds_init = self.points2bbox( - pts_preds_init[i_lvl].detach()) - bbox_shift = bbox_preds_init * self.point_strides[i_lvl] - bbox_center = torch.cat( - [center[i_lvl][:, :2], center[i_lvl][:, :2]], dim=1) - bbox.append(bbox_center + - bbox_shift[i_img].permute(1, 2, 0).reshape(-1, 4)) - bbox_list.append(bbox) - cls_reg_targets_refine = self.get_targets( - bbox_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - stage='refine', - label_channels=label_channels) - (labels_list, label_weights_list, bbox_gt_list_refine, - candidate_list_refine, bbox_weights_list_refine, num_total_pos_refine, - num_total_neg_refine) = cls_reg_targets_refine - num_total_samples_refine = ( - num_total_pos_refine + - num_total_neg_refine if self.sampling else num_total_pos_refine) - - # compute loss - losses_cls, losses_pts_init, losses_pts_refine = multi_apply( - self.loss_single, - cls_scores, - pts_coordinate_preds_init, - pts_coordinate_preds_refine, - labels_list, - label_weights_list, - bbox_gt_list_init, - bbox_weights_list_init, - bbox_gt_list_refine, - bbox_weights_list_refine, - self.point_strides, - num_total_samples_init=num_total_samples_init, - num_total_samples_refine=num_total_samples_refine) - loss_dict_all = { - 'loss_cls': losses_cls, - 'loss_pts_init': losses_pts_init, - 'loss_pts_refine': losses_pts_refine - } - return loss_dict_all - - def get_bboxes(self, - cls_scores, - pts_preds_init, - pts_preds_refine, - img_metas, - cfg=None, - rescale=False, - with_nms=True): - assert len(cls_scores) == len(pts_preds_refine) - device = cls_scores[0].device - bbox_preds_refine = [ - self.points2bbox(pts_pred_refine) - for pts_pred_refine in pts_preds_refine - ] - num_levels = len(cls_scores) - mlvl_points = [ - self.point_generators[i].grid_points(cls_scores[i].size()[-2:], - self.point_strides[i], device) - for i in range(num_levels) - ] - result_list = [] - for img_id in range(len(img_metas)): - cls_score_list = [ - cls_scores[i][img_id].detach() for i in range(num_levels) - ] - bbox_pred_list = [ - bbox_preds_refine[i][img_id].detach() - for i in range(num_levels) - ] - img_shape = 
img_metas[img_id]['img_shape'] - scale_factor = img_metas[img_id]['scale_factor'] - proposals = self._get_bboxes_single(cls_score_list, bbox_pred_list, - mlvl_points, img_shape, - scale_factor, cfg, rescale, - with_nms) - result_list.append(proposals) - return result_list - - def _get_bboxes_single(self, - cls_scores, - bbox_preds, - mlvl_points, - img_shape, - scale_factor, - cfg, - rescale=False, - with_nms=True): - cfg = self.test_cfg if cfg is None else cfg - assert len(cls_scores) == len(bbox_preds) == len(mlvl_points) - mlvl_bboxes = [] - mlvl_scores = [] - for i_lvl, (cls_score, bbox_pred, points) in enumerate( - zip(cls_scores, bbox_preds, mlvl_points)): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - cls_score = cls_score.permute(1, 2, - 0).reshape(-1, self.cls_out_channels) - if self.use_sigmoid_cls: - scores = cls_score.sigmoid() - else: - scores = cls_score.softmax(-1) - bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4) - nms_pre = cfg.get('nms_pre', -1) - if nms_pre > 0 and scores.shape[0] > nms_pre: - if self.use_sigmoid_cls: - max_scores, _ = scores.max(dim=1) - else: - # remind that we set FG labels to [0, num_class-1] - # since mmdet v2.0 - # BG cat_id: num_class - max_scores, _ = scores[:, :-1].max(dim=1) - _, topk_inds = max_scores.topk(nms_pre) - points = points[topk_inds, :] - bbox_pred = bbox_pred[topk_inds, :] - scores = scores[topk_inds, :] - bbox_pos_center = torch.cat([points[:, :2], points[:, :2]], dim=1) - bboxes = bbox_pred * self.point_strides[i_lvl] + bbox_pos_center - x1 = bboxes[:, 0].clamp(min=0, max=img_shape[1]) - y1 = bboxes[:, 1].clamp(min=0, max=img_shape[0]) - x2 = bboxes[:, 2].clamp(min=0, max=img_shape[1]) - y2 = bboxes[:, 3].clamp(min=0, max=img_shape[0]) - bboxes = torch.stack([x1, y1, x2, y2], dim=-1) - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_bboxes = torch.cat(mlvl_bboxes) - if rescale: - mlvl_bboxes /= mlvl_bboxes.new_tensor(scale_factor) - mlvl_scores = torch.cat(mlvl_scores) - if self.use_sigmoid_cls: - # Add a dummy background class to the backend when using sigmoid - # remind that we set FG labels to [0, num_class-1] since mmdet v2.0 - # BG cat_id: num_class - padding = mlvl_scores.new_zeros(mlvl_scores.shape[0], 1) - mlvl_scores = torch.cat([mlvl_scores, padding], dim=1) - if with_nms: - det_bboxes, det_labels = multiclass_nms(mlvl_bboxes, mlvl_scores, - cfg.score_thr, cfg.nms, - cfg.max_per_img) - return det_bboxes, det_labels - else: - return mlvl_bboxes, mlvl_scores diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/datasets/drive.py b/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/datasets/drive.py deleted file mode 100644 index 06e8ff606e0d2a4514ec8b7d2c6c436a32efcbf4..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/datasets/drive.py +++ /dev/null @@ -1,59 +0,0 @@ -# dataset settings -dataset_type = 'DRIVEDataset' -data_root = 'data/DRIVE' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -img_scale = (584, 565) -crop_size = (64, 64) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations'), - dict(type='Resize', img_scale=img_scale, ratio_range=(0.5, 2.0)), - dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75), - dict(type='RandomFlip', prob=0.5), - dict(type='PhotoMetricDistortion'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255), - 
dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_semantic_seg']) -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=img_scale, - # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0], - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']) - ]) -] - -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict( - type='RepeatDataset', - times=40000, - dataset=dict( - type=dataset_type, - data_root=data_root, - img_dir='images/training', - ann_dir='annotations/training', - pipeline=train_pipeline)), - val=dict( - type=dataset_type, - data_root=data_root, - img_dir='images/validation', - ann_dir='annotations/validation', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - data_root=data_root, - img_dir='images/validation', - ann_dir='annotations/validation', - pipeline=test_pipeline)) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d8_512x512_80k_ade20k.py b/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d8_512x512_80k_ade20k.py deleted file mode 100644 index 53fd3a909585367ca59eb827c2fbbab4cdf234ea..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d8_512x512_80k_ade20k.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './deeplabv3plus_r50-d8_512x512_80k_ade20k.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_512x512_40k_voc12aug.py b/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_512x512_40k_voc12aug.py deleted file mode 100644 index eafefaa67565513c277c5eb42e3661a88133cb27..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_512x512_40k_voc12aug.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './fcn_r50-d8_512x512_40k_voc12aug.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/AnnonSubmission/xai-cl/ssl_models/dino.py b/spaces/AnnonSubmission/xai-cl/ssl_models/dino.py deleted file mode 100644 index 0e35215338cb438a95661964f146f74d40ba066b..0000000000000000000000000000000000000000 --- a/spaces/AnnonSubmission/xai-cl/ssl_models/dino.py +++ /dev/null @@ -1,181 +0,0 @@ -import torch -import torch.nn as nn -import torchvision -import torch.nn.functional as F -import numpy as np -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -""" from https://github.com/facebookresearch/dino""" - -class DINOHead(nn.Module): - - def __init__(self, in_dim, out_dim, use_bn, norm_last_layer, nlayers, hidden_dim, bottleneck_dim): - super().__init__() - - nlayers = max(nlayers, 1) - if nlayers == 1: - self.mlp = nn.Linear(in_dim, bottleneck_dim) - else: - layers = [nn.Linear(in_dim, hidden_dim)] - if use_bn: - layers.append(nn.BatchNorm1d(hidden_dim)) - layers.append(nn.GELU()) - for _ in range(nlayers - 2): - layers.append(nn.Linear(hidden_dim, hidden_dim)) - if use_bn: - layers.append(nn.BatchNorm1d(hidden_dim)) - layers.append(nn.GELU()) - layers.append(nn.Linear(hidden_dim, bottleneck_dim)) - self.mlp = nn.Sequential(*layers) - - self.last_layer = nn.utils.weight_norm(nn.Linear(bottleneck_dim, out_dim, bias=False)) - 
self.last_layer.weight_g.data.fill_(1) - if norm_last_layer: - self.last_layer.weight_g.requires_grad = False - - def forward(self, x): - x = self.mlp(x) - x = F.normalize(x, dim=-1, p=2) - x = self.last_layer(x) - return x - -class MultiCropWrapper(nn.Module): - def __init__(self, backbone, head): - super(MultiCropWrapper, self).__init__() - backbone.fc, backbone.head = nn.Identity(), nn.Identity() - self.backbone = backbone - self.head = head - - def forward(self, x): - return self.head(self.backbone(x)) - -class DINOLoss(nn.Module): - def __init__(self, out_dim, warmup_teacher_temp, teacher_temp, warmup_teacher_temp_epochs, nepochs, - student_temp=0.1, center_momentum=0.9): - super().__init__() - - self.student_temp = student_temp - self.center_momentum = center_momentum - self.register_buffer("center", torch.zeros(1, out_dim)) - self.nepochs = nepochs - self.teacher_temp_schedule = np.concatenate((np.linspace(warmup_teacher_temp, teacher_temp, warmup_teacher_temp_epochs), - np.ones(nepochs - warmup_teacher_temp_epochs) * teacher_temp)) - - def forward(self, student_output, teacher_output): - student_out = student_output / self.student_temp - temp = self.teacher_temp_schedule[self.nepochs - 1] # last one - teacher_out = F.softmax((teacher_output - self.center) / temp, dim=-1) - teacher_out = teacher_out.detach() - loss = torch.sum(-teacher_out * F.log_softmax(student_out, dim=-1), dim=-1).mean() - return loss - - -class ResNet(nn.Module): - def __init__(self, backbone): - super().__init__() - - modules = list(backbone.children())[:-2] - self.net = nn.Sequential(*modules) - - def forward(self, x): - return self.net(x).mean(dim=[2, 3]) - -class RestructuredDINO(nn.Module): - - def __init__(self, student, teacher): - super().__init__() - - self.encoder_student = ResNet(student.backbone) - self.encoder = ResNet(teacher.backbone) - - self.contrastive_head_student = student.head - self.contrastive_head = teacher.head - - - def forward(self, x, run_teacher): - - if run_teacher: - x = self.encoder(x) - x = self.contrastive_head(x) - else: - x = self.encoder_student(x) - x = self.contrastive_head_student(x) - - return x - - -def get_dino_model_without_loss(ckpt_path = 'dino_resnet50_pretrain_full_checkpoint.pth'): - state_dict = torch.load('pretrained_models/dino_models/' + ckpt_path, map_location='cpu') - state_dict_student = state_dict['student'] - state_dict_teacher = state_dict['teacher'] - - state_dict_student = {k.replace("module.", ""): v for k, v in state_dict_student.items()} - state_dict_teacher = {k.replace("module.", ""): v for k, v in state_dict_teacher.items()} - - student_backbone = torchvision.models.resnet50() - teacher_backbone = torchvision.models.resnet50() - embed_dim = student_backbone.fc.weight.shape[1] - - student_head = DINOHead(in_dim = embed_dim, out_dim = 60000, use_bn=True, norm_last_layer=True, nlayers=2, hidden_dim=4096, bottleneck_dim=256) - teacher_head = DINOHead(in_dim = embed_dim, out_dim = 60000, use_bn =True, norm_last_layer=True, nlayers=2, hidden_dim=4096, bottleneck_dim=256) - student_head.last_layer = nn.Linear(256, 60000, bias = False) - teacher_head.last_layer = nn.Linear(256, 60000, bias = False) - - student = MultiCropWrapper(student_backbone, student_head) - teacher = MultiCropWrapper(teacher_backbone, teacher_head) - - student.load_state_dict(state_dict_student) - teacher.load_state_dict(state_dict_teacher) - - restructured_model = RestructuredDINO(student, teacher) - - return restructured_model.to(device) - - -def 
get_dino_model_with_loss(ckpt_path = 'dino_rn50_checkpoint.pth'): - state_dict = torch.load('pretrained_models/dino_models/' + ckpt_path, map_location='cpu') - - state_dict_student = state_dict['student'] - state_dict_teacher = state_dict['teacher'] - state_dict_args = vars(state_dict['args']) - state_dic_dino_loss = state_dict['dino_loss'] - - state_dict_student = {k.replace("module.", ""): v for k, v in state_dict_student.items()} - state_dict_teacher = {k.replace("module.", ""): v for k, v in state_dict_teacher.items()} - - student_backbone = torchvision.models.resnet50() - teacher_backbone = torchvision.models.resnet50() - embed_dim = student_backbone.fc.weight.shape[1] - - student_head = DINOHead(in_dim = embed_dim, - out_dim = state_dict_args['out_dim'], - use_bn = state_dict_args['use_bn_in_head'], - norm_last_layer = state_dict_args['norm_last_layer'], - nlayers = 3, - hidden_dim = 2048, - bottleneck_dim = 256) - - teacher_head = DINOHead(in_dim = embed_dim, - out_dim = state_dict_args['out_dim'], - use_bn = state_dict_args['use_bn_in_head'], - norm_last_layer = state_dict_args['norm_last_layer'], - nlayers = 3, - hidden_dim = 2048, - bottleneck_dim = 256) - - loss = DINOLoss(out_dim = state_dict_args['out_dim'], - warmup_teacher_temp = state_dict_args['warmup_teacher_temp'], - teacher_temp = state_dict_args['teacher_temp'], - warmup_teacher_temp_epochs = state_dict_args['warmup_teacher_temp_epochs'], - nepochs = state_dict_args['epochs']) - - student = MultiCropWrapper(student_backbone, student_head) - teacher = MultiCropWrapper(teacher_backbone, teacher_head) - - student.load_state_dict(state_dict_student) - teacher.load_state_dict(state_dict_teacher) - loss.load_state_dict(state_dic_dino_loss) - - restructured_model = RestructuredDINO(student, teacher) - - return restructured_model.to(device), loss.to(device) \ No newline at end of file diff --git a/spaces/Annotation-AI/fast-segment-everything-with-text-prompt/README.md b/spaces/Annotation-AI/fast-segment-everything-with-text-prompt/README.md deleted file mode 100644 index d985cfb10b2840e52098335cf335631c2703b0e8..0000000000000000000000000000000000000000 --- a/spaces/Annotation-AI/fast-segment-everything-with-text-prompt/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Fast Segment Anything With Text Prompt -emoji: 🐨 -colorFrom: pink -colorTo: green -sdk: gradio -sdk_version: 3.32.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/midas/midas/dpt_depth.py b/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/midas/midas/dpt_depth.py deleted file mode 100644 index 4e9aab5d2767dffea39da5b3f30e2798688216f1..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/midas/midas/dpt_depth.py +++ /dev/null @@ -1,109 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -from .base_model import BaseModel -from .blocks import ( - FeatureFusionBlock, - FeatureFusionBlock_custom, - Interpolate, - _make_encoder, - forward_vit, -) - - -def _make_fusion_block(features, use_bn): - return FeatureFusionBlock_custom( - features, - nn.ReLU(False), - deconv=False, - bn=use_bn, - expand=False, - align_corners=True, - ) - - -class DPT(BaseModel): - def __init__( - self, - head, - features=256, - backbone="vitb_rn50_384", - readout="project", - channels_last=False, - use_bn=False, - ): - - super(DPT, self).__init__() - - 
self.channels_last = channels_last - - hooks = { - "vitb_rn50_384": [0, 1, 8, 11], - "vitb16_384": [2, 5, 8, 11], - "vitl16_384": [5, 11, 17, 23], - } - - # Instantiate backbone and reassemble blocks - self.pretrained, self.scratch = _make_encoder( - backbone, - features, - False, # Set to true of you want to train from scratch, uses ImageNet weights - groups=1, - expand=False, - exportable=False, - hooks=hooks[backbone], - use_readout=readout, - ) - - self.scratch.refinenet1 = _make_fusion_block(features, use_bn) - self.scratch.refinenet2 = _make_fusion_block(features, use_bn) - self.scratch.refinenet3 = _make_fusion_block(features, use_bn) - self.scratch.refinenet4 = _make_fusion_block(features, use_bn) - - self.scratch.output_conv = head - - - def forward(self, x): - if self.channels_last == True: - x.contiguous(memory_format=torch.channels_last) - - layer_1, layer_2, layer_3, layer_4 = forward_vit(self.pretrained, x) - - layer_1_rn = self.scratch.layer1_rn(layer_1) - layer_2_rn = self.scratch.layer2_rn(layer_2) - layer_3_rn = self.scratch.layer3_rn(layer_3) - layer_4_rn = self.scratch.layer4_rn(layer_4) - - path_4 = self.scratch.refinenet4(layer_4_rn) - path_3 = self.scratch.refinenet3(path_4, layer_3_rn) - path_2 = self.scratch.refinenet2(path_3, layer_2_rn) - path_1 = self.scratch.refinenet1(path_2, layer_1_rn) - - out = self.scratch.output_conv(path_1) - - return out - - -class DPTDepthModel(DPT): - def __init__(self, path=None, non_negative=True, **kwargs): - features = kwargs["features"] if "features" in kwargs else 256 - - head = nn.Sequential( - nn.Conv2d(features, features // 2, kernel_size=3, stride=1, padding=1), - Interpolate(scale_factor=2, mode="bilinear", align_corners=True), - nn.Conv2d(features // 2, 32, kernel_size=3, stride=1, padding=1), - nn.ReLU(True), - nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0), - nn.ReLU(True) if non_negative else nn.Identity(), - nn.Identity(), - ) - - super().__init__(head, **kwargs) - - if path is not None: - self.load(path) - - def forward(self, x): - return super().forward(x).squeeze(dim=1) - diff --git a/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/models/GroundingDINO/backbone/backbone.py b/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/models/GroundingDINO/backbone/backbone.py deleted file mode 100644 index c8340c723fad8e07e2fc62daaa3912487498814b..0000000000000000000000000000000000000000 --- a/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/models/GroundingDINO/backbone/backbone.py +++ /dev/null @@ -1,221 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Conditional DETR -# Copyright (c) 2021 Microsoft. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Copied from DETR (https://github.com/facebookresearch/detr) -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -# ------------------------------------------------------------------------ - -""" -Backbone modules. 
-""" - -from typing import Dict, List - -import torch -import torch.nn.functional as F -import torchvision -from torch import nn -from torchvision.models._utils import IntermediateLayerGetter - -from groundingdino.util.misc import NestedTensor, clean_state_dict, is_main_process - -from .position_encoding import build_position_encoding -from .swin_transformer import build_swin_transformer - - -class FrozenBatchNorm2d(torch.nn.Module): - """ - BatchNorm2d where the batch statistics and the affine parameters are fixed. - - Copy-paste from torchvision.misc.ops with added eps before rqsrt, - without which any other models than torchvision.models.resnet[18,34,50,101] - produce nans. - """ - - def __init__(self, n): - super(FrozenBatchNorm2d, self).__init__() - self.register_buffer("weight", torch.ones(n)) - self.register_buffer("bias", torch.zeros(n)) - self.register_buffer("running_mean", torch.zeros(n)) - self.register_buffer("running_var", torch.ones(n)) - - def _load_from_state_dict( - self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs - ): - num_batches_tracked_key = prefix + "num_batches_tracked" - if num_batches_tracked_key in state_dict: - del state_dict[num_batches_tracked_key] - - super(FrozenBatchNorm2d, self)._load_from_state_dict( - state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs - ) - - def forward(self, x): - # move reshapes to the beginning - # to make it fuser-friendly - w = self.weight.reshape(1, -1, 1, 1) - b = self.bias.reshape(1, -1, 1, 1) - rv = self.running_var.reshape(1, -1, 1, 1) - rm = self.running_mean.reshape(1, -1, 1, 1) - eps = 1e-5 - scale = w * (rv + eps).rsqrt() - bias = b - rm * scale - return x * scale + bias - - -class BackboneBase(nn.Module): - def __init__( - self, - backbone: nn.Module, - train_backbone: bool, - num_channels: int, - return_interm_indices: list, - ): - super().__init__() - for name, parameter in backbone.named_parameters(): - if ( - not train_backbone - or "layer2" not in name - and "layer3" not in name - and "layer4" not in name - ): - parameter.requires_grad_(False) - - return_layers = {} - for idx, layer_index in enumerate(return_interm_indices): - return_layers.update( - {"layer{}".format(5 - len(return_interm_indices) + idx): "{}".format(layer_index)} - ) - - # if len: - # if use_stage1_feature: - # return_layers = {"layer1": "0", "layer2": "1", "layer3": "2", "layer4": "3"} - # else: - # return_layers = {"layer2": "0", "layer3": "1", "layer4": "2"} - # else: - # return_layers = {'layer4': "0"} - self.body = IntermediateLayerGetter(backbone, return_layers=return_layers) - self.num_channels = num_channels - - def forward(self, tensor_list: NestedTensor): - xs = self.body(tensor_list.tensors) - out: Dict[str, NestedTensor] = {} - for name, x in xs.items(): - m = tensor_list.mask - assert m is not None - mask = F.interpolate(m[None].float(), size=x.shape[-2:]).to(torch.bool)[0] - out[name] = NestedTensor(x, mask) - # import ipdb; ipdb.set_trace() - return out - - -class Backbone(BackboneBase): - """ResNet backbone with frozen BatchNorm.""" - - def __init__( - self, - name: str, - train_backbone: bool, - dilation: bool, - return_interm_indices: list, - batch_norm=FrozenBatchNorm2d, - ): - if name in ["resnet18", "resnet34", "resnet50", "resnet101"]: - backbone = getattr(torchvision.models, name)( - replace_stride_with_dilation=[False, False, dilation], - pretrained=is_main_process(), - norm_layer=batch_norm, - ) - else: - raise NotImplementedError("Why you can 
get here with name {}".format(name)) - # num_channels = 512 if name in ('resnet18', 'resnet34') else 2048 - assert name not in ("resnet18", "resnet34"), "Only resnet50 and resnet101 are available." - assert return_interm_indices in [[0, 1, 2, 3], [1, 2, 3], [3]] - num_channels_all = [256, 512, 1024, 2048] - num_channels = num_channels_all[4 - len(return_interm_indices) :] - super().__init__(backbone, train_backbone, num_channels, return_interm_indices) - - -class Joiner(nn.Sequential): - def __init__(self, backbone, position_embedding): - super().__init__(backbone, position_embedding) - - def forward(self, tensor_list: NestedTensor): - xs = self[0](tensor_list) - out: List[NestedTensor] = [] - pos = [] - for name, x in xs.items(): - out.append(x) - # position encoding - pos.append(self[1](x).to(x.tensors.dtype)) - - return out, pos - - -def build_backbone(args): - """ - Useful args: - - backbone: backbone name - - lr_backbone: - - dilation - - return_interm_indices: available: [0,1,2,3], [1,2,3], [3] - - backbone_freeze_keywords: - - use_checkpoint: for swin only for now - - """ - position_embedding = build_position_encoding(args) - train_backbone = True - if not train_backbone: - raise ValueError("Please set lr_backbone > 0") - return_interm_indices = args.return_interm_indices - assert return_interm_indices in [[0, 1, 2, 3], [1, 2, 3], [3]] - args.backbone_freeze_keywords - use_checkpoint = getattr(args, "use_checkpoint", False) - - if args.backbone in ["resnet50", "resnet101"]: - backbone = Backbone( - args.backbone, - train_backbone, - args.dilation, - return_interm_indices, - batch_norm=FrozenBatchNorm2d, - ) - bb_num_channels = backbone.num_channels - elif args.backbone in [ - "swin_T_224_1k", - "swin_B_224_22k", - "swin_B_384_22k", - "swin_L_224_22k", - "swin_L_384_22k", - ]: - pretrain_img_size = int(args.backbone.split("_")[-2]) - backbone = build_swin_transformer( - args.backbone, - pretrain_img_size=pretrain_img_size, - out_indices=tuple(return_interm_indices), - dilation=False, - use_checkpoint=use_checkpoint, - ) - - bb_num_channels = backbone.num_features[4 - len(return_interm_indices) :] - else: - raise NotImplementedError("Unknown backbone {}".format(args.backbone)) - - assert len(bb_num_channels) == len( - return_interm_indices - ), f"len(bb_num_channels) {len(bb_num_channels)} != len(return_interm_indices) {len(return_interm_indices)}" - - model = Joiner(backbone, position_embedding) - model.num_channels = bb_num_channels - assert isinstance( - bb_num_channels, List - ), "bb_num_channels is expected to be a List but {}".format(type(bb_num_channels)) - # import ipdb; ipdb.set_trace() - return model diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/dotenv/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/dotenv/__init__.py deleted file mode 100644 index 7f4c631ba11786bceebd22591f91bd378d8b232c..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/dotenv/__init__.py +++ /dev/null @@ -1,49 +0,0 @@ -from typing import Any, Optional - -from .main import (dotenv_values, find_dotenv, get_key, load_dotenv, set_key, - unset_key) - - -def load_ipython_extension(ipython: Any) -> None: - from .ipython import load_ipython_extension - load_ipython_extension(ipython) - - -def get_cli_string( - path: Optional[str] = None, - action: Optional[str] = None, - key: Optional[str] = None, - value: Optional[str] = None, - quote: Optional[str] = None, -): - 
"""Returns a string suitable for running as a shell script. - - Useful for converting a arguments passed to a fabric task - to be passed to a `local` or `run` command. - """ - command = ['dotenv'] - if quote: - command.append(f'-q {quote}') - if path: - command.append(f'-f {path}') - if action: - command.append(action) - if key: - command.append(key) - if value: - if ' ' in value: - command.append(f'"{value}"') - else: - command.append(value) - - return ' '.join(command).strip() - - -__all__ = ['get_cli_string', - 'load_dotenv', - 'dotenv_values', - 'get_key', - 'set_key', - 'unset_key', - 'find_dotenv', - 'load_ipython_extension'] diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/network/cache.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/network/cache.py deleted file mode 100644 index a81a23985198d2eaa3c25ad1f77924f0fcdb037b..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/network/cache.py +++ /dev/null @@ -1,69 +0,0 @@ -"""HTTP cache implementation. -""" - -import os -from contextlib import contextmanager -from typing import Generator, Optional - -from pip._vendor.cachecontrol.cache import BaseCache -from pip._vendor.cachecontrol.caches import FileCache -from pip._vendor.requests.models import Response - -from pip._internal.utils.filesystem import adjacent_tmp_file, replace -from pip._internal.utils.misc import ensure_dir - - -def is_from_cache(response: Response) -> bool: - return getattr(response, "from_cache", False) - - -@contextmanager -def suppressed_cache_errors() -> Generator[None, None, None]: - """If we can't access the cache then we can just skip caching and process - requests as if caching wasn't enabled. - """ - try: - yield - except OSError: - pass - - -class SafeFileCache(BaseCache): - """ - A file based cache which is safe to use even when the target directory may - not be accessible or writable. - """ - - def __init__(self, directory: str) -> None: - assert directory is not None, "Cache directory must not be None." - super().__init__() - self.directory = directory - - def _get_cache_path(self, name: str) -> str: - # From cachecontrol.caches.file_cache.FileCache._fn, brought into our - # class for backwards-compatibility and to avoid using a non-public - # method. 
- hashed = FileCache.encode(name) - parts = list(hashed[:5]) + [hashed] - return os.path.join(self.directory, *parts) - - def get(self, key: str) -> Optional[bytes]: - path = self._get_cache_path(key) - with suppressed_cache_errors(): - with open(path, "rb") as f: - return f.read() - - def set(self, key: str, value: bytes, expires: Optional[int] = None) -> None: - path = self._get_cache_path(key) - with suppressed_cache_errors(): - ensure_dir(os.path.dirname(path)) - - with adjacent_tmp_file(path) as f: - f.write(value) - - replace(f.name, path) - - def delete(self, key: str) -> None: - path = self._get_cache_path(key) - with suppressed_cache_errors(): - os.remove(path) diff --git a/spaces/AutoGeneralAI/chatgpt-clone/README.md b/spaces/AutoGeneralAI/chatgpt-clone/README.md deleted file mode 100644 index 53643e9b3c3541f78bbc7154a7f2e5262581299d..0000000000000000000000000000000000000000 --- a/spaces/AutoGeneralAI/chatgpt-clone/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Chatgpt Clone -emoji: 🐠 -colorFrom: gray -colorTo: red -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Bart92/RVC_HF/infer/lib/slicer2.py b/spaces/Bart92/RVC_HF/infer/lib/slicer2.py deleted file mode 100644 index 5b29ee262aa54045e807be2cffeb41687499ba58..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/infer/lib/slicer2.py +++ /dev/null @@ -1,260 +0,0 @@ -import numpy as np - - -# This function is obtained from librosa. -def get_rms( - y, - frame_length=2048, - hop_length=512, - pad_mode="constant", -): - padding = (int(frame_length // 2), int(frame_length // 2)) - y = np.pad(y, padding, mode=pad_mode) - - axis = -1 - # put our new within-frame axis at the end for now - out_strides = y.strides + tuple([y.strides[axis]]) - # Reduce the shape on the framing axis - x_shape_trimmed = list(y.shape) - x_shape_trimmed[axis] -= frame_length - 1 - out_shape = tuple(x_shape_trimmed) + tuple([frame_length]) - xw = np.lib.stride_tricks.as_strided(y, shape=out_shape, strides=out_strides) - if axis < 0: - target_axis = axis - 1 - else: - target_axis = axis + 1 - xw = np.moveaxis(xw, -1, target_axis) - # Downsample along the target axis - slices = [slice(None)] * xw.ndim - slices[axis] = slice(0, None, hop_length) - x = xw[tuple(slices)] - - # Calculate power - power = np.mean(np.abs(x) ** 2, axis=-2, keepdims=True) - - return np.sqrt(power) - - -class Slicer: - def __init__( - self, - sr: int, - threshold: float = -40.0, - min_length: int = 5000, - min_interval: int = 300, - hop_size: int = 20, - max_sil_kept: int = 5000, - ): - if not min_length >= min_interval >= hop_size: - raise ValueError( - "The following condition must be satisfied: min_length >= min_interval >= hop_size" - ) - if not max_sil_kept >= hop_size: - raise ValueError( - "The following condition must be satisfied: max_sil_kept >= hop_size" - ) - min_interval = sr * min_interval / 1000 - self.threshold = 10 ** (threshold / 20.0) - self.hop_size = round(sr * hop_size / 1000) - self.win_size = min(round(min_interval), 4 * self.hop_size) - self.min_length = round(sr * min_length / 1000 / self.hop_size) - self.min_interval = round(min_interval / self.hop_size) - self.max_sil_kept = round(sr * max_sil_kept / 1000 / self.hop_size) - - def _apply_slice(self, waveform, begin, end): - if len(waveform.shape) > 1: - return waveform[ - :, begin * self.hop_size : min(waveform.shape[1], end * self.hop_size) - ] - 
else: - return waveform[ - begin * self.hop_size : min(waveform.shape[0], end * self.hop_size) - ] - - # @timeit - def slice(self, waveform): - if len(waveform.shape) > 1: - samples = waveform.mean(axis=0) - else: - samples = waveform - if samples.shape[0] <= self.min_length: - return [waveform] - rms_list = get_rms( - y=samples, frame_length=self.win_size, hop_length=self.hop_size - ).squeeze(0) - sil_tags = [] - silence_start = None - clip_start = 0 - for i, rms in enumerate(rms_list): - # Keep looping while frame is silent. - if rms < self.threshold: - # Record start of silent frames. - if silence_start is None: - silence_start = i - continue - # Keep looping while frame is not silent and silence start has not been recorded. - if silence_start is None: - continue - # Clear recorded silence start if interval is not enough or clip is too short - is_leading_silence = silence_start == 0 and i > self.max_sil_kept - need_slice_middle = ( - i - silence_start >= self.min_interval - and i - clip_start >= self.min_length - ) - if not is_leading_silence and not need_slice_middle: - silence_start = None - continue - # Need slicing. Record the range of silent frames to be removed. - if i - silence_start <= self.max_sil_kept: - pos = rms_list[silence_start : i + 1].argmin() + silence_start - if silence_start == 0: - sil_tags.append((0, pos)) - else: - sil_tags.append((pos, pos)) - clip_start = pos - elif i - silence_start <= self.max_sil_kept * 2: - pos = rms_list[ - i - self.max_sil_kept : silence_start + self.max_sil_kept + 1 - ].argmin() - pos += i - self.max_sil_kept - pos_l = ( - rms_list[ - silence_start : silence_start + self.max_sil_kept + 1 - ].argmin() - + silence_start - ) - pos_r = ( - rms_list[i - self.max_sil_kept : i + 1].argmin() - + i - - self.max_sil_kept - ) - if silence_start == 0: - sil_tags.append((0, pos_r)) - clip_start = pos_r - else: - sil_tags.append((min(pos_l, pos), max(pos_r, pos))) - clip_start = max(pos_r, pos) - else: - pos_l = ( - rms_list[ - silence_start : silence_start + self.max_sil_kept + 1 - ].argmin() - + silence_start - ) - pos_r = ( - rms_list[i - self.max_sil_kept : i + 1].argmin() - + i - - self.max_sil_kept - ) - if silence_start == 0: - sil_tags.append((0, pos_r)) - else: - sil_tags.append((pos_l, pos_r)) - clip_start = pos_r - silence_start = None - # Deal with trailing silence. - total_frames = rms_list.shape[0] - if ( - silence_start is not None - and total_frames - silence_start >= self.min_interval - ): - silence_end = min(total_frames, silence_start + self.max_sil_kept) - pos = rms_list[silence_start : silence_end + 1].argmin() + silence_start - sil_tags.append((pos, total_frames + 1)) - # Apply and return slices. 
- if len(sil_tags) == 0: - return [waveform] - else: - chunks = [] - if sil_tags[0][0] > 0: - chunks.append(self._apply_slice(waveform, 0, sil_tags[0][0])) - for i in range(len(sil_tags) - 1): - chunks.append( - self._apply_slice(waveform, sil_tags[i][1], sil_tags[i + 1][0]) - ) - if sil_tags[-1][1] < total_frames: - chunks.append( - self._apply_slice(waveform, sil_tags[-1][1], total_frames) - ) - return chunks - - -def main(): - import os.path - from argparse import ArgumentParser - - import librosa - import soundfile - - parser = ArgumentParser() - parser.add_argument("audio", type=str, help="The audio to be sliced") - parser.add_argument( - "--out", type=str, help="Output directory of the sliced audio clips" - ) - parser.add_argument( - "--db_thresh", - type=float, - required=False, - default=-40, - help="The dB threshold for silence detection", - ) - parser.add_argument( - "--min_length", - type=int, - required=False, - default=5000, - help="The minimum milliseconds required for each sliced audio clip", - ) - parser.add_argument( - "--min_interval", - type=int, - required=False, - default=300, - help="The minimum milliseconds for a silence part to be sliced", - ) - parser.add_argument( - "--hop_size", - type=int, - required=False, - default=10, - help="Frame length in milliseconds", - ) - parser.add_argument( - "--max_sil_kept", - type=int, - required=False, - default=500, - help="The maximum silence length kept around the sliced clip, presented in milliseconds", - ) - args = parser.parse_args() - out = args.out - if out is None: - out = os.path.dirname(os.path.abspath(args.audio)) - audio, sr = librosa.load(args.audio, sr=None, mono=False) - slicer = Slicer( - sr=sr, - threshold=args.db_thresh, - min_length=args.min_length, - min_interval=args.min_interval, - hop_size=args.hop_size, - max_sil_kept=args.max_sil_kept, - ) - chunks = slicer.slice(audio) - if not os.path.exists(out): - os.makedirs(out) - for i, chunk in enumerate(chunks): - if len(chunk.shape) > 1: - chunk = chunk.T - soundfile.write( - os.path.join( - out, - f"%s_%d.wav" - % (os.path.basename(args.audio).rsplit(".", maxsplit=1)[0], i), - ), - chunk, - sr, - ) - - -if __name__ == "__main__": - main() diff --git a/spaces/Benson/text-generation/Examples/Androide Oyun Apk Bola Roja 4.md b/spaces/Benson/text-generation/Examples/Androide Oyun Apk Bola Roja 4.md deleted file mode 100644 index 72044b54059c7ba1a78f5be755c4efec1f7891f5..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Androide Oyun Apk Bola Roja 4.md +++ /dev/null @@ -1,60 +0,0 @@ -
-

Red Ball 4: A Fun and Challenging Game for Android

If you are looking for a fun and challenging game to play on your Android device, you should give Red Ball 4 a try. This is a platformer that will test your skills and reflexes as you roll, jump, and bounce through 75 levels full of adventure. You will have to make your way past tricky traps and defeat all kinds of monsters that want to turn you into a square. Are you ready to save the world with your red ball?

-

androide oyun apk bola roja 4


Download https://bltlly.com/2v6IQJ



-

Introduction

What is Red Ball 4?

Red Ball 4 is an Android game developed by FDG Entertainment GmbH & Co.KG. It is the fourth installment in the popular Red Ball series, which has been downloaded more than 100 million times. The game follows the story of a red ball that has to stop the evil Black Square from turning the world into a cube. Along the way, it will run into many obstacles and enemies that it must overcome with agility and courage.

Why should you play Red Ball 4?

Red Ball 4 is a game that will keep you entertained for hours. It has simple but addictive gameplay that anyone can enjoy. You just tilt your device or use the on-screen buttons to control the ball's movement, tap to jump, and double-tap for a long jump. The game has a variety of levels that will test your skills and your logic. You will have to avoid spikes, lasers, cannons, saws, and other hazards that can damage your ball, and face different kinds of monsters such as spiders, bats, robots, and even giant bosses. The colorful, cartoonish art style appeals to kids and adults alike, and the catchy, upbeat soundtrack rounds out the experience.

-

Features of Red Ball 4

75 exciting levels

- -

Tricky traps and monsters

Red Ball 4 has many traps and monsters that will try to keep you from reaching your goal. You will have to use your skills and timing to avoid or defeat them. Some traps are triggered by switches or buttons, while others are set off by your movement or proximity. Some monsters can be killed by jumping on them or hitting them with objects, while others are invincible or require special strategies. You will have to stay careful and observant to survive.

-

Epic boss battles

Red Ball 4 has four boss battles that will test your skills and patience. You will face the Black Square itself in each episode, as well as its minions. Each boss has a different attack pattern and a weakness you have to exploit. You will have to dodge their attacks and strike back with objects or with your ball. The boss battles are challenging but rewarding.

-

Cloud support

Red Ball 4 has cloud support that lets you save your progress online and sync it across multiple devices. Just sign in with your Google Play account and enable cloud saving in the settings menu. That way you can continue your game on any device without losing your data.

HD graphics and sound

Red Ball 4 has high-definition graphics and sound that make the game more enjoyable. The bright, colorful art style suits the theme and mood of each episode, and the crisp, clear audio pulls you into the game. You will hear your ball bouncing, rolling, and crashing, as well as the music and the characters' voices. The game has a funny, lighthearted tone that will make you smile.

-

How to download and install the Red Ball 4 APK

Download the APK file from a trusted source

- -

QR code for Red Ball 4 APK download

-

-

Enable unknown sources on your device

Before you can install the Red Ball 4 APK file on your device, you will need to enable unknown sources. This is a security setting that allows you to install apps from sources other than the Google Play Store. To enable unknown sources, follow these steps:

1. Go to your device's settings menu and tap on Security or Privacy.
2. Find the option that says Unknown sources or Install unknown apps and toggle it on.
3. A warning message will appear, telling you that installing apps from unknown sources may harm your device. Tap OK or Allow to confirm.
-

Install the APK file and launch the game

Once you have enabled unknown sources on your device, you can install the Red Ball 4 APK file and launch the game. To do this, follow these steps:

1. Find the Red Ball 4 APK file in your device's storage or downloads folder. You can also use a file manager app to locate it.
2. Tap on the APK file; a prompt will appear asking whether you want to install the app. Tap Install and wait for the installation to finish.
3. Once the app is installed, tap Open to launch the game, or find it in your app drawer or on your home screen.
-

Conclusion

Summary of the main points

- -

Call to action

If you are looking for a fun and challenging game to play on your Android device, you should download and install the Red Ball 4 APK today. You won't regret it. You will have a blast rolling, jumping, and bouncing through 75 levels full of adventure, and you will get the chance to save the world with your red ball. What are you waiting for? Download the Red Ball 4 APK now and enjoy this amazing game!

-

Frequently asked questions

Q: Is Red Ball 4 free to play?

A: Yes, Red Ball 4 is free. However, it contains ads and in-app purchases that can enhance your gaming experience. You can disable the ads by buying the premium version of the game or by turning off your internet connection while you play.

-

Q: How can I get more stars and trophies in Red Ball 4?

A: You can get more stars and trophies in Red Ball 4 by completing each level with a high score and without dying. You can also collect hidden stars and trophies scattered throughout the levels. You can use these stars and trophies to unlock bonus levels and achievements.

-

Q: How can I beat the bosses in Red Ball 4?

A: You can beat the bosses in Red Ball 4 by learning their attack patterns and finding their weak points. You will have to dodge their attacks and hit them with objects or with your ball, while avoiding falling off the platform or getting crushed by the boss. You can use checkpoints to resume your game if you die.

-

Q: What are the minimum requirements to play Red Ball 4 on Android?

A: The minimum requirements to play Red Ball 4 on Android are as follows:

• Android version 4.4 or higher
• At least 100 MB of free storage space
• A stable internet connection (optional)
  • -
-

Q: How can I contact the developer of Red Ball 4?
\ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Apk Adresi Gta 5.md b/spaces/Benson/text-generation/Examples/Apk Adresi Gta 5.md deleted file mode 100644 index ce963edd5c4e92b189479672a96b1433fbe9954b..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Apk Adresi Gta 5.md +++ /dev/null @@ -1,79 +0,0 @@ - -

Apk Adresi GTA 5: How to Play Grand Theft Auto 5 on Your Android Device

Grand Theft Auto 5, or GTA 5, is one of the most popular and acclaimed video games of all time. Released in 2013 by Rockstar Games, GTA 5 is an open-world action-adventure game that lets you explore the city of Los Santos and its surroundings as one of three playable characters. You can complete various missions, pull off heists, drive and steal vehicles, interact with NPCs, and cause chaos in the streets.

-

apk adresi gta 5


Download File »»» https://bltlly.com/2v6ILM



-

However, GTA 5 is not officially available for Android devices, as Rockstar Games has not released a mobile version of the game. But that does not mean you cannot play GTA 5 on your Android device. Thanks to a group of dedicated fans, you can download and install a fan-made adaptation of GTA 5 for Android called Apk Adresi GTA 5.

In this article, we will tell you everything you need to know about Apk Adresi GTA 5, including what it is, how to download and install it, how to play it, and some tips and tricks for enjoying it. Let's get started!

-

What is Apk Adresi GTA 5?

A fan-made adaptation of GTA 5 for Android

Apk Adresi GTA 5 is a fan-made adaptation of GTA 5 for Android devices. It is not an official Rockstar Games product, but a project created by a group of enthusiasts who wanted to bring GTA 5 to mobile platforms. Apk Adresi GTA 5 is based on the PC version of GTA 5, but it has been modified and optimized to run on Android devices.

The features and limitations of Apk Adresi GTA 5

Apk Adresi GTA 5 aims to replicate the experience of playing GTA 5 on PC as closely as possible. It has the same graphics, sound, story, characters, missions, vehicles, weapons, and activities as the original game. You can switch between Michael, Trevor, and Franklin at any time, explore the vast open world of Los Santos and Blaine County, and enjoy the thrilling gameplay of GTA 5.

-

- -

The requirements and steps to download the APK and OBB files

If you want to play Apk Adresi GTA 5 on your Android device, you will need to download two files: the APK file and the OBB file. The APK file is the application file that contains the game's code and data, while the OBB file is the expansion file that contains the game's graphics and sound.

Before downloading these files, you should make sure that your Android device meets the minimum requirements to run Apk Adresi GTA 5. These are:

-
    -
• Android version 4.0 or higher
• At least 4 GB of free storage space
• At least 2 GB of RAM
• A stable internet connection
  • -
-

Once you have checked these requirements, you can follow these steps to download the APK and OBB files:

-
    -
1. Go to the official Apk Adresi GTA 5 website at https://apkadresi.com/gta-5-apk-indir/
2. Scroll down to the bottom of the page and click the green button that says "GTA 5 APK İndir".
3. You will be redirected to another page where you will see a download link for the APK file. Click it and wait for the download to start.
4. After downloading the APK file, go back to the previous page and click the green button that says "GTA 5 OBB İndir".
5. You will be redirected to another page where you will see a download link for the OBB file. Click it and wait for the download to start.
6. After downloading both files, you can proceed to install them on your Android device.
  12. -
-

Instructions for installing and launching the game

After you have downloaded the APK and OBB files, you will need to install them on your Android device. To do so, you must allow the installation of apps from unknown sources, a security feature that normally prevents unauthorized apps from being installed. To enable it, you can follow these steps:

- -
• Go to your device's settings and tap "Security".
• Find the option that says "Unknown sources" and toggle it on.
• You may see a warning message saying that installing apps from unknown sources can harm your device. Tap "OK" to confirm.
  • - -

Once you have enabled this feature, you can install the APK and OBB files by following these steps:

    -
      -
1. Find the APK file in your device's file manager and tap on it.
2. You may see a pop-up asking you to grant permissions to the app. Tap "Install" to continue.
3. Wait for the installation process to finish. You will see a message that says "App installed". Tap "Open" to launch the game.
4. The first time you launch the game, you will need to extract the OBB file to your device's internal storage. To do this, tap "Extract" when prompted and wait for the extraction to finish.
5. After the extraction is done, you will see a message that says "Extraction completed". Tap "OK" to start playing the game.
    10. -

How to play Apk Adresi GTA 5 on your Android device

The game's controls and interface

Apk Adresi GTA 5 has a control scheme and interface similar to the PC version of GTA 5. You can use the virtual buttons on the screen to move, aim, shoot, jump, sprint, crouch, switch weapons, enter vehicles, and interact with the environment. You can also use the touchscreen to swipe, zoom, and rotate the camera, and you can customize the layout and size of the buttons in the settings menu.

The game also has a mini-map in the bottom-left corner of the screen that shows your location, objectives, enemies, allies, and points of interest. You can tap the mini-map to enlarge it and see the full map of Los Santos and Blaine County. You can also access your phone, inventory, character wheel, pause menu, and quick-save options by tapping the icons in the top-right corner of the screen.

    - -

Apk Adresi GTA 5 has the same gameplay and missions as the PC version of GTA 5. You can play as Michael, Trevor, or Franklin and switch between them at any time. Each character has their own personality, skills, abilities, and story arc. You can complete missions involving shooting, driving, stealth, heists, chases, and more, and you can also explore the open world of Los Santos and Blaine County and take part in activities such as racing, golf, tennis, hunting, yoga, skydiving, and more.

The game also has a dynamic weather system, a day-night cycle, realistic physics, ragdoll effects, destructible environments, and realistic sounds. It is designed to immerse you in the world of GTA 5 and make you feel like you are living in it.

    -

Tips and tricks for enjoying the game

Apk Adresi GTA 5 is a fun and exciting game that offers plenty of content and possibilities. However, it can also be challenging and frustrating at times. Here are some tips and tricks that can help you get more out of the game:

    -
      -
• Save your game often. The game does not have an auto-save feature, so you will have to save manually in case something goes wrong or you want to try something different.
• Use cover and aim assist. The game can be quite hard if you try to shoot enemies without taking cover or using aim assist. You can take cover by pressing the button in the bottom-right corner of the screen when you are near an object, and use aim assist by pressing the button in the bottom-left corner of the screen while aiming.
• Watch out for cops and gangs. The game has a wanted-level system that shows how much attention you have attracted from law enforcement or rival gangs. The higher your wanted level, the more cops or gangsters will chase you and try to kill you. You can lower your wanted level by hiding from them or changing your appearance.
• Have fun and experiment. The game is meant to be enjoyed and explored. You can do whatever you want as long as you don't get killed or arrested. Try different strategies, tactics, vehicles, weapons, and outfits, or create your own scenarios and stories with the game's tools and features.
    • -
    -

Conclusion

A summary of the main points and benefits of Apk Adresi GTA 5

Apk Adresi GTA 5 is a fan-made adaptation of GTA 5 for Android devices that lets you play one of the best video games ever made on your mobile device. It has the same graphics, precautions, and protections. We do not guarantee or take responsibility for the safety or quality of Apk Adresi GTA 5.

    -

Is Apk Adresi GTA 5 compatible with all Android devices?

Apk Adresi GTA 5 is designed to run on Android devices that meet the minimum requirements for the game. These are:

    -
      -
• Android version 4.0 or higher
• At least 4 GB of free storage space
• At least 2 GB of RAM
• A stable internet connection
    • -
    -

However, Apk Adresi GTA 5 may not be compatible with every Android device or version, as it is still in beta and has not been tested on all of them. Some devices or versions may have compatibility problems such as lag, crashes, bugs, or missing features, so Apk Adresi GTA 5 may not work properly, or at all, on some of them. We do not guarantee or take responsibility for the compatibility or performance of Apk Adresi GTA 5.

How much storage space does Apk Adresi GTA 5 need?

    - -

Can I play Apk Adresi GTA 5 online?

    -

No, Apk Adresi GTA 5 does not support online multiplayer or cross-play with other platforms. The game is an offline, single-player experience that lets you play GTA 5 on your Android device. You will not be able to play with other players online or connect with other platforms such as PC, PS4, Xbox One, etc. The game also has no online features such as leaderboards, achievements, or social clubs.
    \ No newline at end of file diff --git a/spaces/BhaskarKapri/Animal/app.py b/spaces/BhaskarKapri/Animal/app.py deleted file mode 100644 index 4815456ca505fe31396f8cc75d438181be340484..0000000000000000000000000000000000000000 --- a/spaces/BhaskarKapri/Animal/app.py +++ /dev/null @@ -1,36 +0,0 @@ -from fastai.vision.all import * -import gradio as gr -import pathlib -from contextlib import contextmanager -import pathlib - -@contextmanager -def set_posix_windows(): - posix_backup = pathlib.WindowsPath - try: - pathlib.WindowsPath = pathlib.PosixPath - yield - finally: - pathlib.WindowsPath = posix_backup - -EXPORT_PATH = pathlib.Path("model.pkl") - -with set_posix_windows(): - learn = load_learner(EXPORT_PATH) - -# learn = load_learner('model.pkl') - - -categories = ['alligator', 'bee', 'camel', 'cat', 'deer', 'dog', 'dolphin', 'elephant', 'giraffe', 'hamster', 'horse', 'kangaroo', 'lion', 'lizard', 'human', 'owl', 'parrot', 'sheep', 'snake', 'tiger', 'turtle', 'wolf'] -def classify_image(img): - pred,idx,probs = learn.predict(img) - return dict(zip(categories,map(float,probs))) - - - -image = gr.inputs.Image(shape=(192,192)) -label = gr.outputs.Label() -examples = ['cat.jpg','camel.jpg','deer.jpg','dog.jpg','giraffe.jpg','owl.jpg'] - -intf = gr.Interface(fn=classify_image, inputs=image, outputs=label, example=examples) -intf.launch(inline=False) \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/upload.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/upload.py deleted file mode 100644 index ec7f81e22772511d668e5ab92f625db33259e803..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/upload.py +++ /dev/null @@ -1,17 +0,0 @@ -from distutils import log -from distutils.command import upload as orig - -from setuptools.errors import RemovedCommandError - - -class upload(orig.upload): - """Formerly used to upload packages to PyPI.""" - - def run(self): - msg = ( - "The upload command has been removed, use twine to upload " - + "instead (https://pypi.org/p/twine)" - ) - - self.announce("ERROR: " + msg, log.ERROR) - raise RemovedCommandError(msg) diff --git a/spaces/BigData-KSU/VQA-in-Medical-Imagery/CLIP/README.md b/spaces/BigData-KSU/VQA-in-Medical-Imagery/CLIP/README.md deleted file mode 100644 index d7eaf7aef101e2be15afee7e9ed0ad03dc13b5df..0000000000000000000000000000000000000000 --- a/spaces/BigData-KSU/VQA-in-Medical-Imagery/CLIP/README.md +++ /dev/null @@ -1,192 +0,0 @@ -# CLIP - -[[Blog]](https://openai.com/blog/clip/) [[Paper]](https://cdn.openai.com/papers/Learning_Transferable_Visual_Models_From_Natural_Language_Supervision.pdf) [[Model Card]](model-card.md) [[Colab]](https://colab.research.google.com/github/openai/clip/blob/master/Interacting_with_CLIP.ipynb) - -CLIP (Contrastive Language-Image Pre-Training) is a neural network trained on a variety of (image, text) pairs. It can be instructed in natural language to predict the most relevant text snippet, given an image, without directly optimizing for the task, similarly to the zero-shot capabilities of GPT-2 and 3. We found CLIP matches the performance of the original ResNet50 on ImageNet “zero-shot” without using any of the original 1.28M labeled examples, overcoming several major challenges in computer vision. - - - -## Approach - -![CLIP](CLIP.png) - - - -## Usage - -First, [install PyTorch 1.7.1](https://pytorch.org/get-started/locally/) and torchvision, as well as small additional dependencies. 
On a CUDA GPU machine, the following will do the trick: - -```bash -$ conda install --yes -c pytorch pytorch=1.7.1 torchvision cudatoolkit=11.0 -$ pip install ftfy regex tqdm -``` - -Replace `cudatoolkit=11.0` above with the appropriate CUDA version on your machine or `cpuonly` when installing on a machine without a GPU. - -```python -import torch -import clip -from PIL import Image - -device = "cuda" if torch.cuda.is_available() else "cpu" -model, preprocess = clip.load("ViT-B/32", device=device) - -image = preprocess(Image.open("CLIP.png")).unsqueeze(0).to(device) -text = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device) - -with torch.no_grad(): - image_features = model.encode_image(image) - text_features = model.encode_text(text) - - logits_per_image, logits_per_text = model(image, text) - probs = logits_per_image.softmax(dim=-1).cpu().numpy() - -print("Label probs:", probs) # prints: [[0.9927937 0.00421068 0.00299572]] -``` - - -## API - -The CLIP module `clip` provides the following methods: - -#### `clip.available_models()` - -Returns the name(s) of the available CLIP models. - -#### `clip.load(name, device=..., jit=True)` - -Returns the model and the TorchVision transform needed by the model, specified by the model name returned by `clip.available_models()`. It will download the model as necessary. The device to run the model can be optionally specified, and the default is to use the first CUDA device if there is any, otherwise the CPU. - -When `jit` is `False`, a non-JIT version of the model will be loaded. - -#### `clip.tokenize(text: Union[str, List[str]], context_length=77)` - -Returns a LongTensor containing tokenized sequences of given text input(s). This can be used as the input to the model - ---- - -The model returned by `clip.load()` supports the following methods: - -#### `model.encode_image(image: Tensor)` - -Given a batch of images, returns the image features encoded by the vision portion of the CLIP model. - -#### `model.encode_text(text: Tensor)` - -Given a batch of text tokens, returns the text features encoded by the language portion of the CLIP model. - -#### `model(image: Tensor, text: Tensor)` - -Given a batch of images and a batch of text tokens, returns two Tensors, containing the logit scores corresponding to each image and text input. The values are cosine similarities between the corresponding image and text features, times 100. - - - -## More Examples - -### Zero-Shot Prediction - -The code below performs zero-shot prediction using CLIP, as shown in Appendix B in the paper. This example takes an image from the [CIFAR-100 dataset](https://www.cs.toronto.edu/~kriz/cifar.html), and predicts the most likely labels among the 100 textual labels from the dataset. 
- -```python -import os -import clip -import torch -from torchvision.datasets import CIFAR100 - -# Load the model -device = "cuda" if torch.cuda.is_available() else "cpu" -model, preprocess = clip.load('ViT-B/32', device) - -# Download the dataset -cifar100 = CIFAR100(root=os.path.expanduser("~/.cache"), download=True, train=False) - -# Prepare the inputs -image, class_id = cifar100[3637] -image_input = preprocess(image).unsqueeze(0).to(device) -text_inputs = torch.cat([clip.tokenize(f"a photo of a {c}") for c in cifar100.classes]).to(device) - -# Calculate features -with torch.no_grad(): - image_features = model.encode_image(image_input) - text_features = model.encode_text(text_inputs) - -# Pick the top 5 most similar labels for the image -image_features /= image_features.norm(dim=-1, keepdim=True) -text_features /= text_features.norm(dim=-1, keepdim=True) -similarity = (100.0 * image_features @ text_features.T).softmax(dim=-1) -values, indices = similarity[0].topk(5) - -# Print the result -print("\nTop predictions:\n") -for value, index in zip(values, indices): - print(f"{cifar100.classes[index]:>16s}: {100 * value.item():.2f}%") -``` - -The output will look like the following (the exact numbers may be slightly different depending on the compute device): - -``` -Top predictions: - - snake: 65.31% - turtle: 12.29% - sweet_pepper: 3.83% - lizard: 1.88% - crocodile: 1.75% -``` - -Note that this example uses the `encode_image()` and `encode_text()` methods that return the encoded features of given inputs. - - -### Linear-probe evaluation - -The example below uses [scikit-learn](https://scikit-learn.org/) to perform logistic regression on image features. - -```python -import os -import clip -import torch - -import numpy as np -from sklearn.linear_model import LogisticRegression -from torch.utils.data import DataLoader -from torchvision.datasets import CIFAR100 -from tqdm import tqdm - -# Load the model -device = "cuda" if torch.cuda.is_available() else "cpu" -model, preprocess = clip.load('ViT-B/32', device) - -# Load the dataset -root = os.path.expanduser("~/.cache") -train = CIFAR100(root, download=True, train=True, transform=preprocess) -test = CIFAR100(root, download=True, train=False, transform=preprocess) - - -def get_features(dataset): - all_features = [] - all_labels = [] - - with torch.no_grad(): - for images, labels in tqdm(DataLoader(dataset, batch_size=100)): - features = model.encode_image(images.to(device)) - - all_features.append(features) - all_labels.append(labels) - - return torch.cat(all_features).cpu().numpy(), torch.cat(all_labels).cpu().numpy() - -# Calculate the image features -train_features, train_labels = get_features(train) -test_features, test_labels = get_features(test) - -# Perform logistic regression -classifier = LogisticRegression(random_state=0, C=0.316, max_iter=1000, verbose=1) -classifier.fit(train_features, train_labels) - -# Evaluate using the logistic regression classifier -predictions = classifier.predict(test_features) -accuracy = np.mean((test_labels == predictions).astype(np.float)) * 100. -print(f"Accuracy = {accuracy:.3f}") -``` - -Note that the `C` value should be determined via a hyperparameter sweep using a validation split. 
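-
-A minimal sketch of such a sweep, assuming the `train_features` and `train_labels` arrays from the snippet above, could look like the following (the 20% split and the grid bounds are illustrative choices, not part of the original example):
-
-```python
-import numpy as np
-from sklearn.linear_model import LogisticRegression
-from sklearn.model_selection import train_test_split
-
-# Hold out 20% of the training set as a validation split.
-tr_x, va_x, tr_y, va_y = train_test_split(
-    train_features, train_labels, test_size=0.2, random_state=0)
-
-best_c, best_acc = None, -1.0
-for c in np.logspace(-3, 3, 7):  # log-spaced grid for C
-    clf = LogisticRegression(random_state=0, C=c, max_iter=1000)
-    clf.fit(tr_x, tr_y)
-    acc = np.mean(clf.predict(va_x) == va_y)
-    if acc > best_acc:
-        best_c, best_acc = c, acc
-
-print(f"Best C = {best_c}, validation accuracy = {100 * best_acc:.2f}%")
-```
-
-The chosen `C` can then be used to refit the classifier on the full training set before evaluating on the test features.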
diff --git a/spaces/BreetheRun/mitchtech-vulcan-diffusion/app.py b/spaces/BreetheRun/mitchtech-vulcan-diffusion/app.py deleted file mode 100644 index 9d9d7b4f919990252ef3ee8fcdb8a55674448e04..0000000000000000000000000000000000000000 --- a/spaces/BreetheRun/mitchtech-vulcan-diffusion/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/mitchtech/vulcan-diffusion").launch() \ No newline at end of file diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/layers/csrc/box_iou_rotated/box_iou_rotated_cpu.cpp b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/layers/csrc/box_iou_rotated/box_iou_rotated_cpu.cpp deleted file mode 100644 index a6aaa810c59281cc45a4252784a62d7829a03556..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/layers/csrc/box_iou_rotated/box_iou_rotated_cpu.cpp +++ /dev/null @@ -1,46 +0,0 @@ -// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -#include "box_iou_rotated.h" -#include "box_iou_rotated_utils.h" - -namespace detectron2 { - -template -void box_iou_rotated_cpu_kernel( - const at::Tensor& boxes1, - const at::Tensor& boxes2, - at::Tensor& ious) { - auto widths1 = boxes1.select(1, 2).contiguous(); - auto heights1 = boxes1.select(1, 3).contiguous(); - auto widths2 = boxes2.select(1, 2).contiguous(); - auto heights2 = boxes2.select(1, 3).contiguous(); - - at::Tensor areas1 = widths1 * heights1; - at::Tensor areas2 = widths2 * heights2; - - auto num_boxes1 = boxes1.size(0); - auto num_boxes2 = boxes2.size(0); - - for (int i = 0; i < num_boxes1; i++) { - for (int j = 0; j < num_boxes2; j++) { - ious[i * num_boxes2 + j] = single_box_iou_rotated( - boxes1[i].data_ptr(), boxes2[j].data_ptr()); - } - } -} - -at::Tensor box_iou_rotated_cpu( - const at::Tensor& boxes1, - const at::Tensor& boxes2) { - auto num_boxes1 = boxes1.size(0); - auto num_boxes2 = boxes2.size(0); - at::Tensor ious = - at::empty({num_boxes1 * num_boxes2}, boxes1.options().dtype(at::kFloat)); - - box_iou_rotated_cpu_kernel(boxes1, boxes2, ious); - - // reshape from 1d array to 2d array - auto shape = std::vector{num_boxes1, num_boxes2}; - return ious.reshape(shape); -} - -} // namespace detectron2 diff --git a/spaces/CVPR/LIVE/pybind11/pybind11/__init__.py b/spaces/CVPR/LIVE/pybind11/pybind11/__init__.py deleted file mode 100644 index 5b2f83d5cd93c073ad130cc113bab25a1d03255b..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pybind11/pybind11/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -# -*- coding: utf-8 -*- -from ._version import version_info, __version__ # noqa: F401 imported but unused - - -def get_include(user=False): - import os - d = os.path.dirname(__file__) - if os.path.exists(os.path.join(d, "include")): - # Package is installed - return os.path.join(d, "include") - else: - # Package is from a source directory - return os.path.join(os.path.dirname(d), "include") diff --git a/spaces/CVPR/LIVE/pybind11/tools/clang/enumerations.py b/spaces/CVPR/LIVE/pybind11/tools/clang/enumerations.py deleted file mode 100644 index a86a48ade3bd7ad00e455bebb3b94ecf25ddf8e4..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pybind11/tools/clang/enumerations.py +++ /dev/null @@ -1,34 +0,0 @@ -#===- enumerations.py - Python Enumerations ------------------*- python -*--===# -# -# The LLVM Compiler Infrastructure -# -# This file is distributed under the University of Illinois Open Source -# License. See LICENSE.TXT for details. 
-# -#===------------------------------------------------------------------------===# - -""" -Clang Enumerations -================== - -This module provides static definitions of enumerations that exist in libclang. - -Enumerations are typically defined as a list of tuples. The exported values are -typically munged into other types or classes at module load time. - -All enumerations are centrally defined in this file so they are all grouped -together and easier to audit. And, maybe even one day this file will be -automatically generated by scanning the libclang headers! -""" - -# Maps to CXTokenKind. Note that libclang maintains a separate set of token -# enumerations from the C++ API. -TokenKinds = [ - ('PUNCTUATION', 0), - ('KEYWORD', 1), - ('IDENTIFIER', 2), - ('LITERAL', 3), - ('COMMENT', 4), -] - -__all__ = ['TokenKinds'] diff --git a/spaces/CVPR/LIVE/thrust/thrust/iterator/transform_input_output_iterator.h b/spaces/CVPR/LIVE/thrust/thrust/iterator/transform_input_output_iterator.h deleted file mode 100644 index 25c10eb58e93cbadb298fc68bbd4d24b3dc5a7cb..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/iterator/transform_input_output_iterator.h +++ /dev/null @@ -1,163 +0,0 @@ -/* - * Copyright 2020 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*! \file thrust/iterator/transform_input_output_iterator.h - * \brief An iterator which adapts another iterator by applying transform - * functions when reading and writing dereferenced values. - */ - -#pragma once - -#include -#include - -namespace thrust -{ - -/*! \addtogroup iterators - * \{ - */ - -/*! \addtogroup fancyiterator Fancy Iterators - * \ingroup iterators - * \{ - */ - -/*! \p transform_input_output_iterator is a special kind of iterator which applies - * transform functions when reading from or writing to dereferenced values. - * This iterator is useful for algorithms that operate on a type that needs to - * be serialized/deserialized from values in another iterator, avoiding the - * need to materialize intermediate results in memory. This also enables the - * transform functions to be fused with the operations that read and write to - * the `transform_input_output_iterator`. - * - * The following code snippet demonstrates how to create a - * \p transform_input_output_iterator which performs different transformations when - * reading from and writing to the iterator. 
- * - * \code - * #include - * #include - * - * int main() - * { - * const size_t size = 4; - * thrust::device_vector v(size); - * - * // Write 1.0f, 2.0f, 3.0f, 4.0f to vector - * thrust::sequence(v.begin(), v.end(), 1); - * - * // Iterator that returns negated values and writes squared values - * auto iter = thrust::make_transform_input_output_iterator(v.begin(), - * thrust::negate{}, thrust::square{}); - * - * // Iterator negates values when reading - * std::cout << iter[0] << " "; // -1.0f; - * std::cout << iter[1] << " "; // -2.0f; - * std::cout << iter[2] << " "; // -3.0f; - * std::cout << iter[3] << "\n"; // -4.0f; - * - * // Write 1.0f, 2.0f, 3.0f, 4.0f to iterator - * thrust::sequence(iter, iter + size, 1); - * - * // Values were squared before writing to vector - * std::cout << v[0] << " "; // 1.0f; - * std::cout << v[1] << " "; // 4.0f; - * std::cout << v[2] << " "; // 9.0f; - * std::cout << v[3] << "\n"; // 16.0f; - * - * } - * \endcode - * - * \see make_transform_input_output_iterator - */ - -template - class transform_input_output_iterator - : public detail::transform_input_output_iterator_base::type -{ - - /*! \cond - */ - - public: - - typedef typename - detail::transform_input_output_iterator_base::type - super_t; - - friend class thrust::iterator_core_access; - /*! \endcond - */ - - /*! This constructor takes as argument a \c Iterator an \c InputFunction and an - * \c OutputFunction and copies them to a new \p transform_input_output_iterator - * - * \param io An \c Iterator pointing to where the input to \c InputFunction - * will be read from and the result of \c OutputFunction will be written to - * \param input_function An \c InputFunction to be executed on values read from the iterator - * \param output_function An \c OutputFunction to be executed on values written to the iterator - */ - __host__ __device__ - transform_input_output_iterator(Iterator const& io, InputFunction input_function, OutputFunction output_function) - : super_t(io), input_function(input_function), output_function(output_function) - { - } - - /*! \cond - */ - private: - - __host__ __device__ - typename super_t::reference dereference() const - { - return detail::transform_input_output_iterator_proxy< - InputFunction, OutputFunction, Iterator - >(this->base_reference(), input_function, output_function); - } - - InputFunction input_function; - OutputFunction output_function; - - /*! \endcond - */ -}; // end transform_input_output_iterator - -/*! \p make_transform_input_output_iterator creates a \p transform_input_output_iterator from - * an \c Iterator a \c InputFunction and a \c OutputFunction - * - * \param io An \c Iterator pointing to where the input to \c InputFunction - * will be read from and the result of \c OutputFunction will be written to - * \param input_function An \c InputFunction to be executed on values read from the iterator - * \param output_function An \c OutputFunction to be executed on values written to the iterator - * \see transform_input_output_iterator - */ -template -transform_input_output_iterator -__host__ __device__ -make_transform_input_output_iterator(Iterator io, InputFunction input_function, OutputFunction output_function) -{ - return transform_input_output_iterator(io, input_function, output_function); -} // end make_transform_input_output_iterator - -/*! \} // end fancyiterators - */ - -/*! 
\} // end iterators - */ - -} // end thrust - diff --git a/spaces/CVPR/LIVE/thrust/thrust/shuffle.h b/spaces/CVPR/LIVE/thrust/thrust/shuffle.h deleted file mode 100644 index 8ed156e15227047072938bc80d8d90309093671e..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/shuffle.h +++ /dev/null @@ -1,179 +0,0 @@ -/* - * Copyright 2008-2020 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*! \file shuffle.h - * \brief Reorders range by a uniform random permutation - */ - -#pragma once - -#include -#include - -#if THRUST_CPP_DIALECT >= 2011 - -#include -#include - -namespace thrust { - -/*! \addtogroup reordering -* \ingroup algorithms -* -* \addtogroup shuffling -* \ingroup reordering -* \{ -*/ - - -/*! \p shuffle reorders the elements [first, last) by a uniform pseudorandom permutation, defined by - * random engine \p g. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first The beginning of the sequence to shuffle. - * \param last The end of the sequence to shuffle. - * \param g A UniformRandomBitGenerator - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam RandomIterator is a random access iterator - * \tparam URBG is a uniform random bit generator - * - * The following code snippet demonstrates how to use \p shuffle to create a random permutation - * using the \p thrust::host execution policy for parallelization: - * - * \code - * #include - * #include - * #include - * int A[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}; - * const int N = sizeof(A)/sizeof(int); - * thrust::default_random_engine g; - * thrust::shuffle(thrust::host, A, A + N, g); - * // A is now {6, 5, 8, 7, 2, 1, 4, 3, 10, 9} - * \endcode - * - * \see \p shuffle_copy - */ -template -__host__ __device__ void shuffle( - const thrust::detail::execution_policy_base& exec, - RandomIterator first, RandomIterator last, URBG&& g); - -/*! \p shuffle reorders the elements [first, last) by a uniform pseudorandom permutation, defined by - * random engine \p g. - * - * \param first The beginning of the sequence to shuffle. - * \param last The end of the sequence to shuffle. - * \param g A UniformRandomBitGenerator - * - * \tparam RandomIterator is a random access iterator - * \tparam URBG is a uniform random bit generator - * - * The following code snippet demonstrates how to use \p shuffle to create a random permutation. - * - * \code - * #include - * #include - * int A[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}; - * const int N = sizeof(A)/sizeof(int); - * thrust::default_random_engine g; - * thrust::shuffle(A, A + N, g); - * // A is now {6, 5, 8, 7, 2, 1, 4, 3, 10, 9} - * \endcode - * - * \see \p shuffle_copy - */ -template -__host__ __device__ void shuffle(RandomIterator first, RandomIterator last, - URBG&& g); - -/*! shuffle_copy differs from shuffle only in that the reordered sequence is written to different output sequences, rather than in place. 
- * \p shuffle_copy reorders the elements [first, last) by a uniform pseudorandom permutation, defined by - * random engine \p g. - * - * The algorithm's execution is parallelized as determined by \p exec. - - * \param exec The execution policy to use for parallelization. - * \param first The beginning of the sequence to shuffle. - * \param last The end of the sequence to shuffle. - * \param result Destination of shuffled sequence - * \param g A UniformRandomBitGenerator - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam RandomIterator is a random access iterator - * \tparam OutputIterator is a model of Output Iterator. - * \tparam URBG is a uniform random bit generator - * - * The following code snippet demonstrates how to use \p shuffle_copy to create a random permutation. - * - * \code - * #include - * #include - * #include - * int A[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}; - * int result[10]; - * const int N = sizeof(A)/sizeof(int); - * thrust::default_random_engine g; - * thrust::shuffle_copy(thrust::host, A, A + N, result, g); - * // result is now {6, 5, 8, 7, 2, 1, 4, 3, 10, 9} - * \endcode - * - * \see \p shuffle - */ -template -__host__ __device__ void shuffle_copy( - const thrust::detail::execution_policy_base& exec, - RandomIterator first, RandomIterator last, OutputIterator result, URBG&& g); - -/*! shuffle_copy differs from shuffle only in that the reordered sequence is written to different output sequences, rather than in place. - *\p shuffle_copy reorders the elements [first, last) by a uniform pseudorandom permutation, defined by - * random engine \p g. - * - * \param first The beginning of the sequence to shuffle. - * \param last The end of the sequence to shuffle. - * \param result Destination of shuffled sequence - * \param g A UniformRandomBitGenerator - * - * \tparam RandomIterator is a random access iterator - * \tparam OutputIterator is a model of Output Iterator. - * \tparam URBG is a uniform random bit generator - * - * The following code snippet demonstrates how to use \p shuffle_copy to create a random permutation. 
- * - * \code - * #include - * #include - * int A[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}; - * int result[10]; - * const int N = sizeof(A)/sizeof(int); - * thrust::default_random_engine g; - * thrust::shuffle_copy(A, A + N, result, g); - * // result is now {6, 5, 8, 7, 2, 1, 4, 3, 10, 9} - * \endcode - * - * \see \p shuffle - */ -template -__host__ __device__ void shuffle_copy(RandomIterator first, RandomIterator last, - OutputIterator result, URBG&& g); - -} // namespace thrust - -#include -#endif diff --git a/spaces/CVPR/MonoScene/monoscene/flosp.py b/spaces/CVPR/MonoScene/monoscene/flosp.py deleted file mode 100644 index 2d502197a72ee120773a47f239e86743f5a1e2d4..0000000000000000000000000000000000000000 --- a/spaces/CVPR/MonoScene/monoscene/flosp.py +++ /dev/null @@ -1,41 +0,0 @@ -import torch -import torch.nn as nn - - -class FLoSP(nn.Module): - def __init__(self, scene_size, dataset, project_scale): - super().__init__() - self.scene_size = scene_size - self.dataset = dataset - self.project_scale = project_scale - - def forward(self, x2d, projected_pix, fov_mask): - c, h, w = x2d.shape - - src = x2d.view(c, -1) - zeros_vec = torch.zeros(c, 1).type_as(src) - src = torch.cat([src, zeros_vec], 1) - - pix_x, pix_y = projected_pix[:, 0], projected_pix[:, 1] - img_indices = pix_y * w + pix_x - img_indices[~fov_mask] = h * w - img_indices = img_indices.expand(c, -1).long() # c, HWD - src_feature = torch.gather(src, 1, img_indices) - - if self.dataset == "NYU": - x3d = src_feature.reshape( - c, - self.scene_size[0] // self.project_scale, - self.scene_size[2] // self.project_scale, - self.scene_size[1] // self.project_scale, - ) - x3d = x3d.permute(0, 1, 3, 2) - elif self.dataset == "kitti": - x3d = src_feature.reshape( - c, - self.scene_size[0] // self.project_scale, - self.scene_size[1] // self.project_scale, - self.scene_size[2] // self.project_scale, - ) - - return x3d diff --git a/spaces/CVPR/WALT/mmdet/models/roi_heads/mask_heads/maskiou_head.py b/spaces/CVPR/WALT/mmdet/models/roi_heads/mask_heads/maskiou_head.py deleted file mode 100644 index 39bcd6a7dbdb089cd19cef811038e0b6a80ab89a..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/models/roi_heads/mask_heads/maskiou_head.py +++ /dev/null @@ -1,186 +0,0 @@ -import numpy as np -import torch -import torch.nn as nn -from mmcv.cnn import Conv2d, Linear, MaxPool2d, kaiming_init, normal_init -from mmcv.runner import force_fp32 -from torch.nn.modules.utils import _pair - -from mmdet.models.builder import HEADS, build_loss - - -@HEADS.register_module() -class MaskIoUHead(nn.Module): - """Mask IoU Head. - - This head predicts the IoU of predicted masks and corresponding gt masks. 
- """ - - def __init__(self, - num_convs=4, - num_fcs=2, - roi_feat_size=14, - in_channels=256, - conv_out_channels=256, - fc_out_channels=1024, - num_classes=80, - loss_iou=dict(type='MSELoss', loss_weight=0.5)): - super(MaskIoUHead, self).__init__() - self.in_channels = in_channels - self.conv_out_channels = conv_out_channels - self.fc_out_channels = fc_out_channels - self.num_classes = num_classes - self.fp16_enabled = False - - self.convs = nn.ModuleList() - for i in range(num_convs): - if i == 0: - # concatenation of mask feature and mask prediction - in_channels = self.in_channels + 1 - else: - in_channels = self.conv_out_channels - stride = 2 if i == num_convs - 1 else 1 - self.convs.append( - Conv2d( - in_channels, - self.conv_out_channels, - 3, - stride=stride, - padding=1)) - - roi_feat_size = _pair(roi_feat_size) - pooled_area = (roi_feat_size[0] // 2) * (roi_feat_size[1] // 2) - self.fcs = nn.ModuleList() - for i in range(num_fcs): - in_channels = ( - self.conv_out_channels * - pooled_area if i == 0 else self.fc_out_channels) - self.fcs.append(Linear(in_channels, self.fc_out_channels)) - - self.fc_mask_iou = Linear(self.fc_out_channels, self.num_classes) - self.relu = nn.ReLU() - self.max_pool = MaxPool2d(2, 2) - self.loss_iou = build_loss(loss_iou) - - def init_weights(self): - for conv in self.convs: - kaiming_init(conv) - for fc in self.fcs: - kaiming_init( - fc, - a=1, - mode='fan_in', - nonlinearity='leaky_relu', - distribution='uniform') - normal_init(self.fc_mask_iou, std=0.01) - - def forward(self, mask_feat, mask_pred): - mask_pred = mask_pred.sigmoid() - mask_pred_pooled = self.max_pool(mask_pred.unsqueeze(1)) - - x = torch.cat((mask_feat, mask_pred_pooled), 1) - - for conv in self.convs: - x = self.relu(conv(x)) - x = x.flatten(1) - for fc in self.fcs: - x = self.relu(fc(x)) - mask_iou = self.fc_mask_iou(x) - return mask_iou - - @force_fp32(apply_to=('mask_iou_pred', )) - def loss(self, mask_iou_pred, mask_iou_targets): - pos_inds = mask_iou_targets > 0 - if pos_inds.sum() > 0: - loss_mask_iou = self.loss_iou(mask_iou_pred[pos_inds], - mask_iou_targets[pos_inds]) - else: - loss_mask_iou = mask_iou_pred.sum() * 0 - return dict(loss_mask_iou=loss_mask_iou) - - @force_fp32(apply_to=('mask_pred', )) - def get_targets(self, sampling_results, gt_masks, mask_pred, mask_targets, - rcnn_train_cfg): - """Compute target of mask IoU. - - Mask IoU target is the IoU of the predicted mask (inside a bbox) and - the gt mask of corresponding gt mask (the whole instance). - The intersection area is computed inside the bbox, and the gt mask area - is computed with two steps, firstly we compute the gt area inside the - bbox, then divide it by the area ratio of gt area inside the bbox and - the gt area of the whole instance. - - Args: - sampling_results (list[:obj:`SamplingResult`]): sampling results. - gt_masks (BitmapMask | PolygonMask): Gt masks (the whole instance) - of each image, with the same shape of the input image. - mask_pred (Tensor): Predicted masks of each positive proposal, - shape (num_pos, h, w). - mask_targets (Tensor): Gt mask of each positive proposal, - binary map of the shape (num_pos, h, w). - rcnn_train_cfg (dict): Training config for R-CNN part. - - Returns: - Tensor: mask iou target (length == num positive). 
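-
-            Note: each target below is computed as
-            overlap / (mask_pred_area + gt_full_area - overlap), where
-            gt_full_area is the gt area inside the bbox divided by its
-            area ratio (the fraction of the instance inside the bbox).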
- """ - pos_proposals = [res.pos_bboxes for res in sampling_results] - pos_assigned_gt_inds = [ - res.pos_assigned_gt_inds for res in sampling_results - ] - - # compute the area ratio of gt areas inside the proposals and - # the whole instance - area_ratios = map(self._get_area_ratio, pos_proposals, - pos_assigned_gt_inds, gt_masks) - area_ratios = torch.cat(list(area_ratios)) - assert mask_targets.size(0) == area_ratios.size(0) - - mask_pred = (mask_pred > rcnn_train_cfg.mask_thr_binary).float() - mask_pred_areas = mask_pred.sum((-1, -2)) - - # mask_pred and mask_targets are binary maps - overlap_areas = (mask_pred * mask_targets).sum((-1, -2)) - - # compute the mask area of the whole instance - gt_full_areas = mask_targets.sum((-1, -2)) / (area_ratios + 1e-7) - - mask_iou_targets = overlap_areas / ( - mask_pred_areas + gt_full_areas - overlap_areas) - return mask_iou_targets - - def _get_area_ratio(self, pos_proposals, pos_assigned_gt_inds, gt_masks): - """Compute area ratio of the gt mask inside the proposal and the gt - mask of the corresponding instance.""" - num_pos = pos_proposals.size(0) - if num_pos > 0: - area_ratios = [] - proposals_np = pos_proposals.cpu().numpy() - pos_assigned_gt_inds = pos_assigned_gt_inds.cpu().numpy() - # compute mask areas of gt instances (batch processing for speedup) - gt_instance_mask_area = gt_masks.areas - for i in range(num_pos): - gt_mask = gt_masks[pos_assigned_gt_inds[i]] - - # crop the gt mask inside the proposal - bbox = proposals_np[i, :].astype(np.int32) - gt_mask_in_proposal = gt_mask.crop(bbox) - - ratio = gt_mask_in_proposal.areas[0] / ( - gt_instance_mask_area[pos_assigned_gt_inds[i]] + 1e-7) - area_ratios.append(ratio) - area_ratios = torch.from_numpy(np.stack(area_ratios)).float().to( - pos_proposals.device) - else: - area_ratios = pos_proposals.new_zeros((0, )) - return area_ratios - - @force_fp32(apply_to=('mask_iou_pred', )) - def get_mask_scores(self, mask_iou_pred, det_bboxes, det_labels): - """Get the mask scores. 
- - mask_score = bbox_score * mask_iou - """ - inds = range(det_labels.size(0)) - mask_scores = mask_iou_pred[inds, det_labels] * det_bboxes[inds, -1] - mask_scores = mask_scores.cpu().numpy() - det_labels = det_labels.cpu().numpy() - return [mask_scores[det_labels == i] for i in range(self.num_classes)] diff --git a/spaces/CVPR/WALT/mmdet/models/utils/builder.py b/spaces/CVPR/WALT/mmdet/models/utils/builder.py deleted file mode 100644 index f362d1c92ca9d4ed95a2b3d28d3e6baedd14e462..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/models/utils/builder.py +++ /dev/null @@ -1,14 +0,0 @@ -from mmcv.utils import Registry, build_from_cfg - -TRANSFORMER = Registry('Transformer') -POSITIONAL_ENCODING = Registry('Position encoding') - - -def build_transformer(cfg, default_args=None): - """Builder for Transformer.""" - return build_from_cfg(cfg, TRANSFORMER, default_args) - - -def build_positional_encoding(cfg, default_args=None): - """Builder for Position Encoding.""" - return build_from_cfg(cfg, POSITIONAL_ENCODING, default_args) diff --git a/spaces/Copy233/copy/README.md b/spaces/Copy233/copy/README.md deleted file mode 100644 index a1d8f85510b85899dc7d22770ab7859484075847..0000000000000000000000000000000000000000 --- a/spaces/Copy233/copy/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Real CUGAN -emoji: 🔥 -colorFrom: pink -colorTo: yellow -sdk: gradio -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/utils/chars.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/utils/chars.py deleted file mode 100644 index 71772ab85dec2b42458e25593b611e5f24e465d2..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/utils/chars.py +++ /dev/null @@ -1,199 +0,0 @@ -import os - -import cv2 -import numpy as np - - -def char2num(char): - if char in "0123456789": - num = ord(char) - ord("0") + 1 - elif char in "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ": - num = ord(char.lower()) - ord("a") + 11 - else: - num = 0 - return num - - -def num2char(num): - chars = "_0123456789abcdefghijklmnopqrstuvwxyz" - char = chars[num] - # if num >=1 and num <=10: - # char = chr(ord('0') + num - 1) - # elif num > 10 and num <= 36: - # char = chr(ord('a') + num - 11) - # else: - # print('error number:%d'%(num)) - # exit() - return char - - -def getstr_grid(seg, box, threshold=192): - pos = 255 - (seg[0] * 255).astype(np.uint8) - mask_index = np.argmax(seg, axis=0) - mask_index = mask_index.astype(np.uint8) - pos = pos.astype(np.uint8) - string, score, rec_scores, char_polygons = seg2text( - pos, mask_index, seg, box, threshold=threshold - ) - return string, score, rec_scores, char_polygons - - -def seg2text(gray, mask, seg, box, threshold=192): - ## input numpy - img_h, img_w = gray.shape - box_w = box[2] - box[0] - box_h = box[3] - box[1] - ratio_h = float(box_h) / img_h - ratio_w = float(box_w) / img_w - # SE1=cv2.getStructuringElement(cv2.MORPH_RECT,(3,3)) - # gray = cv2.erode(gray,SE1) - # gray = cv2.dilate(gray,SE1) - # gray = cv2.morphologyEx(gray,cv2.MORPH_CLOSE,SE1) - ret, thresh = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY) - try: - _, contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) - except: - contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) - chars = [] - scores = [] - char_polygons = [] - for i in 
range(len(contours)): - char = {} - temp = np.zeros((img_h, img_w)).astype(np.uint8) - cv2.drawContours(temp, [contours[i]], 0, (255), -1) - x, y, w, h = cv2.boundingRect(contours[i]) - c_x, c_y = x + w / 2, y + h / 2 - perimeter = cv2.arcLength(contours[i], True) - epsilon = 0.01 * cv2.arcLength(contours[i], True) - approx = cv2.approxPolyDP(contours[i], epsilon, True) - pts = approx.reshape((-1, 2)) - pts[:, 0] = pts[:, 0] * ratio_w + box[0] - pts[:, 1] = pts[:, 1] * ratio_h + box[1] - polygon = list(pts.reshape((-1,))) - polygon = list(map(int, polygon)) - if len(polygon) >= 6: - char_polygons.append(polygon) - # x1 = x * ratio_w + box[0] - # y1 = y * ratio_h + box[1] - # x3 = (x + w) * ratio_w + box[0] - # y3 = (y + h) * ratio_h + box[1] - # polygon = [x1, y1, x3, y1, x3, y3, x1, y3] - regions = seg[1:, temp == 255].reshape((36, -1)) - cs = np.mean(regions, axis=1) - sym = num2char(np.argmax(cs.reshape((-1))) + 1) - char["x"] = c_x - char["y"] = c_y - char["s"] = sym - char["cs"] = cs.reshape((-1, 1)) - scores.append(np.max(char["cs"], axis=0)[0]) - - chars.append(char) - chars = sorted(chars, key=lambda x: x["x"]) - string = "" - css = [] - for char in chars: - string = string + char["s"] - css.append(char["cs"]) - if len(scores) > 0: - score = sum(scores) / len(scores) - else: - score = 0.00 - if not css: - css = [0.0] - return string, score, np.hstack(css), char_polygons - - -# def get_tight_rect(points, start_x, start_y, image_height, image_width, scale): -# points = list(points) -# ps = sorted(points, key=lambda x: x[0]) -# -# if ps[1][1] > ps[0][1]: -# px1 = ps[0][0] * scale + start_x -# py1 = ps[0][1] * scale + start_y -# px4 = ps[1][0] * scale + start_x -# py4 = ps[1][1] * scale + start_y -# else: -# px1 = ps[1][0] * scale + start_x -# py1 = ps[1][1] * scale + start_y -# px4 = ps[0][0] * scale + start_x -# py4 = ps[0][1] * scale + start_y -# if ps[3][1] > ps[2][1]: -# px2 = ps[2][0] * scale + start_x -# py2 = ps[2][1] * scale + start_y -# px3 = ps[3][0] * scale + start_x -# py3 = ps[3][1] * scale + start_y -# else: -# px2 = ps[3][0] * scale + start_x -# py2 = ps[3][1] * scale + start_y -# px3 = ps[2][0] * scale + start_x -# py3 = ps[2][1] * scale + start_y -# -# if px1 < 0: -# px1 = 1 -# if px1 > image_width: -# px1 = image_width - 1 -# if px2 < 0: -# px2 = 1 -# if px2 > image_width: -# px2 = image_width - 1 -# if px3 < 0: -# px3 = 1 -# if px3 > image_width: -# px3 = image_width - 1 -# if px4 < 0: -# px4 = 1 -# if px4 > image_width: -# px4 = image_width - 1 -# -# if py1 < 0: -# py1 = 1 -# if py1 > image_height: -# py1 = image_height - 1 -# if py2 < 0: -# py2 = 1 -# if py2 > image_height: -# py2 = image_height - 1 -# if py3 < 0: -# py3 = 1 -# if py3 > image_height: -# py3 = image_height - 1 -# if py4 < 0: -# py4 = 1 -# if py4 > image_height: -# py4 = image_height - 1 -# return [px1, py1, px2, py2, px3, py3, px4, py4] - -def get_tight_rect(points, start_x, start_y, image_height, image_width, scale): - points = list(points) - ps = sorted(points, key=lambda x: x[0]) - - if ps[1][1] > ps[0][1]: - px1 = ps[0][0] * scale + start_x - py1 = ps[0][1] * scale + start_y - px4 = ps[1][0] * scale + start_x - py4 = ps[1][1] * scale + start_y - else: - px1 = ps[1][0] * scale + start_x - py1 = ps[1][1] * scale + start_y - px4 = ps[0][0] * scale + start_x - py4 = ps[0][1] * scale + start_y - if ps[3][1] > ps[2][1]: - px2 = ps[2][0] * scale + start_x - py2 = ps[2][1] * scale + start_y - px3 = ps[3][0] * scale + start_x - py3 = ps[3][1] * scale + start_y - else: - px2 = ps[3][0] * scale + start_x 
- py2 = ps[3][1] * scale + start_y - px3 = ps[2][0] * scale + start_x - py3 = ps[2][1] * scale + start_y - - px1 = min(max(px1, 1), image_width - 1) - px2 = min(max(px2, 1), image_width - 1) - px3 = min(max(px3, 1), image_width - 1) - px4 = min(max(px4, 1), image_width - 1) - py1 = min(max(py1, 1), image_height - 1) - py2 = min(max(py2, 1), image_height - 1) - py3 = min(max(py3, 1), image_height - 1) - py4 = min(max(py4, 1), image_height - 1) - return [px1, py1, px2, py2, px3, py3, px4, py4] diff --git a/spaces/DCandE/rvc-models/infer_pack/transforms.py b/spaces/DCandE/rvc-models/infer_pack/transforms.py deleted file mode 100644 index a11f799e023864ff7082c1f49c0cc18351a13b47..0000000000000000000000000000000000000000 --- a/spaces/DCandE/rvc-models/infer_pack/transforms.py +++ /dev/null @@ -1,209 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = {"tails": tails, "tail_bound": tail_bound} - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1 - - -def unconstrained_rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails="linear", - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == "linear": - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError("{} tails are not implemented.".format(tails)) - - ( - outputs[inside_interval_mask], - logabsdet[inside_interval_mask], - ) = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, - right=tail_bound, - bottom=-tail_bound, - top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - ) - - return outputs, logabsdet - - -def 
rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0.0, - right=1.0, - bottom=0.0, - top=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError("Input to a transform is not within its domain") - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError("Minimal bin width too large for the number of bins") - if min_bin_height * num_bins > 1.0: - raise ValueError("Minimal bin height too large for the number of bins") - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) + input_heights * (input_delta - input_derivatives) - b = input_heights * input_derivatives - (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) - c = -input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * ( - input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta - ) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - outputs = 
input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/varLib/iup.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/varLib/iup.py deleted file mode 100644 index 0f1232ad2ea1cac8953239a5bdc55f1cedbb5f02..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/varLib/iup.py +++ /dev/null @@ -1,496 +0,0 @@ -try: - import cython - - COMPILED = cython.compiled -except (AttributeError, ImportError): - # if cython not installed, use mock module with no-op decorators and types - from fontTools.misc import cython - - COMPILED = False - -from typing import ( - Sequence, - Tuple, - Union, -) -from numbers import Integral, Real - -try: - import cython - - COMPILED = cython.compiled -except (AttributeError, ImportError): - # if cython not installed, use mock module with no-op decorators and types - from fontTools.misc import cython - - COMPILED = False - - -_Point = Tuple[Real, Real] -_Delta = Tuple[Real, Real] -_PointSegment = Sequence[_Point] -_DeltaSegment = Sequence[_Delta] -_DeltaOrNone = Union[_Delta, None] -_DeltaOrNoneSegment = Sequence[_DeltaOrNone] -_Endpoints = Sequence[Integral] - - -MAX_LOOKBACK = 8 - - -@cython.cfunc -@cython.locals( - j=cython.int, - n=cython.int, - x1=cython.double, - x2=cython.double, - d1=cython.double, - d2=cython.double, - scale=cython.double, - x=cython.double, - d=cython.double, -) -def iup_segment( - coords: _PointSegment, rc1: _Point, rd1: _Delta, rc2: _Point, rd2: _Delta -): # -> _DeltaSegment: - """Given two reference coordinates `rc1` & `rc2` and their respective - delta vectors `rd1` & `rd2`, returns interpolated deltas for the set of - coordinates `coords`.""" - - # rc1 = reference coord 1 - # rd1 = reference delta 1 - out_arrays = [None, None] - for j in 0, 1: - out_arrays[j] = out = [] - x1, x2, d1, d2 = rc1[j], rc2[j], rd1[j], rd2[j] - - if x1 == x2: - n = len(coords) - if d1 == d2: - out.extend([d1] * n) - else: - out.extend([0] * n) - continue - - if x1 > x2: - x1, x2 = x2, x1 - d1, d2 = d2, d1 - - # x1 < x2 - scale = (d2 - d1) / (x2 - x1) - for pair in coords: - x = pair[j] - - if x <= x1: - d = d1 - elif x >= x2: - d = d2 - else: - # Interpolate - d = d1 + (x - x1) * scale - - out.append(d) - - return zip(*out_arrays) - - -def iup_contour(deltas: _DeltaOrNoneSegment, coords: _PointSegment) -> _DeltaSegment: - """For the contour given in `coords`, interpolate any missing - delta values in delta vector `deltas`. - - Returns fully filled-out delta vector.""" - - assert len(deltas) == len(coords) - if None not in deltas: - return deltas - - n = len(deltas) - # indices of points with explicit deltas - indices = [i for i, v in enumerate(deltas) if v is not None] - if not indices: - # All deltas are None. Return 0,0 for all. 
- return [(0, 0)] * n - - out = [] - it = iter(indices) - start = next(it) - if start != 0: - # Initial segment that wraps around - i1, i2, ri1, ri2 = 0, start, start, indices[-1] - out.extend( - iup_segment( - coords[i1:i2], coords[ri1], deltas[ri1], coords[ri2], deltas[ri2] - ) - ) - out.append(deltas[start]) - for end in it: - if end - start > 1: - i1, i2, ri1, ri2 = start + 1, end, start, end - out.extend( - iup_segment( - coords[i1:i2], coords[ri1], deltas[ri1], coords[ri2], deltas[ri2] - ) - ) - out.append(deltas[end]) - start = end - if start != n - 1: - # Final segment that wraps around - i1, i2, ri1, ri2 = start + 1, n, start, indices[0] - out.extend( - iup_segment( - coords[i1:i2], coords[ri1], deltas[ri1], coords[ri2], deltas[ri2] - ) - ) - - assert len(deltas) == len(out), (len(deltas), len(out)) - return out - - -def iup_delta( - deltas: _DeltaOrNoneSegment, coords: _PointSegment, ends: _Endpoints -) -> _DeltaSegment: - """For the outline given in `coords`, with contour endpoints given - in sorted increasing order in `ends`, interpolate any missing - delta values in delta vector `deltas`. - - Returns fully filled-out delta vector.""" - - assert sorted(ends) == ends and len(coords) == (ends[-1] + 1 if ends else 0) + 4 - n = len(coords) - ends = ends + [n - 4, n - 3, n - 2, n - 1] - out = [] - start = 0 - for end in ends: - end += 1 - contour = iup_contour(deltas[start:end], coords[start:end]) - out.extend(contour) - start = end - - return out - - -# Optimizer - - -@cython.cfunc -@cython.inline -@cython.locals( - i=cython.int, - j=cython.int, - tolerance=cython.double, - x=cython.double, - y=cython.double, - p=cython.double, - q=cython.double, -) -@cython.returns(int) -def can_iup_in_between( - deltas: _DeltaSegment, - coords: _PointSegment, - i: Integral, - j: Integral, - tolerance: Real, -): # -> bool: - """Return true if the deltas for points at `i` and `j` (`i < j`) can be - successfully used to interpolate deltas for points in between them within - provided error tolerance.""" - - assert j - i >= 2 - interp = iup_segment(coords[i + 1 : j], coords[i], deltas[i], coords[j], deltas[j]) - deltas = deltas[i + 1 : j] - - return all( - abs(complex(x - p, y - q)) <= tolerance - for (x, y), (p, q) in zip(deltas, interp) - ) - - -@cython.locals( - cj=cython.double, - dj=cython.double, - lcj=cython.double, - ldj=cython.double, - ncj=cython.double, - ndj=cython.double, - force=cython.int, - forced=set, -) -def _iup_contour_bound_forced_set( - deltas: _DeltaSegment, coords: _PointSegment, tolerance: Real = 0 -) -> set: - """The forced set is a conservative set of points on the contour that must be encoded - explicitly (ie. cannot be interpolated). Calculating this set allows for significantly - speeding up the dynamic-programming, as well as resolve circularity in DP. - - The set is precise; that is, if an index is in the returned set, then there is no way - that IUP can generate delta for that point, given `coords` and `deltas`. - """ - assert len(deltas) == len(coords) - - n = len(deltas) - forced = set() - # Track "last" and "next" points on the contour as we sweep. 
- for i in range(len(deltas) - 1, -1, -1): - ld, lc = deltas[i - 1], coords[i - 1] - d, c = deltas[i], coords[i] - nd, nc = deltas[i - n + 1], coords[i - n + 1] - - for j in (0, 1): # For X and for Y - cj = c[j] - dj = d[j] - lcj = lc[j] - ldj = ld[j] - ncj = nc[j] - ndj = nd[j] - - if lcj <= ncj: - c1, c2 = lcj, ncj - d1, d2 = ldj, ndj - else: - c1, c2 = ncj, lcj - d1, d2 = ndj, ldj - - force = False - - # If the two coordinates are the same, then the interpolation - # algorithm produces the same delta if both deltas are equal, - # and zero if they differ. - # - # This test has to be before the next one. - if c1 == c2: - if abs(d1 - d2) > tolerance and abs(dj) > tolerance: - force = True - - # If coordinate for current point is between coordinate of adjacent - # points on the two sides, but the delta for current point is NOT - # between delta for those adjacent points (considering tolerance - # allowance), then there is no way that current point can be IUP-ed. - # Mark it forced. - elif c1 <= cj <= c2: # and c1 != c2 - if not (min(d1, d2) - tolerance <= dj <= max(d1, d2) + tolerance): - force = True - - # Otherwise, the delta should either match the closest, or have the - # same sign as the interpolation of the two deltas. - else: # cj < c1 or c2 < cj - if d1 != d2: - if cj < c1: - if ( - abs(dj) > tolerance - and abs(dj - d1) > tolerance - and ((dj - tolerance < d1) != (d1 < d2)) - ): - force = True - else: # c2 < cj - if ( - abs(dj) > tolerance - and abs(dj - d2) > tolerance - and ((d2 < dj + tolerance) != (d1 < d2)) - ): - force = True - - if force: - forced.add(i) - break - - return forced - - -@cython.locals( - i=cython.int, - j=cython.int, - best_cost=cython.double, - best_j=cython.int, - cost=cython.double, - forced=set, - tolerance=cython.double, -) -def _iup_contour_optimize_dp( - deltas: _DeltaSegment, - coords: _PointSegment, - forced=set(), - tolerance: Real = 0, - lookback: Integral = None, -): - """Straightforward Dynamic-Programming. For each index i, find least-costly encoding of - points 0 to i where i is explicitly encoded. We find this by considering all previous - explicit points j and check whether interpolation can fill points between j and i. - - Note that solution always encodes last point explicitly. Higher-level is responsible - for removing that restriction. - - As major speedup, we stop looking further whenever we see a "forced" point.""" - - n = len(deltas) - if lookback is None: - lookback = n - lookback = min(lookback, MAX_LOOKBACK) - costs = {-1: 0} - chain = {-1: None} - for i in range(0, n): - best_cost = costs[i - 1] + 1 - - costs[i] = best_cost - chain[i] = i - 1 - - if i - 1 in forced: - continue - - for j in range(i - 2, max(i - lookback, -2), -1): - cost = costs[j] + 1 - - if cost < best_cost and can_iup_in_between(deltas, coords, j, i, tolerance): - costs[i] = best_cost = cost - chain[i] = j - - if j in forced: - break - - return chain, costs - - -def _rot_list(l: list, k: int): - """Rotate list by k items forward. Ie. item at position 0 will be - at position k in returned list. Negative k is allowed.""" - n = len(l) - k %= n - if not k: - return l - return l[n - k :] + l[: n - k] - - -def _rot_set(s: set, k: int, n: int): - k %= n - if not k: - return s - return {(v + k) % n for v in s} - - -def iup_contour_optimize( - deltas: _DeltaSegment, coords: _PointSegment, tolerance: Real = 0.0 -) -> _DeltaOrNoneSegment: - """For contour with coordinates `coords`, optimize a set of delta - values `deltas` within error `tolerance`. 
- - Returns delta vector that has most number of None items instead of - the input delta. - """ - - n = len(deltas) - - # Get the easy cases out of the way: - - # If all are within tolerance distance of 0, encode nothing: - if all(abs(complex(*p)) <= tolerance for p in deltas): - return [None] * n - - # If there's exactly one point, return it: - if n == 1: - return deltas - - # If all deltas are exactly the same, return just one (the first one): - d0 = deltas[0] - if all(d0 == d for d in deltas): - return [d0] + [None] * (n - 1) - - # Else, solve the general problem using Dynamic Programming. - - forced = _iup_contour_bound_forced_set(deltas, coords, tolerance) - # The _iup_contour_optimize_dp() routine returns the optimal encoding - # solution given the constraint that the last point is always encoded. - # To remove this constraint, we use two different methods, depending on - # whether forced set is non-empty or not: - - # Debugging: Make the next if always take the second branch and observe - # if the font size changes (reduced); that would mean the forced-set - # has members it should not have. - if forced: - # Forced set is non-empty: rotate the contour start point - # such that the last point in the list is a forced point. - k = (n - 1) - max(forced) - assert k >= 0 - - deltas = _rot_list(deltas, k) - coords = _rot_list(coords, k) - forced = _rot_set(forced, k, n) - - # Debugging: Pass a set() instead of forced variable to the next call - # to exercise forced-set computation for under-counting. - chain, costs = _iup_contour_optimize_dp(deltas, coords, forced, tolerance) - - # Assemble solution. - solution = set() - i = n - 1 - while i is not None: - solution.add(i) - i = chain[i] - solution.remove(-1) - - # if not forced <= solution: - # print("coord", coords) - # print("deltas", deltas) - # print("len", len(deltas)) - assert forced <= solution, (forced, solution) - - deltas = [deltas[i] if i in solution else None for i in range(n)] - - deltas = _rot_list(deltas, -k) - else: - # Repeat the contour an extra time, solve the new case, then look for solutions of the - # circular n-length problem in the solution for new linear case. I cannot prove that - # this always produces the optimal solution... - chain, costs = _iup_contour_optimize_dp( - deltas + deltas, coords + coords, forced, tolerance, n - ) - best_sol, best_cost = None, n + 1 - - for start in range(n - 1, len(costs) - 1): - # Assemble solution. - solution = set() - i = start - while i > start - n: - solution.add(i % n) - i = chain[i] - if i == start - n: - cost = costs[start] - costs[start - n] - if cost <= best_cost: - best_sol, best_cost = solution, cost - - # if not forced <= best_sol: - # print("coord", coords) - # print("deltas", deltas) - # print("len", len(deltas)) - assert forced <= best_sol, (forced, best_sol) - - deltas = [deltas[i] if i in best_sol else None for i in range(n)] - - return deltas - - -def iup_delta_optimize( - deltas: _DeltaSegment, - coords: _PointSegment, - ends: _Endpoints, - tolerance: Real = 0.0, -) -> _DeltaOrNoneSegment: - """For the outline given in `coords`, with contour endpoints given - in sorted increasing order in `ends`, optimize a set of delta - values `deltas` within error `tolerance`. - - Returns delta vector that has most number of None items instead of - the input delta. 
- """ - assert sorted(ends) == ends and len(coords) == (ends[-1] + 1 if ends else 0) + 4 - n = len(coords) - ends = ends + [n - 4, n - 3, n - 2, n - 1] - out = [] - start = 0 - for end in ends: - contour = iup_contour_optimize( - deltas[start : end + 1], coords[start : end + 1], tolerance - ) - assert len(contour) == end - start + 1 - out.extend(contour) - start = end + 1 - - return out diff --git a/spaces/Djacon/emotion_detection/README.md b/spaces/Djacon/emotion_detection/README.md deleted file mode 100644 index cb60ac2caafd7d58fbb20957c2599821b356ca8a..0000000000000000000000000000000000000000 --- a/spaces/Djacon/emotion_detection/README.md +++ /dev/null @@ -1,16 +0,0 @@ ---- -title: Emotion Detection -emoji: 🐠 -colorFrom: blue -colorTo: yellow -sdk: docker -pinned: false -license: mit ---- - -# Text2Feature -Powerful text mining tool. Convert raw text into clean using modern technologies of NLP and AI - -You can test my website right here 👇 - -Link: [djacon.github.io/text2feature](https://djacon-emotion-detection.hf.space) \ No newline at end of file diff --git a/spaces/DragGan/DragGan/stylegan_human/pti/pti_models/__init__.py b/spaces/DragGan/DragGan/stylegan_human/pti/pti_models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Duskfallcrew/flowers-2-1-768/README.md b/spaces/Duskfallcrew/flowers-2-1-768/README.md deleted file mode 100644 index 4b812957513cfb0234d076661b97af9e0d6667ca..0000000000000000000000000000000000000000 --- a/spaces/Duskfallcrew/flowers-2-1-768/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Flowers 2 1 768 -emoji: 📚 -colorFrom: purple -colorTo: red -sdk: gradio -sdk_version: 3.21.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ElainaFanBoy/MusicGen/audiocraft/quantization/base.py b/spaces/ElainaFanBoy/MusicGen/audiocraft/quantization/base.py deleted file mode 100644 index 1b16c130d266fbd021d3fc29bb9f98c33dd3c588..0000000000000000000000000000000000000000 --- a/spaces/ElainaFanBoy/MusicGen/audiocraft/quantization/base.py +++ /dev/null @@ -1,107 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Base class for all quantizers. -""" - -from dataclasses import dataclass, field -import typing as tp - -import torch -from torch import nn - - -@dataclass -class QuantizedResult: - x: torch.Tensor - codes: torch.Tensor - bandwidth: torch.Tensor # bandwidth in kb/s used, per batch item. - penalty: tp.Optional[torch.Tensor] = None - metrics: dict = field(default_factory=dict) - - -class BaseQuantizer(nn.Module): - """Base class for quantizers. - """ - - def forward(self, x: torch.Tensor, frame_rate: int) -> QuantizedResult: - """ - Given input tensor x, returns first the quantized (or approximately quantized) - representation along with quantized codes, bandwidth, and any penalty term for the loss. - Finally, this returns a dict of metrics to update logging etc. - Frame rate must be passed so that the bandwidth is properly computed. - """ - raise NotImplementedError() - - def encode(self, x: torch.Tensor) -> torch.Tensor: - """Encode a given input tensor with the specified sample rate at the given bandwidth. 
- """ - raise NotImplementedError() - - def decode(self, codes: torch.Tensor) -> torch.Tensor: - """Decode the given codes to the quantized representation. - """ - raise NotImplementedError() - - @property - def total_codebooks(self): - """Total number of codebooks. - """ - raise NotImplementedError() - - @property - def num_codebooks(self): - """Number of active codebooks. - """ - raise NotImplementedError() - - def set_num_codebooks(self, n: int): - """Set the number of active codebooks. - """ - raise NotImplementedError() - - -class DummyQuantizer(BaseQuantizer): - """Fake quantizer that actually does not perform any quantization. - """ - def __init__(self): - super().__init__() - - def forward(self, x: torch.Tensor, frame_rate: int): - q = x.unsqueeze(1) - return QuantizedResult(x, q, torch.tensor(q.numel() * 32 * frame_rate / 1000 / len(x)).to(x)) - - def encode(self, x: torch.Tensor) -> torch.Tensor: - """Encode a given input tensor with the specified sample rate at the given bandwidth. - In the case of the DummyQuantizer, the codes are actually identical - to the input and resulting quantized representation as no quantization is done. - """ - return x.unsqueeze(1) - - def decode(self, codes: torch.Tensor) -> torch.Tensor: - """Decode the given codes to the quantized representation. - In the case of the DummyQuantizer, the codes are actually identical - to the input and resulting quantized representation as no quantization is done. - """ - return codes.squeeze(1) - - @property - def total_codebooks(self): - """Total number of codebooks. - """ - return 1 - - @property - def num_codebooks(self): - """Total number of codebooks. - """ - return self.total_codebooks - - def set_num_codebooks(self, n: int): - """Set the number of active codebooks. - """ - raise AttributeError("Cannot override the number of codebooks for the dummy quantizer") diff --git a/spaces/EmanAbelwhab/foodvision_mini/README.md b/spaces/EmanAbelwhab/foodvision_mini/README.md deleted file mode 100644 index 48c57a6f3ddf9bf60c96a652845d6f078ad4fcb7..0000000000000000000000000000000000000000 --- a/spaces/EmanAbelwhab/foodvision_mini/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Foodvision Mini -emoji: 💩 -colorFrom: green -colorTo: yellow -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/EronSamez/RVC_HFmeu/lib/uvr5_pack/lib_v5/nets_537227KB.py b/spaces/EronSamez/RVC_HFmeu/lib/uvr5_pack/lib_v5/nets_537227KB.py deleted file mode 100644 index a1bb530e006482704f234c2e739a695174142941..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/lib/uvr5_pack/lib_v5/nets_537227KB.py +++ /dev/null @@ -1,123 +0,0 @@ -import torch -import numpy as np -from torch import nn -import torch.nn.functional as F - -from . 
import layers_537238KB as layers - - -class BaseASPPNet(nn.Module): - def __init__(self, nin, ch, dilations=(4, 8, 16)): - super(BaseASPPNet, self).__init__() - self.enc1 = layers.Encoder(nin, ch, 3, 2, 1) - self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1) - self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1) - self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1) - - self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations) - - self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1) - self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1) - self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1) - self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1) - - def __call__(self, x): - h, e1 = self.enc1(x) - h, e2 = self.enc2(h) - h, e3 = self.enc3(h) - h, e4 = self.enc4(h) - - h = self.aspp(h) - - h = self.dec4(h, e4) - h = self.dec3(h, e3) - h = self.dec2(h, e2) - h = self.dec1(h, e1) - - return h - - -class CascadedASPPNet(nn.Module): - def __init__(self, n_fft): - super(CascadedASPPNet, self).__init__() - self.stg1_low_band_net = BaseASPPNet(2, 64) - self.stg1_high_band_net = BaseASPPNet(2, 64) - - self.stg2_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0) - self.stg2_full_band_net = BaseASPPNet(32, 64) - - self.stg3_bridge = layers.Conv2DBNActiv(130, 64, 1, 1, 0) - self.stg3_full_band_net = BaseASPPNet(64, 128) - - self.out = nn.Conv2d(128, 2, 1, bias=False) - self.aux1_out = nn.Conv2d(64, 2, 1, bias=False) - self.aux2_out = nn.Conv2d(64, 2, 1, bias=False) - - self.max_bin = n_fft // 2 - self.output_bin = n_fft // 2 + 1 - - self.offset = 128 - - def forward(self, x, aggressiveness=None): - mix = x.detach() - x = x.clone() - - x = x[:, :, : self.max_bin] - - bandw = x.size()[2] // 2 - aux1 = torch.cat( - [ - self.stg1_low_band_net(x[:, :, :bandw]), - self.stg1_high_band_net(x[:, :, bandw:]), - ], - dim=2, - ) - - h = torch.cat([x, aux1], dim=1) - aux2 = self.stg2_full_band_net(self.stg2_bridge(h)) - - h = torch.cat([x, aux1, aux2], dim=1) - h = self.stg3_full_band_net(self.stg3_bridge(h)) - - mask = torch.sigmoid(self.out(h)) - mask = F.pad( - input=mask, - pad=(0, 0, 0, self.output_bin - mask.size()[2]), - mode="replicate", - ) - - if self.training: - aux1 = torch.sigmoid(self.aux1_out(aux1)) - aux1 = F.pad( - input=aux1, - pad=(0, 0, 0, self.output_bin - aux1.size()[2]), - mode="replicate", - ) - aux2 = torch.sigmoid(self.aux2_out(aux2)) - aux2 = F.pad( - input=aux2, - pad=(0, 0, 0, self.output_bin - aux2.size()[2]), - mode="replicate", - ) - return mask * mix, aux1 * mix, aux2 * mix - else: - if aggressiveness: - mask[:, :, : aggressiveness["split_bin"]] = torch.pow( - mask[:, :, : aggressiveness["split_bin"]], - 1 + aggressiveness["value"] / 3, - ) - mask[:, :, aggressiveness["split_bin"] :] = torch.pow( - mask[:, :, aggressiveness["split_bin"] :], - 1 + aggressiveness["value"], - ) - - return mask * mix - - def predict(self, x_mag, aggressiveness=None): - h = self.forward(x_mag, aggressiveness) - - if self.offset > 0: - h = h[:, :, :, self.offset : -self.offset] - assert h.size()[3] > 0 - - return h diff --git a/spaces/EronSamez/RVC_HFmeu/train/utils.py b/spaces/EronSamez/RVC_HFmeu/train/utils.py deleted file mode 100644 index aae833b08acc24b848aa70114fd9b7aad8b1a6ad..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/train/utils.py +++ /dev/null @@ -1,500 +0,0 @@ -import os, traceback -import glob -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -from scipy.io.wavfile import read -import torch - -MATPLOTLIB_FLAG 
= False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - - -def load_checkpoint_d(checkpoint_path, combd, sbd, optimizer=None, load_opt=1): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location="cpu") - - ################## - def go(model, bkey): - saved_state_dict = checkpoint_dict[bkey] - if hasattr(model, "module"): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): # 模型需要的shape - try: - new_state_dict[k] = saved_state_dict[k] - if saved_state_dict[k].shape != state_dict[k].shape: - print( - "shape-%s-mismatch|need-%s|get-%s" - % (k, state_dict[k].shape, saved_state_dict[k].shape) - ) # - raise KeyError - except: - # logger.info(traceback.format_exc()) - logger.info("%s is not in the checkpoint" % k) # pretrain缺失的 - new_state_dict[k] = v # 模型自带的随机值 - if hasattr(model, "module"): - model.module.load_state_dict(new_state_dict, strict=False) - else: - model.load_state_dict(new_state_dict, strict=False) - - go(combd, "combd") - go(sbd, "sbd") - ############# - logger.info("Loaded model weights") - - iteration = checkpoint_dict["iteration"] - learning_rate = checkpoint_dict["learning_rate"] - if ( - optimizer is not None and load_opt == 1 - ): ###加载不了,如果是空的的话,重新初始化,可能还会影响lr时间表的更新,因此在train文件最外围catch - # try: - optimizer.load_state_dict(checkpoint_dict["optimizer"]) - # except: - # traceback.print_exc() - logger.info("Loaded checkpoint '{}' (epoch {})".format(checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -# def load_checkpoint(checkpoint_path, model, optimizer=None): -# assert os.path.isfile(checkpoint_path) -# checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') -# iteration = checkpoint_dict['iteration'] -# learning_rate = checkpoint_dict['learning_rate'] -# if optimizer is not None: -# optimizer.load_state_dict(checkpoint_dict['optimizer']) -# # print(1111) -# saved_state_dict = checkpoint_dict['model'] -# # print(1111) -# -# if hasattr(model, 'module'): -# state_dict = model.module.state_dict() -# else: -# state_dict = model.state_dict() -# new_state_dict= {} -# for k, v in state_dict.items(): -# try: -# new_state_dict[k] = saved_state_dict[k] -# except: -# logger.info("%s is not in the checkpoint" % k) -# new_state_dict[k] = v -# if hasattr(model, 'module'): -# model.module.load_state_dict(new_state_dict) -# else: -# model.load_state_dict(new_state_dict) -# logger.info("Loaded checkpoint '{}' (epoch {})" .format( -# checkpoint_path, iteration)) -# return model, optimizer, learning_rate, iteration -def load_checkpoint(checkpoint_path, model, optimizer=None, load_opt=1): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location="cpu") - - saved_state_dict = checkpoint_dict["model"] - if hasattr(model, "module"): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): # 模型需要的shape - try: - new_state_dict[k] = saved_state_dict[k] - if saved_state_dict[k].shape != state_dict[k].shape: - print( - "shape-%s-mismatch|need-%s|get-%s" - % (k, state_dict[k].shape, saved_state_dict[k].shape) - ) # - raise KeyError - except: - # logger.info(traceback.format_exc()) - logger.info("%s is not in the checkpoint" % k) # pretrain缺失的 - new_state_dict[k] = v # 模型自带的随机值 - if hasattr(model, "module"): - model.module.load_state_dict(new_state_dict, strict=False) - else: 
- model.load_state_dict(new_state_dict, strict=False) - logger.info("Loaded model weights") - - iteration = checkpoint_dict["iteration"] - learning_rate = checkpoint_dict["learning_rate"] - if ( - optimizer is not None and load_opt == 1 - ): ###加载不了,如果是空的的话,重新初始化,可能还会影响lr时间表的更新,因此在train文件最外围catch - # try: - optimizer.load_state_dict(checkpoint_dict["optimizer"]) - # except: - # traceback.print_exc() - logger.info("Loaded checkpoint '{}' (epoch {})".format(checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path): - logger.info( - "Saving model and optimizer state at epoch {} to {}".format( - iteration, checkpoint_path - ) - ) - if hasattr(model, "module"): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - torch.save( - { - "model": state_dict, - "iteration": iteration, - "optimizer": optimizer.state_dict(), - "learning_rate": learning_rate, - }, - checkpoint_path, - ) - - -def save_checkpoint_d(combd, sbd, optimizer, learning_rate, iteration, checkpoint_path): - logger.info( - "Saving model and optimizer state at epoch {} to {}".format( - iteration, checkpoint_path - ) - ) - if hasattr(combd, "module"): - state_dict_combd = combd.module.state_dict() - else: - state_dict_combd = combd.state_dict() - if hasattr(sbd, "module"): - state_dict_sbd = sbd.module.state_dict() - else: - state_dict_sbd = sbd.state_dict() - torch.save( - { - "combd": state_dict_combd, - "sbd": state_dict_sbd, - "iteration": iteration, - "optimizer": optimizer.state_dict(), - "learning_rate": learning_rate, - }, - checkpoint_path, - ) - - -def summarize( - writer, - global_step, - scalars={}, - histograms={}, - images={}, - audios={}, - audio_sampling_rate=22050, -): - for k, v in scalars.items(): - writer.add_scalar(k, v, global_step) - for k, v in histograms.items(): - writer.add_histogram(k, v, global_step) - for k, v in images.items(): - writer.add_image(k, v, global_step, dataformats="HWC") - for k, v in audios.items(): - writer.add_audio(k, v, global_step, audio_sampling_rate) - - -def latest_checkpoint_path(dir_path, regex="G_*.pth"): - f_list = glob.glob(os.path.join(dir_path, regex)) - f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[-1] - print(x) - return x - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger("matplotlib") - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", interpolation="none") - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep="") - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger("matplotlib") - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow( - alignment.transpose(), aspect="auto", origin="lower", interpolation="none" - ) - fig.colorbar(im, 
ax=ax) - xlabel = "Decoder timestep" - if info is not None: - xlabel += "\n\n" + info - plt.xlabel(xlabel) - plt.ylabel("Encoder timestep") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep="") - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - filepaths_and_text = [item for item in filepaths_and_text if len(item) == 5] # ensure there are 5 items. - return filepaths_and_text - - -def get_hparams(init=True): - """ - todo: - 结尾七人组: - 保存频率、总epoch done - bs done - pretrainG、pretrainD done - 卡号:os.en["CUDA_VISIBLE_DEVICES"] done - if_latest done - 模型:if_f0 done - 采样率:自动选择config done - 是否缓存数据集进GPU:if_cache_data_in_gpu done - - -m: - 自动决定training_files路径,改掉train_nsf_load_pretrain.py里的hps.data.training_files done - -c不要了 - """ - parser = argparse.ArgumentParser() - # parser.add_argument('-c', '--config', type=str, default="configs/40k.json",help='JSON file for configuration') - parser.add_argument( - "-se", - "--save_every_epoch", - type=int, - required=True, - help="checkpoint save frequency (epoch)", - ) - parser.add_argument( - "-te", "--total_epoch", type=int, required=True, help="total_epoch" - ) - parser.add_argument( - "-pg", "--pretrainG", type=str, default="", help="Pretrained Discriminator path" - ) - parser.add_argument( - "-pd", "--pretrainD", type=str, default="", help="Pretrained Generator path" - ) - parser.add_argument("-g", "--gpus", type=str, default="0", help="split by -") - parser.add_argument( - "-bs", "--batch_size", type=int, required=True, help="batch size" - ) - parser.add_argument( - "-e", "--experiment_dir", type=str, required=True, help="experiment dir" - ) # -m - parser.add_argument( - "-sr", "--sample_rate", type=str, required=True, help="sample rate, 32k/40k/48k" - ) - parser.add_argument( - "-sw", - "--save_every_weights", - type=str, - default="0", - help="save the extracted model in weights directory when saving checkpoints", - ) - parser.add_argument( - "-v", "--version", type=str, required=True, help="model version" - ) - parser.add_argument( - "-f0", - "--if_f0", - type=int, - required=True, - help="use f0 as one of the inputs of the model, 1 or 0", - ) - parser.add_argument( - "-l", - "--if_latest", - type=int, - required=True, - help="if only save the latest G/D pth file, 1 or 0", - ) - parser.add_argument( - "-c", - "--if_cache_data_in_gpu", - type=int, - required=True, - help="if caching the dataset in GPU memory, 1 or 0", - ) - parser.add_argument( - "-li", "--log_interval", type=int, required=True, help="log interval" - ) - - args = parser.parse_args() - name = args.experiment_dir - experiment_dir = os.path.join("./logs", args.experiment_dir) - - if not os.path.exists(experiment_dir): - os.makedirs(experiment_dir) - - if args.version == "v1" or args.sample_rate == "40k": - config_path = "configs/%s.json" % args.sample_rate - else: - config_path = "configs/%s_v2.json" % args.sample_rate - config_save_path = os.path.join(experiment_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config 
= json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = hparams.experiment_dir = experiment_dir - hparams.save_every_epoch = args.save_every_epoch - hparams.name = name - hparams.total_epoch = args.total_epoch - hparams.pretrainG = args.pretrainG - hparams.pretrainD = args.pretrainD - hparams.version = args.version - hparams.gpus = args.gpus - hparams.train.batch_size = args.batch_size - hparams.sample_rate = args.sample_rate - hparams.if_f0 = args.if_f0 - hparams.if_latest = args.if_latest - hparams.save_every_weights = args.save_every_weights - hparams.if_cache_data_in_gpu = args.if_cache_data_in_gpu - hparams.data.training_files = "%s/filelist.txt" % experiment_dir - - hparams.train.log_interval = args.log_interval - - # Update log_interval in the 'train' section of the config dictionary - config["train"]["log_interval"] = args.log_interval - - # Save the updated config back to the config_save_path - with open(config_save_path, "w") as f: - json.dump(config, f, indent=4) - - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn( - "{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - ) - ) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn( - "git hash values are different. 
{}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8] - ) - ) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams: - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() diff --git a/spaces/FelixLuoX/codeformer/CodeFormer/facelib/parsing/bisenet.py b/spaces/FelixLuoX/codeformer/CodeFormer/facelib/parsing/bisenet.py deleted file mode 100644 index 3898cab76ae5876459cd4899c54cafa14234971d..0000000000000000000000000000000000000000 --- a/spaces/FelixLuoX/codeformer/CodeFormer/facelib/parsing/bisenet.py +++ /dev/null @@ -1,140 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -from .resnet import ResNet18 - - -class ConvBNReLU(nn.Module): - - def __init__(self, in_chan, out_chan, ks=3, stride=1, padding=1): - super(ConvBNReLU, self).__init__() - self.conv = nn.Conv2d(in_chan, out_chan, kernel_size=ks, stride=stride, padding=padding, bias=False) - self.bn = nn.BatchNorm2d(out_chan) - - def forward(self, x): - x = self.conv(x) - x = F.relu(self.bn(x)) - return x - - -class BiSeNetOutput(nn.Module): - - def __init__(self, in_chan, mid_chan, num_class): - super(BiSeNetOutput, self).__init__() - self.conv = ConvBNReLU(in_chan, mid_chan, ks=3, stride=1, padding=1) - self.conv_out = nn.Conv2d(mid_chan, num_class, kernel_size=1, bias=False) - - def forward(self, x): - feat = self.conv(x) - out = self.conv_out(feat) - return out, feat - - -class AttentionRefinementModule(nn.Module): - - def __init__(self, in_chan, out_chan): - super(AttentionRefinementModule, self).__init__() - self.conv = ConvBNReLU(in_chan, out_chan, ks=3, stride=1, padding=1) - self.conv_atten = nn.Conv2d(out_chan, out_chan, kernel_size=1, bias=False) - self.bn_atten = nn.BatchNorm2d(out_chan) - self.sigmoid_atten = nn.Sigmoid() - - def forward(self, x): - feat = self.conv(x) - atten = F.avg_pool2d(feat, feat.size()[2:]) - atten = self.conv_atten(atten) - atten = self.bn_atten(atten) - atten = self.sigmoid_atten(atten) - out = torch.mul(feat, atten) - return out - - -class ContextPath(nn.Module): - - def __init__(self): - super(ContextPath, self).__init__() - self.resnet = ResNet18() - self.arm16 = AttentionRefinementModule(256, 128) - self.arm32 = AttentionRefinementModule(512, 128) - self.conv_head32 = ConvBNReLU(128, 128, ks=3, stride=1, padding=1) - self.conv_head16 = ConvBNReLU(128, 128, ks=3, stride=1, padding=1) - self.conv_avg = ConvBNReLU(512, 128, ks=1, stride=1, padding=0) - - def forward(self, x): - feat8, feat16, feat32 = self.resnet(x) - h8, w8 = feat8.size()[2:] - h16, w16 = feat16.size()[2:] - h32, 
w32 = feat32.size()[2:] - - avg = F.avg_pool2d(feat32, feat32.size()[2:]) - avg = self.conv_avg(avg) - avg_up = F.interpolate(avg, (h32, w32), mode='nearest') - - feat32_arm = self.arm32(feat32) - feat32_sum = feat32_arm + avg_up - feat32_up = F.interpolate(feat32_sum, (h16, w16), mode='nearest') - feat32_up = self.conv_head32(feat32_up) - - feat16_arm = self.arm16(feat16) - feat16_sum = feat16_arm + feat32_up - feat16_up = F.interpolate(feat16_sum, (h8, w8), mode='nearest') - feat16_up = self.conv_head16(feat16_up) - - return feat8, feat16_up, feat32_up # x8, x8, x16 - - -class FeatureFusionModule(nn.Module): - - def __init__(self, in_chan, out_chan): - super(FeatureFusionModule, self).__init__() - self.convblk = ConvBNReLU(in_chan, out_chan, ks=1, stride=1, padding=0) - self.conv1 = nn.Conv2d(out_chan, out_chan // 4, kernel_size=1, stride=1, padding=0, bias=False) - self.conv2 = nn.Conv2d(out_chan // 4, out_chan, kernel_size=1, stride=1, padding=0, bias=False) - self.relu = nn.ReLU(inplace=True) - self.sigmoid = nn.Sigmoid() - - def forward(self, fsp, fcp): - fcat = torch.cat([fsp, fcp], dim=1) - feat = self.convblk(fcat) - atten = F.avg_pool2d(feat, feat.size()[2:]) - atten = self.conv1(atten) - atten = self.relu(atten) - atten = self.conv2(atten) - atten = self.sigmoid(atten) - feat_atten = torch.mul(feat, atten) - feat_out = feat_atten + feat - return feat_out - - -class BiSeNet(nn.Module): - - def __init__(self, num_class): - super(BiSeNet, self).__init__() - self.cp = ContextPath() - self.ffm = FeatureFusionModule(256, 256) - self.conv_out = BiSeNetOutput(256, 256, num_class) - self.conv_out16 = BiSeNetOutput(128, 64, num_class) - self.conv_out32 = BiSeNetOutput(128, 64, num_class) - - def forward(self, x, return_feat=False): - h, w = x.size()[2:] - feat_res8, feat_cp8, feat_cp16 = self.cp(x) # return res3b1 feature - feat_sp = feat_res8 # replace spatial path feature with res3b1 feature - feat_fuse = self.ffm(feat_sp, feat_cp8) - - out, feat = self.conv_out(feat_fuse) - out16, feat16 = self.conv_out16(feat_cp8) - out32, feat32 = self.conv_out32(feat_cp16) - - out = F.interpolate(out, (h, w), mode='bilinear', align_corners=True) - out16 = F.interpolate(out16, (h, w), mode='bilinear', align_corners=True) - out32 = F.interpolate(out32, (h, w), mode='bilinear', align_corners=True) - - if return_feat: - feat = F.interpolate(feat, (h, w), mode='bilinear', align_corners=True) - feat16 = F.interpolate(feat16, (h, w), mode='bilinear', align_corners=True) - feat32 = F.interpolate(feat32, (h, w), mode='bilinear', align_corners=True) - return out, out16, out32, feat, feat16, feat32 - else: - return out, out16, out32 diff --git a/spaces/GanymedeNil/text2vec/app.py b/spaces/GanymedeNil/text2vec/app.py deleted file mode 100644 index 5fa2c3b6ac857790a98d1c9c02770c0ad816baf7..0000000000000000000000000000000000000000 --- a/spaces/GanymedeNil/text2vec/app.py +++ /dev/null @@ -1,40 +0,0 @@ -# -*- coding: utf-8 -*- -""" -@author:XuMing(xuming624@qq.com) -@description: text similarity example, fine-tuned by CoSENT model -""" -import gradio as gr -from text2vec import Similarity - -# 中文句向量模型(CoSENT) -sim_model = Similarity(model_name_or_path='GanymedeNil/text2vec-large-chinese', - similarity_type='cosine', embedding_type='sbert') - - -def ai_text(sentence1, sentence2): - score = sim_model.get_score(sentence1, sentence2) - print("{} \t\t {} \t\t Score: {:.4f}".format(sentence1, sentence2, score)) - - return score - - -if __name__ == '__main__': - examples = [ - ['如何更换花呗绑定银行卡', '花呗更改绑定银行卡'], - ['我在北京打篮球', 
'我是北京人,我喜欢篮球'], - ['一个女人在看书。', '一个女人在揉面团'], - ['一个男人在车库里举重。', '一个人在举重。'], - ] - input1 = gr.inputs.Textbox(lines=2, placeholder="Enter First Sentence") - input2 = gr.inputs.Textbox(lines=2, placeholder="Enter Second Sentence") - - output_text = gr.outputs.Textbox() - gr.Interface(ai_text, - inputs=[input1, input2], - outputs=[output_text], - theme="grass", - title="Chinese Text to Vector Model GanymedeNil/text2vec-large-chinese", - description="Copy or input Chinese text here. Submit and the machine will calculate the cosine score.", - article="Link to Github REPO", - examples=examples - ).launch() diff --git a/spaces/Giuvyz/rvc-genshin/infer_pack/commons.py b/spaces/Giuvyz/rvc-genshin/infer_pack/commons.py deleted file mode 100644 index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000 --- a/spaces/Giuvyz/rvc-genshin/infer_pack/commons.py +++ /dev/null @@ -1,166 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += ( - 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q) - ) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def slice_segments2(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - num_timescales - 1 - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, 
min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/realesrgan/archs/discriminator_arch.py b/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/realesrgan/archs/discriminator_arch.py deleted file mode 100644 index ccd810559201624bc6c20ea9b60009b927ecadd6..0000000000000000000000000000000000000000 --- a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/realesrgan/archs/discriminator_arch.py +++ /dev/null @@ -1,67 +0,0 @@ -from basicsr.utils.registry import ARCH_REGISTRY -from torch import nn as nn -from torch.nn import functional as F -from torch.nn.utils import spectral_norm - - -@ARCH_REGISTRY.register() -class UNetDiscriminatorSN(nn.Module): - """Defines a U-Net discriminator with spectral normalization (SN) - - It is used in Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data. - - Arg: - num_in_ch (int): Channel number of inputs. Default: 3. - num_feat (int): Channel number of base intermediate features. Default: 64. - skip_connection (bool): Whether to use skip connections between U-Net. Default: True. 
- """ - - def __init__(self, num_in_ch, num_feat=64, skip_connection=True): - super(UNetDiscriminatorSN, self).__init__() - self.skip_connection = skip_connection - norm = spectral_norm - # the first convolution - self.conv0 = nn.Conv2d(num_in_ch, num_feat, kernel_size=3, stride=1, padding=1) - # downsample - self.conv1 = norm(nn.Conv2d(num_feat, num_feat * 2, 4, 2, 1, bias=False)) - self.conv2 = norm(nn.Conv2d(num_feat * 2, num_feat * 4, 4, 2, 1, bias=False)) - self.conv3 = norm(nn.Conv2d(num_feat * 4, num_feat * 8, 4, 2, 1, bias=False)) - # upsample - self.conv4 = norm(nn.Conv2d(num_feat * 8, num_feat * 4, 3, 1, 1, bias=False)) - self.conv5 = norm(nn.Conv2d(num_feat * 4, num_feat * 2, 3, 1, 1, bias=False)) - self.conv6 = norm(nn.Conv2d(num_feat * 2, num_feat, 3, 1, 1, bias=False)) - # extra convolutions - self.conv7 = norm(nn.Conv2d(num_feat, num_feat, 3, 1, 1, bias=False)) - self.conv8 = norm(nn.Conv2d(num_feat, num_feat, 3, 1, 1, bias=False)) - self.conv9 = nn.Conv2d(num_feat, 1, 3, 1, 1) - - def forward(self, x): - # downsample - x0 = F.leaky_relu(self.conv0(x), negative_slope=0.2, inplace=True) - x1 = F.leaky_relu(self.conv1(x0), negative_slope=0.2, inplace=True) - x2 = F.leaky_relu(self.conv2(x1), negative_slope=0.2, inplace=True) - x3 = F.leaky_relu(self.conv3(x2), negative_slope=0.2, inplace=True) - - # upsample - x3 = F.interpolate(x3, scale_factor=2, mode="bilinear", align_corners=False) - x4 = F.leaky_relu(self.conv4(x3), negative_slope=0.2, inplace=True) - - if self.skip_connection: - x4 = x4 + x2 - x4 = F.interpolate(x4, scale_factor=2, mode="bilinear", align_corners=False) - x5 = F.leaky_relu(self.conv5(x4), negative_slope=0.2, inplace=True) - - if self.skip_connection: - x5 = x5 + x1 - x5 = F.interpolate(x5, scale_factor=2, mode="bilinear", align_corners=False) - x6 = F.leaky_relu(self.conv6(x5), negative_slope=0.2, inplace=True) - - if self.skip_connection: - x6 = x6 + x0 - - # extra convolutions - out = F.leaky_relu(self.conv7(x6), negative_slope=0.2, inplace=True) - out = F.leaky_relu(self.conv8(out), negative_slope=0.2, inplace=True) - out = self.conv9(out) - - return out diff --git a/spaces/GookProxy/Gyul/greeting.md b/spaces/GookProxy/Gyul/greeting.md deleted file mode 100644 index b45dc59b5a57fb26fc190a84eff8bb8199fb47ca..0000000000000000000000000000000000000000 --- a/spaces/GookProxy/Gyul/greeting.md +++ /dev/null @@ -1,5 +0,0 @@ -Everything you send to this proxy, including prompts, is logged.
-These logs will also be publicly available at arca.live/b/characterai
-(Everything you send to this proxy, including prompts, is logged
-These logs can be viewed publicly at arca.live/b/characterai.)

    -비밀번호는 아카라이브의 프록시 게이트에서만 공개됩니다. \ No newline at end of file diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/lvis/mask_rcnn_x101_64x4d_fpn_sample1e-3_mstrain_1x_lvis_v1.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/lvis/mask_rcnn_x101_64x4d_fpn_sample1e-3_mstrain_1x_lvis_v1.py deleted file mode 100644 index f77adba2f150f62900571f5f32b2083ee53b7003..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/lvis/mask_rcnn_x101_64x4d_fpn_sample1e-3_mstrain_1x_lvis_v1.py +++ /dev/null @@ -1,13 +0,0 @@ -_base_ = './mask_rcnn_r50_fpn_sample1e-3_mstrain_1x_lvis_v1.py' -model = dict( - pretrained='open-mmlab://resnext101_64x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=64, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - style='pytorch')) diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/utils/profiler.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/utils/profiler.py deleted file mode 100644 index b45b6d15910b50305c7b212c089ffad3c25b324d..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/utils/profiler.py +++ /dev/null @@ -1,38 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import typing as tp - -import dora -import torch - - -logger = logging.getLogger(__name__) - - -class Profiler: - """Context manager wrapper for xformers profiler. - """ - def __init__(self, module: torch.nn.Module, enabled: bool = False): - self.profiler: tp.Optional[tp.Any] = None - if enabled: - from xformers.profiler import profile - output_dir = dora.get_xp().folder / 'profiler_data' - logger.info("Profiling activated, results with be saved to %s", output_dir) - self.profiler = profile(output_dir=output_dir, module=module) - - def step(self): - if self.profiler is not None: - self.profiler.step() # type: ignore - - def __enter__(self): - if self.profiler is not None: - return self.profiler.__enter__() # type: ignore - - def __exit__(self, exc_type, exc_value, exc_tb): - if self.profiler is not None: - return self.profiler.__exit__(exc_type, exc_value, exc_tb) # type: ignore diff --git a/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/utils/autocast.py b/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/utils/autocast.py deleted file mode 100644 index ed644843bb37cf8a92a20fbd51d6cebaa43b9a08..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/utils/autocast.py +++ /dev/null @@ -1,40 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch - - -class TorchAutocast: - """TorchAutocast utility class. - Allows you to enable and disable autocast. This is specially useful - when dealing with different architectures and clusters with different - levels of support. - - Args: - enabled (bool): Whether to enable torch.autocast or not. - args: Additional args for torch.autocast. 
- kwargs: Additional kwargs for torch.autocast - """ - def __init__(self, enabled: bool, *args, **kwargs): - self.autocast = torch.autocast(*args, **kwargs) if enabled else None - - def __enter__(self): - if self.autocast is None: - return - try: - self.autocast.__enter__() - except RuntimeError: - device = self.autocast.device - dtype = self.autocast.fast_dtype - raise RuntimeError( - f"There was an error autocasting with dtype={dtype} device={device}\n" - "If you are on the FAIR Cluster, you might need to use autocast_dtype=float16" - ) - - def __exit__(self, *args, **kwargs): - if self.autocast is None: - return - self.autocast.__exit__(*args, **kwargs) diff --git a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/audiocraft/data/audio_dataset.py b/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/audiocraft/data/audio_dataset.py deleted file mode 100644 index cf21422ea0059cb2d6553f93e608b8f9fa0d3a50..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/audiocraft/data/audio_dataset.py +++ /dev/null @@ -1,525 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import copy -from concurrent.futures import ThreadPoolExecutor, Future -from dataclasses import dataclass, fields -from contextlib import ExitStack -import gzip -import json -import logging -import os -from pathlib import Path -import random -import sys -import typing as tp - -import torch -import torch.nn.functional as F - -from .audio import audio_read, audio_info -from .audio_utils import convert_audio -from .zip import PathInZip - -try: - import dora -except ImportError: - dora = None # type: ignore - - -@dataclass(order=True) -class BaseInfo: - - @classmethod - def _dict2fields(cls, dictionary: dict): - return { - field.name: dictionary[field.name] - for field in fields(cls) if field.name in dictionary - } - - @classmethod - def from_dict(cls, dictionary: dict): - _dictionary = cls._dict2fields(dictionary) - return cls(**_dictionary) - - def to_dict(self): - return { - field.name: self.__getattribute__(field.name) - for field in fields(self) - } - - -@dataclass(order=True) -class AudioMeta(BaseInfo): - path: str - duration: float - sample_rate: int - amplitude: tp.Optional[float] = None - weight: tp.Optional[float] = None - # info_path is used to load additional information about the audio file that is stored in zip files. - info_path: tp.Optional[PathInZip] = None - - @classmethod - def from_dict(cls, dictionary: dict): - base = cls._dict2fields(dictionary) - if 'info_path' in base and base['info_path'] is not None: - base['info_path'] = PathInZip(base['info_path']) - return cls(**base) - - def to_dict(self): - d = super().to_dict() - if d['info_path'] is not None: - d['info_path'] = str(d['info_path']) - return d - - -@dataclass(order=True) -class SegmentInfo(BaseInfo): - meta: AudioMeta - seek_time: float - n_frames: int # actual number of frames without padding - total_frames: int # total number of frames, padding included - sample_rate: int # actual sample rate - - -DEFAULT_EXTS = ['.wav', '.mp3', '.flac', '.ogg', '.m4a'] - -logger = logging.getLogger(__name__) - - -def _get_audio_meta(file_path: str, minimal: bool = True) -> AudioMeta: - """AudioMeta from a path to an audio file. - - Args: - file_path (str): Resolved path of valid audio file. 
- minimal (bool): Whether to only load the minimal set of metadata (takes longer if not). - Returns: - AudioMeta: Audio file path and its metadata. - """ - info = audio_info(file_path) - amplitude: tp.Optional[float] = None - if not minimal: - wav, sr = audio_read(file_path) - amplitude = wav.abs().max().item() - return AudioMeta(file_path, info.duration, info.sample_rate, amplitude) - - -def _resolve_audio_meta(m: AudioMeta, fast: bool = True) -> AudioMeta: - """If Dora is available as a dependency, try to resolve potential relative paths - in list of AudioMeta. This method is expected to be used when loading meta from file. - - Args: - m (AudioMeta): Audio meta to resolve. - fast (bool): If True, uses a really fast check for determining if a file is already absolute or not. - Only valid on Linux/Mac. - Returns: - AudioMeta: Audio meta with resolved path. - """ - def is_abs(m): - if fast: - return str(m)[0] == '/' - else: - os.path.isabs(str(m)) - - if not dora: - return m - - if not is_abs(m.path): - m.path = dora.git_save.to_absolute_path(m.path) - if m.info_path is not None and not is_abs(m.info_path.zip_path): - m.info_path.zip_path = dora.git_save.to_absolute_path(m.path) - return m - - -def find_audio_files(path: tp.Union[Path, str], - exts: tp.List[str] = DEFAULT_EXTS, - resolve: bool = True, - minimal: bool = True, - progress: bool = False, - workers: int = 0) -> tp.List[AudioMeta]: - """Build a list of AudioMeta from a given path, - collecting relevant audio files and fetching meta info. - - Args: - path (str or Path): Path to folder containing audio files. - exts (list of str): List of file extensions to consider for audio files. - minimal (bool): Whether to only load the minimal set of metadata (takes longer if not). - progress (bool): Whether to log progress on audio files collection. - workers (int): number of parallel workers, if 0, use only the current thread. - Returns: - List[AudioMeta]: List of audio file path and its metadata. - """ - audio_files = [] - futures: tp.List[Future] = [] - pool: tp.Optional[ThreadPoolExecutor] = None - with ExitStack() as stack: - if workers > 0: - pool = ThreadPoolExecutor(workers) - stack.enter_context(pool) - - if progress: - print("Finding audio files...") - for root, folders, files in os.walk(path, followlinks=True): - for file in files: - full_path = Path(root) / file - if full_path.suffix.lower() in exts: - audio_files.append(full_path) - if pool is not None: - futures.append(pool.submit(_get_audio_meta, str(audio_files[-1]), minimal)) - if progress: - print(format(len(audio_files), " 8d"), end='\r', file=sys.stderr) - - if progress: - print("Getting audio metadata...") - meta: tp.List[AudioMeta] = [] - for idx, file_path in enumerate(audio_files): - try: - if pool is None: - m = _get_audio_meta(str(file_path), minimal) - else: - m = futures[idx].result() - if resolve: - m = _resolve_audio_meta(m) - except Exception as err: - print("Error with", str(file_path), err, file=sys.stderr) - continue - meta.append(m) - if progress: - print(format((1 + idx) / len(audio_files), " 3.1%"), end='\r', file=sys.stderr) - meta.sort() - return meta - - -def load_audio_meta(path: tp.Union[str, Path], - resolve: bool = True, fast: bool = True) -> tp.List[AudioMeta]: - """Load list of AudioMeta from an optionally compressed json file. - - Args: - path (str or Path): Path to JSON file. - resolve (bool): Whether to resolve the path from AudioMeta (default=True). - fast (bool): activates some tricks to make things faster. 
- Returns: - List[AudioMeta]: List of audio file path and its total duration. - """ - open_fn = gzip.open if str(path).lower().endswith('.gz') else open - with open_fn(path, 'rb') as fp: # type: ignore - lines = fp.readlines() - meta = [] - for line in lines: - d = json.loads(line) - m = AudioMeta.from_dict(d) - if resolve: - m = _resolve_audio_meta(m, fast=fast) - meta.append(m) - return meta - - -def save_audio_meta(path: tp.Union[str, Path], meta: tp.List[AudioMeta]): - """Save the audio metadata to the file pointer as json. - - Args: - path (str or Path): Path to JSON file. - metadata (list of BaseAudioMeta): List of audio meta to save. - """ - Path(path).parent.mkdir(exist_ok=True, parents=True) - open_fn = gzip.open if str(path).lower().endswith('.gz') else open - with open_fn(path, 'wb') as fp: # type: ignore - for m in meta: - json_str = json.dumps(m.to_dict()) + '\n' - json_bytes = json_str.encode('utf-8') - fp.write(json_bytes) - - -class AudioDataset: - """Base audio dataset. - - The dataset takes a list of AudioMeta and create a dataset composed of segments of audio - and potentially additional information, by creating random segments from the list of audio - files referenced in the metadata and applying minimal data pre-processing such as resampling, - mixing of channels, padding, etc. - - If no segment_duration value is provided, the AudioDataset will return the full wav for each - audio file. Otherwise, it will randomly sample audio files and create a segment of the specified - duration, applying padding if required. - - By default, only the torch Tensor corresponding to the waveform is returned. Setting return_info=True - allows to return a tuple containing the torch Tensor and additional metadata on the segment and the - original audio meta. - - Args: - meta (tp.List[AudioMeta]): List of audio files metadata. - segment_duration (float): Optional segment duration of audio to load. - If not specified, the dataset will load the full audio segment from the file. - shuffle (bool): Set to `True` to have the data reshuffled at every epoch. - sample_rate (int): Target sample rate of the loaded audio samples. - channels (int): Target number of channels of the loaded audio samples. - sample_on_duration (bool): Set to `True` to sample segments with probability - dependent on audio file duration. This is only used if `segment_duration` is provided. - sample_on_weight (bool): Set to `True` to sample segments using the `weight` entry of - `AudioMeta`. If `sample_on_duration` is also True, the actual weight will be the product - of the file duration and file weight. This is only used if `segment_duration` is provided. - min_segment_ratio (float): Minimum segment ratio to use when the audio file - is shorter than the desired segment. - max_read_retry (int): Maximum number of retries to sample an audio segment from the dataset. - return_info (bool): Whether to return the wav only or return wav along with segment info and metadata. - min_audio_duration (tp.Optional[float], optional): Minimum audio file duration, in seconds, if provided - audio shorter than this will be filtered out. - max_audio_duration (tp.Optional[float], optional): Maximal audio file duration in seconds, if provided - audio longer than this will be filtered out. 
- """ - def __init__(self, - meta: tp.List[AudioMeta], - segment_duration: tp.Optional[float] = None, - shuffle: bool = True, - num_samples: int = 10_000, - sample_rate: int = 48_000, - channels: int = 2, - pad: bool = True, - sample_on_duration: bool = True, - sample_on_weight: bool = True, - min_segment_ratio: float = 0.5, - max_read_retry: int = 10, - return_info: bool = False, - min_audio_duration: tp.Optional[float] = None, - max_audio_duration: tp.Optional[float] = None - ): - assert len(meta) > 0, 'No audio meta provided to AudioDataset. Please check loading of audio meta.' - assert segment_duration is None or segment_duration > 0 - assert segment_duration is None or min_segment_ratio >= 0 - logging.debug(f'sample_on_duration: {sample_on_duration}') - logging.debug(f'sample_on_weight: {sample_on_weight}') - logging.debug(f'pad: {pad}') - logging.debug(f'min_segment_ratio: {min_segment_ratio}') - - self.segment_duration = segment_duration - self.min_segment_ratio = min_segment_ratio - self.max_audio_duration = max_audio_duration - self.min_audio_duration = min_audio_duration - if self.min_audio_duration is not None and self.max_audio_duration is not None: - assert self.min_audio_duration <= self.max_audio_duration - self.meta: tp.List[AudioMeta] = self._filter_duration(meta) - assert len(self.meta) # Fail fast if all data has been filtered. - self.total_duration = sum(d.duration for d in self.meta) - - if segment_duration is None: - num_samples = len(self.meta) - self.num_samples = num_samples - self.shuffle = shuffle - self.sample_rate = sample_rate - self.channels = channels - self.pad = pad - self.sample_on_weight = sample_on_weight - self.sample_on_duration = sample_on_duration - self.sampling_probabilities = self._get_sampling_probabilities() - self.max_read_retry = max_read_retry - self.return_info = return_info - - def __len__(self): - return self.num_samples - - def _get_sampling_probabilities(self, normalized: bool = True): - """Return the sampling probabilities for each file inside `self.meta`. - """ - scores: tp.List[float] = [] - for file_meta in self.meta: - score = 1. - if self.sample_on_weight and file_meta.weight is not None: - score *= file_meta.weight - if self.sample_on_duration: - score *= file_meta.duration - scores.append(score) - probabilities = torch.tensor(scores) - if normalized: - probabilities /= probabilities.sum() - return probabilities - - def sample_file(self, rng: torch.Generator) -> AudioMeta: - """Sample a given file from `self.meta`. Can be overriden in subclasses. - This is only called if `segment_duration` is not None. - - You must use the provided random number generator `rng` for reproducibility. 
- """ - if not self.sample_on_weight and not self.sample_on_duration: - file_index = int(torch.randint(len(self.sampling_probabilities), (1,), generator=rng).item()) - else: - file_index = int(torch.multinomial(self.sampling_probabilities, 1, generator=rng).item()) - - return self.meta[file_index] - - def __getitem__(self, index: int) -> tp.Union[torch.Tensor, tp.Tuple[torch.Tensor, SegmentInfo]]: - if self.segment_duration is None: - file_meta = self.meta[index] - out, sr = audio_read(file_meta.path) - out = convert_audio(out, sr, self.sample_rate, self.channels) - n_frames = out.shape[-1] - segment_info = SegmentInfo(file_meta, seek_time=0., n_frames=n_frames, total_frames=n_frames, - sample_rate=self.sample_rate) - else: - rng = torch.Generator() - if self.shuffle: - # We use index, plus extra randomness - rng.manual_seed(index + self.num_samples * random.randint(0, 2**24)) - else: - # We only use index - rng.manual_seed(index) - - for retry in range(self.max_read_retry): - file_meta = self.sample_file(rng) - # We add some variance in the file position even if audio file is smaller than segment - # without ending up with empty segments - max_seek = max(0, file_meta.duration - self.segment_duration * self.min_segment_ratio) - seek_time = torch.rand(1, generator=rng).item() * max_seek - try: - out, sr = audio_read(file_meta.path, seek_time, self.segment_duration, pad=False) - out = convert_audio(out, sr, self.sample_rate, self.channels) - n_frames = out.shape[-1] - target_frames = int(self.segment_duration * self.sample_rate) - if self.pad: - out = F.pad(out, (0, target_frames - n_frames)) - segment_info = SegmentInfo(file_meta, seek_time, n_frames=n_frames, total_frames=target_frames, - sample_rate=self.sample_rate) - except Exception as exc: - logger.warning("Error opening file %s: %r", file_meta.path, exc) - if retry == self.max_read_retry - 1: - raise - else: - break - - if self.return_info: - # Returns the wav and additional information on the wave segment - return out, segment_info - else: - return out - - def collater(self, samples): - """The collater function has to be provided to the dataloader - if AudioDataset has return_info=True in order to properly collate - the samples of a batch. - """ - if self.segment_duration is None and len(samples) > 1: - assert self.pad, "Must allow padding when batching examples of different durations." - - # In this case the audio reaching the collater is of variable length as segment_duration=None. - to_pad = self.segment_duration is None and self.pad - if to_pad: - max_len = max([wav.shape[-1] for wav, _ in samples]) - - def _pad_wav(wav): - return F.pad(wav, (0, max_len - wav.shape[-1])) - - if self.return_info: - if len(samples) > 0: - assert len(samples[0]) == 2 - assert isinstance(samples[0][0], torch.Tensor) - assert isinstance(samples[0][1], SegmentInfo) - - wavs = [wav for wav, _ in samples] - segment_infos = [copy.deepcopy(info) for _, info in samples] - - if to_pad: - # Each wav could be of a different duration as they are not segmented. - for i in range(len(samples)): - # Determines the total legth of the signal with padding, so we update here as we pad. 
- segment_infos[i].total_frames = max_len - wavs[i] = _pad_wav(wavs[i]) - - wav = torch.stack(wavs) - return wav, segment_infos - else: - assert isinstance(samples[0], torch.Tensor) - if to_pad: - samples = [_pad_wav(s) for s in samples] - return torch.stack(samples) - - def _filter_duration(self, meta: tp.List[AudioMeta]) -> tp.List[AudioMeta]: - """Filters out audio files with short durations. - Removes from meta files that have durations that will not allow to samples examples from them. - """ - orig_len = len(meta) - - # Filter data that is too short. - if self.min_audio_duration is not None: - meta = [m for m in meta if m.duration >= self.min_audio_duration] - - # Filter data that is too long. - if self.max_audio_duration is not None: - meta = [m for m in meta if m.duration <= self.max_audio_duration] - - filtered_len = len(meta) - removed_percentage = 100*(1-float(filtered_len)/orig_len) - msg = 'Removed %.2f percent of the data because it was too short or too long.' % removed_percentage - if removed_percentage < 10: - logging.debug(msg) - else: - logging.warning(msg) - return meta - - @classmethod - def from_meta(cls, root: tp.Union[str, Path], **kwargs): - """Instantiate AudioDataset from a path to a directory containing a manifest as a jsonl file. - - Args: - root (str or Path): Path to root folder containing audio files. - kwargs: Additional keyword arguments for the AudioDataset. - """ - root = Path(root) - if root.is_dir(): - if (root / 'data.jsonl').exists(): - root = root / 'data.jsonl' - elif (root / 'data.jsonl.gz').exists(): - root = root / 'data.jsonl.gz' - else: - raise ValueError("Don't know where to read metadata from in the dir. " - "Expecting either a data.jsonl or data.jsonl.gz file but none found.") - meta = load_audio_meta(root) - return cls(meta, **kwargs) - - @classmethod - def from_path(cls, root: tp.Union[str, Path], minimal_meta: bool = True, - exts: tp.List[str] = DEFAULT_EXTS, **kwargs): - """Instantiate AudioDataset from a path containing (possibly nested) audio files. - - Args: - root (str or Path): Path to root folder containing audio files. - minimal_meta (bool): Whether to only load minimal metadata or not. - exts (list of str): Extensions for audio files. - kwargs: Additional keyword arguments for the AudioDataset. - """ - root = Path(root) - if root.is_file(): - meta = load_audio_meta(root, resolve=True) - else: - meta = find_audio_files(root, exts, minimal=minimal_meta, resolve=True) - return cls(meta, **kwargs) - - -def main(): - logging.basicConfig(stream=sys.stderr, level=logging.INFO) - parser = argparse.ArgumentParser( - prog='audio_dataset', - description='Generate .jsonl files by scanning a folder.') - parser.add_argument('root', help='Root folder with all the audio files') - parser.add_argument('output_meta_file', - help='Output file to store the metadata, ') - parser.add_argument('--complete', - action='store_false', dest='minimal', default=True, - help='Retrieve all metadata, even the one that are expansive ' - 'to compute (e.g. 
normalization).') - parser.add_argument('--resolve', - action='store_true', default=False, - help='Resolve the paths to be absolute and with no symlinks.') - parser.add_argument('--workers', - default=10, type=int, - help='Number of workers.') - args = parser.parse_args() - meta = find_audio_files(args.root, DEFAULT_EXTS, progress=True, - resolve=args.resolve, minimal=args.minimal, workers=args.workers) - save_audio_meta(args.output_meta_file, meta) - - -if __name__ == '__main__': - main() diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/model/__init__.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/model/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/criterions/__init__.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/criterions/__init__.py deleted file mode 100644 index 579abd2ace1b14b80f5e53e5c96583e4d5b14c52..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/criterions/__init__.py +++ /dev/null @@ -1,17 +0,0 @@ -import importlib -import os - - -# ASG loss requires flashlight bindings -files_to_skip = set() -try: - import flashlight.lib.sequence.criterion -except ImportError: - files_to_skip.add("ASG_loss.py") - -for file in sorted(os.listdir(os.path.dirname(__file__))): - if file.endswith(".py") and not file.startswith("_") and file not in files_to_skip: - criterion_name = file[: file.find(".py")] - importlib.import_module( - "examples.speech_recognition.criterions." + criterion_name - ) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/scripts/compound_split_bleu.sh b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/scripts/compound_split_bleu.sh deleted file mode 100644 index 1972fddcebff9a43a70bcf14c287175c68f60e3f..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/scripts/compound_split_bleu.sh +++ /dev/null @@ -1,20 +0,0 @@ -#!/bin/bash - -if [ $# -ne 1 ]; then - echo "usage: $0 GENERATE_PY_OUTPUT" - exit 1 -fi - -GEN=$1 - -SYS=$GEN.sys -REF=$GEN.ref - -if [ $(tail -n 1 $GEN | grep BLEU | wc -l) -ne 1 ]; then - echo "not done generating" - exit -fi - -grep ^H $GEN | awk -F '\t' '{print $NF}' | perl -ple 's{(\S)-(\S)}{$1 ##AT##-##AT## $2}g' > $SYS -grep ^T $GEN | cut -f2- | perl -ple 's{(\S)-(\S)}{$1 ##AT##-##AT## $2}g' > $REF -fairseq-score --sys $SYS --ref $REF diff --git a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/scripts/hifi/train_hifi.sh b/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/scripts/hifi/train_hifi.sh deleted file mode 100644 index 287ca1159b5bf8f779d66885197fadbcd23b911e..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/scripts/hifi/train_hifi.sh +++ /dev/null @@ -1,21 +0,0 @@ -#!/bin/bash - -gender='male' - -config='../../config/hifi/config_v1.json' -modeldir='../../checkpoints/hifi/'$gender -logdir='../../logs/hifi/'$gender - - -#################################################### - - - -python ../../src/hifi_gan/train.py \ - --config $config \ - --input_training_file '../../data/hifi/'$gender'/train.txt' \ - --input_validation_file '../../data/hifi/'$gender'/valid.txt' \ - --checkpoint_path $modeldir \ - --logs_path $logdir \ - --checkpoint_interval 10000 \ - --stdout_interval 50 diff --git a/spaces/ICML2022/OFA/data/data_utils.py 
b/spaces/ICML2022/OFA/data/data_utils.py deleted file mode 100644 index 7f843789138c62668f9e1c4e7fd44299fb5ef768..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/data/data_utils.py +++ /dev/null @@ -1,601 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -try: - from collections.abc import Iterable -except ImportError: - from collections import Iterable -import contextlib -import itertools -import logging -import re -import warnings -from typing import Optional, Tuple - -import numpy as np -import torch - -from fairseq.file_io import PathManager -from fairseq import utils -import os - -logger = logging.getLogger(__name__) - - -def infer_language_pair(path): - """Infer language pair from filename: .-.(...).idx""" - src, dst = None, None - for filename in PathManager.ls(path): - parts = filename.split(".") - if len(parts) >= 3 and len(parts[1].split("-")) == 2: - return parts[1].split("-") - return src, dst - - -def collate_tokens( - values, - pad_idx, - eos_idx=None, - left_pad=False, - move_eos_to_beginning=False, - pad_to_length=None, - pad_to_multiple=1, - pad_to_bsz=None, -): - """Convert a list of 1d tensors into a padded 2d tensor.""" - size = max(v.size(0) for v in values) - size = size if pad_to_length is None else max(size, pad_to_length) - if pad_to_multiple != 1 and size % pad_to_multiple != 0: - size = int(((size - 0.1) // pad_to_multiple + 1) * pad_to_multiple) - - def copy_tensor(src, dst): - assert dst.numel() == src.numel() - if move_eos_to_beginning: - if eos_idx is None: - # if no eos_idx is specified, then use the last token in src - dst[0] = src[-1] - else: - dst[0] = eos_idx - dst[1:] = src[:-1] - else: - dst.copy_(src) - - if values[0].dim() == 1: - res = values[0].new(len(values), size).fill_(pad_idx) - elif values[0].dim() == 2: - assert move_eos_to_beginning is False - res = values[0].new(len(values), size, values[0].size(1)).fill_(pad_idx) - else: - raise NotImplementedError - - for i, v in enumerate(values): - copy_tensor(v, res[i][size - len(v) :] if left_pad else res[i][: len(v)]) - return res - - -def load_indexed_dataset( - path, dictionary=None, dataset_impl=None, combine=False, default="cached" -): - """A helper function for loading indexed datasets. - - Args: - path (str): path to indexed dataset (e.g., 'data-bin/train') - dictionary (~fairseq.data.Dictionary): data dictionary - dataset_impl (str, optional): which dataset implementation to use. If - not provided, it will be inferred automatically. For legacy indexed - data we use the 'cached' implementation by default. - combine (bool, optional): automatically load and combine multiple - datasets. For example, if *path* is 'data-bin/train', then we will - combine 'data-bin/train', 'data-bin/train1', ... and return a - single ConcatDataset instance. 
- """ - import fairseq.data.indexed_dataset as indexed_dataset - from fairseq.data.concat_dataset import ConcatDataset - - datasets = [] - for k in itertools.count(): - path_k = path + (str(k) if k > 0 else "") - try: - path_k = indexed_dataset.get_indexed_dataset_to_local(path_k) - except Exception as e: - if "StorageException: [404] Path not found" in str(e): - logger.warning(f"path_k: {e} not found") - else: - raise e - - dataset_impl_k = dataset_impl - if dataset_impl_k is None: - dataset_impl_k = indexed_dataset.infer_dataset_impl(path_k) - dataset = indexed_dataset.make_dataset( - path_k, - impl=dataset_impl_k or default, - fix_lua_indexing=True, - dictionary=dictionary, - ) - if dataset is None: - break - logger.info("loaded {:,} examples from: {}".format(len(dataset), path_k)) - datasets.append(dataset) - if not combine: - break - if len(datasets) == 0: - return None - elif len(datasets) == 1: - return datasets[0] - else: - return ConcatDataset(datasets) - - -@contextlib.contextmanager -def numpy_seed(seed, *addl_seeds): - """Context manager which seeds the NumPy PRNG with the specified seed and - restores the state afterward""" - if seed is None: - yield - return - if len(addl_seeds) > 0: - seed = int(hash((seed, *addl_seeds)) % 1e6) - state = np.random.get_state() - np.random.seed(seed) - try: - yield - finally: - np.random.set_state(state) - - -def collect_filtered(function, iterable, filtered): - """ - Similar to :func:`filter` but collects filtered elements in ``filtered``. - - Args: - function (callable): function that returns ``False`` for elements that - should be filtered - iterable (iterable): iterable to filter - filtered (list): list to store filtered elements - """ - for el in iterable: - if function(el): - yield el - else: - filtered.append(el) - - -def _filter_by_size_dynamic(indices, size_fn, max_positions, raise_exception=False): - def compare_leq(a, b): - return a <= b if not isinstance(a, tuple) else max(a) <= b - - def check_size(idx): - if isinstance(max_positions, float) or isinstance(max_positions, int): - return size_fn(idx) <= max_positions - elif isinstance(max_positions, dict): - idx_size = size_fn(idx) - assert isinstance(idx_size, dict) - intersect_keys = set(max_positions.keys()) & set(idx_size.keys()) - return all( - all( - a is None or b is None or a <= b - for a, b in zip(idx_size[key], max_positions[key]) - ) - for key in intersect_keys - ) - else: - # For MultiCorpusSampledDataset, will generalize it later - if not isinstance(size_fn(idx), Iterable): - return all(size_fn(idx) <= b for b in max_positions) - return all( - a is None or b is None or a <= b - for a, b in zip(size_fn(idx), max_positions) - ) - - ignored = [] - itr = collect_filtered(check_size, indices, ignored) - indices = np.fromiter(itr, dtype=np.int64, count=-1) - return indices, ignored - - -def filter_by_size(indices, dataset, max_positions, raise_exception=False): - """ - [deprecated] Filter indices based on their size. - Use `FairseqDataset::filter_indices_by_size` instead. - - Args: - indices (List[int]): ordered list of dataset indices - dataset (FairseqDataset): fairseq dataset instance - max_positions (tuple): filter elements larger than this size. - Comparisons are done component-wise. - raise_exception (bool, optional): if ``True``, raise an exception if - any elements are filtered (default: False). - """ - warnings.warn( - "data_utils.filter_by_size is deprecated. 
" - "Use `FairseqDataset::filter_indices_by_size` instead.", - stacklevel=2, - ) - if isinstance(max_positions, float) or isinstance(max_positions, int): - if hasattr(dataset, "sizes") and isinstance(dataset.sizes, np.ndarray): - ignored = indices[dataset.sizes[indices] > max_positions].tolist() - indices = indices[dataset.sizes[indices] <= max_positions] - elif ( - hasattr(dataset, "sizes") - and isinstance(dataset.sizes, list) - and len(dataset.sizes) == 1 - ): - ignored = indices[dataset.sizes[0][indices] > max_positions].tolist() - indices = indices[dataset.sizes[0][indices] <= max_positions] - else: - indices, ignored = _filter_by_size_dynamic( - indices, dataset.size, max_positions - ) - else: - indices, ignored = _filter_by_size_dynamic(indices, dataset.size, max_positions) - - if len(ignored) > 0 and raise_exception: - raise Exception( - ( - "Size of sample #{} is invalid (={}) since max_positions={}, " - "skip this example with --skip-invalid-size-inputs-valid-test" - ).format(ignored[0], dataset.size(ignored[0]), max_positions) - ) - if len(ignored) > 0: - logger.warning( - ( - "{} samples have invalid sizes and will be skipped, " - "max_positions={}, first few sample ids={}" - ).format(len(ignored), max_positions, ignored[:10]) - ) - return indices - - -def filter_paired_dataset_indices_by_size(src_sizes, tgt_sizes, indices, max_sizes): - """Filter a list of sample indices. Remove those that are longer - than specified in max_sizes. - - Args: - indices (np.array): original array of sample indices - max_sizes (int or list[int] or tuple[int]): max sample size, - can be defined separately for src and tgt (then list or tuple) - - Returns: - np.array: filtered sample array - list: list of removed indices - """ - if max_sizes is None: - return indices, [] - if type(max_sizes) in (int, float): - max_src_size, max_tgt_size = max_sizes, max_sizes - else: - max_src_size, max_tgt_size = max_sizes - if tgt_sizes is None: - ignored = indices[src_sizes[indices] > max_src_size] - else: - ignored = indices[ - (src_sizes[indices] > max_src_size) | (tgt_sizes[indices] > max_tgt_size) - ] - if len(ignored) > 0: - if tgt_sizes is None: - indices = indices[src_sizes[indices] <= max_src_size] - else: - indices = indices[ - (src_sizes[indices] <= max_src_size) - & (tgt_sizes[indices] <= max_tgt_size) - ] - return indices, ignored.tolist() - - -def batch_by_size( - indices, - num_tokens_fn, - num_tokens_vec=None, - max_tokens=None, - max_sentences=None, - required_batch_size_multiple=1, - fixed_shapes=None, -): - """ - Yield mini-batches of indices bucketed by size. Batches may contain - sequences of different lengths. - - Args: - indices (List[int]): ordered list of dataset indices - num_tokens_fn (callable): function that returns the number of tokens at - a given index - num_tokens_vec (List[int], optional): precomputed vector of the number - of tokens for each index in indices (to enable faster batch generation) - max_tokens (int, optional): max number of tokens in each batch - (default: None). - max_sentences (int, optional): max number of sentences in each - batch (default: None). - required_batch_size_multiple (int, optional): require batch size to - be less than N or a multiple of N (default: 1). - fixed_shapes (List[Tuple[int, int]], optional): if given, batches will - only be created with the given shapes. *max_sentences* and - *required_batch_size_multiple* will be ignored (default: None). 
- """ - try: - from fairseq.data.data_utils_fast import ( - batch_by_size_fn, - batch_by_size_vec, - batch_fixed_shapes_fast, - ) - except ImportError: - raise ImportError( - "Please build Cython components with: " - "`python setup.py build_ext --inplace`" - ) - except ValueError: - raise ValueError( - "Please build (or rebuild) Cython components with `python setup.py build_ext --inplace`." - ) - - # added int() to avoid TypeError: an integer is required - max_tokens = ( - int(max_tokens) if max_tokens is not None else -1 - ) - max_sentences = max_sentences if max_sentences is not None else -1 - bsz_mult = required_batch_size_multiple - - if not isinstance(indices, np.ndarray): - indices = np.fromiter(indices, dtype=np.int64, count=-1) - - if num_tokens_vec is not None and not isinstance(num_tokens_vec, np.ndarray): - num_tokens_vec = np.fromiter(num_tokens_vec, dtype=np.int64, count=-1) - - if fixed_shapes is None: - if num_tokens_vec is None: - return batch_by_size_fn( - indices, - num_tokens_fn, - max_tokens, - max_sentences, - bsz_mult, - ) - else: - return batch_by_size_vec( - indices, - num_tokens_vec, - max_tokens, - max_sentences, - bsz_mult, - ) - - else: - fixed_shapes = np.array(fixed_shapes, dtype=np.int64) - sort_order = np.lexsort( - [ - fixed_shapes[:, 1].argsort(), # length - fixed_shapes[:, 0].argsort(), # bsz - ] - ) - fixed_shapes_sorted = fixed_shapes[sort_order] - return batch_fixed_shapes_fast(indices, num_tokens_fn, fixed_shapes_sorted) - - -def post_process(sentence: str, symbol: str): - if symbol == "sentencepiece": - sentence = sentence.replace(" ", "").replace("\u2581", " ").strip() - elif symbol == "wordpiece": - sentence = sentence.replace(" ", "").replace("_", " ").strip() - elif symbol == "letter": - sentence = sentence.replace(" ", "").replace("|", " ").strip() - elif symbol == "silence": - import re - sentence = sentence.replace("", "") - sentence = re.sub(' +', ' ', sentence).strip() - elif symbol == "_EOW": - sentence = sentence.replace(" ", "").replace("_EOW", " ").strip() - elif symbol in {"subword_nmt", "@@ ", "@@"}: - if symbol == "subword_nmt": - symbol = "@@ " - sentence = (sentence + " ").replace(symbol, "").rstrip() - elif symbol == "none": - pass - elif symbol is not None: - raise NotImplementedError(f"Unknown post_process option: {symbol}") - return sentence - - -def compute_mask_indices( - shape: Tuple[int, int], - padding_mask: Optional[torch.Tensor], - mask_prob: float, - mask_length: int, - mask_type: str = "static", - mask_other: float = 0.0, - min_masks: int = 0, - no_overlap: bool = False, - min_space: int = 0, -) -> np.ndarray: - """ - Computes random mask spans for a given shape - - Args: - shape: the the shape for which to compute masks. - should be of size 2 where first element is batch size and 2nd is timesteps - padding_mask: optional padding mask of the same size as shape, which will prevent masking padded elements - mask_prob: probability for each token to be chosen as start of the span to be masked. this will be multiplied by - number of timesteps divided by length of mask span to mask approximately this percentage of all elements. - however due to overlaps, the actual number will be smaller (unless no_overlap is True) - mask_type: how to compute mask lengths - static = fixed size - uniform = sample from uniform distribution [mask_other, mask_length*2] - normal = sample from normal distribution with mean mask_length and stdev mask_other. 
mask is min 1 element - poisson = sample from possion distribution with lambda = mask length - min_masks: minimum number of masked spans - no_overlap: if false, will switch to an alternative recursive algorithm that prevents spans from overlapping - min_space: only used if no_overlap is True, this is how many elements to keep unmasked between spans - """ - - bsz, all_sz = shape - mask = np.full((bsz, all_sz), False) - - all_num_mask = int( - # add a random number for probabilistic rounding - mask_prob * all_sz / float(mask_length) - + np.random.rand() - ) - - all_num_mask = max(min_masks, all_num_mask) - - mask_idcs = [] - for i in range(bsz): - if padding_mask is not None: - sz = all_sz - padding_mask[i].long().sum().item() - num_mask = int( - # add a random number for probabilistic rounding - mask_prob * sz / float(mask_length) - + np.random.rand() - ) - num_mask = max(min_masks, num_mask) - else: - sz = all_sz - num_mask = all_num_mask - - if mask_type == "static": - lengths = np.full(num_mask, mask_length) - elif mask_type == "uniform": - lengths = np.random.randint(mask_other, mask_length * 2 + 1, size=num_mask) - elif mask_type == "normal": - lengths = np.random.normal(mask_length, mask_other, size=num_mask) - lengths = [max(1, int(round(x))) for x in lengths] - elif mask_type == "poisson": - lengths = np.random.poisson(mask_length, size=num_mask) - lengths = [int(round(x)) for x in lengths] - else: - raise Exception("unknown mask selection " + mask_type) - - if sum(lengths) == 0: - lengths[0] = min(mask_length, sz - 1) - - if no_overlap: - mask_idc = [] - - def arrange(s, e, length, keep_length): - span_start = np.random.randint(s, e - length) - mask_idc.extend(span_start + i for i in range(length)) - - new_parts = [] - if span_start - s - min_space >= keep_length: - new_parts.append((s, span_start - min_space + 1)) - if e - span_start - keep_length - min_space > keep_length: - new_parts.append((span_start + length + min_space, e)) - return new_parts - - parts = [(0, sz)] - min_length = min(lengths) - for length in sorted(lengths, reverse=True): - lens = np.fromiter( - (e - s if e - s >= length + min_space else 0 for s, e in parts), - np.int, - ) - l_sum = np.sum(lens) - if l_sum == 0: - break - probs = lens / np.sum(lens) - c = np.random.choice(len(parts), p=probs) - s, e = parts.pop(c) - parts.extend(arrange(s, e, length, min_length)) - mask_idc = np.asarray(mask_idc) - else: - min_len = min(lengths) - if sz - min_len <= num_mask: - min_len = sz - num_mask - 1 - - mask_idc = np.random.choice(sz - min_len, num_mask, replace=False) - - mask_idc = np.asarray( - [ - mask_idc[j] + offset - for j in range(len(mask_idc)) - for offset in range(lengths[j]) - ] - ) - - mask_idcs.append(np.unique(mask_idc[mask_idc < sz])) - - min_len = min([len(m) for m in mask_idcs]) - for i, mask_idc in enumerate(mask_idcs): - if len(mask_idc) > min_len: - mask_idc = np.random.choice(mask_idc, min_len, replace=False) - mask[i, mask_idc] = True - - return mask - - -def get_mem_usage(): - try: - import psutil - - mb = 1024 * 1024 - return f"used={psutil.virtual_memory().used / mb}Mb; avail={psutil.virtual_memory().available / mb}Mb" - except ImportError: - return "N/A" - - -# lens: torch.LongTensor -# returns: torch.BoolTensor -def lengths_to_padding_mask(lens): - bsz, max_lens = lens.size(0), torch.max(lens).item() - mask = torch.arange(max_lens).to(lens.device).view(1, max_lens) - mask = mask.expand(bsz, -1) >= lens.view(bsz, 1).expand(-1, max_lens) - return mask - - -# lens: torch.LongTensor -# returns: 
torch.BoolTensor -def lengths_to_mask(lens): - return ~lengths_to_padding_mask(lens) - - -def get_buckets(sizes, num_buckets): - buckets = np.unique( - np.percentile( - sizes, - np.linspace(0, 100, num_buckets + 1), - interpolation='lower', - )[1:] - ) - return buckets - - -def get_bucketed_sizes(orig_sizes, buckets): - sizes = np.copy(orig_sizes) - assert np.min(sizes) >= 0 - start_val = -1 - for end_val in buckets: - mask = (sizes > start_val) & (sizes <= end_val) - sizes[mask] = end_val - start_val = end_val - return sizes - - - -def _find_extra_valid_paths(dataset_path: str) -> set: - paths = utils.split_paths(dataset_path) - all_valid_paths = set() - for sub_dir in paths: - contents = PathManager.ls(sub_dir) - valid_paths = [c for c in contents if re.match("valid*[0-9].*", c) is not None] - all_valid_paths |= {os.path.basename(p) for p in valid_paths} - # Remove .bin, .idx etc - roots = {os.path.splitext(p)[0] for p in all_valid_paths} - return roots - - -def raise_if_valid_subsets_unintentionally_ignored(train_cfg) -> None: - """Raises if there are paths matching 'valid*[0-9].*' which are not combined or ignored.""" - if ( - train_cfg.dataset.ignore_unused_valid_subsets - or train_cfg.dataset.combine_valid_subsets - or train_cfg.dataset.disable_validation - or not hasattr(train_cfg.task, "data") - ): - return - other_paths = _find_extra_valid_paths(train_cfg.task.data) - specified_subsets = train_cfg.dataset.valid_subset.split(",") - ignored_paths = [p for p in other_paths if p not in specified_subsets] - if ignored_paths: - advice = "Set --combine-val to combine them or --ignore-unused-valid-subsets to ignore them." - msg = f"Valid paths {ignored_paths} will be ignored. {advice}" - raise ValueError(msg) \ No newline at end of file diff --git a/spaces/ICML2022/OFA/fairseq/examples/simultaneous_translation/utils/__init__.py b/spaces/ICML2022/OFA/fairseq/examples/simultaneous_translation/utils/__init__.py deleted file mode 100644 index 1e9ce844f59a4211061392084cc81075e6bab19f..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/simultaneous_translation/utils/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import importlib -import os - - -# automatically import any Python files in the criterions/ directory -for file in sorted(os.listdir(os.path.dirname(__file__))): - if file.endswith(".py") and not file.startswith("_"): - module = file[: file.find(".py")] - importlib.import_module("examples.simultaneous_translation.utils." 
+ module) diff --git a/spaces/ICML2023/ICML2023_papers/paper_list.py b/spaces/ICML2023/ICML2023_papers/paper_list.py deleted file mode 100644 index 46729cc444467dc0a2fac57a71374bad46f5bc26..0000000000000000000000000000000000000000 --- a/spaces/ICML2023/ICML2023_papers/paper_list.py +++ /dev/null @@ -1,108 +0,0 @@ -from __future__ import annotations - -import numpy as np -import pandas as pd - - -class PaperList: - def __init__(self): - self.organization_name = 'ICML2023' - self.table = pd.read_csv('papers.csv') - self._preprocess_table() - - self.table_header = ''' - - Title - Authors - arXiv - GitHub - Paper pages - Spaces - Models - Datasets - Claimed - ''' - - def _preprocess_table(self) -> None: - self.table['title_lowercase'] = self.table.title.str.lower() - - rows = [] - for row in self.table.itertuples(): - title = f'{row.title}' - arxiv = f'arXiv' if isinstance( - row.arxiv, str) else '' - github = f'GitHub' if isinstance( - row.github, str) else '' - hf_paper = f'Paper page' if isinstance( - row.hf_paper, str) else '' - hf_space = f'Space' if isinstance( - row.hf_space, str) else '' - hf_model = f'Model' if isinstance( - row.hf_model, str) else '' - hf_dataset = f'Dataset' if isinstance( - row.hf_dataset, str) else '' - author_linked = '✅' if ~np.isnan( - row.n_linked_authors) and row.n_linked_authors > 0 else '' - n_linked_authors = '' if np.isnan(row.n_linked_authors) else int( - row.n_linked_authors) - n_authors = '' if np.isnan(row.n_authors) else int(row.n_authors) - claimed_paper = '' if n_linked_authors == '' else f'{n_linked_authors}/{n_authors} {author_linked}' - row = f''' - - {title} - {row.authors} - {arxiv} - {github} - {hf_paper} - {hf_space} - {hf_model} - {hf_dataset} - {claimed_paper} - ''' - rows.append(row) - self.table['html_table_content'] = rows - - def render(self, search_query: str, case_sensitive: bool, - filter_names: list[str]) -> tuple[str, str]: - df = self.table - if search_query: - if case_sensitive: - df = df[df.title.str.contains(search_query)] - else: - df = df[df.title_lowercase.str.contains(search_query.lower())] - has_arxiv = 'arXiv' in filter_names - has_github = 'GitHub' in filter_names - has_hf_space = 'Space' in filter_names - has_hf_model = 'Model' in filter_names - has_hf_dataset = 'Dataset' in filter_names - df = self.filter_table(df, has_arxiv, has_github, has_hf_space, - has_hf_model, has_hf_dataset) - n_claimed = len(df[df.n_linked_authors > 0]) - return f'{len(df)} ({n_claimed} claimed)', self.to_html( - df, self.table_header) - - @staticmethod - def filter_table(df: pd.DataFrame, has_arxiv: bool, has_github: bool, - has_hf_space: bool, has_hf_model: bool, - has_hf_dataset: bool) -> pd.DataFrame: - if has_arxiv: - df = df[~df.arxiv.isna()] - if has_github: - df = df[~df.github.isna()] - if has_hf_space: - df = df[~df.hf_space.isna()] - if has_hf_model: - df = df[~df.hf_model.isna()] - if has_hf_dataset: - df = df[~df.hf_dataset.isna()] - return df - - @staticmethod - def to_html(df: pd.DataFrame, table_header: str) -> str: - table_data = ''.join(df.html_table_content) - html = f''' - - {table_header} - {table_data} -
    ''' - return html diff --git a/spaces/IDEA-Research/Grounded-SAM/segment_anything/scripts/amg.py b/spaces/IDEA-Research/Grounded-SAM/segment_anything/scripts/amg.py deleted file mode 100644 index 3cae6ff720e5cb718045ff3f1082340968516d6a..0000000000000000000000000000000000000000 --- a/spaces/IDEA-Research/Grounded-SAM/segment_anything/scripts/amg.py +++ /dev/null @@ -1,238 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import cv2 # type: ignore - -from segment_anything import SamAutomaticMaskGenerator, sam_model_registry - -import argparse -import json -import os -from typing import Any, Dict, List - -parser = argparse.ArgumentParser( - description=( - "Runs automatic mask generation on an input image or directory of images, " - "and outputs masks as either PNGs or COCO-style RLEs. Requires open-cv, " - "as well as pycocotools if saving in RLE format." - ) -) - -parser.add_argument( - "--input", - type=str, - required=True, - help="Path to either a single input image or folder of images.", -) - -parser.add_argument( - "--output", - type=str, - required=True, - help=( - "Path to the directory where masks will be output. Output will be either a folder " - "of PNGs per image or a single json with COCO-style masks." - ), -) - -parser.add_argument( - "--model-type", - type=str, - default="default", - help="The type of model to load, in ['default', 'vit_l', 'vit_b']", -) - -parser.add_argument( - "--checkpoint", - type=str, - required=True, - help="The path to the SAM checkpoint to use for mask generation.", -) - -parser.add_argument("--device", type=str, default="cuda", help="The device to run generation on.") - -parser.add_argument( - "--convert-to-rle", - action="store_true", - help=( - "Save masks as COCO RLEs in a single json instead of as a folder of PNGs. " - "Requires pycocotools." - ), -) - -amg_settings = parser.add_argument_group("AMG Settings") - -amg_settings.add_argument( - "--points-per-side", - type=int, - default=None, - help="Generate masks by sampling a grid over the image with this many points to a side.", -) - -amg_settings.add_argument( - "--points-per-batch", - type=int, - default=None, - help="How many input points to process simultaneously in one batch.", -) - -amg_settings.add_argument( - "--pred-iou-thresh", - type=float, - default=None, - help="Exclude masks with a predicted score from the model that is lower than this threshold.", -) - -amg_settings.add_argument( - "--stability-score-thresh", - type=float, - default=None, - help="Exclude masks with a stability score lower than this threshold.", -) - -amg_settings.add_argument( - "--stability-score-offset", - type=float, - default=None, - help="Larger values perturb the mask more when measuring stability score.", -) - -amg_settings.add_argument( - "--box-nms-thresh", - type=float, - default=None, - help="The overlap threshold for excluding a duplicate mask.", -) - -amg_settings.add_argument( - "--crop-n-layers", - type=int, - default=None, - help=( - "If >0, mask generation is run on smaller crops of the image to generate more masks. " - "The value sets how many different scales to crop at." 
- ), -) - -amg_settings.add_argument( - "--crop-nms-thresh", - type=float, - default=None, - help="The overlap threshold for excluding duplicate masks across different crops.", -) - -amg_settings.add_argument( - "--crop-overlap-ratio", - type=int, - default=None, - help="Larger numbers mean image crops will overlap more.", -) - -amg_settings.add_argument( - "--crop-n-points-downscale-factor", - type=int, - default=None, - help="The number of points-per-side in each layer of crop is reduced by this factor.", -) - -amg_settings.add_argument( - "--min-mask-region-area", - type=int, - default=None, - help=( - "Disconnected mask regions or holes with area smaller than this value " - "in pixels are removed by postprocessing." - ), -) - - -def write_masks_to_folder(masks: List[Dict[str, Any]], path: str) -> None: - header = "id,area,bbox_x0,bbox_y0,bbox_w,bbox_h,point_input_x,point_input_y,predicted_iou,stability_score,crop_box_x0,crop_box_y0,crop_box_w,crop_box_h" # noqa - metadata = [header] - for i, mask_data in enumerate(masks): - mask = mask_data["segmentation"] - filename = f"{i}.png" - cv2.imwrite(os.path.join(path, filename), mask * 255) - mask_metadata = [ - str(i), - str(mask_data["area"]), - *[str(x) for x in mask_data["bbox"]], - *[str(x) for x in mask_data["point_coords"][0]], - str(mask_data["predicted_iou"]), - str(mask_data["stability_score"]), - *[str(x) for x in mask_data["crop_box"]], - ] - row = ",".join(mask_metadata) - metadata.append(row) - metadata_path = os.path.join(path, "metadata.csv") - with open(metadata_path, "w") as f: - f.write("\n".join(metadata)) - - return - - -def get_amg_kwargs(args): - amg_kwargs = { - "points_per_side": args.points_per_side, - "points_per_batch": args.points_per_batch, - "pred_iou_thresh": args.pred_iou_thresh, - "stability_score_thresh": args.stability_score_thresh, - "stability_score_offset": args.stability_score_offset, - "box_nms_thresh": args.box_nms_thresh, - "crop_n_layers": args.crop_n_layers, - "crop_nms_thresh": args.crop_nms_thresh, - "crop_overlap_ratio": args.crop_overlap_ratio, - "crop_n_points_downscale_factor": args.crop_n_points_downscale_factor, - "min_mask_region_area": args.min_mask_region_area, - } - amg_kwargs = {k: v for k, v in amg_kwargs.items() if v is not None} - return amg_kwargs - - -def main(args: argparse.Namespace) -> None: - print("Loading model...") - sam = sam_model_registry[args.model_type](checkpoint=args.checkpoint) - _ = sam.to(device=args.device) - output_mode = "coco_rle" if args.convert_to_rle else "binary_mask" - amg_kwargs = get_amg_kwargs(args) - generator = SamAutomaticMaskGenerator(sam, output_mode=output_mode, **amg_kwargs) - - if not os.path.isdir(args.input): - targets = [args.input] - else: - targets = [ - f for f in os.listdir(args.input) if not os.path.isdir(os.path.join(args.input, f)) - ] - targets = [os.path.join(args.input, f) for f in targets] - - os.makedirs(args.output, exist_ok=True) - - for t in targets: - print(f"Processing '{t}'...") - image = cv2.imread(t) - if image is None: - print(f"Could not load '{t}' as an image, skipping...") - continue - image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) - - masks = generator.generate(image) - - base = os.path.basename(t) - base = os.path.splitext(base)[0] - save_base = os.path.join(args.output, base) - if output_mode == "binary_mask": - os.makedirs(save_base, exist_ok=False) - write_masks_to_folder(masks, save_base) - else: - save_file = save_base + ".json" - with open(save_file, "w") as f: - json.dump(masks, f) - print("Done!") - - -if 
__name__ == "__main__": - args = parser.parse_args() - main(args) diff --git a/spaces/Iceclear/StableSR/StableSR/taming/data/utils.py b/spaces/Iceclear/StableSR/StableSR/taming/data/utils.py deleted file mode 100644 index 2b3c3d53cd2b6c72b481b59834cf809d3735b394..0000000000000000000000000000000000000000 --- a/spaces/Iceclear/StableSR/StableSR/taming/data/utils.py +++ /dev/null @@ -1,169 +0,0 @@ -import collections -import os -import tarfile -import urllib -import zipfile -from pathlib import Path - -import numpy as np -import torch -from taming.data.helper_types import Annotation -from torch._six import string_classes -from torch.utils.data._utils.collate import np_str_obj_array_pattern, default_collate_err_msg_format -from tqdm import tqdm - - -def unpack(path): - if path.endswith("tar.gz"): - with tarfile.open(path, "r:gz") as tar: - tar.extractall(path=os.path.split(path)[0]) - elif path.endswith("tar"): - with tarfile.open(path, "r:") as tar: - tar.extractall(path=os.path.split(path)[0]) - elif path.endswith("zip"): - with zipfile.ZipFile(path, "r") as f: - f.extractall(path=os.path.split(path)[0]) - else: - raise NotImplementedError( - "Unknown file extension: {}".format(os.path.splitext(path)[1]) - ) - - -def reporthook(bar): - """tqdm progress bar for downloads.""" - - def hook(b=1, bsize=1, tsize=None): - if tsize is not None: - bar.total = tsize - bar.update(b * bsize - bar.n) - - return hook - - -def get_root(name): - base = "data/" - root = os.path.join(base, name) - os.makedirs(root, exist_ok=True) - return root - - -def is_prepared(root): - return Path(root).joinpath(".ready").exists() - - -def mark_prepared(root): - Path(root).joinpath(".ready").touch() - - -def prompt_download(file_, source, target_dir, content_dir=None): - targetpath = os.path.join(target_dir, file_) - while not os.path.exists(targetpath): - if content_dir is not None and os.path.exists( - os.path.join(target_dir, content_dir) - ): - break - print( - "Please download '{}' from '{}' to '{}'.".format(file_, source, targetpath) - ) - if content_dir is not None: - print( - "Or place its content into '{}'.".format( - os.path.join(target_dir, content_dir) - ) - ) - input("Press Enter when done...") - return targetpath - - -def download_url(file_, url, target_dir): - targetpath = os.path.join(target_dir, file_) - os.makedirs(target_dir, exist_ok=True) - with tqdm( - unit="B", unit_scale=True, unit_divisor=1024, miniters=1, desc=file_ - ) as bar: - urllib.request.urlretrieve(url, targetpath, reporthook=reporthook(bar)) - return targetpath - - -def download_urls(urls, target_dir): - paths = dict() - for fname, url in urls.items(): - outpath = download_url(fname, url, target_dir) - paths[fname] = outpath - return paths - - -def quadratic_crop(x, bbox, alpha=1.0): - """bbox is xmin, ymin, xmax, ymax""" - im_h, im_w = x.shape[:2] - bbox = np.array(bbox, dtype=np.float32) - bbox = np.clip(bbox, 0, max(im_h, im_w)) - center = 0.5 * (bbox[0] + bbox[2]), 0.5 * (bbox[1] + bbox[3]) - w = bbox[2] - bbox[0] - h = bbox[3] - bbox[1] - l = int(alpha * max(w, h)) - l = max(l, 2) - - required_padding = -1 * min( - center[0] - l, center[1] - l, im_w - (center[0] + l), im_h - (center[1] + l) - ) - required_padding = int(np.ceil(required_padding)) - if required_padding > 0: - padding = [ - [required_padding, required_padding], - [required_padding, required_padding], - ] - padding += [[0, 0]] * (len(x.shape) - 2) - x = np.pad(x, padding, "reflect") - center = center[0] + required_padding, center[1] + required_padding - xmin = 
int(center[0] - l / 2) - ymin = int(center[1] - l / 2) - return np.array(x[ymin : ymin + l, xmin : xmin + l, ...]) - - -def custom_collate(batch): - r"""source: pytorch 1.9.0, only one modification to original code """ - - elem = batch[0] - elem_type = type(elem) - if isinstance(elem, torch.Tensor): - out = None - if torch.utils.data.get_worker_info() is not None: - # If we're in a background process, concatenate directly into a - # shared memory tensor to avoid an extra copy - numel = sum([x.numel() for x in batch]) - storage = elem.storage()._new_shared(numel) - out = elem.new(storage) - return torch.stack(batch, 0, out=out) - elif elem_type.__module__ == 'numpy' and elem_type.__name__ != 'str_' \ - and elem_type.__name__ != 'string_': - if elem_type.__name__ == 'ndarray' or elem_type.__name__ == 'memmap': - # array of string classes and object - if np_str_obj_array_pattern.search(elem.dtype.str) is not None: - raise TypeError(default_collate_err_msg_format.format(elem.dtype)) - - return custom_collate([torch.as_tensor(b) for b in batch]) - elif elem.shape == (): # scalars - return torch.as_tensor(batch) - elif isinstance(elem, float): - return torch.tensor(batch, dtype=torch.float64) - elif isinstance(elem, int): - return torch.tensor(batch) - elif isinstance(elem, string_classes): - return batch - elif isinstance(elem, collections.abc.Mapping): - return {key: custom_collate([d[key] for d in batch]) for key in elem} - elif isinstance(elem, tuple) and hasattr(elem, '_fields'): # namedtuple - return elem_type(*(custom_collate(samples) for samples in zip(*batch))) - if isinstance(elem, collections.abc.Sequence) and isinstance(elem[0], Annotation): # added - return batch # added - elif isinstance(elem, collections.abc.Sequence): - # check to make sure that the elements in batch have consistent size - it = iter(batch) - elem_size = len(next(it)) - if not all(len(elem) == elem_size for elem in it): - raise RuntimeError('each element in list of batch should be of equal size') - transposed = zip(*batch) - return [custom_collate(samples) for samples in transposed] - - raise TypeError(default_collate_err_msg_format.format(elem_type)) diff --git a/spaces/Illumotion/Koboldcpp/examples/finetune/README.md b/spaces/Illumotion/Koboldcpp/examples/finetune/README.md deleted file mode 100644 index b7347c20ca0ab40c18c496addbb9b1d8d574a042..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/examples/finetune/README.md +++ /dev/null @@ -1,90 +0,0 @@ -# finetune - -Basic usage instructions: - -```bash -# get training data -wget https://raw.githubusercontent.com/brunoklein99/deep-learning-notes/master/shakespeare.txt - -# finetune LORA adapter -./bin/finetune \ - --model-base open-llama-3b-v2-q8_0.gguf \ - --checkpoint-in chk-lora-open-llama-3b-v2-q8_0-shakespeare-LATEST.gguf \ - --checkpoint-out chk-lora-open-llama-3b-v2-q8_0-shakespeare-ITERATION.gguf \ - --lora-out lora-open-llama-3b-v2-q8_0-shakespeare-ITERATION.bin \ - --train-data "shakespeare.txt" \ - --save-every 10 \ - --threads 6 --adam-iter 30 --batch 4 --ctx 64 \ - --use-checkpointing - -# predict -./bin/main -m open-llama-3b-v2-q8_0.gguf --lora lora-open-llama-3b-v2-q8_0-shakespeare-LATEST.bin -``` - -Finetune output files will be saved every N iterations (config with `--save-every N`). -The pattern 'ITERATION' in the output filenames will be replaced with the iteration number and with 'LATEST' for the latest output. 
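As a hedged illustration of that naming rule, here is a standalone Python sketch of the substitution described in the previous sentence; it is not the actual code inside `finetune`, and the helper name `expand_pattern` is invented only for this example.

```python
# Sketch only: mimics how the ITERATION placeholder in --checkpoint-out /
# --lora-out filenames is expanded, per the description above.
def expand_pattern(pattern: str, iteration=None) -> str:
    # iteration=None stands for the 'LATEST' copy written alongside each save.
    tag = "LATEST" if iteration is None else str(iteration)
    return pattern.replace("ITERATION", tag)

print(expand_pattern("lora-open-llama-3b-v2-q8_0-shakespeare-ITERATION.bin", 10))
# lora-open-llama-3b-v2-q8_0-shakespeare-10.bin
print(expand_pattern("lora-open-llama-3b-v2-q8_0-shakespeare-ITERATION.bin"))
# lora-open-llama-3b-v2-q8_0-shakespeare-LATEST.bin
```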
-So in the above example after 10 iterations these files will be written: -- chk-lora-open-llama-3b-v2-q8_0-shakespeare-10.gguf -- chk-lora-open-llama-3b-v2-q8_0-shakespeare-LATEST.gguf -- lora-open-llama-3b-v2-q8_0-shakespeare-10.bin -- lora-open-llama-3b-v2-q8_0-shakespeare-LATEST.bin - -After 10 more iterations: -- chk-lora-open-llama-3b-v2-q8_0-shakespeare-20.gguf -- chk-lora-open-llama-3b-v2-q8_0-shakespeare-LATEST.gguf -- lora-open-llama-3b-v2-q8_0-shakespeare-20.bin -- lora-open-llama-3b-v2-q8_0-shakespeare-LATEST.bin - -Checkpoint files (`--checkpoint-in FN`, `--checkpoint-out FN`) store the training process. When the input checkpoint file does not exist, it will begin finetuning a new randomly initialized adapter. - -llama.cpp compatible LORA adapters will be saved with the filename specified by `--lora-out FN`. -These LORA adapters can then be used by `main` together with the base model, like in the 'predict' example command above. - -In `main` you can also load multiple LORA adapters, which will then be mixed together. - -For example, if you have two LORA adapters `lora-open-llama-3b-v2-q8_0-shakespeare-LATEST.bin` and `lora-open-llama-3b-v2-q8_0-bible-LATEST.bin`, you can mix them together like this: - -```bash -./bin/main -m open-llama-3b-v2-q8_0.gguf \ - --lora lora-open-llama-3b-v2-q8_0-shakespeare-LATEST.bin \ - --lora lora-open-llama-3b-v2-q8_0-bible-LATEST.bin -``` - -You can change how strongly each LORA adapter is applied to the base model by using `--lora-scaled FN SCALE` instead of `--lora FN`. - -For example, to apply 40% of the 'shakespeare' LORA adapter, 80% of the 'bible' LORA adapter and 100% of yet another one: - -```bash -./bin/main -m open-llama-3b-v2-q8_0.gguf \ - --lora-scaled lora-open-llama-3b-v2-q8_0-shakespeare-LATEST.bin 0.4 \ - --lora-scaled lora-open-llama-3b-v2-q8_0-bible-LATEST.bin 0.8 \ - --lora lora-open-llama-3b-v2-q8_0-yet-another-one-LATEST.bin -``` - -The scale numbers don't need to add up to one, and you can also use numbers greater than 1 to further increase the influence of an adapter. But making the values too big will sometimes result in worse output. Play around to find good values. - -Gradient checkpointing reduces the memory requirements by ~50% but increases the runtime. -If you have enough RAM, you can make finetuning a bit faster by disabling checkpointing with `--no-checkpointing`. - -The default LORA rank can be specified with `--lora-r N`. -The LORA rank can be configured for each model tensor type separately with these command line options: - -```bash - --lora-r N LORA r: default rank. Also specifies resulting scaling together with lora-alpha. (default 4) - --rank-att-norm N LORA rank for attention norm tensor (default 1) - --rank-ffn-norm N LORA rank for feed-forward norm tensor (default 1) - --rank-out-norm N LORA rank for output norm tensor (default 1) - --rank-tok-embd N LORA rank for token embeddings tensor (default 4) - --rank-out N LORA rank for output tensor (default 4) - --rank-wq N LORA rank for wq tensor (default 4) - --rank-wk N LORA rank for wk tensor (default 4) - --rank-wv N LORA rank for wv tensor (default 4) - --rank-wo N LORA rank for wo tensor (default 4) - --rank-w1 N LORA rank for w1 tensor (default 4) - --rank-w2 N LORA rank for w2 tensor (default 4) - --rank-w3 N LORA rank for w3 tensor (default 4) -``` - -The LORA rank of 'norm' tensors should always be 1. - -To see all available options use `finetune --help`. 
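To tie the adapter-mixing flags above together, here is a minimal Python sketch that builds the same kind of `./bin/main` command programmatically. The binary, model, and adapter filenames are taken from the examples above; the helper name `run_with_adapters` and the `adapters` mapping are hypothetical names introduced only for this illustration.

```python
# Minimal sketch, assuming ./bin/main and the adapter files above exist locally.
# Only flags documented above are used: -m, --lora FN, --lora-scaled FN SCALE.
import subprocess

def run_with_adapters(model: str, adapters: dict) -> None:
    """Launch ./bin/main with one --lora/--lora-scaled flag per adapter."""
    cmd = ["./bin/main", "-m", model]
    for path, scale in adapters.items():
        if scale == 1.0:
            cmd += ["--lora", path]                     # full-strength adapter
        else:
            cmd += ["--lora-scaled", path, str(scale)]  # partially applied adapter
    subprocess.run(cmd, check=True)

run_with_adapters(
    "open-llama-3b-v2-q8_0.gguf",
    {
        "lora-open-llama-3b-v2-q8_0-shakespeare-LATEST.bin": 0.4,
        "lora-open-llama-3b-v2-q8_0-bible-LATEST.bin": 0.8,
    },
)
```

Keeping the scales as plain data like this makes it easy to sweep different mixing ratios when searching for values that work well.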
diff --git a/spaces/JacobLinCool/captcha-recognizer/src/shared.py b/spaces/JacobLinCool/captcha-recognizer/src/shared.py deleted file mode 100644 index 4c6b8b581bb6189d7f3551b468ba70c20116b614..0000000000000000000000000000000000000000 --- a/spaces/JacobLinCool/captcha-recognizer/src/shared.py +++ /dev/null @@ -1,20 +0,0 @@ -import os - -raw_dir = os.path.normpath(os.path.join(os.path.dirname(__file__), "..", "data", "raw")) - -if not os.path.exists(raw_dir): - os.makedirs(raw_dir) - -preprocess_dir = os.path.normpath( - os.path.join(os.path.dirname(__file__), "..", "data", "preprocessed") -) - -if not os.path.exists(preprocess_dir): - os.makedirs(preprocess_dir) - -genereated_dir = os.path.normpath( - os.path.join(os.path.dirname(__file__), "..", "data", "generated") -) - -if not os.path.exists(genereated_dir): - os.makedirs(genereated_dir) diff --git a/spaces/JavierIA/gccopen/utils/datasets.py b/spaces/JavierIA/gccopen/utils/datasets.py deleted file mode 100644 index b6bb8b02aa706c7ea8536665d908b417134fcd0f..0000000000000000000000000000000000000000 --- a/spaces/JavierIA/gccopen/utils/datasets.py +++ /dev/null @@ -1,1320 +0,0 @@ -# Dataset utils and dataloaders - -import glob -import logging -import math -import os -import random -import shutil -import time -from itertools import repeat -from multiprocessing.pool import ThreadPool -from pathlib import Path -from threading import Thread - -import cv2 -import numpy as np -import torch -import torch.nn.functional as F -from PIL import Image, ExifTags -from torch.utils.data import Dataset -from tqdm import tqdm - -import pickle -from copy import deepcopy -#from pycocotools import mask as maskUtils -from torchvision.utils import save_image -from torchvision.ops import roi_pool, roi_align, ps_roi_pool, ps_roi_align - -from utils.general import check_requirements, xyxy2xywh, xywh2xyxy, xywhn2xyxy, xyn2xy, segment2box, segments2boxes, \ - resample_segments, clean_str -from utils.torch_utils import torch_distributed_zero_first - -# Parameters -help_url = 'https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data' -img_formats = ['bmp', 'jpg', 'jpeg', 'png', 'tif', 'tiff', 'dng', 'webp', 'mpo'] # acceptable image suffixes -vid_formats = ['mov', 'avi', 'mp4', 'mpg', 'mpeg', 'm4v', 'wmv', 'mkv'] # acceptable video suffixes -logger = logging.getLogger(__name__) - -# Get orientation exif tag -for orientation in ExifTags.TAGS.keys(): - if ExifTags.TAGS[orientation] == 'Orientation': - break - - -def get_hash(files): - # Returns a single hash value of a list of files - return sum(os.path.getsize(f) for f in files if os.path.isfile(f)) - - -def exif_size(img): - # Returns exif-corrected PIL size - s = img.size # (width, height) - try: - rotation = dict(img._getexif().items())[orientation] - if rotation == 6: # rotation 270 - s = (s[1], s[0]) - elif rotation == 8: # rotation 90 - s = (s[1], s[0]) - except: - pass - - return s - - -def create_dataloader(path, imgsz, batch_size, stride, opt, hyp=None, augment=False, cache=False, pad=0.0, rect=False, - rank=-1, world_size=1, workers=8, image_weights=False, quad=False, prefix=''): - # Make sure only the first process in DDP process the dataset first, and the following others can use the cache - with torch_distributed_zero_first(rank): - dataset = LoadImagesAndLabels(path, imgsz, batch_size, - augment=augment, # augment images - hyp=hyp, # augmentation hyperparameters - rect=rect, # rectangular training - cache_images=cache, - single_cls=opt.single_cls, - stride=int(stride), - pad=pad, - 
image_weights=image_weights, - prefix=prefix) - - batch_size = min(batch_size, len(dataset)) - nw = min([os.cpu_count() // world_size, batch_size if batch_size > 1 else 0, workers]) # number of workers - sampler = torch.utils.data.distributed.DistributedSampler(dataset) if rank != -1 else None - loader = torch.utils.data.DataLoader if image_weights else InfiniteDataLoader - # Use torch.utils.data.DataLoader() if dataset.properties will update during training else InfiniteDataLoader() - dataloader = loader(dataset, - batch_size=batch_size, - num_workers=nw, - sampler=sampler, - pin_memory=True, - collate_fn=LoadImagesAndLabels.collate_fn4 if quad else LoadImagesAndLabels.collate_fn) - return dataloader, dataset - - -class InfiniteDataLoader(torch.utils.data.dataloader.DataLoader): - """ Dataloader that reuses workers - - Uses same syntax as vanilla DataLoader - """ - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - object.__setattr__(self, 'batch_sampler', _RepeatSampler(self.batch_sampler)) - self.iterator = super().__iter__() - - def __len__(self): - return len(self.batch_sampler.sampler) - - def __iter__(self): - for i in range(len(self)): - yield next(self.iterator) - - -class _RepeatSampler(object): - """ Sampler that repeats forever - - Args: - sampler (Sampler) - """ - - def __init__(self, sampler): - self.sampler = sampler - - def __iter__(self): - while True: - yield from iter(self.sampler) - - -class LoadImages: # for inference - def __init__(self, path, img_size=640, stride=32): - p = str(Path(path).absolute()) # os-agnostic absolute path - if '*' in p: - files = sorted(glob.glob(p, recursive=True)) # glob - elif os.path.isdir(p): - files = sorted(glob.glob(os.path.join(p, '*.*'))) # dir - elif os.path.isfile(p): - files = [p] # files - else: - raise Exception(f'ERROR: {p} does not exist') - - images = [x for x in files if x.split('.')[-1].lower() in img_formats] - videos = [x for x in files if x.split('.')[-1].lower() in vid_formats] - ni, nv = len(images), len(videos) - - self.img_size = img_size - self.stride = stride - self.files = images + videos - self.nf = ni + nv # number of files - self.video_flag = [False] * ni + [True] * nv - self.mode = 'image' - if any(videos): - self.new_video(videos[0]) # new video - else: - self.cap = None - assert self.nf > 0, f'No images or videos found in {p}. 
' \ - f'Supported formats are:\nimages: {img_formats}\nvideos: {vid_formats}' - - def __iter__(self): - self.count = 0 - return self - - def __next__(self): - if self.count == self.nf: - raise StopIteration - path = self.files[self.count] - - if self.video_flag[self.count]: - # Read video - self.mode = 'video' - ret_val, img0 = self.cap.read() - if not ret_val: - self.count += 1 - self.cap.release() - if self.count == self.nf: # last video - raise StopIteration - else: - path = self.files[self.count] - self.new_video(path) - ret_val, img0 = self.cap.read() - - self.frame += 1 - print(f'video {self.count + 1}/{self.nf} ({self.frame}/{self.nframes}) {path}: ', end='') - - else: - # Read image - self.count += 1 - img0 = cv2.imread(path) # BGR - assert img0 is not None, 'Image Not Found ' + path - #print(f'image {self.count}/{self.nf} {path}: ', end='') - - # Padded resize - img = letterbox(img0, self.img_size, stride=self.stride)[0] - - # Convert - img = img[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416 - img = np.ascontiguousarray(img) - - return path, img, img0, self.cap - - def new_video(self, path): - self.frame = 0 - self.cap = cv2.VideoCapture(path) - self.nframes = int(self.cap.get(cv2.CAP_PROP_FRAME_COUNT)) - - def __len__(self): - return self.nf # number of files - - -class LoadWebcam: # for inference - def __init__(self, pipe='0', img_size=640, stride=32): - self.img_size = img_size - self.stride = stride - - if pipe.isnumeric(): - pipe = eval(pipe) # local camera - # pipe = 'rtsp://192.168.1.64/1' # IP camera - # pipe = 'rtsp://username:password@192.168.1.64/1' # IP camera with login - # pipe = 'http://wmccpinetop.axiscam.net/mjpg/video.mjpg' # IP golf camera - - self.pipe = pipe - self.cap = cv2.VideoCapture(pipe) # video capture object - self.cap.set(cv2.CAP_PROP_BUFFERSIZE, 3) # set buffer size - - def __iter__(self): - self.count = -1 - return self - - def __next__(self): - self.count += 1 - if cv2.waitKey(1) == ord('q'): # q to quit - self.cap.release() - cv2.destroyAllWindows() - raise StopIteration - - # Read frame - if self.pipe == 0: # local camera - ret_val, img0 = self.cap.read() - img0 = cv2.flip(img0, 1) # flip left-right - else: # IP camera - n = 0 - while True: - n += 1 - self.cap.grab() - if n % 30 == 0: # skip frames - ret_val, img0 = self.cap.retrieve() - if ret_val: - break - - # Print - assert ret_val, f'Camera Error {self.pipe}' - img_path = 'webcam.jpg' - print(f'webcam {self.count}: ', end='') - - # Padded resize - img = letterbox(img0, self.img_size, stride=self.stride)[0] - - # Convert - img = img[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416 - img = np.ascontiguousarray(img) - - return img_path, img, img0, None - - def __len__(self): - return 0 - - -class LoadStreams: # multiple IP or RTSP cameras - def __init__(self, sources='streams.txt', img_size=640, stride=32): - self.mode = 'stream' - self.img_size = img_size - self.stride = stride - - if os.path.isfile(sources): - with open(sources, 'r') as f: - sources = [x.strip() for x in f.read().strip().splitlines() if len(x.strip())] - else: - sources = [sources] - - n = len(sources) - self.imgs = [None] * n - self.sources = [clean_str(x) for x in sources] # clean source names for later - for i, s in enumerate(sources): - # Start the thread to read frames from the video stream - print(f'{i + 1}/{n}: {s}... 
', end='') - url = eval(s) if s.isnumeric() else s - if 'youtube.com/' in str(url) or 'youtu.be/' in str(url): # if source is YouTube video - check_requirements(('pafy', 'youtube_dl')) - import pafy - url = pafy.new(url).getbest(preftype="mp4").url - cap = cv2.VideoCapture(url) - assert cap.isOpened(), f'Failed to open {s}' - w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) - h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) - self.fps = cap.get(cv2.CAP_PROP_FPS) % 100 - - _, self.imgs[i] = cap.read() # guarantee first frame - thread = Thread(target=self.update, args=([i, cap]), daemon=True) - print(f' success ({w}x{h} at {self.fps:.2f} FPS).') - thread.start() - print('') # newline - - # check for common shapes - s = np.stack([letterbox(x, self.img_size, stride=self.stride)[0].shape for x in self.imgs], 0) # shapes - self.rect = np.unique(s, axis=0).shape[0] == 1 # rect inference if all shapes equal - if not self.rect: - print('WARNING: Different stream shapes detected. For optimal performance supply similarly-shaped streams.') - - def update(self, index, cap): - # Read next stream frame in a daemon thread - n = 0 - while cap.isOpened(): - n += 1 - # _, self.imgs[index] = cap.read() - cap.grab() - if n == 4: # read every 4th frame - success, im = cap.retrieve() - self.imgs[index] = im if success else self.imgs[index] * 0 - n = 0 - time.sleep(1 / self.fps) # wait time - - def __iter__(self): - self.count = -1 - return self - - def __next__(self): - self.count += 1 - img0 = self.imgs.copy() - if cv2.waitKey(1) == ord('q'): # q to quit - cv2.destroyAllWindows() - raise StopIteration - - # Letterbox - img = [letterbox(x, self.img_size, auto=self.rect, stride=self.stride)[0] for x in img0] - - # Stack - img = np.stack(img, 0) - - # Convert - img = img[:, :, :, ::-1].transpose(0, 3, 1, 2) # BGR to RGB, to bsx3x416x416 - img = np.ascontiguousarray(img) - - return self.sources, img, img0, None - - def __len__(self): - return 0 # 1E12 frames = 32 streams at 30 FPS for 30 years - - -def img2label_paths(img_paths): - # Define label paths as a function of image paths - sa, sb = os.sep + 'images' + os.sep, os.sep + 'labels' + os.sep # /images/, /labels/ substrings - return ['txt'.join(x.replace(sa, sb, 1).rsplit(x.split('.')[-1], 1)) for x in img_paths] - - -class LoadImagesAndLabels(Dataset): # for training/testing - def __init__(self, path, img_size=640, batch_size=16, augment=False, hyp=None, rect=False, image_weights=False, - cache_images=False, single_cls=False, stride=32, pad=0.0, prefix=''): - self.img_size = img_size - self.augment = augment - self.hyp = hyp - self.image_weights = image_weights - self.rect = False if image_weights else rect - self.mosaic = self.augment and not self.rect # load 4 images at a time into a mosaic (only during training) - self.mosaic_border = [-img_size // 2, -img_size // 2] - self.stride = stride - self.path = path - #self.albumentations = Albumentations() if augment else None - - try: - f = [] # image files - for p in path if isinstance(path, list) else [path]: - p = Path(p) # os-agnostic - if p.is_dir(): # dir - f += glob.glob(str(p / '**' / '*.*'), recursive=True) - # f = list(p.rglob('**/*.*')) # pathlib - elif p.is_file(): # file - with open(p, 'r') as t: - t = t.read().strip().splitlines() - parent = str(p.parent) + os.sep - f += [x.replace('./', parent) if x.startswith('./') else x for x in t] # local to global path - # f += [p.parent / x.lstrip(os.sep) for x in t] # local to global path (pathlib) - else: - raise Exception(f'{prefix}{p} does not exist') - 
self.img_files = sorted([x.replace('/', os.sep) for x in f if x.split('.')[-1].lower() in img_formats]) - # self.img_files = sorted([x for x in f if x.suffix[1:].lower() in img_formats]) # pathlib - assert self.img_files, f'{prefix}No images found' - except Exception as e: - raise Exception(f'{prefix}Error loading data from {path}: {e}\nSee {help_url}') - - # Check cache - self.label_files = img2label_paths(self.img_files) # labels - cache_path = (p if p.is_file() else Path(self.label_files[0]).parent).with_suffix('.cache') # cached labels - if cache_path.is_file(): - cache, exists = torch.load(cache_path), True # load - #if cache['hash'] != get_hash(self.label_files + self.img_files) or 'version' not in cache: # changed - # cache, exists = self.cache_labels(cache_path, prefix), False # re-cache - else: - cache, exists = self.cache_labels(cache_path, prefix), False # cache - - # Display cache - nf, nm, ne, nc, n = cache.pop('results') # found, missing, empty, corrupted, total - if exists: - d = f"Scanning '{cache_path}' images and labels... {nf} found, {nm} missing, {ne} empty, {nc} corrupted" - tqdm(None, desc=prefix + d, total=n, initial=n) # display cache results - assert nf > 0 or not augment, f'{prefix}No labels in {cache_path}. Can not train without labels. See {help_url}' - - # Read cache - cache.pop('hash') # remove hash - cache.pop('version') # remove version - labels, shapes, self.segments = zip(*cache.values()) - self.labels = list(labels) - self.shapes = np.array(shapes, dtype=np.float64) - self.img_files = list(cache.keys()) # update - self.label_files = img2label_paths(cache.keys()) # update - if single_cls: - for x in self.labels: - x[:, 0] = 0 - - n = len(shapes) # number of images - bi = np.floor(np.arange(n) / batch_size).astype(np.int) # batch index - nb = bi[-1] + 1 # number of batches - self.batch = bi # batch index of image - self.n = n - self.indices = range(n) - - # Rectangular Training - if self.rect: - # Sort by aspect ratio - s = self.shapes # wh - ar = s[:, 1] / s[:, 0] # aspect ratio - irect = ar.argsort() - self.img_files = [self.img_files[i] for i in irect] - self.label_files = [self.label_files[i] for i in irect] - self.labels = [self.labels[i] for i in irect] - self.shapes = s[irect] # wh - ar = ar[irect] - - # Set training image shapes - shapes = [[1, 1]] * nb - for i in range(nb): - ari = ar[bi == i] - mini, maxi = ari.min(), ari.max() - if maxi < 1: - shapes[i] = [maxi, 1] - elif mini > 1: - shapes[i] = [1, 1 / mini] - - self.batch_shapes = np.ceil(np.array(shapes) * img_size / stride + pad).astype(np.int) * stride - - # Cache images into memory for faster training (WARNING: large datasets may exceed system RAM) - self.imgs = [None] * n - if cache_images: - if cache_images == 'disk': - self.im_cache_dir = Path(Path(self.img_files[0]).parent.as_posix() + '_npy') - self.img_npy = [self.im_cache_dir / Path(f).with_suffix('.npy').name for f in self.img_files] - self.im_cache_dir.mkdir(parents=True, exist_ok=True) - gb = 0 # Gigabytes of cached images - self.img_hw0, self.img_hw = [None] * n, [None] * n - results = ThreadPool(8).imap(lambda x: load_image(*x), zip(repeat(self), range(n))) - pbar = tqdm(enumerate(results), total=n) - for i, x in pbar: - if cache_images == 'disk': - if not self.img_npy[i].exists(): - np.save(self.img_npy[i].as_posix(), x[0]) - gb += self.img_npy[i].stat().st_size - else: - self.imgs[i], self.img_hw0[i], self.img_hw[i] = x - gb += self.imgs[i].nbytes - pbar.desc = f'{prefix}Caching images ({gb / 1E9:.1f}GB)' - pbar.close() - - 
def cache_labels(self, path=Path('./labels.cache'), prefix=''): - # Cache dataset labels, check images and read shapes - x = {} # dict - nm, nf, ne, nc = 0, 0, 0, 0 # number missing, found, empty, duplicate - pbar = tqdm(zip(self.img_files, self.label_files), desc='Scanning images', total=len(self.img_files)) - for i, (im_file, lb_file) in enumerate(pbar): - try: - # verify images - im = Image.open(im_file) - im.verify() # PIL verify - shape = exif_size(im) # image size - segments = [] # instance segments - assert (shape[0] > 9) & (shape[1] > 9), f'image size {shape} <10 pixels' - assert im.format.lower() in img_formats, f'invalid image format {im.format}' - - # verify labels - if os.path.isfile(lb_file): - nf += 1 # label found - with open(lb_file, 'r') as f: - l = [x.split() for x in f.read().strip().splitlines()] - if any([len(x) > 8 for x in l]): # is segment - classes = np.array([x[0] for x in l], dtype=np.float32) - segments = [np.array(x[1:], dtype=np.float32).reshape(-1, 2) for x in l] # (cls, xy1...) - l = np.concatenate((classes.reshape(-1, 1), segments2boxes(segments)), 1) # (cls, xywh) - l = np.array(l, dtype=np.float32) - if len(l): - assert l.shape[1] == 5, 'labels require 5 columns each' - assert (l >= 0).all(), 'negative labels' - assert (l[:, 1:] <= 1).all(), 'non-normalized or out of bounds coordinate labels' - assert np.unique(l, axis=0).shape[0] == l.shape[0], 'duplicate labels' - else: - ne += 1 # label empty - l = np.zeros((0, 5), dtype=np.float32) - else: - nm += 1 # label missing - l = np.zeros((0, 5), dtype=np.float32) - x[im_file] = [l, shape, segments] - except Exception as e: - nc += 1 - print(f'{prefix}WARNING: Ignoring corrupted image and/or label {im_file}: {e}') - - pbar.desc = f"{prefix}Scanning '{path.parent / path.stem}' images and labels... " \ - f"{nf} found, {nm} missing, {ne} empty, {nc} corrupted" - pbar.close() - - if nf == 0: - print(f'{prefix}WARNING: No labels found in {path}. 
See {help_url}') - - x['hash'] = get_hash(self.label_files + self.img_files) - x['results'] = nf, nm, ne, nc, i + 1 - x['version'] = 0.1 # cache version - torch.save(x, path) # save for next time - logging.info(f'{prefix}New cache created: {path}') - return x - - def __len__(self): - return len(self.img_files) - - # def __iter__(self): - # self.count = -1 - # print('ran dataset iter') - # #self.shuffled_vector = np.random.permutation(self.nF) if self.augment else np.arange(self.nF) - # return self - - def __getitem__(self, index): - index = self.indices[index] # linear, shuffled, or image_weights - - hyp = self.hyp - mosaic = self.mosaic and random.random() < hyp['mosaic'] - if mosaic: - # Load mosaic - if random.random() < 0.8: - img, labels = load_mosaic(self, index) - else: - img, labels = load_mosaic9(self, index) - shapes = None - - # MixUp https://arxiv.org/pdf/1710.09412.pdf - if random.random() < hyp['mixup']: - if random.random() < 0.8: - img2, labels2 = load_mosaic(self, random.randint(0, len(self.labels) - 1)) - else: - img2, labels2 = load_mosaic9(self, random.randint(0, len(self.labels) - 1)) - r = np.random.beta(8.0, 8.0) # mixup ratio, alpha=beta=8.0 - img = (img * r + img2 * (1 - r)).astype(np.uint8) - labels = np.concatenate((labels, labels2), 0) - - else: - # Load image - img, (h0, w0), (h, w) = load_image(self, index) - - # Letterbox - shape = self.batch_shapes[self.batch[index]] if self.rect else self.img_size # final letterboxed shape - img, ratio, pad = letterbox(img, shape, auto=False, scaleup=self.augment) - shapes = (h0, w0), ((h / h0, w / w0), pad) # for COCO mAP rescaling - - labels = self.labels[index].copy() - if labels.size: # normalized xywh to pixel xyxy format - labels[:, 1:] = xywhn2xyxy(labels[:, 1:], ratio[0] * w, ratio[1] * h, padw=pad[0], padh=pad[1]) - - if self.augment: - # Augment imagespace - if not mosaic: - img, labels = random_perspective(img, labels, - degrees=hyp['degrees'], - translate=hyp['translate'], - scale=hyp['scale'], - shear=hyp['shear'], - perspective=hyp['perspective']) - - - #img, labels = self.albumentations(img, labels) - - # Augment colorspace - augment_hsv(img, hgain=hyp['hsv_h'], sgain=hyp['hsv_s'], vgain=hyp['hsv_v']) - - # Apply cutouts - # if random.random() < 0.9: - # labels = cutout(img, labels) - - if random.random() < hyp['paste_in']: - sample_labels, sample_images, sample_masks = [], [], [] - while len(sample_labels) < 30: - sample_labels_, sample_images_, sample_masks_ = load_samples(self, random.randint(0, len(self.labels) - 1)) - sample_labels += sample_labels_ - sample_images += sample_images_ - sample_masks += sample_masks_ - #print(len(sample_labels)) - if len(sample_labels) == 0: - break - labels = pastein(img, labels, sample_labels, sample_images, sample_masks) - - nL = len(labels) # number of labels - if nL: - labels[:, 1:5] = xyxy2xywh(labels[:, 1:5]) # convert xyxy to xywh - labels[:, [2, 4]] /= img.shape[0] # normalized height 0-1 - labels[:, [1, 3]] /= img.shape[1] # normalized width 0-1 - - if self.augment: - # flip up-down - if random.random() < hyp['flipud']: - img = np.flipud(img) - if nL: - labels[:, 2] = 1 - labels[:, 2] - - # flip left-right - if random.random() < hyp['fliplr']: - img = np.fliplr(img) - if nL: - labels[:, 1] = 1 - labels[:, 1] - - labels_out = torch.zeros((nL, 6)) - if nL: - labels_out[:, 1:] = torch.from_numpy(labels) - - # Convert - img = img[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416 - img = np.ascontiguousarray(img) - - return torch.from_numpy(img), labels_out, 
self.img_files[index], shapes - - @staticmethod - def collate_fn(batch): - img, label, path, shapes = zip(*batch) # transposed - for i, l in enumerate(label): - l[:, 0] = i # add target image index for build_targets() - return torch.stack(img, 0), torch.cat(label, 0), path, shapes - - @staticmethod - def collate_fn4(batch): - img, label, path, shapes = zip(*batch) # transposed - n = len(shapes) // 4 - img4, label4, path4, shapes4 = [], [], path[:n], shapes[:n] - - ho = torch.tensor([[0., 0, 0, 1, 0, 0]]) - wo = torch.tensor([[0., 0, 1, 0, 0, 0]]) - s = torch.tensor([[1, 1, .5, .5, .5, .5]]) # scale - for i in range(n): # zidane torch.zeros(16,3,720,1280) # BCHW - i *= 4 - if random.random() < 0.5: - im = F.interpolate(img[i].unsqueeze(0).float(), scale_factor=2., mode='bilinear', align_corners=False)[ - 0].type(img[i].type()) - l = label[i] - else: - im = torch.cat((torch.cat((img[i], img[i + 1]), 1), torch.cat((img[i + 2], img[i + 3]), 1)), 2) - l = torch.cat((label[i], label[i + 1] + ho, label[i + 2] + wo, label[i + 3] + ho + wo), 0) * s - img4.append(im) - label4.append(l) - - for i, l in enumerate(label4): - l[:, 0] = i # add target image index for build_targets() - - return torch.stack(img4, 0), torch.cat(label4, 0), path4, shapes4 - - -# Ancillary functions -------------------------------------------------------------------------------------------------- -def load_image(self, index): - # loads 1 image from dataset, returns img, original hw, resized hw - img = self.imgs[index] - if img is None: # not cached - path = self.img_files[index] - img = cv2.imread(path) # BGR - assert img is not None, 'Image Not Found ' + path - h0, w0 = img.shape[:2] # orig hw - r = self.img_size / max(h0, w0) # resize image to img_size - if r != 1: # always resize down, only resize up if training with augmentation - interp = cv2.INTER_AREA if r < 1 and not self.augment else cv2.INTER_LINEAR - img = cv2.resize(img, (int(w0 * r), int(h0 * r)), interpolation=interp) - return img, (h0, w0), img.shape[:2] # img, hw_original, hw_resized - else: - return self.imgs[index], self.img_hw0[index], self.img_hw[index] # img, hw_original, hw_resized - - -def augment_hsv(img, hgain=0.5, sgain=0.5, vgain=0.5): - r = np.random.uniform(-1, 1, 3) * [hgain, sgain, vgain] + 1 # random gains - hue, sat, val = cv2.split(cv2.cvtColor(img, cv2.COLOR_BGR2HSV)) - dtype = img.dtype # uint8 - - x = np.arange(0, 256, dtype=np.int16) - lut_hue = ((x * r[0]) % 180).astype(dtype) - lut_sat = np.clip(x * r[1], 0, 255).astype(dtype) - lut_val = np.clip(x * r[2], 0, 255).astype(dtype) - - img_hsv = cv2.merge((cv2.LUT(hue, lut_hue), cv2.LUT(sat, lut_sat), cv2.LUT(val, lut_val))).astype(dtype) - cv2.cvtColor(img_hsv, cv2.COLOR_HSV2BGR, dst=img) # no return needed - - -def hist_equalize(img, clahe=True, bgr=False): - # Equalize histogram on BGR image 'img' with img.shape(n,m,3) and range 0-255 - yuv = cv2.cvtColor(img, cv2.COLOR_BGR2YUV if bgr else cv2.COLOR_RGB2YUV) - if clahe: - c = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)) - yuv[:, :, 0] = c.apply(yuv[:, :, 0]) - else: - yuv[:, :, 0] = cv2.equalizeHist(yuv[:, :, 0]) # equalize Y channel histogram - return cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR if bgr else cv2.COLOR_YUV2RGB) # convert YUV image to RGB - - -def load_mosaic(self, index): - # loads images in a 4-mosaic - - labels4, segments4 = [], [] - s = self.img_size - yc, xc = [int(random.uniform(-x, 2 * s + x)) for x in self.mosaic_border] # mosaic center x, y - indices = [index] + random.choices(self.indices, k=3) # 3 additional image 
indices - for i, index in enumerate(indices): - # Load image - img, _, (h, w) = load_image(self, index) - - # place img in img4 - if i == 0: # top left - img4 = np.full((s * 2, s * 2, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles - x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc # xmin, ymin, xmax, ymax (large image) - x1b, y1b, x2b, y2b = w - (x2a - x1a), h - (y2a - y1a), w, h # xmin, ymin, xmax, ymax (small image) - elif i == 1: # top right - x1a, y1a, x2a, y2a = xc, max(yc - h, 0), min(xc + w, s * 2), yc - x1b, y1b, x2b, y2b = 0, h - (y2a - y1a), min(w, x2a - x1a), h - elif i == 2: # bottom left - x1a, y1a, x2a, y2a = max(xc - w, 0), yc, xc, min(s * 2, yc + h) - x1b, y1b, x2b, y2b = w - (x2a - x1a), 0, w, min(y2a - y1a, h) - elif i == 3: # bottom right - x1a, y1a, x2a, y2a = xc, yc, min(xc + w, s * 2), min(s * 2, yc + h) - x1b, y1b, x2b, y2b = 0, 0, min(w, x2a - x1a), min(y2a - y1a, h) - - img4[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax] - padw = x1a - x1b - padh = y1a - y1b - - # Labels - labels, segments = self.labels[index].copy(), self.segments[index].copy() - if labels.size: - labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padw, padh) # normalized xywh to pixel xyxy format - segments = [xyn2xy(x, w, h, padw, padh) for x in segments] - labels4.append(labels) - segments4.extend(segments) - - # Concat/clip labels - labels4 = np.concatenate(labels4, 0) - for x in (labels4[:, 1:], *segments4): - np.clip(x, 0, 2 * s, out=x) # clip when using random_perspective() - # img4, labels4 = replicate(img4, labels4) # replicate - - # Augment - #img4, labels4, segments4 = remove_background(img4, labels4, segments4) - #sample_segments(img4, labels4, segments4, probability=self.hyp['copy_paste']) - img4, labels4, segments4 = copy_paste(img4, labels4, segments4, probability=self.hyp['copy_paste']) - img4, labels4 = random_perspective(img4, labels4, segments4, - degrees=self.hyp['degrees'], - translate=self.hyp['translate'], - scale=self.hyp['scale'], - shear=self.hyp['shear'], - perspective=self.hyp['perspective'], - border=self.mosaic_border) # border to remove - - return img4, labels4 - - -def load_mosaic9(self, index): - # loads images in a 9-mosaic - - labels9, segments9 = [], [] - s = self.img_size - indices = [index] + random.choices(self.indices, k=8) # 8 additional image indices - for i, index in enumerate(indices): - # Load image - img, _, (h, w) = load_image(self, index) - - # place img in img9 - if i == 0: # center - img9 = np.full((s * 3, s * 3, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles - h0, w0 = h, w - c = s, s, s + w, s + h # xmin, ymin, xmax, ymax (base) coordinates - elif i == 1: # top - c = s, s - h, s + w, s - elif i == 2: # top right - c = s + wp, s - h, s + wp + w, s - elif i == 3: # right - c = s + w0, s, s + w0 + w, s + h - elif i == 4: # bottom right - c = s + w0, s + hp, s + w0 + w, s + hp + h - elif i == 5: # bottom - c = s + w0 - w, s + h0, s + w0, s + h0 + h - elif i == 6: # bottom left - c = s + w0 - wp - w, s + h0, s + w0 - wp, s + h0 + h - elif i == 7: # left - c = s - w, s + h0 - h, s, s + h0 - elif i == 8: # top left - c = s - w, s + h0 - hp - h, s, s + h0 - hp - - padx, pady = c[:2] - x1, y1, x2, y2 = [max(x, 0) for x in c] # allocate coords - - # Labels - labels, segments = self.labels[index].copy(), self.segments[index].copy() - if labels.size: - labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padx, pady) # normalized xywh to pixel xyxy format - segments = [xyn2xy(x, w, h, padx, pady) for x 
in segments] - labels9.append(labels) - segments9.extend(segments) - - # Image - img9[y1:y2, x1:x2] = img[y1 - pady:, x1 - padx:] # img9[ymin:ymax, xmin:xmax] - hp, wp = h, w # height, width previous - - # Offset - yc, xc = [int(random.uniform(0, s)) for _ in self.mosaic_border] # mosaic center x, y - img9 = img9[yc:yc + 2 * s, xc:xc + 2 * s] - - # Concat/clip labels - labels9 = np.concatenate(labels9, 0) - labels9[:, [1, 3]] -= xc - labels9[:, [2, 4]] -= yc - c = np.array([xc, yc]) # centers - segments9 = [x - c for x in segments9] - - for x in (labels9[:, 1:], *segments9): - np.clip(x, 0, 2 * s, out=x) # clip when using random_perspective() - # img9, labels9 = replicate(img9, labels9) # replicate - - # Augment - #img9, labels9, segments9 = remove_background(img9, labels9, segments9) - img9, labels9, segments9 = copy_paste(img9, labels9, segments9, probability=self.hyp['copy_paste']) - img9, labels9 = random_perspective(img9, labels9, segments9, - degrees=self.hyp['degrees'], - translate=self.hyp['translate'], - scale=self.hyp['scale'], - shear=self.hyp['shear'], - perspective=self.hyp['perspective'], - border=self.mosaic_border) # border to remove - - return img9, labels9 - - -def load_samples(self, index): - # loads images in a 4-mosaic - - labels4, segments4 = [], [] - s = self.img_size - yc, xc = [int(random.uniform(-x, 2 * s + x)) for x in self.mosaic_border] # mosaic center x, y - indices = [index] + random.choices(self.indices, k=3) # 3 additional image indices - for i, index in enumerate(indices): - # Load image - img, _, (h, w) = load_image(self, index) - - # place img in img4 - if i == 0: # top left - img4 = np.full((s * 2, s * 2, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles - x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc # xmin, ymin, xmax, ymax (large image) - x1b, y1b, x2b, y2b = w - (x2a - x1a), h - (y2a - y1a), w, h # xmin, ymin, xmax, ymax (small image) - elif i == 1: # top right - x1a, y1a, x2a, y2a = xc, max(yc - h, 0), min(xc + w, s * 2), yc - x1b, y1b, x2b, y2b = 0, h - (y2a - y1a), min(w, x2a - x1a), h - elif i == 2: # bottom left - x1a, y1a, x2a, y2a = max(xc - w, 0), yc, xc, min(s * 2, yc + h) - x1b, y1b, x2b, y2b = w - (x2a - x1a), 0, w, min(y2a - y1a, h) - elif i == 3: # bottom right - x1a, y1a, x2a, y2a = xc, yc, min(xc + w, s * 2), min(s * 2, yc + h) - x1b, y1b, x2b, y2b = 0, 0, min(w, x2a - x1a), min(y2a - y1a, h) - - img4[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax] - padw = x1a - x1b - padh = y1a - y1b - - # Labels - labels, segments = self.labels[index].copy(), self.segments[index].copy() - if labels.size: - labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padw, padh) # normalized xywh to pixel xyxy format - segments = [xyn2xy(x, w, h, padw, padh) for x in segments] - labels4.append(labels) - segments4.extend(segments) - - # Concat/clip labels - labels4 = np.concatenate(labels4, 0) - for x in (labels4[:, 1:], *segments4): - np.clip(x, 0, 2 * s, out=x) # clip when using random_perspective() - # img4, labels4 = replicate(img4, labels4) # replicate - - # Augment - #img4, labels4, segments4 = remove_background(img4, labels4, segments4) - sample_labels, sample_images, sample_masks = sample_segments(img4, labels4, segments4, probability=0.5) - - return sample_labels, sample_images, sample_masks - - -def copy_paste(img, labels, segments, probability=0.5): - # Implement Copy-Paste augmentation https://arxiv.org/abs/2012.07177, labels as nx5 np.array(cls, xyxy) - n = len(segments) - if probability and n: - h, w, 
c = img.shape # height, width, channels - im_new = np.zeros(img.shape, np.uint8) - for j in random.sample(range(n), k=round(probability * n)): - l, s = labels[j], segments[j] - box = w - l[3], l[2], w - l[1], l[4] - ioa = bbox_ioa(box, labels[:, 1:5]) # intersection over area - if (ioa < 0.30).all(): # allow 30% obscuration of existing labels - labels = np.concatenate((labels, [[l[0], *box]]), 0) - segments.append(np.concatenate((w - s[:, 0:1], s[:, 1:2]), 1)) - cv2.drawContours(im_new, [segments[j].astype(np.int32)], -1, (255, 255, 255), cv2.FILLED) - - result = cv2.bitwise_and(src1=img, src2=im_new) - result = cv2.flip(result, 1) # augment segments (flip left-right) - i = result > 0 # pixels to replace - # i[:, :] = result.max(2).reshape(h, w, 1) # act over ch - img[i] = result[i] # cv2.imwrite('debug.jpg', img) # debug - - return img, labels, segments - - -def remove_background(img, labels, segments): - # Implement Copy-Paste augmentation https://arxiv.org/abs/2012.07177, labels as nx5 np.array(cls, xyxy) - n = len(segments) - h, w, c = img.shape # height, width, channels - im_new = np.zeros(img.shape, np.uint8) - img_new = np.ones(img.shape, np.uint8) * 114 - for j in range(n): - cv2.drawContours(im_new, [segments[j].astype(np.int32)], -1, (255, 255, 255), cv2.FILLED) - - result = cv2.bitwise_and(src1=img, src2=im_new) - - i = result > 0 # pixels to replace - img_new[i] = result[i] # cv2.imwrite('debug.jpg', img) # debug - - return img_new, labels, segments - - -def sample_segments(img, labels, segments, probability=0.5): - # Implement Copy-Paste augmentation https://arxiv.org/abs/2012.07177, labels as nx5 np.array(cls, xyxy) - n = len(segments) - sample_labels = [] - sample_images = [] - sample_masks = [] - if probability and n: - h, w, c = img.shape # height, width, channels - for j in random.sample(range(n), k=round(probability * n)): - l, s = labels[j], segments[j] - box = l[1].astype(int).clip(0,w-1), l[2].astype(int).clip(0,h-1), l[3].astype(int).clip(0,w-1), l[4].astype(int).clip(0,h-1) - - #print(box) - if (box[2] <= box[0]) or (box[3] <= box[1]): - continue - - sample_labels.append(l[0]) - - mask = np.zeros(img.shape, np.uint8) - - cv2.drawContours(mask, [segments[j].astype(np.int32)], -1, (255, 255, 255), cv2.FILLED) - sample_masks.append(mask[box[1]:box[3],box[0]:box[2],:]) - - result = cv2.bitwise_and(src1=img, src2=mask) - i = result > 0 # pixels to replace - mask[i] = result[i] # cv2.imwrite('debug.jpg', img) # debug - #print(box) - sample_images.append(mask[box[1]:box[3],box[0]:box[2],:]) - - return sample_labels, sample_images, sample_masks - - -def replicate(img, labels): - # Replicate labels - h, w = img.shape[:2] - boxes = labels[:, 1:].astype(int) - x1, y1, x2, y2 = boxes.T - s = ((x2 - x1) + (y2 - y1)) / 2 # side length (pixels) - for i in s.argsort()[:round(s.size * 0.5)]: # smallest indices - x1b, y1b, x2b, y2b = boxes[i] - bh, bw = y2b - y1b, x2b - x1b - yc, xc = int(random.uniform(0, h - bh)), int(random.uniform(0, w - bw)) # offset x, y - x1a, y1a, x2a, y2a = [xc, yc, xc + bw, yc + bh] - img[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax] - labels = np.append(labels, [[labels[i, 0], x1a, y1a, x2a, y2a]], axis=0) - - return img, labels - - -def letterbox(img, new_shape=(640, 640), color=(114, 114, 114), auto=True, scaleFill=False, scaleup=True, stride=32): - # Resize and pad image while meeting stride-multiple constraints - shape = img.shape[:2] # current shape [height, width] - if isinstance(new_shape, int): - new_shape = (new_shape, 
new_shape) - - # Scale ratio (new / old) - r = min(new_shape[0] / shape[0], new_shape[1] / shape[1]) - if not scaleup: # only scale down, do not scale up (for better test mAP) - r = min(r, 1.0) - - # Compute padding - ratio = r, r # width, height ratios - new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r)) - dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1] # wh padding - if auto: # minimum rectangle - dw, dh = np.mod(dw, stride), np.mod(dh, stride) # wh padding - elif scaleFill: # stretch - dw, dh = 0.0, 0.0 - new_unpad = (new_shape[1], new_shape[0]) - ratio = new_shape[1] / shape[1], new_shape[0] / shape[0] # width, height ratios - - dw /= 2 # divide padding into 2 sides - dh /= 2 - - if shape[::-1] != new_unpad: # resize - img = cv2.resize(img, new_unpad, interpolation=cv2.INTER_LINEAR) - top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1)) - left, right = int(round(dw - 0.1)), int(round(dw + 0.1)) - img = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color) # add border - return img, ratio, (dw, dh) - - -def random_perspective(img, targets=(), segments=(), degrees=10, translate=.1, scale=.1, shear=10, perspective=0.0, - border=(0, 0)): - # torchvision.transforms.RandomAffine(degrees=(-10, 10), translate=(.1, .1), scale=(.9, 1.1), shear=(-10, 10)) - # targets = [cls, xyxy] - - height = img.shape[0] + border[0] * 2 # shape(h,w,c) - width = img.shape[1] + border[1] * 2 - - # Center - C = np.eye(3) - C[0, 2] = -img.shape[1] / 2 # x translation (pixels) - C[1, 2] = -img.shape[0] / 2 # y translation (pixels) - - # Perspective - P = np.eye(3) - P[2, 0] = random.uniform(-perspective, perspective) # x perspective (about y) - P[2, 1] = random.uniform(-perspective, perspective) # y perspective (about x) - - # Rotation and Scale - R = np.eye(3) - a = random.uniform(-degrees, degrees) - # a += random.choice([-180, -90, 0, 90]) # add 90deg rotations to small rotations - s = random.uniform(1 - scale, 1.1 + scale) - # s = 2 ** random.uniform(-scale, scale) - R[:2] = cv2.getRotationMatrix2D(angle=a, center=(0, 0), scale=s) - - # Shear - S = np.eye(3) - S[0, 1] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # x shear (deg) - S[1, 0] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # y shear (deg) - - # Translation - T = np.eye(3) - T[0, 2] = random.uniform(0.5 - translate, 0.5 + translate) * width # x translation (pixels) - T[1, 2] = random.uniform(0.5 - translate, 0.5 + translate) * height # y translation (pixels) - - # Combined rotation matrix - M = T @ S @ R @ P @ C # order of operations (right to left) is IMPORTANT - if (border[0] != 0) or (border[1] != 0) or (M != np.eye(3)).any(): # image changed - if perspective: - img = cv2.warpPerspective(img, M, dsize=(width, height), borderValue=(114, 114, 114)) - else: # affine - img = cv2.warpAffine(img, M[:2], dsize=(width, height), borderValue=(114, 114, 114)) - - # Visualize - # import matplotlib.pyplot as plt - # ax = plt.subplots(1, 2, figsize=(12, 6))[1].ravel() - # ax[0].imshow(img[:, :, ::-1]) # base - # ax[1].imshow(img2[:, :, ::-1]) # warped - - # Transform label coordinates - n = len(targets) - if n: - use_segments = any(x.any() for x in segments) - new = np.zeros((n, 4)) - if use_segments: # warp segments - segments = resample_segments(segments) # upsample - for i, segment in enumerate(segments): - xy = np.ones((len(segment), 3)) - xy[:, :2] = segment - xy = xy @ M.T # transform - xy = xy[:, :2] / xy[:, 2:3] if perspective else xy[:, :2] # perspective rescale or 
affine - - # clip - new[i] = segment2box(xy, width, height) - - else: # warp boxes - xy = np.ones((n * 4, 3)) - xy[:, :2] = targets[:, [1, 2, 3, 4, 1, 4, 3, 2]].reshape(n * 4, 2) # x1y1, x2y2, x1y2, x2y1 - xy = xy @ M.T # transform - xy = (xy[:, :2] / xy[:, 2:3] if perspective else xy[:, :2]).reshape(n, 8) # perspective rescale or affine - - # create new boxes - x = xy[:, [0, 2, 4, 6]] - y = xy[:, [1, 3, 5, 7]] - new = np.concatenate((x.min(1), y.min(1), x.max(1), y.max(1))).reshape(4, n).T - - # clip - new[:, [0, 2]] = new[:, [0, 2]].clip(0, width) - new[:, [1, 3]] = new[:, [1, 3]].clip(0, height) - - # filter candidates - i = box_candidates(box1=targets[:, 1:5].T * s, box2=new.T, area_thr=0.01 if use_segments else 0.10) - targets = targets[i] - targets[:, 1:5] = new[i] - - return img, targets - - -def box_candidates(box1, box2, wh_thr=2, ar_thr=20, area_thr=0.1, eps=1e-16): # box1(4,n), box2(4,n) - # Compute candidate boxes: box1 before augment, box2 after augment, wh_thr (pixels), aspect_ratio_thr, area_ratio - w1, h1 = box1[2] - box1[0], box1[3] - box1[1] - w2, h2 = box2[2] - box2[0], box2[3] - box2[1] - ar = np.maximum(w2 / (h2 + eps), h2 / (w2 + eps)) # aspect ratio - return (w2 > wh_thr) & (h2 > wh_thr) & (w2 * h2 / (w1 * h1 + eps) > area_thr) & (ar < ar_thr) # candidates - - -def bbox_ioa(box1, box2): - # Returns the intersection over box2 area given box1, box2. box1 is 4, box2 is nx4. boxes are x1y1x2y2 - box2 = box2.transpose() - - # Get the coordinates of bounding boxes - b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3] - b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3] - - # Intersection area - inter_area = (np.minimum(b1_x2, b2_x2) - np.maximum(b1_x1, b2_x1)).clip(0) * \ - (np.minimum(b1_y2, b2_y2) - np.maximum(b1_y1, b2_y1)).clip(0) - - # box2 area - box2_area = (b2_x2 - b2_x1) * (b2_y2 - b2_y1) + 1e-16 - - # Intersection over box2 area - return inter_area / box2_area - - -def cutout(image, labels): - # Applies image cutout augmentation https://arxiv.org/abs/1708.04552 - h, w = image.shape[:2] - - # create random masks - scales = [0.5] * 1 + [0.25] * 2 + [0.125] * 4 + [0.0625] * 8 + [0.03125] * 16 # image size fraction - for s in scales: - mask_h = random.randint(1, int(h * s)) - mask_w = random.randint(1, int(w * s)) - - # box - xmin = max(0, random.randint(0, w) - mask_w // 2) - ymin = max(0, random.randint(0, h) - mask_h // 2) - xmax = min(w, xmin + mask_w) - ymax = min(h, ymin + mask_h) - - # apply random color mask - image[ymin:ymax, xmin:xmax] = [random.randint(64, 191) for _ in range(3)] - - # return unobscured labels - if len(labels) and s > 0.03: - box = np.array([xmin, ymin, xmax, ymax], dtype=np.float32) - ioa = bbox_ioa(box, labels[:, 1:5]) # intersection over area - labels = labels[ioa < 0.60] # remove >60% obscured labels - - return labels - - -def pastein(image, labels, sample_labels, sample_images, sample_masks): - # Applies image cutout augmentation https://arxiv.org/abs/1708.04552 - h, w = image.shape[:2] - - # create random masks - scales = [0.75] * 2 + [0.5] * 4 + [0.25] * 4 + [0.125] * 4 + [0.0625] * 6 # image size fraction - for s in scales: - if random.random() < 0.2: - continue - mask_h = random.randint(1, int(h * s)) - mask_w = random.randint(1, int(w * s)) - - # box - xmin = max(0, random.randint(0, w) - mask_w // 2) - ymin = max(0, random.randint(0, h) - mask_h // 2) - xmax = min(w, xmin + mask_w) - ymax = min(h, ymin + mask_h) - - box = np.array([xmin, ymin, xmax, ymax], dtype=np.float32) - if len(labels): - ioa = 
bbox_ioa(box, labels[:, 1:5]) # intersection over area - else: - ioa = np.zeros(1) - - if (ioa < 0.30).all() and len(sample_labels) and (xmax > xmin+20) and (ymax > ymin+20): # allow 30% obscuration of existing labels - sel_ind = random.randint(0, len(sample_labels)-1) - #print(len(sample_labels)) - #print(sel_ind) - #print((xmax-xmin, ymax-ymin)) - #print(image[ymin:ymax, xmin:xmax].shape) - #print([[sample_labels[sel_ind], *box]]) - #print(labels.shape) - hs, ws, cs = sample_images[sel_ind].shape - r_scale = min((ymax-ymin)/hs, (xmax-xmin)/ws) - r_w = int(ws*r_scale) - r_h = int(hs*r_scale) - - if (r_w > 10) and (r_h > 10): - r_mask = cv2.resize(sample_masks[sel_ind], (r_w, r_h)) - r_image = cv2.resize(sample_images[sel_ind], (r_w, r_h)) - temp_crop = image[ymin:ymin+r_h, xmin:xmin+r_w] - m_ind = r_mask > 0 - if m_ind.astype(np.int).sum() > 60: - temp_crop[m_ind] = r_image[m_ind] - #print(sample_labels[sel_ind]) - #print(sample_images[sel_ind].shape) - #print(temp_crop.shape) - box = np.array([xmin, ymin, xmin+r_w, ymin+r_h], dtype=np.float32) - if len(labels): - labels = np.concatenate((labels, [[sample_labels[sel_ind], *box]]), 0) - else: - labels = np.array([[sample_labels[sel_ind], *box]]) - - image[ymin:ymin+r_h, xmin:xmin+r_w] = temp_crop - - return labels - -class Albumentations: - # YOLOv5 Albumentations class (optional, only used if package is installed) - def __init__(self): - self.transform = None - import albumentations as A - - self.transform = A.Compose([ - A.CLAHE(p=0.01), - A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.2, p=0.01), - A.RandomGamma(gamma_limit=[80, 120], p=0.01), - A.Blur(p=0.01), - A.MedianBlur(p=0.01), - A.ToGray(p=0.01), - A.ImageCompression(quality_lower=75, p=0.01),], - bbox_params=A.BboxParams(format='pascal_voc', label_fields=['class_labels'])) - - #logging.info(colorstr('albumentations: ') + ', '.join(f'{x}' for x in self.transform.transforms if x.p)) - - def __call__(self, im, labels, p=1.0): - if self.transform and random.random() < p: - new = self.transform(image=im, bboxes=labels[:, 1:], class_labels=labels[:, 0]) # transformed - im, labels = new['image'], np.array([[c, *b] for c, b in zip(new['class_labels'], new['bboxes'])]) - return im, labels - - -def create_folder(path='./new'): - # Create folder - if os.path.exists(path): - shutil.rmtree(path) # delete output folder - os.makedirs(path) # make new output folder - - -def flatten_recursive(path='../coco'): - # Flatten a recursive directory by bringing all files to top level - new_path = Path(path + '_flat') - create_folder(new_path) - for file in tqdm(glob.glob(str(Path(path)) + '/**/*.*', recursive=True)): - shutil.copyfile(file, new_path / Path(file).name) - - -def extract_boxes(path='../coco/'): # from utils.datasets import *; extract_boxes('../coco128') - # Convert detection dataset into classification dataset, with one directory per class - - path = Path(path) # images dir - shutil.rmtree(path / 'classifier') if (path / 'classifier').is_dir() else None # remove existing - files = list(path.rglob('*.*')) - n = len(files) # number of files - for im_file in tqdm(files, total=n): - if im_file.suffix[1:] in img_formats: - # image - im = cv2.imread(str(im_file))[..., ::-1] # BGR to RGB - h, w = im.shape[:2] - - # labels - lb_file = Path(img2label_paths([str(im_file)])[0]) - if Path(lb_file).exists(): - with open(lb_file, 'r') as f: - lb = np.array([x.split() for x in f.read().strip().splitlines()], dtype=np.float32) # labels - - for j, x in enumerate(lb): - c = int(x[0]) # 
class - f = (path / 'classifier') / f'{c}' / f'{path.stem}_{im_file.stem}_{j}.jpg' # new filename - if not f.parent.is_dir(): - f.parent.mkdir(parents=True) - - b = x[1:] * [w, h, w, h] # box - # b[2:] = b[2:].max() # rectangle to square - b[2:] = b[2:] * 1.2 + 3 # pad - b = xywh2xyxy(b.reshape(-1, 4)).ravel().astype(np.int) - - b[[0, 2]] = np.clip(b[[0, 2]], 0, w) # clip boxes outside of image - b[[1, 3]] = np.clip(b[[1, 3]], 0, h) - assert cv2.imwrite(str(f), im[b[1]:b[3], b[0]:b[2]]), f'box failure in {f}' - - -def autosplit(path='../coco', weights=(0.9, 0.1, 0.0), annotated_only=False): - """ Autosplit a dataset into train/val/test splits and save path/autosplit_*.txt files - Usage: from utils.datasets import *; autosplit('../coco') - Arguments - path: Path to images directory - weights: Train, val, test weights (list) - annotated_only: Only use images with an annotated txt file - """ - path = Path(path) # images dir - files = sum([list(path.rglob(f"*.{img_ext}")) for img_ext in img_formats], []) # image files only - n = len(files) # number of files - indices = random.choices([0, 1, 2], weights=weights, k=n) # assign each image to a split - - txt = ['autosplit_train.txt', 'autosplit_val.txt', 'autosplit_test.txt'] # 3 txt files - [(path / x).unlink() for x in txt if (path / x).exists()] # remove existing - - print(f'Autosplitting images from {path}' + ', using *.txt labeled images only' * annotated_only) - for i, img in tqdm(zip(indices, files), total=n): - if not annotated_only or Path(img2label_paths([str(img)])[0]).exists(): # check label - with open(path / txt[i], 'a') as f: - f.write(str(img) + '\n') # add image to txt file - - -def load_segmentations(self, index): - key = '/work/handsomejw66/coco17/' + self.img_files[index] - #print(key) - # /work/handsomejw66/coco17/ - return self.segs[key] diff --git a/spaces/JunchuanYu/SegRS/segment_anything/utils/transforms.py b/spaces/JunchuanYu/SegRS/segment_anything/utils/transforms.py deleted file mode 100644 index 3ad346661f84b0647026e130a552c4b38b83e2ac..0000000000000000000000000000000000000000 --- a/spaces/JunchuanYu/SegRS/segment_anything/utils/transforms.py +++ /dev/null @@ -1,102 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch -from torch.nn import functional as F -from torchvision.transforms.functional import resize, to_pil_image # type: ignore - -from copy import deepcopy -from typing import Tuple - - -class ResizeLongestSide: - """ - Resizes images to longest side 'target_length', as well as provides - methods for resizing coordinates and boxes. Provides methods for - transforming both numpy array and batched torch tensors. - """ - - def __init__(self, target_length: int) -> None: - self.target_length = target_length - - def apply_image(self, image: np.ndarray) -> np.ndarray: - """ - Expects a numpy array with shape HxWxC in uint8 format. - """ - target_size = self.get_preprocess_shape(image.shape[0], image.shape[1], self.target_length) - return np.array(resize(to_pil_image(image), target_size)) - - def apply_coords(self, coords: np.ndarray, original_size: Tuple[int, ...]) -> np.ndarray: - """ - Expects a numpy array of length 2 in the final dimension. Requires the - original image size in (H, W) format. 
- """ - old_h, old_w = original_size - new_h, new_w = self.get_preprocess_shape( - original_size[0], original_size[1], self.target_length - ) - coords = deepcopy(coords).astype(float) - coords[..., 0] = coords[..., 0] * (new_w / old_w) - coords[..., 1] = coords[..., 1] * (new_h / old_h) - return coords - - def apply_boxes(self, boxes: np.ndarray, original_size: Tuple[int, ...]) -> np.ndarray: - """ - Expects a numpy array shape Bx4. Requires the original image size - in (H, W) format. - """ - boxes = self.apply_coords(boxes.reshape(-1, 2, 2), original_size) - return boxes.reshape(-1, 4) - - def apply_image_torch(self, image: torch.Tensor) -> torch.Tensor: - """ - Expects batched images with shape BxCxHxW and float format. This - transformation may not exactly match apply_image. apply_image is - the transformation expected by the model. - """ - # Expects an image in BCHW format. May not exactly match apply_image. - target_size = self.get_preprocess_shape(image.shape[0], image.shape[1], self.target_length) - return F.interpolate( - image, target_size, mode="bilinear", align_corners=False, antialias=True - ) - - def apply_coords_torch( - self, coords: torch.Tensor, original_size: Tuple[int, ...] - ) -> torch.Tensor: - """ - Expects a torch tensor with length 2 in the last dimension. Requires the - original image size in (H, W) format. - """ - old_h, old_w = original_size - new_h, new_w = self.get_preprocess_shape( - original_size[0], original_size[1], self.target_length - ) - coords = deepcopy(coords).to(torch.float) - coords[..., 0] = coords[..., 0] * (new_w / old_w) - coords[..., 1] = coords[..., 1] * (new_h / old_h) - return coords - - def apply_boxes_torch( - self, boxes: torch.Tensor, original_size: Tuple[int, ...] - ) -> torch.Tensor: - """ - Expects a torch tensor with shape Bx4. Requires the original image - size in (H, W) format. - """ - boxes = self.apply_coords_torch(boxes.reshape(-1, 2, 2), original_size) - return boxes.reshape(-1, 4) - - @staticmethod - def get_preprocess_shape(oldh: int, oldw: int, long_side_length: int) -> Tuple[int, int]: - """ - Compute the output size given input size and target long side length. - """ - scale = long_side_length * 1.0 / max(oldh, oldw) - newh, neww = oldh * scale, oldw * scale - neww = int(neww + 0.5) - newh = int(newh + 0.5) - return (newh, neww) diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/synthesizer_preprocess_embeds.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/synthesizer_preprocess_embeds.py deleted file mode 100644 index 7276626f5c870020ee5fda5168897dded0174dd8..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/synthesizer_preprocess_embeds.py +++ /dev/null @@ -1,26 +0,0 @@ -from synthesizer.preprocess import create_embeddings -from utils.argutils import print_args -from pathlib import Path -import argparse - - -if __name__ == "__main__": - print("This method is deprecaded and will not be longer supported, please use 'pre.py'") - parser = argparse.ArgumentParser( - description="Creates embeddings for the synthesizer from the LibriSpeech utterances.", - formatter_class=argparse.ArgumentDefaultsHelpFormatter - ) - parser.add_argument("synthesizer_root", type=Path, help=\ - "Path to the synthesizer training data that contains the audios and the train.txt file. 
" - "If you let everything as default, it should be /SV2TTS/synthesizer/.") - parser.add_argument("-e", "--encoder_model_fpath", type=Path, - default="encoder/saved_models/pretrained.pt", help=\ - "Path your trained encoder model.") - parser.add_argument("-n", "--n_processes", type=int, default=4, help= \ - "Number of parallel processes. An encoder is created for each, so you may need to lower " - "this value on GPUs with low memory. Set it to 1 if CUDA is unhappy.") - args = parser.parse_args() - - # Preprocess the dataset - print_args(args, parser) - create_embeddings(**vars(args)) diff --git a/spaces/Kevin676/Real-Time-Voice-Cloning/utils/profiler.py b/spaces/Kevin676/Real-Time-Voice-Cloning/utils/profiler.py deleted file mode 100644 index 17175b9e1b0eb17fdc015199e5194a5c1afb8a28..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/Real-Time-Voice-Cloning/utils/profiler.py +++ /dev/null @@ -1,45 +0,0 @@ -from time import perf_counter as timer -from collections import OrderedDict -import numpy as np - - -class Profiler: - def __init__(self, summarize_every=5, disabled=False): - self.last_tick = timer() - self.logs = OrderedDict() - self.summarize_every = summarize_every - self.disabled = disabled - - def tick(self, name): - if self.disabled: - return - - # Log the time needed to execute that function - if not name in self.logs: - self.logs[name] = [] - if len(self.logs[name]) >= self.summarize_every: - self.summarize() - self.purge_logs() - self.logs[name].append(timer() - self.last_tick) - - self.reset_timer() - - def purge_logs(self): - for name in self.logs: - self.logs[name].clear() - - def reset_timer(self): - self.last_tick = timer() - - def summarize(self): - n = max(map(len, self.logs.values())) - assert n == self.summarize_every - print("\nAverage execution time over %d steps:" % n) - - name_msgs = ["%s (%d/%d):" % (name, len(deltas), n) for name, deltas in self.logs.items()] - pad = max(map(len, name_msgs)) - for name_msg, deltas in zip(name_msgs, self.logs.values()): - print(" %s mean: %4.0fms std: %4.0fms" % - (name_msg.ljust(pad), np.mean(deltas) * 1000, np.std(deltas) * 1000)) - print("", flush=True) - \ No newline at end of file diff --git a/spaces/KevinQHLin/UniVTG/main/inference_demo.py b/spaces/KevinQHLin/UniVTG/main/inference_demo.py deleted file mode 100644 index 7659564d0598b249807620cec917374c2fa193f0..0000000000000000000000000000000000000000 --- a/spaces/KevinQHLin/UniVTG/main/inference_demo.py +++ /dev/null @@ -1,81 +0,0 @@ -import pdb -import pprint -from tqdm import tqdm, trange -import numpy as np -import os -from collections import OrderedDict, defaultdict -from utils.basic_utils import AverageMeter - -import torch -import torch.nn.functional as F -import torch.backends.cudnn as cudnn -from torch.utils.data import DataLoader - -from main.config import TestOptions, setup_model -from main.dataset import DatasetMR, start_end_collate_mr, prepare_batch_inputs_mr -from eval.eval import eval_submission -from eval.postprocessing import PostProcessorDETR -from utils.basic_utils import save_jsonl, save_json -from utils.temporal_nms import temporal_nms -from utils.span_utils import span_cxw_to_xx -from utils.basic_utils import load_jsonl, load_pickle, l2_normalize_np_array - -import logging -import importlib - -logger = logging.getLogger(__name__) -logging.basicConfig(format="%(asctime)s.%(msecs)03d:%(levelname)s:%(name)s - %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - level=logging.INFO) - -def load_model(): - logger.info("Setup config, data and model...") - 
opt = TestOptions().parse() - # pdb.set_trace() - cudnn.benchmark = True - cudnn.deterministic = False - - model, criterion, _, _ = setup_model(opt) - return model - -def load_data(save_dir): - vid = np.load(os.path.join(save_dir, 'vid.npz'))['features'].astype(np.float32) - txt = np.load(os.path.join(save_dir, 'txt.npz'))['features'].astype(np.float32) - - vid = torch.from_numpy(l2_normalize_np_array(vid)) - txt = torch.from_numpy(l2_normalize_np_array(txt)) - clip_len = 2 - ctx_l = vid.shape[0] - - timestamp = ( (torch.arange(0, ctx_l) + clip_len / 2) / ctx_l).unsqueeze(1).repeat(1, 2) - - if True: - tef_st = torch.arange(0, ctx_l, 1.0) / ctx_l - tef_ed = tef_st + 1.0 / ctx_l - tef = torch.stack([tef_st, tef_ed], dim=1) # (Lv, 2) - vid = torch.cat([vid, tef], dim=1) # (Lv, Dv+2) - - src_vid = vid.unsqueeze(0).cuda() - src_txt = txt.unsqueeze(0).cuda() - src_vid_mask = torch.ones(src_vid.shape[0], src_vid.shape[1]).cuda() - src_txt_mask = torch.ones(src_txt.shape[0], src_txt.shape[1]).cuda() - - return src_vid, src_txt, src_vid_mask, src_txt_mask, timestamp, ctx_l - -if __name__ == '__main__': - clip_len = 2 - save_dir = '/data/home/qinghonglin/univtg/demo/tmp' - - model = load_model() - src_vid, src_txt, src_vid_mask, src_txt_mask, timestamp, ctx_l = load_data(save_dir) - with torch.no_grad(): - output = model(src_vid=src_vid, src_txt=src_txt, src_vid_mask=src_vid_mask, src_txt_mask=src_txt_mask) - - pred_logits = output['pred_logits'][0].cpu() - pred_spans = output['pred_spans'][0].cpu() - pred_saliency = output['saliency_scores'].cpu() - - pdb.set_trace() - top1 = (pred_spans + timestamp)[torch.argmax(pred_logits)] * ctx_l * clip_len - print(top1) - print(pred_saliency.argmax()*clip_len) \ No newline at end of file diff --git a/spaces/Kreaols/ChuanhuChatGPT/modules/models.py b/spaces/Kreaols/ChuanhuChatGPT/modules/models.py deleted file mode 100644 index 25b18b1904910e183a997a763008403d960868d6..0000000000000000000000000000000000000000 --- a/spaces/Kreaols/ChuanhuChatGPT/modules/models.py +++ /dev/null @@ -1,625 +0,0 @@ -from __future__ import annotations -from typing import TYPE_CHECKING, List - -import logging -import json -import commentjson as cjson -import os -import sys -import requests -import urllib3 -import platform -import base64 -from io import BytesIO -from PIL import Image - -from tqdm import tqdm -import colorama -from duckduckgo_search import ddg -import asyncio -import aiohttp -from enum import Enum -import uuid - -from .presets import * -from .llama_func import * -from .utils import * -from . 
import shared -from .config import retrieve_proxy -from modules import config -from .base_model import BaseLLMModel, ModelType - - -class OpenAIClient(BaseLLMModel): - def __init__( - self, - model_name, - api_key, - system_prompt=INITIAL_SYSTEM_PROMPT, - temperature=1.0, - top_p=1.0, - ) -> None: - super().__init__( - model_name=model_name, - temperature=temperature, - top_p=top_p, - system_prompt=system_prompt, - ) - self.api_key = api_key - self.need_api_key = True - self._refresh_header() - - def get_answer_stream_iter(self): - response = self._get_response(stream=True) - if response is not None: - iter = self._decode_chat_response(response) - partial_text = "" - for i in iter: - partial_text += i - yield partial_text - else: - yield STANDARD_ERROR_MSG + GENERAL_ERROR_MSG - - def get_answer_at_once(self): - response = self._get_response() - response = json.loads(response.text) - content = response["choices"][0]["message"]["content"] - total_token_count = response["usage"]["total_tokens"] - return content, total_token_count - - def count_token(self, user_input): - input_token_count = count_token(construct_user(user_input)) - if self.system_prompt is not None and len(self.all_token_counts) == 0: - system_prompt_token_count = count_token( - construct_system(self.system_prompt) - ) - return input_token_count + system_prompt_token_count - return input_token_count - - def billing_info(self): - try: - curr_time = datetime.datetime.now() - last_day_of_month = get_last_day_of_month( - curr_time).strftime("%Y-%m-%d") - first_day_of_month = curr_time.replace(day=1).strftime("%Y-%m-%d") - usage_url = f"{shared.state.usage_api_url}?start_date={first_day_of_month}&end_date={last_day_of_month}" - try: - usage_data = self._get_billing_data(usage_url) - except Exception as e: - logging.error(f"获取API使用情况失败:" + str(e)) - return i18n("**获取API使用情况失败**") - rounded_usage = "{:.5f}".format(usage_data["total_usage"] / 100) - return i18n("**本月使用金额** ") + f"\u3000 ${rounded_usage}" - except requests.exceptions.ConnectTimeout: - status_text = ( - STANDARD_ERROR_MSG + CONNECTION_TIMEOUT_MSG + ERROR_RETRIEVE_MSG - ) - return status_text - except requests.exceptions.ReadTimeout: - status_text = STANDARD_ERROR_MSG + READ_TIMEOUT_MSG + ERROR_RETRIEVE_MSG - return status_text - except Exception as e: - import traceback - traceback.print_exc() - logging.error(i18n("获取API使用情况失败:") + str(e)) - return STANDARD_ERROR_MSG + ERROR_RETRIEVE_MSG - - def set_token_upper_limit(self, new_upper_limit): - pass - - @shared.state.switching_api_key # 在不开启多账号模式的时候,这个装饰器不会起作用 - def _get_response(self, stream=False): - openai_api_key = self.api_key - system_prompt = self.system_prompt - history = self.history - logging.debug(colorama.Fore.YELLOW + - f"{history}" + colorama.Fore.RESET) - headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {openai_api_key}", - } - - if system_prompt is not None: - history = [construct_system(system_prompt), *history] - - payload = { - "model": self.model_name, - "messages": history, - "temperature": self.temperature, - "top_p": self.top_p, - "n": self.n_choices, - "stream": stream, - "presence_penalty": self.presence_penalty, - "frequency_penalty": self.frequency_penalty, - } - - if self.max_generation_token is not None: - payload["max_tokens"] = self.max_generation_token - if self.stop_sequence is not None: - payload["stop"] = self.stop_sequence - if self.logit_bias is not None: - payload["logit_bias"] = self.logit_bias - if self.user_identifier is not None: - payload["user"] = 
self.user_identifier - - if stream: - timeout = TIMEOUT_STREAMING - else: - timeout = TIMEOUT_ALL - - # 如果有自定义的api-host,使用自定义host发送请求,否则使用默认设置发送请求 - if shared.state.completion_url != COMPLETION_URL: - logging.info(f"使用自定义API URL: {shared.state.completion_url}") - - with retrieve_proxy(): - try: - response = requests.post( - shared.state.completion_url, - headers=headers, - json=payload, - stream=stream, - timeout=timeout, - ) - except: - return None - return response - - def _refresh_header(self): - self.headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {self.api_key}", - } - - def _get_billing_data(self, billing_url): - with retrieve_proxy(): - response = requests.get( - billing_url, - headers=self.headers, - timeout=TIMEOUT_ALL, - ) - - if response.status_code == 200: - data = response.json() - return data - else: - raise Exception( - f"API request failed with status code {response.status_code}: {response.text}" - ) - - def _decode_chat_response(self, response): - error_msg = "" - for chunk in response.iter_lines(): - if chunk: - chunk = chunk.decode() - chunk_length = len(chunk) - try: - chunk = json.loads(chunk[6:]) - except json.JSONDecodeError: - print(i18n("JSON解析错误,收到的内容: ") + f"{chunk}") - error_msg += chunk - continue - if chunk_length > 6 and "delta" in chunk["choices"][0]: - if chunk["choices"][0]["finish_reason"] == "stop": - break - try: - yield chunk["choices"][0]["delta"]["content"] - except Exception as e: - # logging.error(f"Error: {e}") - continue - if error_msg: - raise Exception(error_msg) - - def set_key(self, new_access_key): - ret = super().set_key(new_access_key) - self._refresh_header() - return ret - - -class ChatGLM_Client(BaseLLMModel): - def __init__(self, model_name) -> None: - super().__init__(model_name=model_name) - from transformers import AutoTokenizer, AutoModel - import torch - global CHATGLM_TOKENIZER, CHATGLM_MODEL - if CHATGLM_TOKENIZER is None or CHATGLM_MODEL is None: - system_name = platform.system() - model_path = None - if os.path.exists("models"): - model_dirs = os.listdir("models") - if model_name in model_dirs: - model_path = f"models/{model_name}" - if model_path is not None: - model_source = model_path - else: - model_source = f"THUDM/{model_name}" - CHATGLM_TOKENIZER = AutoTokenizer.from_pretrained( - model_source, trust_remote_code=True - ) - quantified = False - if "int4" in model_name: - quantified = True - model = AutoModel.from_pretrained( - model_source, trust_remote_code=True - ) - if torch.cuda.is_available(): - # run on CUDA - logging.info("CUDA is available, using CUDA") - model = model.half().cuda() - # mps加速还存在一些问题,暂时不使用 - elif system_name == "Darwin" and model_path is not None and not quantified: - logging.info("Running on macOS, using MPS") - # running on macOS and model already downloaded - model = model.half().to("mps") - else: - logging.info("GPU is not available, using CPU") - model = model.float() - model = model.eval() - CHATGLM_MODEL = model - - def _get_glm_style_input(self): - history = [x["content"] for x in self.history] - query = history.pop() - logging.debug(colorama.Fore.YELLOW + - f"{history}" + colorama.Fore.RESET) - assert ( - len(history) % 2 == 0 - ), f"History should be even length. 
current history is: {history}" - history = [[history[i], history[i + 1]] - for i in range(0, len(history), 2)] - return history, query - - def get_answer_at_once(self): - history, query = self._get_glm_style_input() - response, _ = CHATGLM_MODEL.chat( - CHATGLM_TOKENIZER, query, history=history) - return response, len(response) - - def get_answer_stream_iter(self): - history, query = self._get_glm_style_input() - for response, history in CHATGLM_MODEL.stream_chat( - CHATGLM_TOKENIZER, - query, - history, - max_length=self.token_upper_limit, - top_p=self.top_p, - temperature=self.temperature, - ): - yield response - - -class LLaMA_Client(BaseLLMModel): - def __init__( - self, - model_name, - lora_path=None, - ) -> None: - super().__init__(model_name=model_name) - from lmflow.datasets.dataset import Dataset - from lmflow.pipeline.auto_pipeline import AutoPipeline - from lmflow.models.auto_model import AutoModel - from lmflow.args import ModelArguments, DatasetArguments, InferencerArguments - - self.max_generation_token = 1000 - self.end_string = "\n\n" - # We don't need input data - data_args = DatasetArguments(dataset_path=None) - self.dataset = Dataset(data_args) - self.system_prompt = "" - - global LLAMA_MODEL, LLAMA_INFERENCER - if LLAMA_MODEL is None or LLAMA_INFERENCER is None: - model_path = None - if os.path.exists("models"): - model_dirs = os.listdir("models") - if model_name in model_dirs: - model_path = f"models/{model_name}" - if model_path is not None: - model_source = model_path - else: - model_source = f"decapoda-research/{model_name}" - # raise Exception(f"models目录下没有这个模型: {model_name}") - if lora_path is not None: - lora_path = f"lora/{lora_path}" - model_args = ModelArguments(model_name_or_path=model_source, lora_model_path=lora_path, model_type=None, config_overrides=None, config_name=None, tokenizer_name=None, cache_dir=None, - use_fast_tokenizer=True, model_revision='main', use_auth_token=False, torch_dtype=None, use_lora=False, lora_r=8, lora_alpha=32, lora_dropout=0.1, use_ram_optimized_load=True) - pipeline_args = InferencerArguments( - local_rank=0, random_seed=1, deepspeed='configs/ds_config_chatbot.json', mixed_precision='bf16') - - with open(pipeline_args.deepspeed, "r") as f: - ds_config = json.load(f) - LLAMA_MODEL = AutoModel.get_model( - model_args, - tune_strategy="none", - ds_config=ds_config, - ) - LLAMA_INFERENCER = AutoPipeline.get_pipeline( - pipeline_name="inferencer", - model_args=model_args, - data_args=data_args, - pipeline_args=pipeline_args, - ) - - def _get_llama_style_input(self): - history = [] - instruction = "" - if self.system_prompt: - instruction = (f"Instruction: {self.system_prompt}\n") - for x in self.history: - if x["role"] == "user": - history.append(f"{instruction}Input: {x['content']}") - else: - history.append(f"Output: {x['content']}") - context = "\n\n".join(history) - context += "\n\nOutput: " - return context - - def get_answer_at_once(self): - context = self._get_llama_style_input() - - input_dataset = self.dataset.from_dict( - {"type": "text_only", "instances": [{"text": context}]} - ) - - output_dataset = LLAMA_INFERENCER.inference( - model=LLAMA_MODEL, - dataset=input_dataset, - max_new_tokens=self.max_generation_token, - temperature=self.temperature, - ) - - response = output_dataset.to_dict()["instances"][0]["text"] - return response, len(response) - - def get_answer_stream_iter(self): - context = self._get_llama_style_input() - partial_text = "" - step = 1 - for _ in range(0, self.max_generation_token, step): - 
input_dataset = self.dataset.from_dict( - {"type": "text_only", "instances": [ - {"text": context + partial_text}]} - ) - output_dataset = LLAMA_INFERENCER.inference( - model=LLAMA_MODEL, - dataset=input_dataset, - max_new_tokens=step, - temperature=self.temperature, - ) - response = output_dataset.to_dict()["instances"][0]["text"] - if response == "" or response == self.end_string: - break - partial_text += response - yield partial_text - - -class XMChat(BaseLLMModel): - def __init__(self, api_key): - super().__init__(model_name="xmchat") - self.api_key = api_key - self.session_id = None - self.reset() - self.image_bytes = None - self.image_path = None - self.xm_history = [] - self.url = "https://xmbot.net/web" - self.last_conv_id = None - - def reset(self): - self.session_id = str(uuid.uuid4()) - self.last_conv_id = None - return [], "已重置" - - def image_to_base64(self, image_path): - # 打开并加载图片 - img = Image.open(image_path) - - # 获取图片的宽度和高度 - width, height = img.size - - # 计算压缩比例,以确保最长边小于4096像素 - max_dimension = 2048 - scale_ratio = min(max_dimension / width, max_dimension / height) - - if scale_ratio < 1: - # 按压缩比例调整图片大小 - new_width = int(width * scale_ratio) - new_height = int(height * scale_ratio) - img = img.resize((new_width, new_height), Image.ANTIALIAS) - - # 将图片转换为jpg格式的二进制数据 - buffer = BytesIO() - if img.mode == "RGBA": - img = img.convert("RGB") - img.save(buffer, format='JPEG') - binary_image = buffer.getvalue() - - # 对二进制数据进行Base64编码 - base64_image = base64.b64encode(binary_image).decode('utf-8') - - return base64_image - - def try_read_image(self, filepath): - def is_image_file(filepath): - # 判断文件是否为图片 - valid_image_extensions = [".jpg", ".jpeg", ".png", ".bmp", ".gif", ".tiff"] - file_extension = os.path.splitext(filepath)[1].lower() - return file_extension in valid_image_extensions - - if is_image_file(filepath): - logging.info(f"读取图片文件: {filepath}") - self.image_bytes = self.image_to_base64(filepath) - self.image_path = filepath - else: - self.image_bytes = None - self.image_path = None - - def like(self): - if self.last_conv_id is None: - return "点赞失败,你还没发送过消息" - data = { - "uuid": self.last_conv_id, - "appraise": "good" - } - response = requests.post(self.url, json=data) - return "👍点赞成功,,感谢反馈~" - - def dislike(self): - if self.last_conv_id is None: - return "点踩失败,你还没发送过消息" - data = { - "uuid": self.last_conv_id, - "appraise": "bad" - } - response = requests.post(self.url, json=data) - return "👎点踩成功,感谢反馈~" - - def prepare_inputs(self, real_inputs, use_websearch, files, reply_language, chatbot): - fake_inputs = real_inputs - display_append = "" - limited_context = False - return limited_context, fake_inputs, display_append, real_inputs, chatbot - - def handle_file_upload(self, files, chatbot): - """if the model accepts multi modal input, implement this function""" - if files: - for file in files: - if file.name: - logging.info(f"尝试读取图像: {file.name}") - self.try_read_image(file.name) - if self.image_path is not None: - chatbot = chatbot + [((self.image_path,), None)] - if self.image_bytes is not None: - logging.info("使用图片作为输入") - # XMChat的一轮对话中实际上只能处理一张图片 - self.reset() - conv_id = str(uuid.uuid4()) - data = { - "user_id": self.api_key, - "session_id": self.session_id, - "uuid": conv_id, - "data_type": "imgbase64", - "data": self.image_bytes - } - response = requests.post(self.url, json=data) - response = json.loads(response.text) - logging.info(f"图片回复: {response['data']}") - return None, chatbot, None - - def get_answer_at_once(self): - question = 
self.history[-1]["content"] - conv_id = str(uuid.uuid4()) - self.last_conv_id = conv_id - data = { - "user_id": self.api_key, - "session_id": self.session_id, - "uuid": conv_id, - "data_type": "text", - "data": question - } - response = requests.post(self.url, json=data) - try: - response = json.loads(response.text) - return response["data"], len(response["data"]) - except Exception as e: - return response.text, len(response.text) - - - - -def get_model( - model_name, - lora_model_path=None, - access_key=None, - temperature=None, - top_p=None, - system_prompt=None, -) -> BaseLLMModel: - msg = i18n("模型设置为了:") + f" {model_name}" - model_type = ModelType.get_type(model_name) - lora_selector_visibility = False - lora_choices = [] - dont_change_lora_selector = False - if model_type != ModelType.OpenAI: - config.local_embedding = True - # del current_model.model - model = None - try: - if model_type == ModelType.OpenAI: - logging.info(f"正在加载OpenAI模型: {model_name}") - model = OpenAIClient( - model_name=model_name, - api_key=access_key, - system_prompt=system_prompt, - temperature=temperature, - top_p=top_p, - ) - elif model_type == ModelType.ChatGLM: - logging.info(f"正在加载ChatGLM模型: {model_name}") - model = ChatGLM_Client(model_name) - elif model_type == ModelType.LLaMA and lora_model_path == "": - msg = f"现在请为 {model_name} 选择LoRA模型" - logging.info(msg) - lora_selector_visibility = True - if os.path.isdir("lora"): - lora_choices = get_file_names( - "lora", plain=True, filetypes=[""]) - lora_choices = ["No LoRA"] + lora_choices - elif model_type == ModelType.LLaMA and lora_model_path != "": - logging.info(f"正在加载LLaMA模型: {model_name} + {lora_model_path}") - dont_change_lora_selector = True - if lora_model_path == "No LoRA": - lora_model_path = None - msg += " + No LoRA" - else: - msg += f" + {lora_model_path}" - model = LLaMA_Client(model_name, lora_model_path) - elif model_type == ModelType.XMChat: - if os.environ.get("XMCHAT_API_KEY") != "": - access_key = os.environ.get("XMCHAT_API_KEY") - model = XMChat(api_key=access_key) - elif model_type == ModelType.Unknown: - raise ValueError(f"未知模型: {model_name}") - logging.info(msg) - except Exception as e: - logging.error(e) - msg = f"{STANDARD_ERROR_MSG}: {e}" - if dont_change_lora_selector: - return model, msg - else: - return model, msg, gr.Dropdown.update(choices=lora_choices, visible=lora_selector_visibility) - - -if __name__ == "__main__": - with open("config.json", "r") as f: - openai_api_key = cjson.load(f)["openai_api_key"] - # set logging level to debug - logging.basicConfig(level=logging.DEBUG) - # client = ModelManager(model_name="gpt-3.5-turbo", access_key=openai_api_key) - client = get_model(model_name="chatglm-6b-int4") - chatbot = [] - stream = False - # 测试账单功能 - logging.info(colorama.Back.GREEN + "测试账单功能" + colorama.Back.RESET) - logging.info(client.billing_info()) - # 测试问答 - logging.info(colorama.Back.GREEN + "测试问答" + colorama.Back.RESET) - question = "巴黎是中国的首都吗?" - for i in client.predict(inputs=question, chatbot=chatbot, stream=stream): - logging.info(i) - logging.info(f"测试问答后history : {client.history}") - # 测试记忆力 - logging.info(colorama.Back.GREEN + "测试记忆力" + colorama.Back.RESET) - question = "我刚刚问了你什么问题?" 
- for i in client.predict(inputs=question, chatbot=chatbot, stream=stream): - logging.info(i) - logging.info(f"测试记忆力后history : {client.history}") - # 测试重试功能 - logging.info(colorama.Back.GREEN + "测试重试功能" + colorama.Back.RESET) - for i in client.retry(chatbot=chatbot, stream=stream): - logging.info(i) - logging.info(f"重试后history : {client.history}") - # # 测试总结功能 - # print(colorama.Back.GREEN + "测试总结功能" + colorama.Back.RESET) - # chatbot, msg = client.reduce_token_size(chatbot=chatbot) - # print(chatbot, msg) - # print(f"总结后history: {client.history}") diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/mask_heads/global_context_head.py b/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/mask_heads/global_context_head.py deleted file mode 100644 index cb947ea582227d2b74112cbb930e1a3f85b77ff5..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/mask_heads/global_context_head.py +++ /dev/null @@ -1,127 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import List, Tuple - -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmengine.model import BaseModule -from torch import Tensor - -from mmdet.models.layers import ResLayer, SimplifiedBasicBlock -from mmdet.registry import MODELS -from mmdet.utils import MultiConfig, OptConfigType - - -@MODELS.register_module() -class GlobalContextHead(BaseModule): - """Global context head used in `SCNet `_. - - Args: - num_convs (int, optional): number of convolutional layer in GlbCtxHead. - Defaults to 4. - in_channels (int, optional): number of input channels. Defaults to 256. - conv_out_channels (int, optional): number of output channels before - classification layer. Defaults to 256. - num_classes (int, optional): number of classes. Defaults to 80. - loss_weight (float, optional): global context loss weight. - Defaults to 1. - conv_cfg (dict, optional): config to init conv layer. Defaults to None. - norm_cfg (dict, optional): config to init norm layer. Defaults to None. - conv_to_res (bool, optional): if True, 2 convs will be grouped into - 1 `SimplifiedBasicBlock` using a skip connection. - Defaults to False. - init_cfg (:obj:`ConfigDict` or dict or list[dict] or - list[:obj:`ConfigDict`]): Initialization config dict. Defaults to - dict(type='Normal', std=0.01, override=dict(name='fc')). 
- """ - - def __init__( - self, - num_convs: int = 4, - in_channels: int = 256, - conv_out_channels: int = 256, - num_classes: int = 80, - loss_weight: float = 1.0, - conv_cfg: OptConfigType = None, - norm_cfg: OptConfigType = None, - conv_to_res: bool = False, - init_cfg: MultiConfig = dict( - type='Normal', std=0.01, override=dict(name='fc')) - ) -> None: - super().__init__(init_cfg=init_cfg) - self.num_convs = num_convs - self.in_channels = in_channels - self.conv_out_channels = conv_out_channels - self.num_classes = num_classes - self.loss_weight = loss_weight - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.conv_to_res = conv_to_res - self.fp16_enabled = False - - if self.conv_to_res: - num_res_blocks = num_convs // 2 - self.convs = ResLayer( - SimplifiedBasicBlock, - in_channels, - self.conv_out_channels, - num_res_blocks, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg) - self.num_convs = num_res_blocks - else: - self.convs = nn.ModuleList() - for i in range(self.num_convs): - in_channels = self.in_channels if i == 0 else conv_out_channels - self.convs.append( - ConvModule( - in_channels, - conv_out_channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - - self.pool = nn.AdaptiveAvgPool2d(1) - self.fc = nn.Linear(conv_out_channels, num_classes) - - self.criterion = nn.BCEWithLogitsLoss() - - def forward(self, feats: Tuple[Tensor]) -> Tuple[Tensor]: - """Forward function. - - Args: - feats (Tuple[Tensor]): Multi-scale feature maps. - - Returns: - Tuple[Tensor]: - - - mc_pred (Tensor): Multi-class prediction. - - x (Tensor): Global context feature. - """ - x = feats[-1] - for i in range(self.num_convs): - x = self.convs[i](x) - x = self.pool(x) - - # multi-class prediction - mc_pred = x.reshape(x.size(0), -1) - mc_pred = self.fc(mc_pred) - - return mc_pred, x - - def loss(self, pred: Tensor, labels: List[Tensor]) -> Tensor: - """Loss function. - - Args: - pred (Tensor): Logits. - labels (list[Tensor]): Grouth truths. - - Returns: - Tensor: Loss. - """ - labels = [lbl.unique() for lbl in labels] - targets = pred.new_zeros(pred.size()) - for i, label in enumerate(labels): - targets[i, label] = 1.0 - loss = self.loss_weight * self.criterion(pred, targets) - return loss diff --git a/spaces/KyanChen/RSPrompter/mmdet/structures/bbox/base_boxes.py b/spaces/KyanChen/RSPrompter/mmdet/structures/bbox/base_boxes.py deleted file mode 100644 index 0ed667664a8a57a1b9b7e422af03d41274882747..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/structures/bbox/base_boxes.py +++ /dev/null @@ -1,549 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod, abstractproperty, abstractstaticmethod -from typing import List, Optional, Sequence, Tuple, Type, TypeVar, Union - -import numpy as np -import torch -from torch import BoolTensor, Tensor - -from mmdet.structures.mask.structures import BitmapMasks, PolygonMasks - -T = TypeVar('T') -DeviceType = Union[str, torch.device] -IndexType = Union[slice, int, list, torch.LongTensor, torch.cuda.LongTensor, - torch.BoolTensor, torch.cuda.BoolTensor, np.ndarray] -MaskType = Union[BitmapMasks, PolygonMasks] - - -class BaseBoxes(metaclass=ABCMeta): - """The base class for 2D box types. - - The functions of ``BaseBoxes`` lie in three fields: - - - Verify the boxes shape. - - Support tensor-like operations. - - Define abstract functions for 2D boxes. - - In ``__init__`` , ``BaseBoxes`` verifies the validity of the data shape - w.r.t ``box_dim``. 
The tensor with the dimension >= 2 and the length - of the last dimension being ``box_dim`` will be regarded as valid. - ``BaseBoxes`` will restore them at the field ``tensor``. It's necessary - to override ``box_dim`` in subclass to guarantee the data shape is - correct. - - There are many basic tensor-like functions implemented in ``BaseBoxes``. - In most cases, users can operate ``BaseBoxes`` instance like a normal - tensor. To protect the validity of data shape, All tensor-like functions - cannot modify the last dimension of ``self.tensor``. - - When creating a new box type, users need to inherit from ``BaseBoxes`` - and override abstract methods and specify the ``box_dim``. Then, register - the new box type by using the decorator ``register_box_type``. - - Args: - data (Tensor or np.ndarray or Sequence): The box data with shape - (..., box_dim). - dtype (torch.dtype, Optional): data type of boxes. Defaults to None. - device (str or torch.device, Optional): device of boxes. - Default to None. - clone (bool): Whether clone ``boxes`` or not. Defaults to True. - """ - - # Used to verify the last dimension length - # Should override it in subclass. - box_dim: int = 0 - - def __init__(self, - data: Union[Tensor, np.ndarray, Sequence], - dtype: Optional[torch.dtype] = None, - device: Optional[DeviceType] = None, - clone: bool = True) -> None: - if isinstance(data, (np.ndarray, Tensor, Sequence)): - data = torch.as_tensor(data) - else: - raise TypeError('boxes should be Tensor, ndarray, or Sequence, ', - f'but got {type(data)}') - - if device is not None or dtype is not None: - data = data.to(dtype=dtype, device=device) - # Clone the data to avoid potential bugs - if clone: - data = data.clone() - # handle the empty input like [] - if data.numel() == 0: - data = data.reshape((-1, self.box_dim)) - - assert data.dim() >= 2 and data.size(-1) == self.box_dim, \ - ('The boxes dimension must >= 2 and the length of the last ' - f'dimension must be {self.box_dim}, but got boxes with ' - f'shape {data.shape}.') - self.tensor = data - - def convert_to(self, dst_type: Union[str, type]) -> 'BaseBoxes': - """Convert self to another box type. - - Args: - dst_type (str or type): destination box type. - - Returns: - :obj:`BaseBoxes`: destination box type object . - """ - from .box_type import convert_box_type - return convert_box_type(self, dst_type=dst_type) - - def empty_boxes(self: T, - dtype: Optional[torch.dtype] = None, - device: Optional[DeviceType] = None) -> T: - """Create empty box. - - Args: - dtype (torch.dtype, Optional): data type of boxes. - device (str or torch.device, Optional): device of boxes. - - Returns: - T: empty boxes with shape of (0, box_dim). - """ - empty_box = self.tensor.new_zeros( - 0, self.box_dim, dtype=dtype, device=device) - return type(self)(empty_box, clone=False) - - def fake_boxes(self: T, - sizes: Tuple[int], - fill: float = 0, - dtype: Optional[torch.dtype] = None, - device: Optional[DeviceType] = None) -> T: - """Create fake boxes with specific sizes and fill values. - - Args: - sizes (Tuple[int]): The size of fake boxes. The last value must - be equal with ``self.box_dim``. - fill (float): filling value. Defaults to 0. - dtype (torch.dtype, Optional): data type of boxes. - device (str or torch.device, Optional): device of boxes. - - Returns: - T: Fake boxes with shape of ``sizes``. 
- """ - fake_boxes = self.tensor.new_full( - sizes, fill, dtype=dtype, device=device) - return type(self)(fake_boxes, clone=False) - - def __getitem__(self: T, index: IndexType) -> T: - """Rewrite getitem to protect the last dimension shape.""" - boxes = self.tensor - if isinstance(index, np.ndarray): - index = torch.as_tensor(index, device=self.device) - if isinstance(index, Tensor) and index.dtype == torch.bool: - assert index.dim() < boxes.dim() - elif isinstance(index, tuple): - assert len(index) < boxes.dim() - # `Ellipsis`(...) is commonly used in index like [None, ...]. - # When `Ellipsis` is in index, it must be the last item. - if Ellipsis in index: - assert index[-1] is Ellipsis - - boxes = boxes[index] - if boxes.dim() == 1: - boxes = boxes.reshape(1, -1) - return type(self)(boxes, clone=False) - - def __setitem__(self: T, index: IndexType, values: Union[Tensor, T]) -> T: - """Rewrite setitem to protect the last dimension shape.""" - assert type(values) is type(self), \ - 'The value to be set must be the same box type as self' - values = values.tensor - - if isinstance(index, np.ndarray): - index = torch.as_tensor(index, device=self.device) - if isinstance(index, Tensor) and index.dtype == torch.bool: - assert index.dim() < self.tensor.dim() - elif isinstance(index, tuple): - assert len(index) < self.tensor.dim() - # `Ellipsis`(...) is commonly used in index like [None, ...]. - # When `Ellipsis` is in index, it must be the last item. - if Ellipsis in index: - assert index[-1] is Ellipsis - - self.tensor[index] = values - - def __len__(self) -> int: - """Return the length of self.tensor first dimension.""" - return self.tensor.size(0) - - def __deepcopy__(self, memo): - """Only clone the ``self.tensor`` when applying deepcopy.""" - cls = self.__class__ - other = cls.__new__(cls) - memo[id(self)] = other - other.tensor = self.tensor.clone() - return other - - def __repr__(self) -> str: - """Return a strings that describes the object.""" - return self.__class__.__name__ + '(\n' + str(self.tensor) + ')' - - def new_tensor(self, *args, **kwargs) -> Tensor: - """Reload ``new_tensor`` from self.tensor.""" - return self.tensor.new_tensor(*args, **kwargs) - - def new_full(self, *args, **kwargs) -> Tensor: - """Reload ``new_full`` from self.tensor.""" - return self.tensor.new_full(*args, **kwargs) - - def new_empty(self, *args, **kwargs) -> Tensor: - """Reload ``new_empty`` from self.tensor.""" - return self.tensor.new_empty(*args, **kwargs) - - def new_ones(self, *args, **kwargs) -> Tensor: - """Reload ``new_ones`` from self.tensor.""" - return self.tensor.new_ones(*args, **kwargs) - - def new_zeros(self, *args, **kwargs) -> Tensor: - """Reload ``new_zeros`` from self.tensor.""" - return self.tensor.new_zeros(*args, **kwargs) - - def size(self, dim: Optional[int] = None) -> Union[int, torch.Size]: - """Reload new_zeros from self.tensor.""" - # self.tensor.size(dim) cannot work when dim=None. 
- return self.tensor.size() if dim is None else self.tensor.size(dim) - - def dim(self) -> int: - """Reload ``dim`` from self.tensor.""" - return self.tensor.dim() - - @property - def device(self) -> torch.device: - """Reload ``device`` from self.tensor.""" - return self.tensor.device - - @property - def dtype(self) -> torch.dtype: - """Reload ``dtype`` from self.tensor.""" - return self.tensor.dtype - - @property - def shape(self) -> torch.Size: - return self.tensor.shape - - def numel(self) -> int: - """Reload ``numel`` from self.tensor.""" - return self.tensor.numel() - - def numpy(self) -> np.ndarray: - """Reload ``numpy`` from self.tensor.""" - return self.tensor.numpy() - - def to(self: T, *args, **kwargs) -> T: - """Reload ``to`` from self.tensor.""" - return type(self)(self.tensor.to(*args, **kwargs), clone=False) - - def cpu(self: T) -> T: - """Reload ``cpu`` from self.tensor.""" - return type(self)(self.tensor.cpu(), clone=False) - - def cuda(self: T, *args, **kwargs) -> T: - """Reload ``cuda`` from self.tensor.""" - return type(self)(self.tensor.cuda(*args, **kwargs), clone=False) - - def clone(self: T) -> T: - """Reload ``clone`` from self.tensor.""" - return type(self)(self.tensor) - - def detach(self: T) -> T: - """Reload ``detach`` from self.tensor.""" - return type(self)(self.tensor.detach(), clone=False) - - def view(self: T, *shape: Tuple[int]) -> T: - """Reload ``view`` from self.tensor.""" - return type(self)(self.tensor.view(shape), clone=False) - - def reshape(self: T, *shape: Tuple[int]) -> T: - """Reload ``reshape`` from self.tensor.""" - return type(self)(self.tensor.reshape(shape), clone=False) - - def expand(self: T, *sizes: Tuple[int]) -> T: - """Reload ``expand`` from self.tensor.""" - return type(self)(self.tensor.expand(sizes), clone=False) - - def repeat(self: T, *sizes: Tuple[int]) -> T: - """Reload ``repeat`` from self.tensor.""" - return type(self)(self.tensor.repeat(sizes), clone=False) - - def transpose(self: T, dim0: int, dim1: int) -> T: - """Reload ``transpose`` from self.tensor.""" - ndim = self.tensor.dim() - assert dim0 != -1 and dim0 != ndim - 1 - assert dim1 != -1 and dim1 != ndim - 1 - return type(self)(self.tensor.transpose(dim0, dim1), clone=False) - - def permute(self: T, *dims: Tuple[int]) -> T: - """Reload ``permute`` from self.tensor.""" - assert dims[-1] == -1 or dims[-1] == self.tensor.dim() - 1 - return type(self)(self.tensor.permute(dims), clone=False) - - def split(self: T, - split_size_or_sections: Union[int, Sequence[int]], - dim: int = 0) -> List[T]: - """Reload ``split`` from self.tensor.""" - assert dim != -1 and dim != self.tensor.dim() - 1 - boxes_list = self.tensor.split(split_size_or_sections, dim=dim) - return [type(self)(boxes, clone=False) for boxes in boxes_list] - - def chunk(self: T, chunks: int, dim: int = 0) -> List[T]: - """Reload ``chunk`` from self.tensor.""" - assert dim != -1 and dim != self.tensor.dim() - 1 - boxes_list = self.tensor.chunk(chunks, dim=dim) - return [type(self)(boxes, clone=False) for boxes in boxes_list] - - def unbind(self: T, dim: int = 0) -> T: - """Reload ``unbind`` from self.tensor.""" - assert dim != -1 and dim != self.tensor.dim() - 1 - boxes_list = self.tensor.unbind(dim=dim) - return [type(self)(boxes, clone=False) for boxes in boxes_list] - - def flatten(self: T, start_dim: int = 0, end_dim: int = -2) -> T: - """Reload ``flatten`` from self.tensor.""" - assert end_dim != -1 and end_dim != self.tensor.dim() - 1 - return type(self)(self.tensor.flatten(start_dim, end_dim), clone=False) - 
- def squeeze(self: T, dim: Optional[int] = None) -> T: - """Reload ``squeeze`` from self.tensor.""" - boxes = self.tensor.squeeze() if dim is None else \ - self.tensor.squeeze(dim) - return type(self)(boxes, clone=False) - - def unsqueeze(self: T, dim: int) -> T: - """Reload ``unsqueeze`` from self.tensor.""" - assert dim != -1 and dim != self.tensor.dim() - return type(self)(self.tensor.unsqueeze(dim), clone=False) - - @classmethod - def cat(cls: Type[T], box_list: Sequence[T], dim: int = 0) -> T: - """Cancatenates a box instance list into one single box instance. - Similar to ``torch.cat``. - - Args: - box_list (Sequence[T]): A sequence of box instances. - dim (int): The dimension over which the box are concatenated. - Defaults to 0. - - Returns: - T: Concatenated box instance. - """ - assert isinstance(box_list, Sequence) - if len(box_list) == 0: - raise ValueError('box_list should not be a empty list.') - - assert dim != -1 and dim != box_list[0].dim() - 1 - assert all(isinstance(boxes, cls) for boxes in box_list) - - th_box_list = [boxes.tensor for boxes in box_list] - return cls(torch.cat(th_box_list, dim=dim), clone=False) - - @classmethod - def stack(cls: Type[T], box_list: Sequence[T], dim: int = 0) -> T: - """Concatenates a sequence of tensors along a new dimension. Similar to - ``torch.stack``. - - Args: - box_list (Sequence[T]): A sequence of box instances. - dim (int): Dimension to insert. Defaults to 0. - - Returns: - T: Concatenated box instance. - """ - assert isinstance(box_list, Sequence) - if len(box_list) == 0: - raise ValueError('box_list should not be a empty list.') - - assert dim != -1 and dim != box_list[0].dim() - assert all(isinstance(boxes, cls) for boxes in box_list) - - th_box_list = [boxes.tensor for boxes in box_list] - return cls(torch.stack(th_box_list, dim=dim), clone=False) - - @abstractproperty - def centers(self) -> Tensor: - """Return a tensor representing the centers of boxes.""" - pass - - @abstractproperty - def areas(self) -> Tensor: - """Return a tensor representing the areas of boxes.""" - pass - - @abstractproperty - def widths(self) -> Tensor: - """Return a tensor representing the widths of boxes.""" - pass - - @abstractproperty - def heights(self) -> Tensor: - """Return a tensor representing the heights of boxes.""" - pass - - @abstractmethod - def flip_(self, - img_shape: Tuple[int, int], - direction: str = 'horizontal') -> None: - """Flip boxes horizontally or vertically in-place. - - Args: - img_shape (Tuple[int, int]): A tuple of image height and width. - direction (str): Flip direction, options are "horizontal", - "vertical" and "diagonal". Defaults to "horizontal" - """ - pass - - @abstractmethod - def translate_(self, distances: Tuple[float, float]) -> None: - """Translate boxes in-place. - - Args: - distances (Tuple[float, float]): translate distances. The first - is horizontal distance and the second is vertical distance. - """ - pass - - @abstractmethod - def clip_(self, img_shape: Tuple[int, int]) -> None: - """Clip boxes according to the image shape in-place. - - Args: - img_shape (Tuple[int, int]): A tuple of image height and width. - """ - pass - - @abstractmethod - def rotate_(self, center: Tuple[float, float], angle: float) -> None: - """Rotate all boxes in-place. - - Args: - center (Tuple[float, float]): Rotation origin. - angle (float): Rotation angle represented in degrees. Positive - values mean clockwise rotation. 
- """ - pass - - @abstractmethod - def project_(self, homography_matrix: Union[Tensor, np.ndarray]) -> None: - """Geometric transformat boxes in-place. - - Args: - homography_matrix (Tensor or np.ndarray]): - Shape (3, 3) for geometric transformation. - """ - pass - - @abstractmethod - def rescale_(self, scale_factor: Tuple[float, float]) -> None: - """Rescale boxes w.r.t. rescale_factor in-place. - - Note: - Both ``rescale_`` and ``resize_`` will enlarge or shrink boxes - w.r.t ``scale_facotr``. The difference is that ``resize_`` only - changes the width and the height of boxes, but ``rescale_`` also - rescales the box centers simultaneously. - - Args: - scale_factor (Tuple[float, float]): factors for scaling boxes. - The length should be 2. - """ - pass - - @abstractmethod - def resize_(self, scale_factor: Tuple[float, float]) -> None: - """Resize the box width and height w.r.t scale_factor in-place. - - Note: - Both ``rescale_`` and ``resize_`` will enlarge or shrink boxes - w.r.t ``scale_facotr``. The difference is that ``resize_`` only - changes the width and the height of boxes, but ``rescale_`` also - rescales the box centers simultaneously. - - Args: - scale_factor (Tuple[float, float]): factors for scaling box - shapes. The length should be 2. - """ - pass - - @abstractmethod - def is_inside(self, - img_shape: Tuple[int, int], - all_inside: bool = False, - allowed_border: int = 0) -> BoolTensor: - """Find boxes inside the image. - - Args: - img_shape (Tuple[int, int]): A tuple of image height and width. - all_inside (bool): Whether the boxes are all inside the image or - part inside the image. Defaults to False. - allowed_border (int): Boxes that extend beyond the image shape - boundary by more than ``allowed_border`` are considered - "outside" Defaults to 0. - Returns: - BoolTensor: A BoolTensor indicating whether the box is inside - the image. Assuming the original boxes have shape (m, n, box_dim), - the output has shape (m, n). - """ - pass - - @abstractmethod - def find_inside_points(self, - points: Tensor, - is_aligned: bool = False) -> BoolTensor: - """Find inside box points. Boxes dimension must be 2. - - Args: - points (Tensor): Points coordinates. Has shape of (m, 2). - is_aligned (bool): Whether ``points`` has been aligned with boxes - or not. If True, the length of boxes and ``points`` should be - the same. Defaults to False. - - Returns: - BoolTensor: A BoolTensor indicating whether a point is inside - boxes. Assuming the boxes has shape of (n, box_dim), if - ``is_aligned`` is False. The index has shape of (m, n). If - ``is_aligned`` is True, m should be equal to n and the index has - shape of (m, ). - """ - pass - - @abstractstaticmethod - def overlaps(boxes1: 'BaseBoxes', - boxes2: 'BaseBoxes', - mode: str = 'iou', - is_aligned: bool = False, - eps: float = 1e-6) -> Tensor: - """Calculate overlap between two set of boxes with their types - converted to the present box type. - - Args: - boxes1 (:obj:`BaseBoxes`): BaseBoxes with shape of (m, box_dim) - or empty. - boxes2 (:obj:`BaseBoxes`): BaseBoxes with shape of (n, box_dim) - or empty. - mode (str): "iou" (intersection over union), "iof" (intersection - over foreground). Defaults to "iou". - is_aligned (bool): If True, then m and n must be equal. Defaults - to False. - eps (float): A value added to the denominator for numerical - stability. Defaults to 1e-6. 
-
-        Returns:
-            Tensor: shape (m, n) if ``is_aligned`` is False else shape (m,)
-        """
-        pass
-
-    @abstractstaticmethod
-    def from_instance_masks(masks: MaskType) -> 'BaseBoxes':
-        """Create boxes from instance masks.
-
-        Args:
-            masks (:obj:`BitmapMasks` or :obj:`PolygonMasks`): BitmapMasks or
-                PolygonMasks instance with length of n.
-
-        Returns:
-            :obj:`BaseBoxes`: Converted boxes with shape of (n, box_dim).
-        """
-        pass
diff --git a/spaces/MWilinski/bot/api/config.py b/spaces/MWilinski/bot/api/config.py
deleted file mode 100644
index d18b380d4c433919f1f71165a0c210a51ee37b95..0000000000000000000000000000000000000000
--- a/spaces/MWilinski/bot/api/config.py
+++ /dev/null
@@ -1,48 +0,0 @@
-import os
-from dataclasses import dataclass, asdict
-from typing import Dict, Union
-from api.logger import logger
-
-
-def get_env(env_name: str, default=None) -> str:
-    env = os.getenv(env_name)
-    if not env:
-        if default:
-            logger.warning(
-                f'Environment variable {env_name} not found. ' \
-                f'Using the default value: {default}.'
-            )
-            return default
-        else:
-            raise ValueError(f'Cannot parse: {env_name}')
-    else:
-        return env
-
-
-@dataclass
-class Config:
-    huggingface_token: str = get_env('HUGGINGFACEHUB_API_TOKEN')
-    question_answering_model_id: str = get_env('QUESTION_ANSWERING_MODEL_ID')
-    embedding_model_id: str = get_env('EMBEDDING_MODEL_ID')
-    index_repo_id: str = get_env('INDEX_REPO_ID')
-    use_docs_for_context: bool = eval(get_env('USE_DOCS_FOR_CONTEXT', 'True'))
-    add_sources_to_response: bool = eval(get_env('ADD_SOURCES_TO_RESPONSE', 'True'))
-    use_messages_in_context: bool = eval(get_env('USE_MESSAGES_IN_CONTEXT', 'True'))
-    num_relevant_docs: int = eval(get_env('NUM_RELEVANT_DOCS', '3'))
-    debug: bool = eval(get_env('DEBUG', 'True'))
-
-    def __post_init__(self):
-        # validate config
-        if not self.use_docs_for_context and self.add_sources_to_response:
-            raise ValueError('Cannot add sources to response if not using docs in context')
-        if self.num_relevant_docs < 1:
-            raise ValueError('num_relevant_docs must be greater than 0')
-        self.log()
-
-    def asdict(self) -> Dict:
-        return asdict(self)
-
-    def log(self) -> None:
-        logger.info('Config:')
-        for key, value in self.asdict().items():
-            logger.info(f'{key}: {value}')
diff --git a/spaces/Mahiruoshi/vits-chatbot/text/mandarin.py b/spaces/Mahiruoshi/vits-chatbot/text/mandarin.py
deleted file mode 100644
index a9ce0c4b223cd7fbb00e8332d2dd53de4c7cea09..0000000000000000000000000000000000000000
--- a/spaces/Mahiruoshi/vits-chatbot/text/mandarin.py
+++ /dev/null
@@ -1,328 +0,0 @@
-import os
-import sys
-import re
-from pypinyin import lazy_pinyin, BOPOMOFO
-import jieba
-import cn2an
-
-
-# List of (Latin alphabet, bopomofo) pairs:
-_latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
-    ('a', 'ㄟˉ'),
-    ('b', 'ㄅㄧˋ'),
-    ('c', 'ㄙㄧˉ'),
-    ('d', 'ㄉㄧˋ'),
-    ('e', 'ㄧˋ'),
-    ('f', 'ㄝˊㄈㄨˋ'),
-    ('g', 'ㄐㄧˋ'),
-    ('h', 'ㄝˇㄑㄩˋ'),
-    ('i', 'ㄞˋ'),
-    ('j', 'ㄐㄟˋ'),
-    ('k', 'ㄎㄟˋ'),
-    ('l', 'ㄝˊㄛˋ'),
-    ('m', 'ㄝˊㄇㄨˋ'),
-    ('n', 'ㄣˉ'),
-    ('o', 'ㄡˉ'),
-    ('p', 'ㄆㄧˉ'),
-    ('q', 'ㄎㄧㄡˉ'),
-    ('r', 'ㄚˋ'),
-    ('s', 'ㄝˊㄙˋ'),
-    ('t', 'ㄊㄧˋ'),
-    ('u', 'ㄧㄡˉ'),
-    ('v', 'ㄨㄧˉ'),
-    ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'),
-    ('x', 'ㄝˉㄎㄨˋㄙˋ'),
-    ('y', 'ㄨㄞˋ'),
-    ('z', 'ㄗㄟˋ')
-]]
-
-# List of (bopomofo, romaji) pairs:
-_bopomofo_to_romaji = [(re.compile('%s' % x[0]), x[1]) for x in [
-    ('ㄅㄛ', 'p⁼wo'),
-    ('ㄆㄛ', 'pʰwo'),
-    ('ㄇㄛ', 'mwo'),
-    ('ㄈㄛ', 'fwo'),
-    ('ㄅ', 'p⁼'),
-    ('ㄆ', 'pʰ'),
-    ('ㄇ', 'm'),
-    ('ㄈ', 'f'),
-    ('ㄉ', 't⁼'),
-    ('ㄊ', 'tʰ'),
-    ('ㄋ', 'n'),
-    ('ㄌ', 'l'),
-    ('ㄍ', 'k⁼'),
-    ('ㄎ',
'kʰ'), - ('ㄏ', 'h'), - ('ㄐ', 'ʧ⁼'), - ('ㄑ', 'ʧʰ'), - ('ㄒ', 'ʃ'), - ('ㄓ', 'ʦ`⁼'), - ('ㄔ', 'ʦ`ʰ'), - ('ㄕ', 's`'), - ('ㄖ', 'ɹ`'), - ('ㄗ', 'ʦ⁼'), - ('ㄘ', 'ʦʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ə'), - ('ㄝ', 'e'), - ('ㄞ', 'ai'), - ('ㄟ', 'ei'), - ('ㄠ', 'au'), - ('ㄡ', 'ou'), - ('ㄧㄢ', 'yeNN'), - ('ㄢ', 'aNN'), - ('ㄧㄣ', 'iNN'), - ('ㄣ', 'əNN'), - ('ㄤ', 'aNg'), - ('ㄧㄥ', 'iNg'), - ('ㄨㄥ', 'uNg'), - ('ㄩㄥ', 'yuNg'), - ('ㄥ', 'əNg'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'ɥ'), - ('ˉ', '→'), - ('ˊ', '↑'), - ('ˇ', '↓↑'), - ('ˋ', '↓'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - -# List of (romaji, ipa) pairs: -_romaji_to_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('ʃy', 'ʃ'), - ('ʧʰy', 'ʧʰ'), - ('ʧ⁼y', 'ʧ⁼'), - ('NN', 'n'), - ('Ng', 'ŋ'), - ('y', 'j'), - ('h', 'x') -]] - -# List of (bopomofo, ipa) pairs: -_bopomofo_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄅㄛ', 'p⁼wo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p⁼'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't⁼'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k⁼'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'x'), - ('ㄐ', 'tʃ⁼'), - ('ㄑ', 'tʃʰ'), - ('ㄒ', 'ʃ'), - ('ㄓ', 'ts`⁼'), - ('ㄔ', 'ts`ʰ'), - ('ㄕ', 's`'), - ('ㄖ', 'ɹ`'), - ('ㄗ', 'ts⁼'), - ('ㄘ', 'tsʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ə'), - ('ㄝ', 'ɛ'), - ('ㄞ', 'aɪ'), - ('ㄟ', 'eɪ'), - ('ㄠ', 'ɑʊ'), - ('ㄡ', 'oʊ'), - ('ㄧㄢ', 'jɛn'), - ('ㄩㄢ', 'ɥæn'), - ('ㄢ', 'an'), - ('ㄧㄣ', 'in'), - ('ㄩㄣ', 'ɥn'), - ('ㄣ', 'ən'), - ('ㄤ', 'ɑŋ'), - ('ㄧㄥ', 'iŋ'), - ('ㄨㄥ', 'ʊŋ'), - ('ㄩㄥ', 'jʊŋ'), - ('ㄥ', 'əŋ'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'ɥ'), - ('ˉ', '→'), - ('ˊ', '↑'), - ('ˇ', '↓↑'), - ('ˋ', '↓'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - -# List of (bopomofo, ipa2) pairs: -_bopomofo_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄅㄛ', 'pwo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'h'), - ('ㄐ', 'tɕ'), - ('ㄑ', 'tɕʰ'), - ('ㄒ', 'ɕ'), - ('ㄓ', 'tʂ'), - ('ㄔ', 'tʂʰ'), - ('ㄕ', 'ʂ'), - ('ㄖ', 'ɻ'), - ('ㄗ', 'ts'), - ('ㄘ', 'tsʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ɤ'), - ('ㄝ', 'ɛ'), - ('ㄞ', 'aɪ'), - ('ㄟ', 'eɪ'), - ('ㄠ', 'ɑʊ'), - ('ㄡ', 'oʊ'), - ('ㄧㄢ', 'jɛn'), - ('ㄩㄢ', 'yæn'), - ('ㄢ', 'an'), - ('ㄧㄣ', 'in'), - ('ㄩㄣ', 'yn'), - ('ㄣ', 'ən'), - ('ㄤ', 'ɑŋ'), - ('ㄧㄥ', 'iŋ'), - ('ㄨㄥ', 'ʊŋ'), - ('ㄩㄥ', 'jʊŋ'), - ('ㄥ', 'ɤŋ'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'y'), - ('ˉ', '˥'), - ('ˊ', '˧˥'), - ('ˇ', '˨˩˦'), - ('ˋ', '˥˩'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - - -def number_to_chinese(text): - numbers = re.findall(r'\d+(?:\.?\d+)?', text) - for number in numbers: - text = text.replace(number, cn2an.an2cn(number), 1) - return text - - -def chinese_to_bopomofo(text, taiwanese=False): - text = text.replace('、', ',').replace(';', ',').replace(':', ',') - words = jieba.lcut(text, cut_all=False) - text = '' - for word in words: - bopomofos = lazy_pinyin(word, BOPOMOFO) - if not re.search('[\u4e00-\u9fff]', word): - text += word - continue - for i in range(len(bopomofos)): - bopomofos[i] = re.sub(r'([\u3105-\u3129])$', r'\1ˉ', bopomofos[i]) - if text != '': - text += ' ' - if taiwanese: - text += '#'+'#'.join(bopomofos) - else: - text += ''.join(bopomofos) - return text - - -def 
latin_to_bopomofo(text): - for regex, replacement in _latin_to_bopomofo: - text = re.sub(regex, replacement, text) - return text - - -def bopomofo_to_romaji(text): - for regex, replacement in _bopomofo_to_romaji: - text = re.sub(regex, replacement, text) - return text - - -def bopomofo_to_ipa(text): - for regex, replacement in _bopomofo_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def bopomofo_to_ipa2(text): - for regex, replacement in _bopomofo_to_ipa2: - text = re.sub(regex, replacement, text) - return text - - -def chinese_to_romaji(text): - text = number_to_chinese(text) - text = chinese_to_bopomofo(text) - text = latin_to_bopomofo(text) - text = bopomofo_to_romaji(text) - text = re.sub('i([aoe])', r'y\1', text) - text = re.sub('u([aoəe])', r'w\1', text) - text = re.sub('([ʦsɹ]`[⁼ʰ]?)([→↓↑ ]+|$)', - r'\1ɹ`\2', text).replace('ɻ', 'ɹ`') - text = re.sub('([ʦs][⁼ʰ]?)([→↓↑ ]+|$)', r'\1ɹ\2', text) - return text - - -def chinese_to_lazy_ipa(text): - text = chinese_to_romaji(text) - for regex, replacement in _romaji_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def chinese_to_ipa(text): - text = number_to_chinese(text) - text = chinese_to_bopomofo(text) - text = latin_to_bopomofo(text) - text = bopomofo_to_ipa(text) - text = re.sub('i([aoe])', r'j\1', text) - text = re.sub('u([aoəe])', r'w\1', text) - text = re.sub('([sɹ]`[⁼ʰ]?)([→↓↑ ]+|$)', - r'\1ɹ`\2', text).replace('ɻ', 'ɹ`') - text = re.sub('([s][⁼ʰ]?)([→↓↑ ]+|$)', r'\1ɹ\2', text) - return text - - -def chinese_to_ipa2(text, taiwanese=False): - text = number_to_chinese(text) - text = chinese_to_bopomofo(text, taiwanese) - text = latin_to_bopomofo(text) - text = bopomofo_to_ipa2(text) - text = re.sub(r'i([aoe])', r'j\1', text) - text = re.sub(r'u([aoəe])', r'w\1', text) - text = re.sub(r'([ʂɹ]ʰ?)([˩˨˧˦˥ ]+|$)', r'\1ʅ\2', text) - text = re.sub(r'(sʰ?)([˩˨˧˦˥ ]+|$)', r'\1ɿ\2', text) - return text diff --git a/spaces/Makiing/coolb-in-gtest/src/components/ui/textarea.tsx b/spaces/Makiing/coolb-in-gtest/src/components/ui/textarea.tsx deleted file mode 100644 index e25af722c7a5dc1121a9ab58d6716952f9f76081..0000000000000000000000000000000000000000 --- a/spaces/Makiing/coolb-in-gtest/src/components/ui/textarea.tsx +++ /dev/null @@ -1,24 +0,0 @@ -import * as React from 'react' - -import { cn } from '@/lib/utils' - -export interface TextareaProps - extends React.TextareaHTMLAttributes {} - -const Textarea = React.forwardRef( - ({ className, ...props }, ref) => { - return ( -