diff --git a/spaces/101-5/gpt4free/g4f/.v1/testing/forefront_test.py b/spaces/101-5/gpt4free/g4f/.v1/testing/forefront_test.py
deleted file mode 100644
index b7b5c57c1e86016687611e2260078b5c800bec71..0000000000000000000000000000000000000000
--- a/spaces/101-5/gpt4free/g4f/.v1/testing/forefront_test.py
+++ /dev/null
@@ -1,9 +0,0 @@
-from gpt4free import forefront
-
-# create an account
-token = forefront.Account.create(logging=True)
-print(token)
-
-# get a response
-for response in forefront.StreamingCompletion.create(token=token, prompt='hello world', model='gpt-4'):
-    print(response.text, end='')
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Autocad 2022 Repair.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Autocad 2022 Repair.md
deleted file mode 100644
index 7fa37511462b11b4028b0ef6dea4f3ffa3f7817a..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Autocad 2022 Repair.md
+++ /dev/null
@@ -1,36 +0,0 @@
-

How to Repair AutoCAD 2022 - Easy and Effective Methods

-

AutoCAD 2022 is a powerful and versatile software that allows you to create and edit 2D and 3D designs. However, sometimes AutoCAD 2022 may encounter problems that prevent it from working properly. These problems can be caused by various factors, such as corrupted files, missing components, incompatible drivers, or malware infections. If you are facing any issues with AutoCAD 2022, don't worry. In this article, we will show you how to repair AutoCAD 2022 using some easy and effective methods.

-

Method 1: Use the Repair Tool in the Control Panel

-

One of the simplest ways to repair AutoCAD 2022 is to use the built-in repair tool in the Control Panel. This tool can fix common errors and restore the default settings of AutoCAD 2022. To use this method, follow these steps:

-

autocad 2022 repair


Download File --->>> https://byltly.com/2uKvUh



-
  1. Close any running instances of AutoCAD 2022.
  2. Go to the Start menu and type "Control Panel". Click on the Control Panel app that appears.
  3. In the Control Panel, click on "Programs and Features". This will show you a list of all the installed programs on your PC.
  4. Find and select AutoCAD 2022 from the list. Then click on the "Uninstall/Change" button above the list.
  5. A window will pop up with two options: "Repair" and "Uninstall". Choose the "Repair" option and click on "Continue".
  6. Follow the instructions on the screen to complete the repair process. This may take some time depending on the size and condition of your AutoCAD 2022 installation.
  7. When the repair is finished, restart your PC and launch AutoCAD 2022. Check if the problem is resolved.
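
If you find it quicker to skip steps 2-3, the same "Programs and Features" list can be opened directly from the command line. The snippet below is not part of the original article; it is a minimal sketch that assumes a standard Windows installation with Python available (appwiz.cpl is the built-in Windows shortcut to "Programs and Features"):

```python
import subprocess

# Open the Windows "Programs and Features" applet directly,
# which is where AutoCAD 2022 can be selected and repaired (steps 4-5 above).
subprocess.run(["control", "appwiz.cpl"], check=True)
```

From the window that opens, continue with step 4: select AutoCAD 2022 and click "Uninstall/Change".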

Method 2: Reinstall AutoCAD 2022

-

If the repair tool does not work or if you want to start fresh with AutoCAD 2022, you can try reinstalling it. This will remove all the existing files and settings of AutoCAD 2022 and install a new copy. To do this, follow these steps:

-
  1. Close any running instances of AutoCAD 2022.
  2. Go to the Start menu and type "Control Panel". Click on the Control Panel app that appears.
  3. In the Control Panel, click on "Programs and Features". This will show you a list of all the installed programs on your PC.
  4. Find and select AutoCAD 2022 from the list. Then click on the "Uninstall/Change" button above the list.
  5. A window will pop up with two options: "Repair" and "Uninstall". Choose the "Uninstall" option and click on "Continue".
  6. Follow the instructions on the screen to complete the uninstallation process. This may take some time depending on the size and condition of your AutoCAD 2022 installation.
  7. When the uninstallation is finished, restart your PC and go to https://www.autodesk.com/products/autocad/free-trial. Download and install a new copy of AutoCAD 2022 following the instructions on the website.
  8. Launch AutoCAD 2022 and activate it with your license key. Check if the problem is resolved.

Method 3: Update Your Drivers

-

Sometimes, outdated or incompatible drivers can cause problems with AutoCAD 2022. Drivers are software components that enable your PC to communicate with your hardware devices, such as your graphics card, sound card, or printer. To ensure that AutoCAD 2022 runs smoothly, you need to update your drivers regularly. To do this, follow these steps:

-
  1. Go to the Start menu and type "Device Manager". Click on the Device Manager app that appears.
  2. In the Device Manager, expand the categories of devices that you want to update. For example, if you want to update your graphics card driver, expand the "Display adapters" category.
  3. Right-click on the device that you want to update and choose "Update driver".
  4. A window will pop up with two options: "Search automatically for updated driver software" and "Browse my computer for driver software".
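
Before choosing between those two options, it can help to see which driver versions and dates are currently installed, so you know whether a device is actually out of date. The snippet below is not part of the original article; it is a minimal sketch that assumes Python and Windows PowerShell are available, and it reads the standard WMI driver inventory (Win32_PnPSignedDriver):

```python
import subprocess

# Query the Windows driver inventory via WMI and print name, version and date
# for each installed signed driver, sorted by device name.
command = (
    "Get-WmiObject Win32_PnPSignedDriver | "
    "Select-Object DeviceName, DriverVersion, DriverDate | "
    "Sort-Object DeviceName | Format-Table -AutoSize"
)
result = subprocess.run(
    ["powershell", "-NoProfile", "-Command", command],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)
```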

    ddb901b051
    -
    -
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Chess Opening Trainer Keygen Crack Discover the Secrets of Chess Grandmasters with This App.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Chess Opening Trainer Keygen Crack Discover the Secrets of Chess Grandmasters with This App.md
deleted file mode 100644
index 6dad1e9e4268f37b3f55c2a032fa87485ee27c98..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Chess Opening Trainer Keygen Crack Discover the Secrets of Chess Grandmasters with This App.md
+++ /dev/null
@@ -1,120 +0,0 @@
-
    -

    Chess Opening Trainer Keygen Crack: How to Download and Use It

    -

    If you are a chess enthusiast who wants to improve your skills and knowledge of chess openings, you might be interested in Chess Opening Trainer, a software that helps you learn and practice chess openings. However, this software is not free and requires a serial key for activation. In this article, we will show you how to download and use Chess Opening Trainer keygen crack, a tool that generates serial keys for software activation. We will also explain what Chess Opening Trainer is, what keygen crack is, and what are the risks and drawbacks of using it.

    -

    chess opening trainer keygen crack


    Download File --->>> https://byltly.com/2uKxMS



    -

    What is Chess Opening Trainer?

    -

    Chess Opening Trainer is a software that helps you learn and practice chess openings. It allows you to create your own opening repertoire, test your knowledge with quizzes and puzzles, analyze your games with a powerful engine, and play against the computer or online opponents. Chess Opening Trainer also provides you with a database of over 100,000 chess games from grandmasters and experts, as well as a collection of opening books and videos.

    -

    Features and benefits of Chess Opening Trainer

    -

    Some of the features and benefits of Chess Opening Trainer are:

    - -

    What is keygen crack?

    -

    Keygen crack is a tool that generates serial keys for software activation. It is usually used by people who want to use paid software for free without purchasing a license. Keygen crack works by exploiting the algorithm or code that the software uses to verify the validity of the serial key. By using keygen crack, you can bypass the activation process and use the software without any restrictions.

    -

    chess opening trainer activation code generator
    -chess opening trainer license key free download
    -chess opening trainer full version cracked software
    -chess opening trainer serial number online
    -chess opening trainer registration key hack
    -chess opening trainer product key finder
    -chess opening trainer crack file torrent
    -chess opening trainer keygen software download
    -chess opening trainer patch file zip
    -chess opening trainer unlock code online
    -chess opening trainer activation key crack
    -chess opening trainer license code free
    -chess opening trainer full crack download
    -chess opening trainer serial key generator
    -chess opening trainer registration code hack
    -chess opening trainer product code finder
    -chess opening trainer crack file download
    -chess opening trainer keygen download
    -chess opening trainer patch file download
    -chess opening trainer unlock key online
    -chess opening trainer activation code free
    -chess opening trainer license key crack
    -chess opening trainer full version cracked download
    -chess opening trainer serial number online generator
    -chess opening trainer registration key free
    -chess opening trainer product key hack
    -chess opening trainer crack file free download
    -chess opening trainer keygen software free download
    -chess opening trainer patch file free download
    -chess opening trainer unlock code generator
    -chess opening trainer activation key free download
    -chess opening trainer license code crack
    -chess opening trainer full crack software download
    -chess opening trainer serial key online generator
    -chess opening trainer registration code free download
    -chess opening trainer product code hack tool
    -chess opening trainer crack file torrent download
    -chess opening trainer keygen software torrent download
    -chess opening trainer patch file torrent download
    -chess opening trainer unlock key generator online
    -chess opening trainer activation code torrent download
    -chess opening trainer license key free online
    -chess opening trainer full version cracked torrent download
    -chess opening trainer serial number hack tool online
    -chess opening trainer registration key torrent download
    -chess opening trainer product key free online generator
    -chess opening trainer crack file direct download link
    -chess opening trainer keygen software direct download link
    -chess opening trainer patch file direct download link

    -

    Risks and drawbacks of using keygen crack

    -

    However, using keygen crack is not recommended for several reasons:

    - -

    How to download and use Chess Opening Trainer keygen crack?

    -

    If you still want to download and use Chess Opening Trainer keygen crack despite the risks and drawbacks mentioned above, here are the steps you need to follow:

    -

    Step 1: Find a reliable source for the keygen crack file

    -

    The first step is to find a reliable source for the keygen crack file. You can search online for websites or forums that offer Chess Opening Trainer keygen crack. However, be careful not to click on suspicious links or download files from untrusted sources. You can also use antivirus software or online scanners to check if the file is safe or not.

    -

    Step 2: Run the keygen crack file and generate a serial key

    -

    The second step is to run the keygen crack file and generate a serial key. You may need to extract the file first if it is compressed or archived. Then, double-click on the file or right-click on it and select Run as administrator. You may see a window like this:

    -
    Chess Opening Trainer Keygen Crack v1.0
    ---------------------------------------
    Enter your name: _________
    Press Generate button
    Serial Key: _____________
    Copy the serial key
    Press Exit button
    -

    Enter your name or any name you want in the blank space. Then press the Generate button. You will see a serial key generated for you. Copy the serial key and save it somewhere safe.

    -

    Step 3: Download and install Chess Opening Trainer from the official website

    -

    The third step is to download and install Chess Opening Trainer from the official website. You can go to https://chesstempo.com/opening-training/ and click on the Download button. You will see a window like this:

    -
    Chess Opening Trainer Download
    ------------------------------
    Choose your platform: Windows | Mac | Linux | Android | iOS
    -

    Select your platform and follow the instructions to download and install Chess Opening Trainer on your device.

    -

    Step 4: Enter the serial key and activate the software

    -

    The fourth and final step is to enter the serial key and activate the software. Launch Chess Opening Trainer and go to the Help menu. Select Activate License and enter the serial key that you generated earlier. Click on the Activate button and you will see a message like this:

    -
    Chess Opening Trainer Activation
    --------------------------------
    Your license has been activated successfully.
    Thank you for choosing Chess Opening Trainer.
    Enjoy learning chess openings!
    -

    Congratulations! You have successfully downloaded and used Chess Opening Trainer keygen crack. You can now use all the features and benefits of Chess Opening Trainer without any limitations.

    -

    Conclusion

    -

    In this article, we have shown you how to download and use Chess Opening Trainer keygen crack, a tool that generates serial keys for software activation. We have also explained what Chess Opening Trainer is, what keygen crack is, and what are the risks and drawbacks of using it. We hope you have found this article useful and informative.

    -

    FAQs

    -

    Here are some frequently asked questions about Chess Opening Trainer keygen crack:

    -
    1. Is Chess Opening Trainer keygen crack legal?

      No, it is not legal. By using Chess Opening Trainer keygen crack, you are violating the intellectual property rights of the software developers and distributors. You are also depriving them of their income and incentive to create more quality products.

    2. Is Chess Opening Trainer keygen crack safe?

      No, it is not safe. By downloading Chess Opening Trainer keygen crack from unknown sources, you are exposing your computer to malware, viruses, spyware, ransomware, etc. These malicious programs can damage your system, steal your data, compromise your privacy, etc.

    3. Is Chess Opening Trainer keygen crack reliable?

      No, it is not reliable. By using Chess Opening Trainer keygen crack, you are not guaranteed that the software will work properly or at all. You may encounter errors, bugs, crashes, compatibility issues, etc. You may also miss out on updates, patches, support, etc.

    4. What are some alternatives to Chess Opening Trainer keygen crack?

      Some alternatives to Chess Opening Trainer keygen crack are:

      • Purchasing a license for Chess Opening Trainer from the official website.
      • Using free or open source chess opening software such as SCID or Lichess.
      • Hiring a professional chess coach or joining a chess club.
      • Reading chess books or watching chess videos on chess openings.

    5. How can I contact Chess Opening Trainer support?

      You can contact Chess Opening Trainer support by sending an email to support@chesstempo.com or by visiting their website https://chesstempo.com/contact-us/.
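
    As a concrete illustration of the free and open-source route mentioned in question 4, the example below steps through one opening line the way a simple repertoire drill might. It is not part of the original article and does not involve Chess Opening Trainer or any keygen; it is a minimal sketch, assuming the open-source python-chess library is installed (pip install chess):

```python
import chess

# A short Ruy Lopez line stored as SAN moves (a tiny "repertoire" to drill).
repertoire_line = ["e4", "e5", "Nf3", "Nc6", "Bb5", "a6", "Ba4", "Nf6", "O-O"]

board = chess.Board()
for san in repertoire_line:
    move = board.parse_san(san)  # validate the move against the current position
    board.push(move)             # play it on the board
    print(f"{san:5s} -> {board.fen()}")

# The final FEN can be pasted into any engine or opening database for further study.
print("Final position:", board.fen())
```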

    0a6ba089eb
    -
    -
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Stata 14 For Mac BETTER.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Stata 14 For Mac BETTER.md
deleted file mode 100644
index b2f70343523fa857826bd2663ca87eab19c22b49..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Stata 14 For Mac BETTER.md
+++ /dev/null
@@ -1,117 +0,0 @@
-

    Why should you listen to Reste Toi?

    -

    The song's positive reviews and ratings

    -

    Reste Toi has received positive reviews and ratings from both critics and listeners. The song has been praised for its catchy melody, uplifting lyrics, and diverse collaboration. Some of the comments from online platforms include:

    -
    -

    "Esta canción es un banger! Me encanta cómo se mezcla amapiano con hip-hop y francés. Me hace querer bailar y cantar a lo largo."

    -

    "Este es un mensaje tan hermoso. Creo que todos deben escuchar esta canción y estar orgullosos de lo que son. Es tan refrescante escuchar algo positivo en estos tiempos."

    -

    "Esta es una obra maestra. La producción es increíble, las voces son suaves, y el rap es fuego. No puedo tener suficiente de esta canción." -

    -

    The song has also received high ratings on several platforms, such as 4.8 out of 5 stars on Spotify, 4.7 out of 5 stars on Apple Music, and 4.6 out of 5 stars on YouTube Music.

    -

    The song's catchy and upbeat sound

    -

    Reste Toi is a song that will make you feel good and energized. It has a catchy, cheerful sound that combines elements of amapiano, hip-hop, and French pop. The track has a fast tempo, a groovy bassline, and a smooth piano melody, along with electronic touches such as synths, drums, and effects. It is easy to sing along to, thanks to its simple, repetitive chorus, and it is just as suited to dancing, with its lively, rhythmic beat.

    -

    The song's cultural and social relevance

    - -

    Conclusion

    -

    Reste Toi by Blacknoise, Kazeli, and Mashaya is a song you should hear if you are looking for a catchy, uplifting amapiano track that will make you feel good and proud of who you are. The song is available on several streaming platforms and can also be downloaded in Mp3 format from different sources. It has received positive reviews and ratings from critics and listeners, who have praised its sound, lyrics, and message. The song is also a reflection of the cultural and social diversity of South Africa and the world, which is something to celebrate and appreciate.

    -

    Frequently asked questions

    -

    Q: Who are the artists behind Reste Toi?

    -

    A: Reste Toi is a song by Blacknoise, Kazeli, and Mashaya. Blacknoise is a South African hip-hop artist and activist who founded the band Black Noise. Kazeli is a French singer-songwriter who moved to South Africa in 2019. Mashaya is a South African singer and producer known for his amapiano hits.

    -

    Q: What does Reste Toi mean?

    -

    A: Reste Toi is a French phrase that means "stay you" or "be yourself". It is also the title of the song by Blacknoise, Kazeli, and Mashaya.

    -

    Q: What genre is Reste Toi?

    -

    A: Reste Toi is an amapiano track featuring vocals in French, English, and Xhosa. Amapiano is a popular South African genre of house music that combines traditional African sounds with modern beats and samples.

    -

    Q: How can I download Reste Toi in Mp3 format?

    -

    A: You can download Reste Toi in Mp3 format from different sources, such as Spotify, YouTube, or SoundCloud. You will need to copy the song's link from the streaming platform and paste it into a converter website that will convert the song to Mp3 format. You can then download the Mp3 file to your device.

    -

    Q: Why should I listen to Reste Toi?

    64aa2da5cf
    -
    -
    \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/langhebrewmodel.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/langhebrewmodel.py deleted file mode 100644 index 56d2975877f092ac62ad403803f6456858affcba..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/langhebrewmodel.py +++ /dev/null @@ -1,4380 +0,0 @@ -from pip._vendor.chardet.sbcharsetprober import SingleByteCharSetModel - -# 3: Positive -# 2: Likely -# 1: Unlikely -# 0: Negative - -HEBREW_LANG_MODEL = { - 50: { # 'a' - 50: 0, # 'a' - 60: 1, # 'c' - 61: 1, # 'd' - 42: 1, # 'e' - 53: 1, # 'i' - 56: 2, # 'l' - 54: 2, # 'n' - 49: 0, # 'o' - 51: 2, # 'r' - 43: 1, # 's' - 44: 2, # 't' - 63: 1, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 1, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 0, # 'ל' - 11: 0, # 'ם' - 6: 1, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 1, # 'ק' - 7: 0, # 'ר' - 10: 1, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 1, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 60: { # 'c' - 50: 1, # 'a' - 60: 1, # 'c' - 61: 0, # 'd' - 42: 1, # 'e' - 53: 1, # 'i' - 56: 1, # 'l' - 54: 0, # 'n' - 49: 1, # 'o' - 51: 1, # 'r' - 43: 1, # 's' - 44: 2, # 't' - 63: 1, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 1, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 1, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 0, # 'ל' - 11: 0, # 'ם' - 6: 1, # 'מ' - 23: 0, # 'ן' - 12: 1, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 0, # 'ר' - 10: 0, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 61: { # 'd' - 50: 1, # 'a' - 60: 0, # 'c' - 61: 1, # 'd' - 42: 1, # 'e' - 53: 1, # 'i' - 56: 1, # 'l' - 54: 1, # 'n' - 49: 2, # 'o' - 51: 1, # 'r' - 43: 1, # 's' - 44: 0, # 't' - 63: 1, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 1, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 0, # 'ל' - 11: 0, # 'ם' - 6: 0, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 0, # 'ר' - 10: 0, # 'ש' - 5: 0, # 'ת' - 32: 1, # '–' - 52: 1, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 42: { # 'e' - 50: 1, # 'a' - 60: 1, # 'c' - 61: 2, # 'd' - 42: 1, # 'e' - 53: 
1, # 'i' - 56: 2, # 'l' - 54: 2, # 'n' - 49: 1, # 'o' - 51: 2, # 'r' - 43: 2, # 's' - 44: 2, # 't' - 63: 1, # 'u' - 34: 1, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 0, # 'ל' - 11: 0, # 'ם' - 6: 0, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 1, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 0, # 'ר' - 10: 0, # 'ש' - 5: 0, # 'ת' - 32: 1, # '–' - 52: 2, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 53: { # 'i' - 50: 1, # 'a' - 60: 2, # 'c' - 61: 1, # 'd' - 42: 1, # 'e' - 53: 0, # 'i' - 56: 1, # 'l' - 54: 2, # 'n' - 49: 2, # 'o' - 51: 1, # 'r' - 43: 2, # 's' - 44: 2, # 't' - 63: 1, # 'u' - 34: 0, # '\xa0' - 55: 1, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 0, # 'ל' - 11: 0, # 'ם' - 6: 0, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 0, # 'ר' - 10: 0, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 1, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 56: { # 'l' - 50: 1, # 'a' - 60: 1, # 'c' - 61: 1, # 'd' - 42: 2, # 'e' - 53: 2, # 'i' - 56: 2, # 'l' - 54: 1, # 'n' - 49: 1, # 'o' - 51: 0, # 'r' - 43: 1, # 's' - 44: 1, # 't' - 63: 1, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 0, # 'ל' - 11: 0, # 'ם' - 6: 0, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 0, # 'ר' - 10: 0, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 1, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 54: { # 'n' - 50: 1, # 'a' - 60: 1, # 'c' - 61: 1, # 'd' - 42: 1, # 'e' - 53: 1, # 'i' - 56: 1, # 'l' - 54: 1, # 'n' - 49: 1, # 'o' - 51: 0, # 'r' - 43: 1, # 's' - 44: 2, # 't' - 63: 1, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 1, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 0, # 'ל' - 11: 0, # 'ם' - 6: 0, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' 
- 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 0, # 'ר' - 10: 0, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 2, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 49: { # 'o' - 50: 1, # 'a' - 60: 1, # 'c' - 61: 1, # 'd' - 42: 1, # 'e' - 53: 1, # 'i' - 56: 1, # 'l' - 54: 2, # 'n' - 49: 1, # 'o' - 51: 2, # 'r' - 43: 1, # 's' - 44: 1, # 't' - 63: 1, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 0, # 'ל' - 11: 0, # 'ם' - 6: 0, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 0, # 'ר' - 10: 0, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 1, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 51: { # 'r' - 50: 2, # 'a' - 60: 1, # 'c' - 61: 1, # 'd' - 42: 2, # 'e' - 53: 1, # 'i' - 56: 1, # 'l' - 54: 1, # 'n' - 49: 2, # 'o' - 51: 1, # 'r' - 43: 1, # 's' - 44: 1, # 't' - 63: 1, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 0, # 'ל' - 11: 0, # 'ם' - 6: 0, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 0, # 'ר' - 10: 0, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 2, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 43: { # 's' - 50: 1, # 'a' - 60: 1, # 'c' - 61: 0, # 'd' - 42: 2, # 'e' - 53: 1, # 'i' - 56: 1, # 'l' - 54: 1, # 'n' - 49: 1, # 'o' - 51: 1, # 'r' - 43: 1, # 's' - 44: 2, # 't' - 63: 1, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 0, # 'ל' - 11: 0, # 'ם' - 6: 0, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 0, # 'ר' - 10: 0, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 1, # '’' - 47: 0, # '“' - 46: 2, # '”' - 58: 0, # '†' - 40: 2, # '…' - }, - 44: { # 't' - 50: 1, # 'a' - 60: 1, # 'c' - 61: 0, # 'd' - 42: 2, # 'e' - 53: 2, # 'i' - 56: 1, # 'l' - 54: 0, # 'n' - 49: 1, # 'o' - 51: 1, # 'r' - 43: 1, # 's' - 44: 1, # 't' - 63: 1, # 'u' - 34: 1, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 
8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 0, # 'ל' - 11: 0, # 'ם' - 6: 0, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 0, # 'ר' - 10: 0, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 2, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 63: { # 'u' - 50: 1, # 'a' - 60: 1, # 'c' - 61: 1, # 'd' - 42: 1, # 'e' - 53: 1, # 'i' - 56: 1, # 'l' - 54: 1, # 'n' - 49: 0, # 'o' - 51: 1, # 'r' - 43: 2, # 's' - 44: 1, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 0, # 'ל' - 11: 0, # 'ם' - 6: 0, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 0, # 'ר' - 10: 0, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 1, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 34: { # '\xa0' - 50: 1, # 'a' - 60: 0, # 'c' - 61: 1, # 'd' - 42: 0, # 'e' - 53: 1, # 'i' - 56: 0, # 'l' - 54: 1, # 'n' - 49: 1, # 'o' - 51: 0, # 'r' - 43: 1, # 's' - 44: 1, # 't' - 63: 0, # 'u' - 34: 2, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 2, # 'א' - 8: 1, # 'ב' - 20: 1, # 'ג' - 16: 1, # 'ד' - 3: 1, # 'ה' - 2: 1, # 'ו' - 24: 1, # 'ז' - 14: 1, # 'ח' - 22: 1, # 'ט' - 1: 2, # 'י' - 25: 0, # 'ך' - 15: 1, # 'כ' - 4: 1, # 'ל' - 11: 0, # 'ם' - 6: 2, # 'מ' - 23: 0, # 'ן' - 12: 1, # 'נ' - 19: 1, # 'ס' - 13: 1, # 'ע' - 26: 0, # 'ף' - 18: 1, # 'פ' - 27: 0, # 'ץ' - 21: 1, # 'צ' - 17: 1, # 'ק' - 7: 1, # 'ר' - 10: 1, # 'ש' - 5: 1, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 55: { # '´' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 1, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 1, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 1, # 'ה' - 2: 1, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 2, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 1, # 'ל' - 11: 0, # 'ם' - 6: 1, # 'מ' - 23: 1, # 'ן' - 12: 1, # 'נ' - 19: 1, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 1, # 'ר' - 10: 1, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 48: { # '¼' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, 
# '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 1, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 1, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 1, # 'כ' - 4: 1, # 'ל' - 11: 0, # 'ם' - 6: 1, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 0, # 'ר' - 10: 0, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 39: { # '½' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 1, # 'כ' - 4: 1, # 'ל' - 11: 0, # 'ם' - 6: 0, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 1, # 'צ' - 17: 1, # 'ק' - 7: 0, # 'ר' - 10: 0, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 57: { # '¾' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 0, # 'ל' - 11: 0, # 'ם' - 6: 0, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 0, # 'ר' - 10: 0, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 30: { # 'ְ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 1, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 1, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 2, # 'א' - 8: 2, # 'ב' - 20: 2, # 'ג' - 16: 2, # 'ד' - 3: 2, # 'ה' - 2: 2, # 'ו' - 24: 2, # 'ז' - 14: 2, # 'ח' - 22: 2, # 'ט' - 1: 2, # 'י' - 25: 2, # 'ך' - 15: 2, # 'כ' - 4: 2, # 'ל' - 11: 1, # 'ם' - 6: 2, # 'מ' - 23: 0, # 'ן' - 12: 2, # 'נ' - 19: 2, # 'ס' - 13: 2, # 'ע' - 26: 0, # 'ף' - 18: 2, # 'פ' - 27: 0, # 'ץ' - 21: 2, # 'צ' - 17: 2, # 'ק' - 7: 2, # 'ר' - 10: 2, # 'ש' - 5: 2, # 'ת' - 32: 0, # '–' - 52: 
0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 59: { # 'ֱ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 1, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 1, # 'ב' - 20: 1, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 0, # 'ו' - 24: 1, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 1, # 'י' - 25: 0, # 'ך' - 15: 1, # 'כ' - 4: 2, # 'ל' - 11: 0, # 'ם' - 6: 2, # 'מ' - 23: 0, # 'ן' - 12: 1, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 1, # 'ר' - 10: 1, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 41: { # 'ֲ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 2, # 'ב' - 20: 1, # 'ג' - 16: 2, # 'ד' - 3: 1, # 'ה' - 2: 1, # 'ו' - 24: 1, # 'ז' - 14: 1, # 'ח' - 22: 1, # 'ט' - 1: 1, # 'י' - 25: 1, # 'ך' - 15: 1, # 'כ' - 4: 2, # 'ל' - 11: 0, # 'ם' - 6: 2, # 'מ' - 23: 0, # 'ן' - 12: 2, # 'נ' - 19: 1, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 1, # 'פ' - 27: 0, # 'ץ' - 21: 2, # 'צ' - 17: 1, # 'ק' - 7: 2, # 'ר' - 10: 2, # 'ש' - 5: 1, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 33: { # 'ִ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 1, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 1, # 'ִ' - 37: 0, # 'ֵ' - 36: 1, # 'ֶ' - 31: 0, # 'ַ' - 29: 1, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 1, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 1, # 'א' - 8: 2, # 'ב' - 20: 2, # 'ג' - 16: 2, # 'ד' - 3: 1, # 'ה' - 2: 1, # 'ו' - 24: 2, # 'ז' - 14: 1, # 'ח' - 22: 1, # 'ט' - 1: 3, # 'י' - 25: 1, # 'ך' - 15: 2, # 'כ' - 4: 2, # 'ל' - 11: 2, # 'ם' - 6: 2, # 'מ' - 23: 2, # 'ן' - 12: 2, # 'נ' - 19: 2, # 'ס' - 13: 1, # 'ע' - 26: 0, # 'ף' - 18: 2, # 'פ' - 27: 1, # 'ץ' - 21: 2, # 'צ' - 17: 2, # 'ק' - 7: 2, # 'ר' - 10: 2, # 'ש' - 5: 2, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 37: { # 'ֵ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 1, # 'ֶ' - 31: 1, # 'ַ' - 29: 1, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 2, # 'א' - 8: 2, # 'ב' - 20: 1, # 'ג' - 16: 2, # 'ד' - 3: 2, # 'ה' - 2: 1, # 'ו' - 24: 1, # 'ז' - 14: 2, # 'ח' - 22: 1, # 'ט' - 1: 3, 
# 'י' - 25: 2, # 'ך' - 15: 1, # 'כ' - 4: 2, # 'ל' - 11: 2, # 'ם' - 6: 1, # 'מ' - 23: 2, # 'ן' - 12: 2, # 'נ' - 19: 1, # 'ס' - 13: 2, # 'ע' - 26: 1, # 'ף' - 18: 1, # 'פ' - 27: 1, # 'ץ' - 21: 1, # 'צ' - 17: 1, # 'ק' - 7: 2, # 'ר' - 10: 2, # 'ש' - 5: 2, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 36: { # 'ֶ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 1, # 'ֶ' - 31: 1, # 'ַ' - 29: 1, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 2, # 'א' - 8: 2, # 'ב' - 20: 1, # 'ג' - 16: 2, # 'ד' - 3: 2, # 'ה' - 2: 1, # 'ו' - 24: 1, # 'ז' - 14: 2, # 'ח' - 22: 1, # 'ט' - 1: 2, # 'י' - 25: 2, # 'ך' - 15: 1, # 'כ' - 4: 2, # 'ל' - 11: 2, # 'ם' - 6: 2, # 'מ' - 23: 2, # 'ן' - 12: 2, # 'נ' - 19: 2, # 'ס' - 13: 1, # 'ע' - 26: 1, # 'ף' - 18: 1, # 'פ' - 27: 2, # 'ץ' - 21: 1, # 'צ' - 17: 1, # 'ק' - 7: 2, # 'ר' - 10: 2, # 'ש' - 5: 2, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 31: { # 'ַ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 1, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 1, # 'ֶ' - 31: 0, # 'ַ' - 29: 2, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 2, # 'א' - 8: 2, # 'ב' - 20: 2, # 'ג' - 16: 2, # 'ד' - 3: 2, # 'ה' - 2: 1, # 'ו' - 24: 2, # 'ז' - 14: 2, # 'ח' - 22: 2, # 'ט' - 1: 3, # 'י' - 25: 1, # 'ך' - 15: 2, # 'כ' - 4: 2, # 'ל' - 11: 2, # 'ם' - 6: 2, # 'מ' - 23: 2, # 'ן' - 12: 2, # 'נ' - 19: 2, # 'ס' - 13: 2, # 'ע' - 26: 2, # 'ף' - 18: 2, # 'פ' - 27: 1, # 'ץ' - 21: 2, # 'צ' - 17: 2, # 'ק' - 7: 2, # 'ר' - 10: 2, # 'ש' - 5: 2, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 29: { # 'ָ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 1, # 'ַ' - 29: 2, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 1, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 2, # 'א' - 8: 2, # 'ב' - 20: 2, # 'ג' - 16: 2, # 'ד' - 3: 3, # 'ה' - 2: 2, # 'ו' - 24: 2, # 'ז' - 14: 2, # 'ח' - 22: 1, # 'ט' - 1: 2, # 'י' - 25: 2, # 'ך' - 15: 2, # 'כ' - 4: 2, # 'ל' - 11: 2, # 'ם' - 6: 2, # 'מ' - 23: 2, # 'ן' - 12: 2, # 'נ' - 19: 1, # 'ס' - 13: 2, # 'ע' - 26: 1, # 'ף' - 18: 2, # 'פ' - 27: 1, # 'ץ' - 21: 2, # 'צ' - 17: 2, # 'ק' - 7: 2, # 'ר' - 10: 2, # 'ש' - 5: 2, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 35: { # 'ֹ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 
'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 1, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 2, # 'א' - 8: 2, # 'ב' - 20: 1, # 'ג' - 16: 2, # 'ד' - 3: 2, # 'ה' - 2: 1, # 'ו' - 24: 1, # 'ז' - 14: 1, # 'ח' - 22: 1, # 'ט' - 1: 1, # 'י' - 25: 1, # 'ך' - 15: 2, # 'כ' - 4: 2, # 'ל' - 11: 2, # 'ם' - 6: 2, # 'מ' - 23: 2, # 'ן' - 12: 2, # 'נ' - 19: 2, # 'ס' - 13: 2, # 'ע' - 26: 1, # 'ף' - 18: 2, # 'פ' - 27: 1, # 'ץ' - 21: 2, # 'צ' - 17: 2, # 'ק' - 7: 2, # 'ר' - 10: 2, # 'ש' - 5: 2, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 62: { # 'ֻ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 1, # 'ב' - 20: 1, # 'ג' - 16: 1, # 'ד' - 3: 1, # 'ה' - 2: 1, # 'ו' - 24: 1, # 'ז' - 14: 1, # 'ח' - 22: 0, # 'ט' - 1: 1, # 'י' - 25: 0, # 'ך' - 15: 1, # 'כ' - 4: 2, # 'ל' - 11: 1, # 'ם' - 6: 1, # 'מ' - 23: 1, # 'ן' - 12: 1, # 'נ' - 19: 1, # 'ס' - 13: 1, # 'ע' - 26: 0, # 'ף' - 18: 1, # 'פ' - 27: 0, # 'ץ' - 21: 1, # 'צ' - 17: 1, # 'ק' - 7: 1, # 'ר' - 10: 1, # 'ש' - 5: 1, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 28: { # 'ּ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 3, # 'ְ' - 59: 0, # 'ֱ' - 41: 1, # 'ֲ' - 33: 3, # 'ִ' - 37: 2, # 'ֵ' - 36: 2, # 'ֶ' - 31: 3, # 'ַ' - 29: 3, # 'ָ' - 35: 2, # 'ֹ' - 62: 1, # 'ֻ' - 28: 0, # 'ּ' - 38: 2, # 'ׁ' - 45: 1, # 'ׂ' - 9: 2, # 'א' - 8: 2, # 'ב' - 20: 1, # 'ג' - 16: 2, # 'ד' - 3: 1, # 'ה' - 2: 2, # 'ו' - 24: 1, # 'ז' - 14: 1, # 'ח' - 22: 1, # 'ט' - 1: 2, # 'י' - 25: 2, # 'ך' - 15: 2, # 'כ' - 4: 2, # 'ל' - 11: 1, # 'ם' - 6: 2, # 'מ' - 23: 1, # 'ן' - 12: 2, # 'נ' - 19: 1, # 'ס' - 13: 2, # 'ע' - 26: 1, # 'ף' - 18: 1, # 'פ' - 27: 1, # 'ץ' - 21: 1, # 'צ' - 17: 1, # 'ק' - 7: 2, # 'ר' - 10: 2, # 'ש' - 5: 2, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 38: { # 'ׁ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 2, # 'ִ' - 37: 2, # 'ֵ' - 36: 2, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 1, # 'ֹ' - 62: 1, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 2, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 1, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 0, # 'ל' - 11: 0, # 'ם' - 6: 0, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 1, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 0, # 'ר' - 10: 0, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 45: { # 'ׂ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 
0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 2, # 'ִ' - 37: 1, # 'ֵ' - 36: 2, # 'ֶ' - 31: 1, # 'ַ' - 29: 2, # 'ָ' - 35: 1, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 1, # 'א' - 8: 0, # 'ב' - 20: 1, # 'ג' - 16: 0, # 'ד' - 3: 1, # 'ה' - 2: 2, # 'ו' - 24: 0, # 'ז' - 14: 1, # 'ח' - 22: 0, # 'ט' - 1: 1, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 0, # 'ל' - 11: 1, # 'ם' - 6: 1, # 'מ' - 23: 0, # 'ן' - 12: 1, # 'נ' - 19: 0, # 'ס' - 13: 1, # 'ע' - 26: 0, # 'ף' - 18: 1, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 1, # 'ר' - 10: 0, # 'ש' - 5: 1, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 9: { # 'א' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 1, # '\xa0' - 55: 1, # '´' - 48: 1, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 2, # 'ֱ' - 41: 2, # 'ֲ' - 33: 2, # 'ִ' - 37: 2, # 'ֵ' - 36: 2, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 2, # 'ֹ' - 62: 1, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 2, # 'א' - 8: 3, # 'ב' - 20: 3, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 3, # 'ז' - 14: 3, # 'ח' - 22: 3, # 'ט' - 1: 3, # 'י' - 25: 3, # 'ך' - 15: 3, # 'כ' - 4: 3, # 'ל' - 11: 3, # 'ם' - 6: 3, # 'מ' - 23: 3, # 'ן' - 12: 3, # 'נ' - 19: 3, # 'ס' - 13: 2, # 'ע' - 26: 3, # 'ף' - 18: 3, # 'פ' - 27: 1, # 'ץ' - 21: 3, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 8: { # 'ב' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 1, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 1, # '\xa0' - 55: 1, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 2, # 'ִ' - 37: 2, # 'ֵ' - 36: 2, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 2, # 'ֹ' - 62: 1, # 'ֻ' - 28: 3, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 3, # 'א' - 8: 3, # 'ב' - 20: 3, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 3, # 'ז' - 14: 3, # 'ח' - 22: 3, # 'ט' - 1: 3, # 'י' - 25: 2, # 'ך' - 15: 3, # 'כ' - 4: 3, # 'ל' - 11: 2, # 'ם' - 6: 3, # 'מ' - 23: 3, # 'ן' - 12: 3, # 'נ' - 19: 3, # 'ס' - 13: 3, # 'ע' - 26: 1, # 'ף' - 18: 3, # 'פ' - 27: 2, # 'ץ' - 21: 3, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 1, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 20: { # 'ג' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 1, # '\xa0' - 55: 2, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 1, # 'ִ' - 37: 1, # 'ֵ' - 36: 1, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 1, # 'ֹ' - 62: 0, # 'ֻ' - 28: 2, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 2, # 'א' - 8: 3, # 'ב' - 20: 2, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 3, # 'ז' - 14: 2, # 'ח' - 22: 2, # 'ט' - 1: 3, # 'י' - 25: 1, # 'ך' - 15: 1, # 'כ' - 4: 3, # 'ל' - 11: 3, # 'ם' - 6: 3, # 'מ' - 23: 3, # 'ן' - 12: 3, # 'נ' - 19: 2, # 'ס' - 
13: 3, # 'ע' - 26: 2, # 'ף' - 18: 2, # 'פ' - 27: 1, # 'ץ' - 21: 1, # 'צ' - 17: 1, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 0, # '–' - 52: 1, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 16: { # 'ד' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 2, # 'ִ' - 37: 2, # 'ֵ' - 36: 2, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 2, # 'ֹ' - 62: 1, # 'ֻ' - 28: 2, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 3, # 'א' - 8: 3, # 'ב' - 20: 3, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 1, # 'ז' - 14: 2, # 'ח' - 22: 2, # 'ט' - 1: 3, # 'י' - 25: 2, # 'ך' - 15: 2, # 'כ' - 4: 3, # 'ל' - 11: 3, # 'ם' - 6: 3, # 'מ' - 23: 2, # 'ן' - 12: 3, # 'נ' - 19: 2, # 'ס' - 13: 3, # 'ע' - 26: 2, # 'ף' - 18: 3, # 'פ' - 27: 0, # 'ץ' - 21: 2, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 3: { # 'ה' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 1, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 1, # '\xa0' - 55: 0, # '´' - 48: 1, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 1, # 'ְ' - 59: 1, # 'ֱ' - 41: 2, # 'ֲ' - 33: 2, # 'ִ' - 37: 2, # 'ֵ' - 36: 2, # 'ֶ' - 31: 3, # 'ַ' - 29: 2, # 'ָ' - 35: 1, # 'ֹ' - 62: 1, # 'ֻ' - 28: 2, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 3, # 'א' - 8: 3, # 'ב' - 20: 3, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 3, # 'ז' - 14: 3, # 'ח' - 22: 3, # 'ט' - 1: 3, # 'י' - 25: 1, # 'ך' - 15: 3, # 'כ' - 4: 3, # 'ל' - 11: 3, # 'ם' - 6: 3, # 'מ' - 23: 3, # 'ן' - 12: 3, # 'נ' - 19: 3, # 'ס' - 13: 3, # 'ע' - 26: 0, # 'ף' - 18: 3, # 'פ' - 27: 1, # 'ץ' - 21: 3, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 1, # '–' - 52: 1, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 2, # '…' - }, - 2: { # 'ו' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 1, # 't' - 63: 0, # 'u' - 34: 1, # '\xa0' - 55: 1, # '´' - 48: 1, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 2, # 'ִ' - 37: 1, # 'ֵ' - 36: 1, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 3, # 'ֹ' - 62: 0, # 'ֻ' - 28: 3, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 3, # 'א' - 8: 3, # 'ב' - 20: 3, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 3, # 'ז' - 14: 3, # 'ח' - 22: 3, # 'ט' - 1: 3, # 'י' - 25: 3, # 'ך' - 15: 3, # 'כ' - 4: 3, # 'ל' - 11: 3, # 'ם' - 6: 3, # 'מ' - 23: 3, # 'ן' - 12: 3, # 'נ' - 19: 3, # 'ס' - 13: 3, # 'ע' - 26: 3, # 'ף' - 18: 3, # 'פ' - 27: 3, # 'ץ' - 21: 3, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 1, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 2, # '…' - }, - 24: { # 'ז' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 1, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 1, # 'ֲ' - 33: 1, # 'ִ' - 37: 2, # 'ֵ' - 36: 2, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 1, # 'ֹ' - 62: 1, # 'ֻ' - 28: 2, # 'ּ' - 38: 0, # 'ׁ' - 
45: 0, # 'ׂ' - 9: 3, # 'א' - 8: 2, # 'ב' - 20: 2, # 'ג' - 16: 2, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 2, # 'ז' - 14: 2, # 'ח' - 22: 1, # 'ט' - 1: 3, # 'י' - 25: 1, # 'ך' - 15: 3, # 'כ' - 4: 3, # 'ל' - 11: 2, # 'ם' - 6: 3, # 'מ' - 23: 2, # 'ן' - 12: 2, # 'נ' - 19: 1, # 'ס' - 13: 2, # 'ע' - 26: 1, # 'ף' - 18: 1, # 'פ' - 27: 0, # 'ץ' - 21: 2, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 1, # 'ש' - 5: 2, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 14: { # 'ח' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 1, # '\xa0' - 55: 1, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 1, # 'ֱ' - 41: 2, # 'ֲ' - 33: 2, # 'ִ' - 37: 2, # 'ֵ' - 36: 2, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 2, # 'ֹ' - 62: 1, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 2, # 'א' - 8: 3, # 'ב' - 20: 2, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 3, # 'ז' - 14: 2, # 'ח' - 22: 2, # 'ט' - 1: 3, # 'י' - 25: 1, # 'ך' - 15: 2, # 'כ' - 4: 3, # 'ל' - 11: 3, # 'ם' - 6: 3, # 'מ' - 23: 2, # 'ן' - 12: 3, # 'נ' - 19: 3, # 'ס' - 13: 1, # 'ע' - 26: 2, # 'ף' - 18: 2, # 'פ' - 27: 2, # 'ץ' - 21: 3, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 0, # '–' - 52: 1, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 22: { # 'ט' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 1, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 2, # 'ִ' - 37: 1, # 'ֵ' - 36: 1, # 'ֶ' - 31: 2, # 'ַ' - 29: 1, # 'ָ' - 35: 1, # 'ֹ' - 62: 1, # 'ֻ' - 28: 1, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 3, # 'א' - 8: 3, # 'ב' - 20: 3, # 'ג' - 16: 1, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 2, # 'ז' - 14: 3, # 'ח' - 22: 2, # 'ט' - 1: 3, # 'י' - 25: 1, # 'ך' - 15: 2, # 'כ' - 4: 3, # 'ל' - 11: 2, # 'ם' - 6: 2, # 'מ' - 23: 2, # 'ן' - 12: 3, # 'נ' - 19: 2, # 'ס' - 13: 3, # 'ע' - 26: 2, # 'ף' - 18: 3, # 'פ' - 27: 1, # 'ץ' - 21: 2, # 'צ' - 17: 2, # 'ק' - 7: 3, # 'ר' - 10: 2, # 'ש' - 5: 3, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 1: { # 'י' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 1, # '\xa0' - 55: 1, # '´' - 48: 1, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 2, # 'ִ' - 37: 2, # 'ֵ' - 36: 1, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 2, # 'ֹ' - 62: 1, # 'ֻ' - 28: 2, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 3, # 'א' - 8: 3, # 'ב' - 20: 3, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 3, # 'ז' - 14: 3, # 'ח' - 22: 3, # 'ט' - 1: 3, # 'י' - 25: 3, # 'ך' - 15: 3, # 'כ' - 4: 3, # 'ל' - 11: 3, # 'ם' - 6: 3, # 'מ' - 23: 3, # 'ן' - 12: 3, # 'נ' - 19: 3, # 'ס' - 13: 3, # 'ע' - 26: 3, # 'ף' - 18: 3, # 'פ' - 27: 3, # 'ץ' - 21: 3, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 1, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 2, # '…' - }, - 25: { # 'ך' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 
't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 2, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 1, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 1, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 1, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 1, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 1, # 'ל' - 11: 0, # 'ם' - 6: 1, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 0, # 'ר' - 10: 1, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 15: { # 'כ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 2, # 'ִ' - 37: 2, # 'ֵ' - 36: 2, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 1, # 'ֹ' - 62: 1, # 'ֻ' - 28: 3, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 3, # 'א' - 8: 3, # 'ב' - 20: 2, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 3, # 'ז' - 14: 3, # 'ח' - 22: 2, # 'ט' - 1: 3, # 'י' - 25: 3, # 'ך' - 15: 3, # 'כ' - 4: 3, # 'ל' - 11: 3, # 'ם' - 6: 3, # 'מ' - 23: 3, # 'ן' - 12: 3, # 'נ' - 19: 3, # 'ס' - 13: 2, # 'ע' - 26: 3, # 'ף' - 18: 3, # 'פ' - 27: 1, # 'ץ' - 21: 2, # 'צ' - 17: 2, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 4: { # 'ל' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 1, # '\xa0' - 55: 1, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 3, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 2, # 'ִ' - 37: 2, # 'ֵ' - 36: 2, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 2, # 'ֹ' - 62: 1, # 'ֻ' - 28: 2, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 3, # 'א' - 8: 3, # 'ב' - 20: 3, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 3, # 'ז' - 14: 3, # 'ח' - 22: 3, # 'ט' - 1: 3, # 'י' - 25: 3, # 'ך' - 15: 3, # 'כ' - 4: 3, # 'ל' - 11: 3, # 'ם' - 6: 3, # 'מ' - 23: 2, # 'ן' - 12: 3, # 'נ' - 19: 3, # 'ס' - 13: 3, # 'ע' - 26: 2, # 'ף' - 18: 3, # 'פ' - 27: 2, # 'ץ' - 21: 3, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 1, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 11: { # 'ם' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 1, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 1, # 'א' - 8: 1, # 'ב' - 20: 1, # 'ג' - 16: 0, # 'ד' - 3: 1, # 'ה' - 2: 1, # 'ו' - 24: 1, # 'ז' - 14: 1, # 'ח' - 22: 0, # 'ט' - 1: 1, # 'י' - 25: 0, # 'ך' - 15: 1, # 'כ' - 4: 1, # 'ל' - 11: 1, # 'ם' - 6: 1, # 'מ' - 23: 0, # 'ן' - 12: 1, # 'נ' - 19: 0, # 'ס' - 13: 1, # 'ע' - 26: 0, # 'ף' - 18: 1, # 'פ' - 27: 1, # 'ץ' - 21: 1, # 'צ' - 17: 1, # 'ק' - 7: 1, # 'ר' - 10: 1, # 'ש' - 5: 1, 
# 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 2, # '…' - }, - 6: { # 'מ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 1, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 2, # 'ִ' - 37: 2, # 'ֵ' - 36: 2, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 2, # 'ֹ' - 62: 1, # 'ֻ' - 28: 2, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 3, # 'א' - 8: 3, # 'ב' - 20: 3, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 3, # 'ז' - 14: 3, # 'ח' - 22: 3, # 'ט' - 1: 3, # 'י' - 25: 2, # 'ך' - 15: 3, # 'כ' - 4: 3, # 'ל' - 11: 3, # 'ם' - 6: 3, # 'מ' - 23: 3, # 'ן' - 12: 3, # 'נ' - 19: 3, # 'ס' - 13: 3, # 'ע' - 26: 0, # 'ף' - 18: 3, # 'פ' - 27: 2, # 'ץ' - 21: 3, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 23: { # 'ן' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 1, # '\xa0' - 55: 0, # '´' - 48: 1, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 1, # 'א' - 8: 1, # 'ב' - 20: 1, # 'ג' - 16: 1, # 'ד' - 3: 1, # 'ה' - 2: 1, # 'ו' - 24: 0, # 'ז' - 14: 1, # 'ח' - 22: 1, # 'ט' - 1: 1, # 'י' - 25: 0, # 'ך' - 15: 1, # 'כ' - 4: 1, # 'ל' - 11: 1, # 'ם' - 6: 1, # 'מ' - 23: 0, # 'ן' - 12: 1, # 'נ' - 19: 1, # 'ס' - 13: 1, # 'ע' - 26: 1, # 'ף' - 18: 1, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 1, # 'ק' - 7: 1, # 'ר' - 10: 1, # 'ש' - 5: 1, # 'ת' - 32: 1, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 2, # '…' - }, - 12: { # 'נ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 2, # 'ִ' - 37: 2, # 'ֵ' - 36: 2, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 1, # 'ֹ' - 62: 1, # 'ֻ' - 28: 2, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 3, # 'א' - 8: 3, # 'ב' - 20: 3, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 3, # 'ז' - 14: 3, # 'ח' - 22: 3, # 'ט' - 1: 3, # 'י' - 25: 2, # 'ך' - 15: 3, # 'כ' - 4: 3, # 'ל' - 11: 3, # 'ם' - 6: 3, # 'מ' - 23: 3, # 'ן' - 12: 3, # 'נ' - 19: 3, # 'ס' - 13: 3, # 'ע' - 26: 2, # 'ף' - 18: 3, # 'פ' - 27: 2, # 'ץ' - 21: 3, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 19: { # 'ס' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 1, # '\xa0' - 55: 1, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 2, # 'ִ' - 37: 1, # 'ֵ' - 36: 2, # 'ֶ' - 31: 2, # 'ַ' - 29: 1, # 'ָ' - 35: 1, # 'ֹ' - 62: 2, # 'ֻ' - 28: 2, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 2, # 'א' - 8: 3, # 'ב' - 20: 3, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 1, # 'ז' - 14: 3, # 
'ח' - 22: 3, # 'ט' - 1: 3, # 'י' - 25: 2, # 'ך' - 15: 3, # 'כ' - 4: 3, # 'ל' - 11: 2, # 'ם' - 6: 3, # 'מ' - 23: 2, # 'ן' - 12: 3, # 'נ' - 19: 2, # 'ס' - 13: 3, # 'ע' - 26: 3, # 'ף' - 18: 3, # 'פ' - 27: 0, # 'ץ' - 21: 2, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 1, # 'ש' - 5: 3, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 13: { # 'ע' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 1, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 1, # 'ְ' - 59: 1, # 'ֱ' - 41: 2, # 'ֲ' - 33: 2, # 'ִ' - 37: 2, # 'ֵ' - 36: 2, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 2, # 'ֹ' - 62: 1, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 2, # 'א' - 8: 3, # 'ב' - 20: 3, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 3, # 'ז' - 14: 1, # 'ח' - 22: 3, # 'ט' - 1: 3, # 'י' - 25: 2, # 'ך' - 15: 2, # 'כ' - 4: 3, # 'ל' - 11: 3, # 'ם' - 6: 3, # 'מ' - 23: 2, # 'ן' - 12: 3, # 'נ' - 19: 3, # 'ס' - 13: 2, # 'ע' - 26: 1, # 'ף' - 18: 2, # 'פ' - 27: 2, # 'ץ' - 21: 3, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 26: { # 'ף' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 1, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 1, # 'ו' - 24: 0, # 'ז' - 14: 1, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 1, # 'כ' - 4: 1, # 'ל' - 11: 0, # 'ם' - 6: 1, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 1, # 'ס' - 13: 0, # 'ע' - 26: 1, # 'ף' - 18: 1, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 1, # 'ק' - 7: 1, # 'ר' - 10: 1, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 18: { # 'פ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 1, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 2, # 'ִ' - 37: 1, # 'ֵ' - 36: 2, # 'ֶ' - 31: 1, # 'ַ' - 29: 2, # 'ָ' - 35: 1, # 'ֹ' - 62: 1, # 'ֻ' - 28: 2, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 3, # 'א' - 8: 2, # 'ב' - 20: 3, # 'ג' - 16: 2, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 2, # 'ז' - 14: 3, # 'ח' - 22: 3, # 'ט' - 1: 3, # 'י' - 25: 2, # 'ך' - 15: 3, # 'כ' - 4: 3, # 'ל' - 11: 2, # 'ם' - 6: 2, # 'מ' - 23: 3, # 'ן' - 12: 3, # 'נ' - 19: 3, # 'ס' - 13: 3, # 'ע' - 26: 2, # 'ף' - 18: 2, # 'פ' - 27: 2, # 'ץ' - 21: 3, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 27: { # 'ץ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 1, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' 
- 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 1, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 1, # 'ל' - 11: 0, # 'ם' - 6: 0, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 1, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 1, # 'ר' - 10: 0, # 'ש' - 5: 1, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 21: { # 'צ' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 1, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 2, # 'ִ' - 37: 2, # 'ֵ' - 36: 1, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 1, # 'ֹ' - 62: 1, # 'ֻ' - 28: 2, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 3, # 'א' - 8: 3, # 'ב' - 20: 2, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 1, # 'ז' - 14: 3, # 'ח' - 22: 2, # 'ט' - 1: 3, # 'י' - 25: 1, # 'ך' - 15: 1, # 'כ' - 4: 3, # 'ל' - 11: 2, # 'ם' - 6: 3, # 'מ' - 23: 2, # 'ן' - 12: 3, # 'נ' - 19: 1, # 'ס' - 13: 3, # 'ע' - 26: 2, # 'ף' - 18: 3, # 'פ' - 27: 2, # 'ץ' - 21: 2, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 0, # 'ש' - 5: 3, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 17: { # 'ק' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 1, # '\xa0' - 55: 1, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 2, # 'ִ' - 37: 2, # 'ֵ' - 36: 1, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 2, # 'ֹ' - 62: 1, # 'ֻ' - 28: 2, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 3, # 'א' - 8: 3, # 'ב' - 20: 2, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 2, # 'ז' - 14: 3, # 'ח' - 22: 3, # 'ט' - 1: 3, # 'י' - 25: 1, # 'ך' - 15: 1, # 'כ' - 4: 3, # 'ל' - 11: 2, # 'ם' - 6: 3, # 'מ' - 23: 2, # 'ן' - 12: 3, # 'נ' - 19: 3, # 'ס' - 13: 3, # 'ע' - 26: 2, # 'ף' - 18: 3, # 'פ' - 27: 2, # 'ץ' - 21: 3, # 'צ' - 17: 2, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 0, # '–' - 52: 1, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 7: { # 'ר' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 1, # '\xa0' - 55: 2, # '´' - 48: 1, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 1, # 'ֲ' - 33: 2, # 'ִ' - 37: 2, # 'ֵ' - 36: 2, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 2, # 'ֹ' - 62: 1, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 3, # 'א' - 8: 3, # 'ב' - 20: 3, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 3, # 'ז' - 14: 3, # 'ח' - 22: 3, # 'ט' - 1: 3, # 'י' - 25: 3, # 'ך' - 15: 3, # 'כ' - 4: 3, # 'ל' - 11: 3, # 'ם' - 6: 3, # 'מ' - 23: 3, # 'ן' - 12: 3, # 'נ' - 19: 3, # 'ס' - 13: 3, # 'ע' - 26: 2, # 'ף' - 18: 3, # 'פ' - 27: 3, # 'ץ' - 21: 3, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 2, # '…' - }, - 10: { # 'ש' - 50: 0, # 
'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 1, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 1, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 1, # 'ִ' - 37: 1, # 'ֵ' - 36: 1, # 'ֶ' - 31: 1, # 'ַ' - 29: 1, # 'ָ' - 35: 1, # 'ֹ' - 62: 1, # 'ֻ' - 28: 2, # 'ּ' - 38: 3, # 'ׁ' - 45: 2, # 'ׂ' - 9: 3, # 'א' - 8: 3, # 'ב' - 20: 3, # 'ג' - 16: 3, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 2, # 'ז' - 14: 3, # 'ח' - 22: 3, # 'ט' - 1: 3, # 'י' - 25: 3, # 'ך' - 15: 3, # 'כ' - 4: 3, # 'ל' - 11: 3, # 'ם' - 6: 3, # 'מ' - 23: 2, # 'ן' - 12: 3, # 'נ' - 19: 2, # 'ס' - 13: 3, # 'ע' - 26: 2, # 'ף' - 18: 3, # 'פ' - 27: 1, # 'ץ' - 21: 2, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 1, # '…' - }, - 5: { # 'ת' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 1, # '\xa0' - 55: 0, # '´' - 48: 1, # '¼' - 39: 1, # '½' - 57: 0, # '¾' - 30: 2, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 2, # 'ִ' - 37: 2, # 'ֵ' - 36: 2, # 'ֶ' - 31: 2, # 'ַ' - 29: 2, # 'ָ' - 35: 1, # 'ֹ' - 62: 1, # 'ֻ' - 28: 2, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 3, # 'א' - 8: 3, # 'ב' - 20: 3, # 'ג' - 16: 2, # 'ד' - 3: 3, # 'ה' - 2: 3, # 'ו' - 24: 2, # 'ז' - 14: 3, # 'ח' - 22: 2, # 'ט' - 1: 3, # 'י' - 25: 2, # 'ך' - 15: 3, # 'כ' - 4: 3, # 'ל' - 11: 3, # 'ם' - 6: 3, # 'מ' - 23: 3, # 'ן' - 12: 3, # 'נ' - 19: 2, # 'ס' - 13: 3, # 'ע' - 26: 2, # 'ף' - 18: 3, # 'פ' - 27: 1, # 'ץ' - 21: 2, # 'צ' - 17: 3, # 'ק' - 7: 3, # 'ר' - 10: 3, # 'ש' - 5: 3, # 'ת' - 32: 1, # '–' - 52: 1, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 2, # '…' - }, - 32: { # '–' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 1, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 1, # 'א' - 8: 1, # 'ב' - 20: 1, # 'ג' - 16: 1, # 'ד' - 3: 1, # 'ה' - 2: 1, # 'ו' - 24: 0, # 'ז' - 14: 1, # 'ח' - 22: 0, # 'ט' - 1: 1, # 'י' - 25: 0, # 'ך' - 15: 1, # 'כ' - 4: 1, # 'ל' - 11: 0, # 'ם' - 6: 1, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 1, # 'ס' - 13: 1, # 'ע' - 26: 0, # 'ף' - 18: 1, # 'פ' - 27: 0, # 'ץ' - 21: 1, # 'צ' - 17: 0, # 'ק' - 7: 1, # 'ר' - 10: 1, # 'ש' - 5: 1, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 52: { # '’' - 50: 1, # 'a' - 60: 0, # 'c' - 61: 1, # 'd' - 42: 1, # 'e' - 53: 1, # 'i' - 56: 1, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 1, # 'r' - 43: 2, # 's' - 44: 2, # 't' - 63: 1, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 1, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 0, # 'ל' - 11: 0, # 'ם' - 6: 1, # 'מ' - 23: 0, # 'ן' - 12: 
0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 0, # 'ר' - 10: 0, # 'ש' - 5: 1, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 47: { # '“' - 50: 1, # 'a' - 60: 1, # 'c' - 61: 1, # 'd' - 42: 1, # 'e' - 53: 1, # 'i' - 56: 1, # 'l' - 54: 1, # 'n' - 49: 1, # 'o' - 51: 1, # 'r' - 43: 1, # 's' - 44: 1, # 't' - 63: 1, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 2, # 'א' - 8: 1, # 'ב' - 20: 1, # 'ג' - 16: 1, # 'ד' - 3: 1, # 'ה' - 2: 1, # 'ו' - 24: 1, # 'ז' - 14: 1, # 'ח' - 22: 1, # 'ט' - 1: 1, # 'י' - 25: 0, # 'ך' - 15: 1, # 'כ' - 4: 1, # 'ל' - 11: 0, # 'ם' - 6: 1, # 'מ' - 23: 0, # 'ן' - 12: 1, # 'נ' - 19: 1, # 'ס' - 13: 1, # 'ע' - 26: 0, # 'ף' - 18: 1, # 'פ' - 27: 0, # 'ץ' - 21: 1, # 'צ' - 17: 1, # 'ק' - 7: 1, # 'ר' - 10: 1, # 'ש' - 5: 1, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 46: { # '”' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 1, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 1, # 'א' - 8: 1, # 'ב' - 20: 1, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 1, # 'י' - 25: 0, # 'ך' - 15: 1, # 'כ' - 4: 1, # 'ל' - 11: 0, # 'ם' - 6: 1, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 1, # 'צ' - 17: 0, # 'ק' - 7: 1, # 'ר' - 10: 0, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 0, # '†' - 40: 0, # '…' - }, - 58: { # '†' - 50: 0, # 'a' - 60: 0, # 'c' - 61: 0, # 'd' - 42: 0, # 'e' - 53: 0, # 'i' - 56: 0, # 'l' - 54: 0, # 'n' - 49: 0, # 'o' - 51: 0, # 'r' - 43: 0, # 's' - 44: 0, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 0, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 0, # 'ה' - 2: 0, # 'ו' - 24: 0, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 0, # 'י' - 25: 0, # 'ך' - 15: 0, # 'כ' - 4: 0, # 'ל' - 11: 0, # 'ם' - 6: 0, # 'מ' - 23: 0, # 'ן' - 12: 0, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 0, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 0, # 'ר' - 10: 0, # 'ש' - 5: 0, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 0, # '”' - 58: 2, # '†' - 40: 0, # '…' - }, - 40: { # '…' - 50: 1, # 'a' - 60: 1, # 'c' - 61: 1, # 'd' - 42: 1, # 'e' - 53: 1, # 'i' - 56: 0, # 'l' - 54: 1, # 'n' - 49: 0, # 'o' - 51: 1, # 'r' - 43: 1, # 's' - 44: 1, # 't' - 63: 0, # 'u' - 34: 0, # '\xa0' - 55: 0, # '´' - 48: 0, # '¼' - 39: 0, # '½' - 57: 0, # '¾' - 30: 0, # 'ְ' - 59: 0, # 'ֱ' - 41: 0, # 'ֲ' - 33: 0, # 'ִ' - 37: 0, # 'ֵ' - 36: 0, # 'ֶ' - 31: 0, # 'ַ' - 29: 0, # 'ָ' - 35: 0, # 'ֹ' - 62: 0, # 'ֻ' - 28: 
0, # 'ּ' - 38: 0, # 'ׁ' - 45: 0, # 'ׂ' - 9: 1, # 'א' - 8: 0, # 'ב' - 20: 0, # 'ג' - 16: 0, # 'ד' - 3: 1, # 'ה' - 2: 1, # 'ו' - 24: 1, # 'ז' - 14: 0, # 'ח' - 22: 0, # 'ט' - 1: 1, # 'י' - 25: 0, # 'ך' - 15: 1, # 'כ' - 4: 1, # 'ל' - 11: 0, # 'ם' - 6: 1, # 'מ' - 23: 0, # 'ן' - 12: 1, # 'נ' - 19: 0, # 'ס' - 13: 0, # 'ע' - 26: 0, # 'ף' - 18: 1, # 'פ' - 27: 0, # 'ץ' - 21: 0, # 'צ' - 17: 0, # 'ק' - 7: 1, # 'ר' - 10: 1, # 'ש' - 5: 1, # 'ת' - 32: 0, # '–' - 52: 0, # '’' - 47: 0, # '“' - 46: 1, # '”' - 58: 0, # '†' - 40: 2, # '…' - }, -} - -# 255: Undefined characters that did not exist in training text -# 254: Carriage/Return -# 253: symbol (punctuation) that does not belong to word -# 252: 0 - 9 -# 251: Control characters - -# Character Mapping Table(s): -WINDOWS_1255_HEBREW_CHAR_TO_ORDER = { - 0: 255, # '\x00' - 1: 255, # '\x01' - 2: 255, # '\x02' - 3: 255, # '\x03' - 4: 255, # '\x04' - 5: 255, # '\x05' - 6: 255, # '\x06' - 7: 255, # '\x07' - 8: 255, # '\x08' - 9: 255, # '\t' - 10: 254, # '\n' - 11: 255, # '\x0b' - 12: 255, # '\x0c' - 13: 254, # '\r' - 14: 255, # '\x0e' - 15: 255, # '\x0f' - 16: 255, # '\x10' - 17: 255, # '\x11' - 18: 255, # '\x12' - 19: 255, # '\x13' - 20: 255, # '\x14' - 21: 255, # '\x15' - 22: 255, # '\x16' - 23: 255, # '\x17' - 24: 255, # '\x18' - 25: 255, # '\x19' - 26: 255, # '\x1a' - 27: 255, # '\x1b' - 28: 255, # '\x1c' - 29: 255, # '\x1d' - 30: 255, # '\x1e' - 31: 255, # '\x1f' - 32: 253, # ' ' - 33: 253, # '!' - 34: 253, # '"' - 35: 253, # '#' - 36: 253, # '$' - 37: 253, # '%' - 38: 253, # '&' - 39: 253, # "'" - 40: 253, # '(' - 41: 253, # ')' - 42: 253, # '*' - 43: 253, # '+' - 44: 253, # ',' - 45: 253, # '-' - 46: 253, # '.' - 47: 253, # '/' - 48: 252, # '0' - 49: 252, # '1' - 50: 252, # '2' - 51: 252, # '3' - 52: 252, # '4' - 53: 252, # '5' - 54: 252, # '6' - 55: 252, # '7' - 56: 252, # '8' - 57: 252, # '9' - 58: 253, # ':' - 59: 253, # ';' - 60: 253, # '<' - 61: 253, # '=' - 62: 253, # '>' - 63: 253, # '?' 
- 64: 253, # '@' - 65: 69, # 'A' - 66: 91, # 'B' - 67: 79, # 'C' - 68: 80, # 'D' - 69: 92, # 'E' - 70: 89, # 'F' - 71: 97, # 'G' - 72: 90, # 'H' - 73: 68, # 'I' - 74: 111, # 'J' - 75: 112, # 'K' - 76: 82, # 'L' - 77: 73, # 'M' - 78: 95, # 'N' - 79: 85, # 'O' - 80: 78, # 'P' - 81: 121, # 'Q' - 82: 86, # 'R' - 83: 71, # 'S' - 84: 67, # 'T' - 85: 102, # 'U' - 86: 107, # 'V' - 87: 84, # 'W' - 88: 114, # 'X' - 89: 103, # 'Y' - 90: 115, # 'Z' - 91: 253, # '[' - 92: 253, # '\\' - 93: 253, # ']' - 94: 253, # '^' - 95: 253, # '_' - 96: 253, # '`' - 97: 50, # 'a' - 98: 74, # 'b' - 99: 60, # 'c' - 100: 61, # 'd' - 101: 42, # 'e' - 102: 76, # 'f' - 103: 70, # 'g' - 104: 64, # 'h' - 105: 53, # 'i' - 106: 105, # 'j' - 107: 93, # 'k' - 108: 56, # 'l' - 109: 65, # 'm' - 110: 54, # 'n' - 111: 49, # 'o' - 112: 66, # 'p' - 113: 110, # 'q' - 114: 51, # 'r' - 115: 43, # 's' - 116: 44, # 't' - 117: 63, # 'u' - 118: 81, # 'v' - 119: 77, # 'w' - 120: 98, # 'x' - 121: 75, # 'y' - 122: 108, # 'z' - 123: 253, # '{' - 124: 253, # '|' - 125: 253, # '}' - 126: 253, # '~' - 127: 253, # '\x7f' - 128: 124, # '€' - 129: 202, # None - 130: 203, # '‚' - 131: 204, # 'ƒ' - 132: 205, # '„' - 133: 40, # '…' - 134: 58, # '†' - 135: 206, # '‡' - 136: 207, # 'ˆ' - 137: 208, # '‰' - 138: 209, # None - 139: 210, # '‹' - 140: 211, # None - 141: 212, # None - 142: 213, # None - 143: 214, # None - 144: 215, # None - 145: 83, # '‘' - 146: 52, # '’' - 147: 47, # '“' - 148: 46, # '”' - 149: 72, # '•' - 150: 32, # '–' - 151: 94, # '—' - 152: 216, # '˜' - 153: 113, # '™' - 154: 217, # None - 155: 109, # '›' - 156: 218, # None - 157: 219, # None - 158: 220, # None - 159: 221, # None - 160: 34, # '\xa0' - 161: 116, # '¡' - 162: 222, # '¢' - 163: 118, # '£' - 164: 100, # '₪' - 165: 223, # '¥' - 166: 224, # '¦' - 167: 117, # '§' - 168: 119, # '¨' - 169: 104, # '©' - 170: 125, # '×' - 171: 225, # '«' - 172: 226, # '¬' - 173: 87, # '\xad' - 174: 99, # '®' - 175: 227, # '¯' - 176: 106, # '°' - 177: 122, # '±' - 178: 123, # '²' - 179: 228, # '³' - 180: 55, # '´' - 181: 229, # 'µ' - 182: 230, # '¶' - 183: 101, # '·' - 184: 231, # '¸' - 185: 232, # '¹' - 186: 120, # '÷' - 187: 233, # '»' - 188: 48, # '¼' - 189: 39, # '½' - 190: 57, # '¾' - 191: 234, # '¿' - 192: 30, # 'ְ' - 193: 59, # 'ֱ' - 194: 41, # 'ֲ' - 195: 88, # 'ֳ' - 196: 33, # 'ִ' - 197: 37, # 'ֵ' - 198: 36, # 'ֶ' - 199: 31, # 'ַ' - 200: 29, # 'ָ' - 201: 35, # 'ֹ' - 202: 235, # None - 203: 62, # 'ֻ' - 204: 28, # 'ּ' - 205: 236, # 'ֽ' - 206: 126, # '־' - 207: 237, # 'ֿ' - 208: 238, # '׀' - 209: 38, # 'ׁ' - 210: 45, # 'ׂ' - 211: 239, # '׃' - 212: 240, # 'װ' - 213: 241, # 'ױ' - 214: 242, # 'ײ' - 215: 243, # '׳' - 216: 127, # '״' - 217: 244, # None - 218: 245, # None - 219: 246, # None - 220: 247, # None - 221: 248, # None - 222: 249, # None - 223: 250, # None - 224: 9, # 'א' - 225: 8, # 'ב' - 226: 20, # 'ג' - 227: 16, # 'ד' - 228: 3, # 'ה' - 229: 2, # 'ו' - 230: 24, # 'ז' - 231: 14, # 'ח' - 232: 22, # 'ט' - 233: 1, # 'י' - 234: 25, # 'ך' - 235: 15, # 'כ' - 236: 4, # 'ל' - 237: 11, # 'ם' - 238: 6, # 'מ' - 239: 23, # 'ן' - 240: 12, # 'נ' - 241: 19, # 'ס' - 242: 13, # 'ע' - 243: 26, # 'ף' - 244: 18, # 'פ' - 245: 27, # 'ץ' - 246: 21, # 'צ' - 247: 17, # 'ק' - 248: 7, # 'ר' - 249: 10, # 'ש' - 250: 5, # 'ת' - 251: 251, # None - 252: 252, # None - 253: 128, # '\u200e' - 254: 96, # '\u200f' - 255: 253, # None -} - -WINDOWS_1255_HEBREW_MODEL = SingleByteCharSetModel( - charset_name="windows-1255", - language="Hebrew", - char_to_order_map=WINDOWS_1255_HEBREW_CHAR_TO_ORDER, - 
language_model=HEBREW_LANG_MODEL, - typical_positive_ratio=0.984004, - keep_ascii_letters=False, - alphabet="אבגדהוזחטיךכלםמןנסעףפץצקרשתװױײ", -) diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/packaging/__about__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/packaging/__about__.py deleted file mode 100644 index 3551bc2d29846441299cf57b397b02fc164c99b9..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/packaging/__about__.py +++ /dev/null @@ -1,26 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. - -__all__ = [ - "__title__", - "__summary__", - "__uri__", - "__version__", - "__author__", - "__email__", - "__license__", - "__copyright__", -] - -__title__ = "packaging" -__summary__ = "Core utilities for Python packages" -__uri__ = "https://github.com/pypa/packaging" - -__version__ = "21.3" - -__author__ = "Donald Stufft and individual contributors" -__email__ = "donald@stufft.io" - -__license__ = "BSD-2-Clause or Apache-2.0" -__copyright__ = "2014-2019 %s" % __author__ diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/util/retry.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/util/retry.py deleted file mode 100644 index 2490d5e5b63359a7f826922dc69c0015cb9a5b2e..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/util/retry.py +++ /dev/null @@ -1,620 +0,0 @@ -from __future__ import absolute_import - -import email -import logging -import re -import time -import warnings -from collections import namedtuple -from itertools import takewhile - -from ..exceptions import ( - ConnectTimeoutError, - InvalidHeader, - MaxRetryError, - ProtocolError, - ProxyError, - ReadTimeoutError, - ResponseError, -) -from ..packages import six - -log = logging.getLogger(__name__) - - -# Data structure for representing the metadata of requests that result in a retry. -RequestHistory = namedtuple( - "RequestHistory", ["method", "url", "error", "status", "redirect_location"] -) - - -# TODO: In v2 we can remove this sentinel and metaclass with deprecated options. -_Default = object() - - -class _RetryMeta(type): - @property - def DEFAULT_METHOD_WHITELIST(cls): - warnings.warn( - "Using 'Retry.DEFAULT_METHOD_WHITELIST' is deprecated and " - "will be removed in v2.0. Use 'Retry.DEFAULT_ALLOWED_METHODS' instead", - DeprecationWarning, - ) - return cls.DEFAULT_ALLOWED_METHODS - - @DEFAULT_METHOD_WHITELIST.setter - def DEFAULT_METHOD_WHITELIST(cls, value): - warnings.warn( - "Using 'Retry.DEFAULT_METHOD_WHITELIST' is deprecated and " - "will be removed in v2.0. Use 'Retry.DEFAULT_ALLOWED_METHODS' instead", - DeprecationWarning, - ) - cls.DEFAULT_ALLOWED_METHODS = value - - @property - def DEFAULT_REDIRECT_HEADERS_BLACKLIST(cls): - warnings.warn( - "Using 'Retry.DEFAULT_REDIRECT_HEADERS_BLACKLIST' is deprecated and " - "will be removed in v2.0. Use 'Retry.DEFAULT_REMOVE_HEADERS_ON_REDIRECT' instead", - DeprecationWarning, - ) - return cls.DEFAULT_REMOVE_HEADERS_ON_REDIRECT - - @DEFAULT_REDIRECT_HEADERS_BLACKLIST.setter - def DEFAULT_REDIRECT_HEADERS_BLACKLIST(cls, value): - warnings.warn( - "Using 'Retry.DEFAULT_REDIRECT_HEADERS_BLACKLIST' is deprecated and " - "will be removed in v2.0. 
Use 'Retry.DEFAULT_REMOVE_HEADERS_ON_REDIRECT' instead", - DeprecationWarning, - ) - cls.DEFAULT_REMOVE_HEADERS_ON_REDIRECT = value - - @property - def BACKOFF_MAX(cls): - warnings.warn( - "Using 'Retry.BACKOFF_MAX' is deprecated and " - "will be removed in v2.0. Use 'Retry.DEFAULT_BACKOFF_MAX' instead", - DeprecationWarning, - ) - return cls.DEFAULT_BACKOFF_MAX - - @BACKOFF_MAX.setter - def BACKOFF_MAX(cls, value): - warnings.warn( - "Using 'Retry.BACKOFF_MAX' is deprecated and " - "will be removed in v2.0. Use 'Retry.DEFAULT_BACKOFF_MAX' instead", - DeprecationWarning, - ) - cls.DEFAULT_BACKOFF_MAX = value - - -@six.add_metaclass(_RetryMeta) -class Retry(object): - """Retry configuration. - - Each retry attempt will create a new Retry object with updated values, so - they can be safely reused. - - Retries can be defined as a default for a pool:: - - retries = Retry(connect=5, read=2, redirect=5) - http = PoolManager(retries=retries) - response = http.request('GET', 'http://example.com/') - - Or per-request (which overrides the default for the pool):: - - response = http.request('GET', 'http://example.com/', retries=Retry(10)) - - Retries can be disabled by passing ``False``:: - - response = http.request('GET', 'http://example.com/', retries=False) - - Errors will be wrapped in :class:`~urllib3.exceptions.MaxRetryError` unless - retries are disabled, in which case the causing exception will be raised. - - :param int total: - Total number of retries to allow. Takes precedence over other counts. - - Set to ``None`` to remove this constraint and fall back on other - counts. - - Set to ``0`` to fail on the first retry. - - Set to ``False`` to disable and imply ``raise_on_redirect=False``. - - :param int connect: - How many connection-related errors to retry on. - - These are errors raised before the request is sent to the remote server, - which we assume has not triggered the server to process the request. - - Set to ``0`` to fail on the first retry of this type. - - :param int read: - How many times to retry on read errors. - - These errors are raised after the request was sent to the server, so the - request may have side-effects. - - Set to ``0`` to fail on the first retry of this type. - - :param int redirect: - How many redirects to perform. Limit this to avoid infinite redirect - loops. - - A redirect is a HTTP response with a status code 301, 302, 303, 307 or - 308. - - Set to ``0`` to fail on the first retry of this type. - - Set to ``False`` to disable and imply ``raise_on_redirect=False``. - - :param int status: - How many times to retry on bad status codes. - - These are retries made on responses, where status code matches - ``status_forcelist``. - - Set to ``0`` to fail on the first retry of this type. - - :param int other: - How many times to retry on other errors. - - Other errors are errors that are not connect, read, redirect or status errors. - These errors might be raised after the request was sent to the server, so the - request might have side-effects. - - Set to ``0`` to fail on the first retry of this type. - - If ``total`` is not set, it's a good idea to set this to 0 to account - for unexpected edge cases and avoid infinite retry loops. - - :param iterable allowed_methods: - Set of uppercased HTTP method verbs that we should retry on. - - By default, we only retry on methods which are considered to be - idempotent (multiple requests with the same parameters end with the - same state). See :attr:`Retry.DEFAULT_ALLOWED_METHODS`. 
- - Set to a ``False`` value to retry on any verb. - - .. warning:: - - Previously this parameter was named ``method_whitelist``, that - usage is deprecated in v1.26.0 and will be removed in v2.0. - - :param iterable status_forcelist: - A set of integer HTTP status codes that we should force a retry on. - A retry is initiated if the request method is in ``allowed_methods`` - and the response status code is in ``status_forcelist``. - - By default, this is disabled with ``None``. - - :param float backoff_factor: - A backoff factor to apply between attempts after the second try - (most errors are resolved immediately by a second try without a - delay). urllib3 will sleep for:: - - {backoff factor} * (2 ** ({number of total retries} - 1)) - - seconds. If the backoff_factor is 0.1, then :func:`.sleep` will sleep - for [0.0s, 0.2s, 0.4s, ...] between retries. It will never be longer - than :attr:`Retry.DEFAULT_BACKOFF_MAX`. - - By default, backoff is disabled (set to 0). - - :param bool raise_on_redirect: Whether, if the number of redirects is - exhausted, to raise a MaxRetryError, or to return a response with a - response code in the 3xx range. - - :param bool raise_on_status: Similar meaning to ``raise_on_redirect``: - whether we should raise an exception, or return a response, - if status falls in ``status_forcelist`` range and retries have - been exhausted. - - :param tuple history: The history of the request encountered during - each call to :meth:`~Retry.increment`. The list is in the order - the requests occurred. Each list item is of class :class:`RequestHistory`. - - :param bool respect_retry_after_header: - Whether to respect Retry-After header on status codes defined as - :attr:`Retry.RETRY_AFTER_STATUS_CODES` or not. - - :param iterable remove_headers_on_redirect: - Sequence of headers to remove from the request when a response - indicating a redirect is returned before firing off the redirected - request. - """ - - #: Default methods to be used for ``allowed_methods`` - DEFAULT_ALLOWED_METHODS = frozenset( - ["HEAD", "GET", "PUT", "DELETE", "OPTIONS", "TRACE"] - ) - - #: Default status codes to be used for ``status_forcelist`` - RETRY_AFTER_STATUS_CODES = frozenset([413, 429, 503]) - - #: Default headers to be used for ``remove_headers_on_redirect`` - DEFAULT_REMOVE_HEADERS_ON_REDIRECT = frozenset(["Authorization"]) - - #: Maximum backoff time. - DEFAULT_BACKOFF_MAX = 120 - - def __init__( - self, - total=10, - connect=None, - read=None, - redirect=None, - status=None, - other=None, - allowed_methods=_Default, - status_forcelist=None, - backoff_factor=0, - raise_on_redirect=True, - raise_on_status=True, - history=None, - respect_retry_after_header=True, - remove_headers_on_redirect=_Default, - # TODO: Deprecated, remove in v2.0 - method_whitelist=_Default, - ): - - if method_whitelist is not _Default: - if allowed_methods is not _Default: - raise ValueError( - "Using both 'allowed_methods' and " - "'method_whitelist' together is not allowed. " - "Instead only use 'allowed_methods'" - ) - warnings.warn( - "Using 'method_whitelist' with Retry is deprecated and " - "will be removed in v2.0. 
Use 'allowed_methods' instead", - DeprecationWarning, - stacklevel=2, - ) - allowed_methods = method_whitelist - if allowed_methods is _Default: - allowed_methods = self.DEFAULT_ALLOWED_METHODS - if remove_headers_on_redirect is _Default: - remove_headers_on_redirect = self.DEFAULT_REMOVE_HEADERS_ON_REDIRECT - - self.total = total - self.connect = connect - self.read = read - self.status = status - self.other = other - - if redirect is False or total is False: - redirect = 0 - raise_on_redirect = False - - self.redirect = redirect - self.status_forcelist = status_forcelist or set() - self.allowed_methods = allowed_methods - self.backoff_factor = backoff_factor - self.raise_on_redirect = raise_on_redirect - self.raise_on_status = raise_on_status - self.history = history or tuple() - self.respect_retry_after_header = respect_retry_after_header - self.remove_headers_on_redirect = frozenset( - [h.lower() for h in remove_headers_on_redirect] - ) - - def new(self, **kw): - params = dict( - total=self.total, - connect=self.connect, - read=self.read, - redirect=self.redirect, - status=self.status, - other=self.other, - status_forcelist=self.status_forcelist, - backoff_factor=self.backoff_factor, - raise_on_redirect=self.raise_on_redirect, - raise_on_status=self.raise_on_status, - history=self.history, - remove_headers_on_redirect=self.remove_headers_on_redirect, - respect_retry_after_header=self.respect_retry_after_header, - ) - - # TODO: If already given in **kw we use what's given to us - # If not given we need to figure out what to pass. We decide - # based on whether our class has the 'method_whitelist' property - # and if so we pass the deprecated 'method_whitelist' otherwise - # we use 'allowed_methods'. Remove in v2.0 - if "method_whitelist" not in kw and "allowed_methods" not in kw: - if "method_whitelist" in self.__dict__: - warnings.warn( - "Using 'method_whitelist' with Retry is deprecated and " - "will be removed in v2.0. Use 'allowed_methods' instead", - DeprecationWarning, - ) - params["method_whitelist"] = self.allowed_methods - else: - params["allowed_methods"] = self.allowed_methods - - params.update(kw) - return type(self)(**params) - - @classmethod - def from_int(cls, retries, redirect=True, default=None): - """Backwards-compatibility for the old retries format.""" - if retries is None: - retries = default if default is not None else cls.DEFAULT - - if isinstance(retries, Retry): - return retries - - redirect = bool(redirect) and None - new_retries = cls(retries, redirect=redirect) - log.debug("Converted retries value: %r -> %r", retries, new_retries) - return new_retries - - def get_backoff_time(self): - """Formula for computing the current backoff - - :rtype: float - """ - # We want to consider only the last consecutive errors sequence (Ignore redirects). 
- consecutive_errors_len = len( - list( - takewhile(lambda x: x.redirect_location is None, reversed(self.history)) - ) - ) - if consecutive_errors_len <= 1: - return 0 - - backoff_value = self.backoff_factor * (2 ** (consecutive_errors_len - 1)) - return min(self.DEFAULT_BACKOFF_MAX, backoff_value) - - def parse_retry_after(self, retry_after): - # Whitespace: https://tools.ietf.org/html/rfc7230#section-3.2.4 - if re.match(r"^\s*[0-9]+\s*$", retry_after): - seconds = int(retry_after) - else: - retry_date_tuple = email.utils.parsedate_tz(retry_after) - if retry_date_tuple is None: - raise InvalidHeader("Invalid Retry-After header: %s" % retry_after) - if retry_date_tuple[9] is None: # Python 2 - # Assume UTC if no timezone was specified - # On Python2.7, parsedate_tz returns None for a timezone offset - # instead of 0 if no timezone is given, where mktime_tz treats - # a None timezone offset as local time. - retry_date_tuple = retry_date_tuple[:9] + (0,) + retry_date_tuple[10:] - - retry_date = email.utils.mktime_tz(retry_date_tuple) - seconds = retry_date - time.time() - - if seconds < 0: - seconds = 0 - - return seconds - - def get_retry_after(self, response): - """Get the value of Retry-After in seconds.""" - - retry_after = response.headers.get("Retry-After") - - if retry_after is None: - return None - - return self.parse_retry_after(retry_after) - - def sleep_for_retry(self, response=None): - retry_after = self.get_retry_after(response) - if retry_after: - time.sleep(retry_after) - return True - - return False - - def _sleep_backoff(self): - backoff = self.get_backoff_time() - if backoff <= 0: - return - time.sleep(backoff) - - def sleep(self, response=None): - """Sleep between retry attempts. - - This method will respect a server's ``Retry-After`` response header - and sleep the duration of the time requested. If that is not present, it - will use an exponential backoff. By default, the backoff factor is 0 and - this method will return immediately. - """ - - if self.respect_retry_after_header and response: - slept = self.sleep_for_retry(response) - if slept: - return - - self._sleep_backoff() - - def _is_connection_error(self, err): - """Errors when we're fairly sure that the server did not receive the - request, so it should be safe to retry. - """ - if isinstance(err, ProxyError): - err = err.original_error - return isinstance(err, ConnectTimeoutError) - - def _is_read_error(self, err): - """Errors that occur after the request has been started, so we should - assume that the server began processing it. - """ - return isinstance(err, (ReadTimeoutError, ProtocolError)) - - def _is_method_retryable(self, method): - """Checks if a given HTTP method should be retried upon, depending if - it is included in the allowed_methods - """ - # TODO: For now favor if the Retry implementation sets its own method_whitelist - # property outside of our constructor to avoid breaking custom implementations. - if "method_whitelist" in self.__dict__: - warnings.warn( - "Using 'method_whitelist' with Retry is deprecated and " - "will be removed in v2.0. Use 'allowed_methods' instead", - DeprecationWarning, - ) - allowed_methods = self.method_whitelist - else: - allowed_methods = self.allowed_methods - - if allowed_methods and method.upper() not in allowed_methods: - return False - return True - - def is_retry(self, method, status_code, has_retry_after=False): - """Is this method/status code retryable? 
(Based on allowlists and control - variables such as the number of total retries to allow, whether to - respect the Retry-After header, whether this header is present, and - whether the returned status code is on the list of status codes to - be retried upon on the presence of the aforementioned header) - """ - if not self._is_method_retryable(method): - return False - - if self.status_forcelist and status_code in self.status_forcelist: - return True - - return ( - self.total - and self.respect_retry_after_header - and has_retry_after - and (status_code in self.RETRY_AFTER_STATUS_CODES) - ) - - def is_exhausted(self): - """Are we out of retries?""" - retry_counts = ( - self.total, - self.connect, - self.read, - self.redirect, - self.status, - self.other, - ) - retry_counts = list(filter(None, retry_counts)) - if not retry_counts: - return False - - return min(retry_counts) < 0 - - def increment( - self, - method=None, - url=None, - response=None, - error=None, - _pool=None, - _stacktrace=None, - ): - """Return a new Retry object with incremented retry counters. - - :param response: A response object, or None, if the server did not - return a response. - :type response: :class:`~urllib3.response.HTTPResponse` - :param Exception error: An error encountered during the request, or - None if the response was received successfully. - - :return: A new ``Retry`` object. - """ - if self.total is False and error: - # Disabled, indicate to re-raise the error. - raise six.reraise(type(error), error, _stacktrace) - - total = self.total - if total is not None: - total -= 1 - - connect = self.connect - read = self.read - redirect = self.redirect - status_count = self.status - other = self.other - cause = "unknown" - status = None - redirect_location = None - - if error and self._is_connection_error(error): - # Connect retry? - if connect is False: - raise six.reraise(type(error), error, _stacktrace) - elif connect is not None: - connect -= 1 - - elif error and self._is_read_error(error): - # Read retry? - if read is False or not self._is_method_retryable(method): - raise six.reraise(type(error), error, _stacktrace) - elif read is not None: - read -= 1 - - elif error: - # Other retry? - if other is not None: - other -= 1 - - elif response and response.get_redirect_location(): - # Redirect retry? 
- if redirect is not None: - redirect -= 1 - cause = "too many redirects" - redirect_location = response.get_redirect_location() - status = response.status - - else: - # Incrementing because of a server error like a 500 in - # status_forcelist and the given method is in the allowed_methods - cause = ResponseError.GENERIC_ERROR - if response and response.status: - if status_count is not None: - status_count -= 1 - cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) - status = response.status - - history = self.history + ( - RequestHistory(method, url, error, status, redirect_location), - ) - - new_retry = self.new( - total=total, - connect=connect, - read=read, - redirect=redirect, - status=status_count, - other=other, - history=history, - ) - - if new_retry.is_exhausted(): - raise MaxRetryError(_pool, url, error or ResponseError(cause)) - - log.debug("Incremented Retry for (url='%s'): %r", url, new_retry) - - return new_retry - - def __repr__(self): - return ( - "{cls.__name__}(total={self.total}, connect={self.connect}, " - "read={self.read}, redirect={self.redirect}, status={self.status})" - ).format(cls=type(self), self=self) - - def __getattr__(self, item): - if item == "method_whitelist": - # TODO: Remove this deprecated alias in v2.0 - warnings.warn( - "Using 'method_whitelist' with Retry is deprecated and " - "will be removed in v2.0. Use 'allowed_methods' instead", - DeprecationWarning, - ) - return self.allowed_methods - try: - return getattr(super(Retry, self), item) - except AttributeError: - return getattr(Retry, item) - - -# For backwards compatibility (equivalent to pre-v1.9): -Retry.DEFAULT = Retry(3) diff --git a/spaces/Billius/VizLib-TopLargeHospitalsNewJersey-04-07-2023/app.py b/spaces/Billius/VizLib-TopLargeHospitalsNewJersey-04-07-2023/app.py deleted file mode 100644 index 05adfa181088800fc3ff4f4847de72688e4fe5a5..0000000000000000000000000000000000000000 --- a/spaces/Billius/VizLib-TopLargeHospitalsNewJersey-04-07-2023/app.py +++ /dev/null @@ -1,58 +0,0 @@ -import streamlit as st -import graphviz as gv -from graphviz import Graph -import folium -from streamlit_folium import folium_static - -# Define the cluster relations graph using gvmap -g = Graph(format='svg') -g.graph_attr['bgcolor'] = '#FFFFFF' -g.graph_attr['outputorder'] = 'edgesfirst' -g.graph_attr['size'] = '10,10' -g.node_attr['style'] = 'filled' -g.node_attr['shape'] = 'box' -g.node_attr['fillcolor'] = '#FFDAB9' - -with g.subgraph(name='cluster_NJ') as c: - c.graph_attr['bgcolor'] = '#ADD8E6' - c.node_attr['color'] = '#000000' - c.node_attr['fontcolor'] = '#000000' - c.attr(label='New Jersey', fontsize='24') - c.node('Hackensack Meridian Health', URL='https://www.hackensackmeridianhealth.org/', target='_blank', tooltip='Hackensack Meridian Health: Hackensack University Medical Center') - c.node('RWJBarnabas Health', URL='https://www.rwjbh.org/', target='_blank', tooltip='RWJBarnabas Health: Robert Wood Johnson University Hospital') - c.node('Atlantic Health System', URL='https://www.atlantichealth.org/', target='_blank', tooltip='Atlantic Health System: Morristown Medical Center') - c.node('Virtua Health', URL='https://www.virtua.org/', target='_blank', tooltip='Virtua Health: Virtua Memorial Hospital') - c.node('Inspira Health', URL='https://www.inspirahealthnetwork.org/', target='_blank', tooltip='Inspira Health: Inspira Medical Center Vineland') - c.node('Cooper University Health Care', URL='https://www.cooperhealth.org/', target='_blank', tooltip='Cooper University Health Care: Cooper 
University Hospital') - c.node('University Hospital', URL='https://www.uhnj.org/', target='_blank', tooltip='University Hospital: University Hospital') - c.node('Robert Wood Johnson University Hospital Hamilton', URL='https://www.rwjbh.org/robert-wood-johnson-university-hospital-hamilton/', target='_blank', tooltip='Robert Wood Johnson University Hospital Hamilton: Robert Wood Johnson University Hospital Hamilton') - c.node('Trinitas Regional Medical Center', URL='https://www.trinitasrmc.org/', target='_blank', tooltip='Trinitas Regional Medical Center: Trinitas Regional Medical Center') - c.node('Capital Health Regional Medical Center', URL='https://www.capitalhealth.org/', target='_blank', tooltip='Capital Health Regional Medical Center: Capital Health Regional Medical Center') - -# Render the graph using streamlit -st.graphviz_chart(g) - -# Define hospitals data -hospitals = [('Hackensack Meridian Health', 'Hackensack University Medical Center', 40.899886, -74.039179), - ('RWJBarnabas Health', 'Robert Wood Johnson University Hospital', 40.491301, -74.450611), - ('Atlantic Health System', 'Morristown Medical Center', 40.787231, -74.473851), - ('Virtua Health', 'Virtua Memorial Hospital', 39.931229, -75.025831), - ('Inspira Health', 'Inspira Medical Center Vineland', 39.460225, -75.035542), - ('Cooper University Health Care', 'Cooper University Hospital', 39.942743, -75.119090), - ('University Hospital', 'University Hospital', 40.742310, -74.177609), - ('Robert Wood Johnson University Hospital Hamilton', 'Robert Wood Johnson University Hospital Hamilton', 40.214008, -74.679619), - ('Trinitas Regional Medical Center', 'Trinitas Regional Medical Center', 40.661474, -74.215013), - ('Capital Health Regional Medical Center', 'Capital Health Regional Medical Center', 40.266778, -74.796452)] - -#Create a map centered on New Jersey -m = folium.Map(location=[40.0583, -74.4057], zoom_start=8) - -#Add markers for each hospital -for hospital in hospitals: - folium.Marker( - location=[hospital[2], hospital[3]], - popup=f'{hospital[1]}
    {hospital[2]},{hospital[3]}' - ).add_to(m) - -#Display the map in Streamlit -folium_static(m) diff --git a/spaces/CAMP-ViL/Xplainer/README.md b/spaces/CAMP-ViL/Xplainer/README.md deleted file mode 100644 index 7ac83915c9365bf2f3b60c5e71fa8e118e957311..0000000000000000000000000000000000000000 --- a/spaces/CAMP-ViL/Xplainer/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Xplainer -emoji: 📊 -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 3.34.0 -python_version: 3.7.16 -app_file: app.py -pinned: false -license: mit ---- diff --git a/spaces/CVPR/LIVE/pybind11/tests/test_docstring_options.py b/spaces/CVPR/LIVE/pybind11/tests/test_docstring_options.py deleted file mode 100644 index 80ade0f158c3fc7b8e21cf79461a430be7c82f3a..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pybind11/tests/test_docstring_options.py +++ /dev/null @@ -1,39 +0,0 @@ -# -*- coding: utf-8 -*- -from pybind11_tests import docstring_options as m - - -def test_docstring_options(): - # options.disable_function_signatures() - assert not m.test_function1.__doc__ - - assert m.test_function2.__doc__ == "A custom docstring" - - # docstring specified on just the first overload definition: - assert m.test_overloaded1.__doc__ == "Overload docstring" - - # docstring on both overloads: - assert m.test_overloaded2.__doc__ == "overload docstring 1\noverload docstring 2" - - # docstring on only second overload: - assert m.test_overloaded3.__doc__ == "Overload docstr" - - # options.enable_function_signatures() - assert m.test_function3.__doc__ .startswith("test_function3(a: int, b: int) -> None") - - assert m.test_function4.__doc__ .startswith("test_function4(a: int, b: int) -> None") - assert m.test_function4.__doc__ .endswith("A custom docstring\n") - - # options.disable_function_signatures() - # options.disable_user_defined_docstrings() - assert not m.test_function5.__doc__ - - # nested options.enable_user_defined_docstrings() - assert m.test_function6.__doc__ == "A custom docstring" - - # RAII destructor - assert m.test_function7.__doc__ .startswith("test_function7(a: int, b: int) -> None") - assert m.test_function7.__doc__ .endswith("A custom docstring\n") - - # Suppression of user-defined docstrings for non-function objects - assert not m.DocstringTestFoo.__doc__ - assert not m.DocstringTestFoo.value_prop.__doc__ diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/swap_ranges.h b/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/swap_ranges.h deleted file mode 100644 index 8c3338b1baf58a3628245072f4f4700dcd3bc025..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/swap_ranges.h +++ /dev/null @@ -1,23 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -#pragma once - -#include - -// omp inherits swap_ranges -#include - diff --git a/spaces/CVPR/TokenCut/app_backup.py b/spaces/CVPR/TokenCut/app_backup.py deleted file mode 100644 index 8ffb4ca17ba8a9c08f1faf48d4c96650498516ef..0000000000000000000000000000000000000000 --- a/spaces/CVPR/TokenCut/app_backup.py +++ /dev/null @@ -1,43 +0,0 @@ -import os -import requests -import pandas as pd -import gradio as gr -from huggingface_hub.hf_api import SpaceInfo -from pathlib import Path - - -path = f"https://huggingface.co/api/spaces" -os.system("git clone https://github.com/YangtaoWANG95/TokenCut.git") -os.chdir("TokenCut") -os.system("wget https://raw.githubusercontent.com/YangtaoWANG95/TokenCut/master/examples/VOC07_000064.jpg -O parrot.jpg") - - - -def get_blocks_party_spaces(): - r = requests.get(path) - d = r.json() - spaces = [SpaceInfo(**x) for x in d] - blocks_spaces = {} - for i in range(0,len(spaces)): - if spaces[i].id.split('/')[0] == 'CVPR' and hasattr(spaces[i], 'likes') and spaces[i].id != 'CVPR/Leaderboard' and spaces[i].id != 'CVPR/README': - blocks_spaces[spaces[i].id]=spaces[i].likes - df = pd.DataFrame( - [{"Spaces_Name": Spaces, "likes": likes} for Spaces,likes in blocks_spaces.items()]) - df = df.sort_values(by=['likes'],ascending=False) - return df - - -block = gr.Blocks() - -with block: - gr.Markdown("""Leaderboard for the most popular CVPR Spaces. To learn more and join, see CVPR Event""") - with gr.Tabs(): - with gr.TabItem("CVPR Leaderboard"): - with gr.Row(): - data = gr.outputs.Dataframe(type="pandas") - with gr.Row(): - data_run = gr.Button("Refresh") - data_run.click(get_blocks_party_spaces, inputs=None, outputs=data) - - block.load(get_blocks_party_spaces, inputs=None, outputs=data) -block.launch() diff --git a/spaces/CVPR/ml-talking-face/toxicity_estimator/__init__.py b/spaces/CVPR/ml-talking-face/toxicity_estimator/__init__.py deleted file mode 100644 index cf47bc9ac37afb00225e0d0d62ce2748cdcfed71..0000000000000000000000000000000000000000 --- a/spaces/CVPR/ml-talking-face/toxicity_estimator/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .module import PerspectiveAPI \ No newline at end of file diff --git a/spaces/CVPR/regionclip-demo/detectron2/projects/__init__.py b/spaces/CVPR/regionclip-demo/detectron2/projects/__init__.py deleted file mode 100644 index a68207db4ee3c2578e1042b00b3071a946b7adea..0000000000000000000000000000000000000000 --- a/spaces/CVPR/regionclip-demo/detectron2/projects/__init__.py +++ /dev/null @@ -1,31 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import importlib -from pathlib import Path - -_PROJECTS = { - "point_rend": "PointRend", - "deeplab": "DeepLab", - "panoptic_deeplab": "Panoptic-DeepLab", -} -_PROJECT_ROOT = Path(__file__).resolve().parent.parent.parent / "projects" - -if _PROJECT_ROOT.is_dir(): - # This is true only for in-place installation (pip install -e, setup.py develop), - # where setup(package_dir=) does not work: https://github.com/pypa/setuptools/issues/230 - - class _D2ProjectsFinder(importlib.abc.MetaPathFinder): - def find_spec(self, name, path, target=None): - if not name.startswith("detectron2.projects."): - return - project_name = name.split(".")[-1] - project_dir = _PROJECTS.get(project_name) - if not project_dir: - return - target_file = _PROJECT_ROOT / f"{project_dir}/{project_name}/__init__.py" - if not target_file.is_file(): - return - return importlib.util.spec_from_file_location(name, target_file) - - import sys - - sys.meta_path.append(_D2ProjectsFinder()) diff --git a/spaces/CanKorkut/turkish-hatespeech-detection/app.py b/spaces/CanKorkut/turkish-hatespeech-detection/app.py deleted file mode 100644 index 6387070272170a00ce265638da0255849ddbb8dc..0000000000000000000000000000000000000000 --- a/spaces/CanKorkut/turkish-hatespeech-detection/app.py +++ /dev/null @@ -1,16 +0,0 @@ -from transformers import AutoTokenizer -import gradio as gr -from transformers import XLMRobertaForSequenceClassification -from transformers import pipeline - -id2label = {0:'INSULT', 1:'OTHER', 2:'PROFANITY', 3:'RACIST', 4:'SEXIST'} -model = XLMRobertaForSequenceClassification.from_pretrained(".") -tokenizer = AutoTokenizer.from_pretrained(".") - -def predict(text): - classifier = pipeline("text-classification", model=model, tokenizer=tokenizer) - result = id2label[int(classifier(text)[0]['label'].split('_')[-1])] - return result - -iface = gr.Interface(fn=predict, inputs="text", outputs="text") -iface.launch() diff --git a/spaces/ChatGPT-GAIA/GAIA-GPT/app.py b/spaces/ChatGPT-GAIA/GAIA-GPT/app.py deleted file mode 100644 index c16825a3466066d5c01f7d687c0f4fdb04e2cd88..0000000000000000000000000000000000000000 --- a/spaces/ChatGPT-GAIA/GAIA-GPT/app.py +++ /dev/null @@ -1,202 +0,0 @@ -import gradio as gr -import os -import json -import requests - -#Streaming endpoint -API_URL = "https://api.openai.com/v1/chat/completions" #os.getenv("API_URL") + "/generate_stream" -OPENAI_API_KEY= os.environ["HF_TOKEN"] # Add a token to this space . Then copy it to the repository secret in this spaces settings panel. os.environ reads from there. -# Keys for Open AI ChatGPT API usage are created from here: https://platform.openai.com/account/api-keys - -def predict(inputs, top_p, temperature, chat_counter, chatbot=[], history=[]): #repetition_penalty, top_k - - # 1. Set up a payload - payload = { - "model": "gpt-3.5-turbo", - "messages": [{"role": "user", "content": f"{inputs}"}], - "temperature" : 1.0, - "top_p":1.0, - "n" : 1, - "stream": True, - "presence_penalty":0, - "frequency_penalty":0, - } - - # 2. Define your headers and add a key from https://platform.openai.com/account/api-keys - headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {OPENAI_API_KEY}" - } - - # 3. 
Create a chat counter loop that feeds [Predict next best anything based on last input and attention with memory defined by introspective attention over time] - print(f"chat_counter - {chat_counter}") - if chat_counter != 0 : - messages=[] - for data in chatbot: - temp1 = {} - temp1["role"] = "user" - temp1["content"] = data[0] - temp2 = {} - temp2["role"] = "assistant" - temp2["content"] = data[1] - messages.append(temp1) - messages.append(temp2) - temp3 = {} - temp3["role"] = "user" - temp3["content"] = inputs - messages.append(temp3) - payload = { - "model": "gpt-3.5-turbo", - "messages": messages, #[{"role": "user", "content": f"{inputs}"}], - "temperature" : temperature, #1.0, - "top_p": top_p, #1.0, - "n" : 1, - "stream": True, - "presence_penalty":0, - "frequency_penalty":0, - } - chat_counter+=1 - - # 4. POST it to OPENAI API - history.append(inputs) - print(f"payload is - {payload}") - response = requests.post(API_URL, headers=headers, json=payload, stream=True) - token_counter = 0 - partial_words = "" - - # 5. Iterate through response lines and structure readable response - counter=0 - for chunk in response.iter_lines(): - if counter == 0: - counter+=1 - continue - if chunk.decode() : - chunk = chunk.decode() - if len(chunk) > 12 and "content" in json.loads(chunk[6:])['choices'][0]['delta']: - partial_words = partial_words + json.loads(chunk[6:])['choices'][0]["delta"]["content"] - if token_counter == 0: - history.append(" " + partial_words) - else: - history[-1] = partial_words - chat = [(history[i], history[i + 1]) for i in range(0, len(history) - 1, 2) ] # convert to tuples of list - token_counter+=1 - yield chat, history, chat_counter - - -def reset_textbox(): - return gr.update(value='') - - - - -# Episodic and Semantic IO -def list_files(file_path): - import os - icon_csv = "📄 " - icon_txt = "📑 " - current_directory = os.getcwd() - file_list = [] - for filename in os.listdir(current_directory): - if filename.endswith(".csv"): - file_list.append(icon_csv + filename) - elif filename.endswith(".txt"): - file_list.append(icon_txt + filename) - if file_list: - return "\n".join(file_list) - else: - return "No .csv or .txt files found in the current directory." - -# Function to read a file -def read_file(file_path): - try: - with open(file_path, "r") as file: - contents = file.read() - return f"{contents}" - #return f"Contents of {file_path}:\n{contents}" - except FileNotFoundError: - return "File not found." - -# Function to delete a file -def delete_file(file_path): - try: - import os - os.remove(file_path) - return f"{file_path} has been deleted." - except FileNotFoundError: - return "File not found." - -# Function to write to a file -def write_file(file_path, content): - try: - with open(file_path, "w") as file: - file.write(content) - return f"Successfully written to {file_path}." - except: - return "Error occurred while writing to file." - -# Function to append to a file -def append_file(file_path, content): - try: - with open(file_path, "a") as file: - file.write(content) - return f"Successfully appended to {file_path}." - except: - return "Error occurred while appending to file." - - -title = """

    Generative AI Intelligence Amplifier - GAIA

    """ -description = """ -## GAIA Dataset References: 📚 -- **WebText:** A dataset of web pages crawled from domains on the Alexa top 5,000 list. This dataset was used to pretrain GPT-2. - - [WebText: A Large-Scale Unsupervised Text Corpus by Radford et al.](https://paperswithcode.com/dataset/webtext) -- **Common Crawl:** A dataset of web pages from a variety of domains, which is updated regularly. This dataset was used to pretrain GPT-3. - - [Language Models are Few-Shot Learners](https://paperswithcode.com/dataset/common-crawl) by Brown et al. -- **BooksCorpus:** A dataset of over 11,000 books from a variety of genres. - - [Scalable Methods for 8 Billion Token Language Modeling](https://paperswithcode.com/dataset/bookcorpus) by Zhu et al. -- **English Wikipedia:** A dump of the English-language Wikipedia as of 2018, with articles from 2001-2017. - - [Improving Language Understanding by Generative Pre-Training](https://huggingface.co/spaces/awacke1/WikipediaUltimateAISearch?logs=build) Space for Wikipedia Search -- **Toronto Books Corpus:** A dataset of over 7,000 books from a variety of genres, collected by the University of Toronto. - - [Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond](https://paperswithcode.com/dataset/bookcorpus) by Schwenk and Douze. -- **OpenWebText:** A dataset of web pages that were filtered to remove content that was likely to be low-quality or spammy. This dataset was used to pretrain GPT-3. - - [Language Models are Few-Shot Learners](https://paperswithcode.com/dataset/openwebtext) by Brown et al. - """ - -# 6. Use Gradio to pull it all together -with gr.Blocks(css = """#col_container {width: 100%; margin-left: auto; margin-right: auto;} #chatbot {height: 400px; overflow: auto;}""") as demo: - gr.HTML(title) - with gr.Column(elem_id = "col_container"): - inputs = gr.Textbox(placeholder= "Paste Prompt with Context Data Here", label= "Type an input and press Enter") - chatbot = gr.Chatbot(elem_id='chatbot') - state = gr.State([]) - b1 = gr.Button() - with gr.Accordion("Parameters", open=False): - top_p = gr.Slider( minimum=-0, maximum=1.0, value=1.0, step=0.05, interactive=True, label="Top-p (nucleus sampling)",) - temperature = gr.Slider( minimum=-0, maximum=5.0, value=1.0, step=0.1, interactive=True, label="Temperature",) - chat_counter = gr.Number(value=0, visible=True, precision=0) - - - # Episodic/Semantic IO - fileName = gr.Textbox(label="Filename") - fileContent = gr.TextArea(label="File Content") - completedMessage = gr.Textbox(label="Completed") - label = gr.Label() - with gr.Row(): - listFiles = gr.Button("📄 List File(s)") - readFile = gr.Button("📖 Read File") - saveFile = gr.Button("💾 Save File") - deleteFile = gr.Button("🗑️ Delete File") - appendFile = gr.Button("➕ Append File") - listFiles.click(list_files, inputs=fileName, outputs=fileContent) - readFile.click(read_file, inputs=fileName, outputs=fileContent) - saveFile.click(write_file, inputs=[fileName, fileContent], outputs=completedMessage) - deleteFile.click(delete_file, inputs=fileName, outputs=completedMessage) - appendFile.click(append_file, inputs=[fileName, fileContent], outputs=completedMessage ) - - - inputs.submit(predict, [inputs, top_p, temperature,chat_counter, chatbot, state], [chatbot, state, chat_counter]) - b1.click(predict, [inputs, top_p, temperature, chat_counter, chatbot, state], [chatbot, state, chat_counter]) - b1.click(reset_textbox, [], [inputs]) - inputs.submit(reset_textbox, [], [inputs]) - gr.Markdown(description) - - 
demo.queue().launch(debug=True) \ No newline at end of file diff --git a/spaces/Cyril666/my_abi/modules/resnet.py b/spaces/Cyril666/my_abi/modules/resnet.py deleted file mode 100644 index 5ffb908ff8bf874a496c9f4fad2eb04f49cadf44..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/my_abi/modules/resnet.py +++ /dev/null @@ -1,104 +0,0 @@ -import math - -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.model_zoo as model_zoo - - -def conv1x1(in_planes, out_planes, stride=1): - return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False) - - -def conv3x3(in_planes, out_planes, stride=1): - "3x3 convolution with padding" - return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, - padding=1, bias=False) - - -class BasicBlock(nn.Module): - expansion = 1 - - def __init__(self, inplanes, planes, stride=1, downsample=None): - super(BasicBlock, self).__init__() - self.conv1 = conv1x1(inplanes, planes) - self.bn1 = nn.BatchNorm2d(planes) - self.relu = nn.ReLU(inplace=True) - self.conv2 = conv3x3(planes, planes, stride) - self.bn2 = nn.BatchNorm2d(planes) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -class ResNet(nn.Module): - - def __init__(self, block, layers): - self.inplanes = 32 - super(ResNet, self).__init__() - self.conv1 = nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1, - bias=False) - self.bn1 = nn.BatchNorm2d(32) - self.relu = nn.ReLU(inplace=True) - - self.layer1 = self._make_layer(block, 32, layers[0], stride=2) - self.layer2 = self._make_layer(block, 64, layers[1], stride=1) - self.layer3 = self._make_layer(block, 128, layers[2], stride=2) - self.layer4 = self._make_layer(block, 256, layers[3], stride=1) - self.layer5 = self._make_layer(block, 512, layers[4], stride=1) - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels - m.weight.data.normal_(0, math.sqrt(2. 
/ n)) - elif isinstance(m, nn.BatchNorm2d): - m.weight.data.fill_(1) - m.bias.data.zero_() - - def _make_layer(self, block, planes, blocks, stride=1): - downsample = None - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - nn.Conv2d(self.inplanes, planes * block.expansion, - kernel_size=1, stride=stride, bias=False), - nn.BatchNorm2d(planes * block.expansion), - ) - - layers = [] - layers.append(block(self.inplanes, planes, stride, downsample)) - self.inplanes = planes * block.expansion - for i in range(1, blocks): - layers.append(block(self.inplanes, planes)) - - return nn.Sequential(*layers) - - def forward(self, x): - x = self.conv1(x) - x = self.bn1(x) - x = self.relu(x) - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.layer4(x) - x = self.layer5(x) - return x - - -def resnet45(): - return ResNet(BasicBlock, [3, 4, 6, 6, 3]) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/altair/utils/mimebundle.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/altair/utils/mimebundle.py deleted file mode 100644 index 1e00542fb4617e01a6bece351494e512835779c8..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/altair/utils/mimebundle.py +++ /dev/null @@ -1,196 +0,0 @@ -from .html import spec_to_html - - -def spec_to_mimebundle( - spec, - format, - mode=None, - vega_version=None, - vegaembed_version=None, - vegalite_version=None, - engine=None, - **kwargs, -): - """Convert a vega-lite specification to a mimebundle - - The mimebundle type is controlled by the ``format`` argument, which can be - one of the following ['html', 'json', 'png', 'svg', 'pdf', 'vega', 'vega-lite'] - - Parameters - ---------- - spec : dict - a dictionary representing a vega-lite plot spec - format : string {'html', 'json', 'png', 'svg', 'pdf', 'vega', 'vega-lite'} - the file format to be saved. - mode : string {'vega-lite'} - The rendering mode. - vega_version : string - The version of vega.js to use - vegaembed_version : string - The version of vegaembed.js to use - vegalite_version : string - The version of vegalite.js to use. 
Only required if mode=='vega-lite' - engine: string {'vl-convert', 'altair_saver'} - the conversion engine to use for 'png', 'svg', 'pdf', and 'vega' formats - **kwargs : - Additional arguments will be passed to the generating function - - Returns - ------- - output : dict - a mime-bundle representing the image - - Note - ---- - The png, svg, pdf, and vega outputs require the altair_saver package - """ - if mode != "vega-lite": - raise ValueError("mode must be 'vega-lite'") - - if format in ["png", "svg", "pdf", "vega"]: - return _spec_to_mimebundle_with_engine( - spec, format, mode, engine=engine, **kwargs - ) - if format == "html": - html = spec_to_html( - spec, - mode=mode, - vega_version=vega_version, - vegaembed_version=vegaembed_version, - vegalite_version=vegalite_version, - **kwargs, - ) - return {"text/html": html} - if format == "vega-lite": - if vegalite_version is None: - raise ValueError("Must specify vegalite_version") - return {"application/vnd.vegalite.v{}+json".format(vegalite_version[0]): spec} - if format == "json": - return {"application/json": spec} - raise ValueError( - "format must be one of " - "['html', 'json', 'png', 'svg', 'pdf', 'vega', 'vega-lite']" - ) - - -def _spec_to_mimebundle_with_engine(spec, format, mode, **kwargs): - """Helper for Vega-Lite to mimebundle conversions that require an engine - - Parameters - ---------- - spec : dict - a dictionary representing a vega-lite plot spec - format : string {'png', 'svg', 'pdf', 'vega'} - the format of the mimebundle to be returned - mode : string {'vega-lite'} - The rendering mode. - engine: string {'vl-convert', 'altair_saver'} - the conversion engine to use - **kwargs : - Additional arguments will be passed to the conversion function - """ - # Normalize the engine string (if any) by lower casing - # and removing underscores and hyphens - engine = kwargs.pop("engine", None) - normalized_engine = _validate_normalize_engine(engine, format) - - if normalized_engine == "vlconvert": - import vl_convert as vlc - from ..vegalite import SCHEMA_VERSION - - # Compute VlConvert's vl_version string (of the form 'v5_2') - # from SCHEMA_VERSION (of the form 'v5.2.0') - vl_version = "_".join(SCHEMA_VERSION.split(".")[:2]) - if format == "vega": - vg = vlc.vegalite_to_vega(spec, vl_version=vl_version) - return {"application/vnd.vega.v5+json": vg} - elif format == "svg": - svg = vlc.vegalite_to_svg(spec, vl_version=vl_version) - return {"image/svg+xml": svg} - elif format == "png": - png = vlc.vegalite_to_png( - spec, - vl_version=vl_version, - scale=kwargs.get("scale_factor", 1.0), - ) - return {"image/png": png} - else: - # This should be validated above - # but raise exception for the sake of future development - raise ValueError("Unexpected format {fmt!r}".format(fmt=format)) - elif normalized_engine == "altairsaver": - import altair_saver - - return altair_saver.render(spec, format, mode=mode, **kwargs) - else: - # This should be validated above - # but raise exception for the sake of future development - raise ValueError( - "Unexpected normalized_engine {eng!r}".format(eng=normalized_engine) - ) - - -def _validate_normalize_engine(engine, format): - """Helper to validate and normalize the user-provided engine - - engine : {None, 'vl-convert', 'altair_saver'} - the user-provided engine string - format : string {'png', 'svg', 'pdf', 'vega'} - the format of the mimebundle to be returned - """ - # Try to import vl_convert - try: - import vl_convert as vlc - except ImportError: - vlc = None - - # Try to import altair_saver - 
try: - import altair_saver - except ImportError: - altair_saver = None - - # Normalize engine string by lower casing and removing underscores and hyphens - normalized_engine = ( - None if engine is None else engine.lower().replace("-", "").replace("_", "") - ) - - # Validate or infer default value of normalized_engine - if normalized_engine == "vlconvert": - if vlc is None: - raise ValueError( - "The 'vl-convert' conversion engine requires the vl-convert-python package" - ) - if format == "pdf": - raise ValueError( - "The 'vl-convert' conversion engine does not support the {fmt!r} format.\n" - "Use the 'altair_saver' engine instead".format(fmt=format) - ) - elif normalized_engine == "altairsaver": - if altair_saver is None: - raise ValueError( - "The 'altair_saver' conversion engine requires the altair_saver package" - ) - elif normalized_engine is None: - if vlc is not None and format != "pdf": - normalized_engine = "vlconvert" - elif altair_saver is not None: - normalized_engine = "altairsaver" - else: - if format == "pdf": - raise ValueError( - "Saving charts in {fmt!r} format requires the altair_saver package: " - "see http://github.com/altair-viz/altair_saver/".format(fmt=format) - ) - else: - raise ValueError( - "Saving charts in {fmt!r} format requires the vl-convert-python or altair_saver package: " - "see http://github.com/altair-viz/altair_saver/".format(fmt=format) - ) - else: - raise ValueError( - "Invalid conversion engine {engine!r}. Expected one of {valid!r}".format( - engine=engine, valid=("vl-convert", "altair_saver") - ) - ) - return normalized_engine diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/param_functions.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/param_functions.py deleted file mode 100644 index a43afaf311798ebde5fb265e1d47d584d807152d..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/param_functions.py +++ /dev/null @@ -1,564 +0,0 @@ -from typing import Any, Callable, Dict, List, Optional, Sequence, Union - -from fastapi import params -from fastapi._compat import Undefined -from typing_extensions import Annotated, deprecated - -_Unset: Any = Undefined - - -def Path( # noqa: N802 - default: Any = ..., - *, - default_factory: Union[Callable[[], Any], None] = _Unset, - alias: Optional[str] = None, - alias_priority: Union[int, None] = _Unset, - # TODO: update when deprecating Pydantic v1, import these types - # validation_alias: str | AliasPath | AliasChoices | None - validation_alias: Union[str, None] = None, - serialization_alias: Union[str, None] = None, - title: Optional[str] = None, - description: Optional[str] = None, - gt: Optional[float] = None, - ge: Optional[float] = None, - lt: Optional[float] = None, - le: Optional[float] = None, - min_length: Optional[int] = None, - max_length: Optional[int] = None, - pattern: Optional[str] = None, - regex: Annotated[ - Optional[str], - deprecated( - "Deprecated in FastAPI 0.100.0 and Pydantic v2, use `pattern` instead." - ), - ] = None, - discriminator: Union[str, None] = None, - strict: Union[bool, None] = _Unset, - multiple_of: Union[float, None] = _Unset, - allow_inf_nan: Union[bool, None] = _Unset, - max_digits: Union[int, None] = _Unset, - decimal_places: Union[int, None] = _Unset, - examples: Optional[List[Any]] = None, - example: Annotated[ - Optional[Any], - deprecated( - "Deprecated in OpenAPI 3.1.0 that now uses JSON Schema 2020-12, " - "although still supported. Use examples instead." 
- ), - ] = _Unset, - deprecated: Optional[bool] = None, - include_in_schema: bool = True, - json_schema_extra: Union[Dict[str, Any], None] = None, - **extra: Any, -) -> Any: - return params.Path( - default=default, - default_factory=default_factory, - alias=alias, - alias_priority=alias_priority, - validation_alias=validation_alias, - serialization_alias=serialization_alias, - title=title, - description=description, - gt=gt, - ge=ge, - lt=lt, - le=le, - min_length=min_length, - max_length=max_length, - pattern=pattern, - regex=regex, - discriminator=discriminator, - strict=strict, - multiple_of=multiple_of, - allow_inf_nan=allow_inf_nan, - max_digits=max_digits, - decimal_places=decimal_places, - example=example, - examples=examples, - deprecated=deprecated, - include_in_schema=include_in_schema, - json_schema_extra=json_schema_extra, - **extra, - ) - - -def Query( # noqa: N802 - default: Any = Undefined, - *, - default_factory: Union[Callable[[], Any], None] = _Unset, - alias: Optional[str] = None, - alias_priority: Union[int, None] = _Unset, - # TODO: update when deprecating Pydantic v1, import these types - # validation_alias: str | AliasPath | AliasChoices | None - validation_alias: Union[str, None] = None, - serialization_alias: Union[str, None] = None, - title: Optional[str] = None, - description: Optional[str] = None, - gt: Optional[float] = None, - ge: Optional[float] = None, - lt: Optional[float] = None, - le: Optional[float] = None, - min_length: Optional[int] = None, - max_length: Optional[int] = None, - pattern: Optional[str] = None, - regex: Annotated[ - Optional[str], - deprecated( - "Deprecated in FastAPI 0.100.0 and Pydantic v2, use `pattern` instead." - ), - ] = None, - discriminator: Union[str, None] = None, - strict: Union[bool, None] = _Unset, - multiple_of: Union[float, None] = _Unset, - allow_inf_nan: Union[bool, None] = _Unset, - max_digits: Union[int, None] = _Unset, - decimal_places: Union[int, None] = _Unset, - examples: Optional[List[Any]] = None, - example: Annotated[ - Optional[Any], - deprecated( - "Deprecated in OpenAPI 3.1.0 that now uses JSON Schema 2020-12, " - "although still supported. Use examples instead." 
- ), - ] = _Unset, - deprecated: Optional[bool] = None, - include_in_schema: bool = True, - json_schema_extra: Union[Dict[str, Any], None] = None, - **extra: Any, -) -> Any: - return params.Query( - default=default, - default_factory=default_factory, - alias=alias, - alias_priority=alias_priority, - validation_alias=validation_alias, - serialization_alias=serialization_alias, - title=title, - description=description, - gt=gt, - ge=ge, - lt=lt, - le=le, - min_length=min_length, - max_length=max_length, - pattern=pattern, - regex=regex, - discriminator=discriminator, - strict=strict, - multiple_of=multiple_of, - allow_inf_nan=allow_inf_nan, - max_digits=max_digits, - decimal_places=decimal_places, - example=example, - examples=examples, - deprecated=deprecated, - include_in_schema=include_in_schema, - json_schema_extra=json_schema_extra, - **extra, - ) - - -def Header( # noqa: N802 - default: Any = Undefined, - *, - default_factory: Union[Callable[[], Any], None] = _Unset, - alias: Optional[str] = None, - alias_priority: Union[int, None] = _Unset, - # TODO: update when deprecating Pydantic v1, import these types - # validation_alias: str | AliasPath | AliasChoices | None - validation_alias: Union[str, None] = None, - serialization_alias: Union[str, None] = None, - convert_underscores: bool = True, - title: Optional[str] = None, - description: Optional[str] = None, - gt: Optional[float] = None, - ge: Optional[float] = None, - lt: Optional[float] = None, - le: Optional[float] = None, - min_length: Optional[int] = None, - max_length: Optional[int] = None, - pattern: Optional[str] = None, - regex: Annotated[ - Optional[str], - deprecated( - "Deprecated in FastAPI 0.100.0 and Pydantic v2, use `pattern` instead." - ), - ] = None, - discriminator: Union[str, None] = None, - strict: Union[bool, None] = _Unset, - multiple_of: Union[float, None] = _Unset, - allow_inf_nan: Union[bool, None] = _Unset, - max_digits: Union[int, None] = _Unset, - decimal_places: Union[int, None] = _Unset, - examples: Optional[List[Any]] = None, - example: Annotated[ - Optional[Any], - deprecated( - "Deprecated in OpenAPI 3.1.0 that now uses JSON Schema 2020-12, " - "although still supported. Use examples instead." 
- ), - ] = _Unset, - deprecated: Optional[bool] = None, - include_in_schema: bool = True, - json_schema_extra: Union[Dict[str, Any], None] = None, - **extra: Any, -) -> Any: - return params.Header( - default=default, - default_factory=default_factory, - alias=alias, - alias_priority=alias_priority, - validation_alias=validation_alias, - serialization_alias=serialization_alias, - convert_underscores=convert_underscores, - title=title, - description=description, - gt=gt, - ge=ge, - lt=lt, - le=le, - min_length=min_length, - max_length=max_length, - pattern=pattern, - regex=regex, - discriminator=discriminator, - strict=strict, - multiple_of=multiple_of, - allow_inf_nan=allow_inf_nan, - max_digits=max_digits, - decimal_places=decimal_places, - example=example, - examples=examples, - deprecated=deprecated, - include_in_schema=include_in_schema, - json_schema_extra=json_schema_extra, - **extra, - ) - - -def Cookie( # noqa: N802 - default: Any = Undefined, - *, - default_factory: Union[Callable[[], Any], None] = _Unset, - alias: Optional[str] = None, - alias_priority: Union[int, None] = _Unset, - # TODO: update when deprecating Pydantic v1, import these types - # validation_alias: str | AliasPath | AliasChoices | None - validation_alias: Union[str, None] = None, - serialization_alias: Union[str, None] = None, - title: Optional[str] = None, - description: Optional[str] = None, - gt: Optional[float] = None, - ge: Optional[float] = None, - lt: Optional[float] = None, - le: Optional[float] = None, - min_length: Optional[int] = None, - max_length: Optional[int] = None, - pattern: Optional[str] = None, - regex: Annotated[ - Optional[str], - deprecated( - "Deprecated in FastAPI 0.100.0 and Pydantic v2, use `pattern` instead." - ), - ] = None, - discriminator: Union[str, None] = None, - strict: Union[bool, None] = _Unset, - multiple_of: Union[float, None] = _Unset, - allow_inf_nan: Union[bool, None] = _Unset, - max_digits: Union[int, None] = _Unset, - decimal_places: Union[int, None] = _Unset, - examples: Optional[List[Any]] = None, - example: Annotated[ - Optional[Any], - deprecated( - "Deprecated in OpenAPI 3.1.0 that now uses JSON Schema 2020-12, " - "although still supported. Use examples instead." 
- ), - ] = _Unset, - deprecated: Optional[bool] = None, - include_in_schema: bool = True, - json_schema_extra: Union[Dict[str, Any], None] = None, - **extra: Any, -) -> Any: - return params.Cookie( - default=default, - default_factory=default_factory, - alias=alias, - alias_priority=alias_priority, - validation_alias=validation_alias, - serialization_alias=serialization_alias, - title=title, - description=description, - gt=gt, - ge=ge, - lt=lt, - le=le, - min_length=min_length, - max_length=max_length, - pattern=pattern, - regex=regex, - discriminator=discriminator, - strict=strict, - multiple_of=multiple_of, - allow_inf_nan=allow_inf_nan, - max_digits=max_digits, - decimal_places=decimal_places, - example=example, - examples=examples, - deprecated=deprecated, - include_in_schema=include_in_schema, - json_schema_extra=json_schema_extra, - **extra, - ) - - -def Body( # noqa: N802 - default: Any = Undefined, - *, - default_factory: Union[Callable[[], Any], None] = _Unset, - embed: bool = False, - media_type: str = "application/json", - alias: Optional[str] = None, - alias_priority: Union[int, None] = _Unset, - # TODO: update when deprecating Pydantic v1, import these types - # validation_alias: str | AliasPath | AliasChoices | None - validation_alias: Union[str, None] = None, - serialization_alias: Union[str, None] = None, - title: Optional[str] = None, - description: Optional[str] = None, - gt: Optional[float] = None, - ge: Optional[float] = None, - lt: Optional[float] = None, - le: Optional[float] = None, - min_length: Optional[int] = None, - max_length: Optional[int] = None, - pattern: Optional[str] = None, - regex: Annotated[ - Optional[str], - deprecated( - "Deprecated in FastAPI 0.100.0 and Pydantic v2, use `pattern` instead." - ), - ] = None, - discriminator: Union[str, None] = None, - strict: Union[bool, None] = _Unset, - multiple_of: Union[float, None] = _Unset, - allow_inf_nan: Union[bool, None] = _Unset, - max_digits: Union[int, None] = _Unset, - decimal_places: Union[int, None] = _Unset, - examples: Optional[List[Any]] = None, - example: Annotated[ - Optional[Any], - deprecated( - "Deprecated in OpenAPI 3.1.0 that now uses JSON Schema 2020-12, " - "although still supported. Use examples instead." 
- ), - ] = _Unset, - deprecated: Optional[bool] = None, - include_in_schema: bool = True, - json_schema_extra: Union[Dict[str, Any], None] = None, - **extra: Any, -) -> Any: - return params.Body( - default=default, - default_factory=default_factory, - embed=embed, - media_type=media_type, - alias=alias, - alias_priority=alias_priority, - validation_alias=validation_alias, - serialization_alias=serialization_alias, - title=title, - description=description, - gt=gt, - ge=ge, - lt=lt, - le=le, - min_length=min_length, - max_length=max_length, - pattern=pattern, - regex=regex, - discriminator=discriminator, - strict=strict, - multiple_of=multiple_of, - allow_inf_nan=allow_inf_nan, - max_digits=max_digits, - decimal_places=decimal_places, - example=example, - examples=examples, - deprecated=deprecated, - include_in_schema=include_in_schema, - json_schema_extra=json_schema_extra, - **extra, - ) - - -def Form( # noqa: N802 - default: Any = Undefined, - *, - default_factory: Union[Callable[[], Any], None] = _Unset, - media_type: str = "application/x-www-form-urlencoded", - alias: Optional[str] = None, - alias_priority: Union[int, None] = _Unset, - # TODO: update when deprecating Pydantic v1, import these types - # validation_alias: str | AliasPath | AliasChoices | None - validation_alias: Union[str, None] = None, - serialization_alias: Union[str, None] = None, - title: Optional[str] = None, - description: Optional[str] = None, - gt: Optional[float] = None, - ge: Optional[float] = None, - lt: Optional[float] = None, - le: Optional[float] = None, - min_length: Optional[int] = None, - max_length: Optional[int] = None, - pattern: Optional[str] = None, - regex: Annotated[ - Optional[str], - deprecated( - "Deprecated in FastAPI 0.100.0 and Pydantic v2, use `pattern` instead." - ), - ] = None, - discriminator: Union[str, None] = None, - strict: Union[bool, None] = _Unset, - multiple_of: Union[float, None] = _Unset, - allow_inf_nan: Union[bool, None] = _Unset, - max_digits: Union[int, None] = _Unset, - decimal_places: Union[int, None] = _Unset, - examples: Optional[List[Any]] = None, - example: Annotated[ - Optional[Any], - deprecated( - "Deprecated in OpenAPI 3.1.0 that now uses JSON Schema 2020-12, " - "although still supported. Use examples instead." 
- ), - ] = _Unset, - deprecated: Optional[bool] = None, - include_in_schema: bool = True, - json_schema_extra: Union[Dict[str, Any], None] = None, - **extra: Any, -) -> Any: - return params.Form( - default=default, - default_factory=default_factory, - media_type=media_type, - alias=alias, - alias_priority=alias_priority, - validation_alias=validation_alias, - serialization_alias=serialization_alias, - title=title, - description=description, - gt=gt, - ge=ge, - lt=lt, - le=le, - min_length=min_length, - max_length=max_length, - pattern=pattern, - regex=regex, - discriminator=discriminator, - strict=strict, - multiple_of=multiple_of, - allow_inf_nan=allow_inf_nan, - max_digits=max_digits, - decimal_places=decimal_places, - example=example, - examples=examples, - deprecated=deprecated, - include_in_schema=include_in_schema, - json_schema_extra=json_schema_extra, - **extra, - ) - - -def File( # noqa: N802 - default: Any = Undefined, - *, - default_factory: Union[Callable[[], Any], None] = _Unset, - media_type: str = "multipart/form-data", - alias: Optional[str] = None, - alias_priority: Union[int, None] = _Unset, - # TODO: update when deprecating Pydantic v1, import these types - # validation_alias: str | AliasPath | AliasChoices | None - validation_alias: Union[str, None] = None, - serialization_alias: Union[str, None] = None, - title: Optional[str] = None, - description: Optional[str] = None, - gt: Optional[float] = None, - ge: Optional[float] = None, - lt: Optional[float] = None, - le: Optional[float] = None, - min_length: Optional[int] = None, - max_length: Optional[int] = None, - pattern: Optional[str] = None, - regex: Annotated[ - Optional[str], - deprecated( - "Deprecated in FastAPI 0.100.0 and Pydantic v2, use `pattern` instead." - ), - ] = None, - discriminator: Union[str, None] = None, - strict: Union[bool, None] = _Unset, - multiple_of: Union[float, None] = _Unset, - allow_inf_nan: Union[bool, None] = _Unset, - max_digits: Union[int, None] = _Unset, - decimal_places: Union[int, None] = _Unset, - examples: Optional[List[Any]] = None, - example: Annotated[ - Optional[Any], - deprecated( - "Deprecated in OpenAPI 3.1.0 that now uses JSON Schema 2020-12, " - "although still supported. Use examples instead." 
- ), - ] = _Unset, - deprecated: Optional[bool] = None, - include_in_schema: bool = True, - json_schema_extra: Union[Dict[str, Any], None] = None, - **extra: Any, -) -> Any: - return params.File( - default=default, - default_factory=default_factory, - media_type=media_type, - alias=alias, - alias_priority=alias_priority, - validation_alias=validation_alias, - serialization_alias=serialization_alias, - title=title, - description=description, - gt=gt, - ge=ge, - lt=lt, - le=le, - min_length=min_length, - max_length=max_length, - pattern=pattern, - regex=regex, - discriminator=discriminator, - strict=strict, - multiple_of=multiple_of, - allow_inf_nan=allow_inf_nan, - max_digits=max_digits, - decimal_places=decimal_places, - example=example, - examples=examples, - deprecated=deprecated, - include_in_schema=include_in_schema, - json_schema_extra=json_schema_extra, - **extra, - ) - - -def Depends( # noqa: N802 - dependency: Optional[Callable[..., Any]] = None, *, use_cache: bool = True -) -> Any: - return params.Depends(dependency=dependency, use_cache=use_cache) - - -def Security( # noqa: N802 - dependency: Optional[Callable[..., Any]] = None, - *, - scopes: Optional[Sequence[str]] = None, - use_cache: bool = True, -) -> Any: - return params.Security(dependency=dependency, scopes=scopes, use_cache=use_cache) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/t2CharStringPen.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/t2CharStringPen.py deleted file mode 100644 index 41ab0f92f2b683ac2dc87ca1b16f54047d0fef81..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/t2CharStringPen.py +++ /dev/null @@ -1,68 +0,0 @@ -# Copyright (c) 2009 Type Supply LLC -# Author: Tal Leming - -from fontTools.misc.roundTools import otRound, roundFunc -from fontTools.misc.psCharStrings import T2CharString -from fontTools.pens.basePen import BasePen -from fontTools.cffLib.specializer import specializeCommands, commandsToProgram - - -class T2CharStringPen(BasePen): - """Pen to draw Type 2 CharStrings. - - The 'roundTolerance' argument controls the rounding of point coordinates. - It is defined as the maximum absolute difference between the original - float and the rounded integer value. - The default tolerance of 0.5 means that all floats are rounded to integer; - a value of 0 disables rounding; values in between will only round floats - which are close to their integral part within the tolerated range. 
- """ - - def __init__(self, width, glyphSet, roundTolerance=0.5, CFF2=False): - super(T2CharStringPen, self).__init__(glyphSet) - self.round = roundFunc(roundTolerance) - self._CFF2 = CFF2 - self._width = width - self._commands = [] - self._p0 = (0, 0) - - def _p(self, pt): - p0 = self._p0 - pt = self._p0 = (self.round(pt[0]), self.round(pt[1])) - return [pt[0] - p0[0], pt[1] - p0[1]] - - def _moveTo(self, pt): - self._commands.append(("rmoveto", self._p(pt))) - - def _lineTo(self, pt): - self._commands.append(("rlineto", self._p(pt))) - - def _curveToOne(self, pt1, pt2, pt3): - _p = self._p - self._commands.append(("rrcurveto", _p(pt1) + _p(pt2) + _p(pt3))) - - def _closePath(self): - pass - - def _endPath(self): - pass - - def getCharString(self, private=None, globalSubrs=None, optimize=True): - commands = self._commands - if optimize: - maxstack = 48 if not self._CFF2 else 513 - commands = specializeCommands( - commands, generalizeFirst=False, maxstack=maxstack - ) - program = commandsToProgram(commands) - if self._width is not None: - assert ( - not self._CFF2 - ), "CFF2 does not allow encoding glyph width in CharString." - program.insert(0, otRound(self._width)) - if not self._CFF2: - program.append("endchar") - charString = T2CharString( - program=program, private=private, globalSubrs=globalSubrs - ) - return charString diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-711d7bc4.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-711d7bc4.js deleted file mode 100644 index 1ff2671fd1fe45e8d113c4f95d650c3c253453a3..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-711d7bc4.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as Q,e as Z,s as W,J as Y,K as g,p as L,M as C,n as G,A as M,Z as me,N as z,O as U,m as ge,Q as q,z as N,u as ne,v as R,y as te,a1 as we,B as be,G as x,L as H,af as ve,ao as oe,V as ke,P as ie,U as E,R as se,h as P,j as ee,k as I,o as K,ap as ue,t as le,x as V,am as Se,E as Ee,ae as Ne,q as ye,r as Re,F as X}from"./index-3370be2a.js";/* empty css */import{b as fe,B as Te}from"./Button-89624748.js";import{B as ze}from"./BlockTitle-bcf8c05e.js";import"./Info-5611e10f.js";function Je(n){let e,l;return{c(){e=Y("svg"),l=Y("path"),g(l,"d","M5 8l4 4 4-4z"),g(e,"class","dropdown-arrow svelte-p5edak"),g(e,"xmlns","http://www.w3.org/2000/svg"),g(e,"width","18"),g(e,"height","18"),g(e,"viewBox","0 0 18 18")},m(i,o){L(i,e,o),C(e,l)},p:G,i:G,o:G,d(i){i&&M(e)}}}class Le extends Q{constructor(e){super(),Z(this,e,null,Je,W,{})}}function Me(n){let e,l;return{c(){e=Y("svg"),l=Y("path"),g(l,"d","M19 6.41L17.59 5 12 10.59 6.41 5 5 6.41 10.59 12 5 17.59 6.41 19 12 13.41 17.59 19 19 17.59 13.41 12z"),g(e,"xmlns","http://www.w3.org/2000/svg"),g(e,"width","16"),g(e,"height","16"),g(e,"viewBox","0 0 24 24")},m(i,o){L(i,e,o),C(e,l)},p:G,i:G,o:G,d(i){i&&M(e)}}}class Oe extends Q{constructor(e){super(),Z(this,e,null,Me,W,{})}}function ae(n,e,l){const i=n.slice();return i[24]=e[l],i}function re(n){let e,l,i,o,d,t=x(n[0]),u=[];for(let s=0;s{i&&(l||(l=oe(e,fe,{duration:200,y:5},!0)),l.run(1))}),i=!0)},o(s){s&&(l||(l=oe(e,fe,{duration:200,y:5},!1)),l.run(0)),i=!1},d(s){s&&M(e),ke(u,s),n[21](null),s&&l&&l.end(),o=!1,d()}}}function _e(n){let e,l,i,o=n[24]+"",d,t,u,s;return{c(){e=z("li"),l=z("span"),l.textContent="✓",i=U(),d=ie(o),t=U(),g(l,"class","inner-item 
svelte-1aonegi"),E(l,"hide",!n[11].includes(n[24])),g(e,"class","item svelte-1aonegi"),g(e,"role","button"),g(e,"data-value",u=n[24]),g(e,"aria-label",s=n[24]),E(e,"selected",n[11].includes(n[24])),E(e,"active",n[2]===n[24]),E(e,"bg-gray-100",n[2]===n[24]),E(e,"dark:bg-gray-600",n[2]===n[24])},m(f,a){L(f,e,a),C(e,l),C(e,i),C(e,d),C(e,t)},p(f,a){a&2049&&E(l,"hide",!f[11].includes(f[24])),a&1&&o!==(o=f[24]+"")&&se(d,o),a&1&&u!==(u=f[24])&&g(e,"data-value",u),a&1&&s!==(s=f[24])&&g(e,"aria-label",s),a&2049&&E(e,"selected",f[11].includes(f[24])),a&5&&E(e,"active",f[2]===f[24]),a&5&&E(e,"bg-gray-100",f[2]===f[24]),a&5&&E(e,"dark:bg-gray-600",f[2]===f[24])},d(f){f&&M(e)}}}function Ue(n){let e,l,i,o,d;me(n[18]);let t=n[1]&&!n[3]&&re(n);return{c(){e=z("div"),l=U(),t&&t.c(),i=ge(),g(e,"class","reference")},m(u,s){L(u,e,s),n[19](e),L(u,l,s),t&&t.m(u,s),L(u,i,s),o||(d=[q(window,"scroll",n[12]),q(window,"resize",n[18])],o=!0)},p(u,[s]){u[1]&&!u[3]?t?(t.p(u,s),s&10&&N(t,1)):(t=re(u),t.c(),N(t,1),t.m(i.parentNode,i)):t&&(ne(),R(t,1,1,()=>{t=null}),te())},i(u){N(t)},o(u){R(t)},d(u){u&&(M(e),M(l),M(i)),n[19](null),t&&t.d(u),o=!1,we(d)}}}function qe(n,e,l){let i,{value:o=void 0}=e,{filtered:d}=e,{showOptions:t=!1}=e,{activeOption:u}=e,{disabled:s=!1}=e,f,a,m,_,w,A,b,B,v,k;const S=()=>{const{top:O,bottom:F}=w.getBoundingClientRect();l(15,f=O),l(16,a=k-F)};let y=null;const D=()=>{t&&(y!==null&&clearTimeout(y),y=setTimeout(()=>{S(),y=null},10))},j=be();function J(){l(10,k=window.innerHeight)}function h(O){P[O?"unshift":"push"](()=>{w=O,l(4,w)})}const p=O=>j("change",O);function r(O){P[O?"unshift":"push"](()=>{A=O,l(5,A)})}return n.$$set=O=>{"value"in O&&l(14,o=O.value),"filtered"in O&&l(0,d=O.filtered),"showOptions"in O&&l(1,t=O.showOptions),"activeOption"in O&&l(2,u=O.activeOption),"disabled"in O&&l(3,s=O.disabled)},n.$$.update=()=>{if(n.$$.dirty&245810){if(t&&w){if(A&&typeof o=="string"){let F=document.querySelector(`li[data-value="${o}"]`);F&&A.scrollTo(0,F.offsetTop)}S();const O=w.parentElement?.getBoundingClientRect();l(17,m=O?.height||0),l(6,_=O?.width||0)}a>f?(l(7,b=`${f}px`),l(9,v=a),l(8,B=null)):(l(8,B=`${a+m}px`),l(9,v=f-m),l(7,b=null))}n.$$.dirty&16384&&l(11,i=Array.isArray(o)?o:[o])},[d,t,u,s,w,A,_,b,B,v,k,i,D,j,o,f,a,m,J,h,p,r]}class je extends Q{constructor(e){super(),Z(this,e,qe,Ue,W,{value:14,filtered:0,showOptions:1,activeOption:2,disabled:3})}}function ce(n,e,l){const i=n.slice();return i[31]=e[l],i}function He(n){let e;return{c(){e=ie(n[1])},m(l,i){L(l,e,i)},p(l,i){i[0]&2&&se(e,l[1])},d(l){l&&M(e)}}}function de(n){let e,l,i=x(n[0]),o=[];for(let t=0;tR(o[t],1,1,()=>{o[t]=null});return{c(){for(let t=0;tee(B,"value",j)),B.$on("change",n[15]),{c(){e=z("label"),I(l.$$.fragment),i=U(),o=z("div"),d=z("div"),D&&D.c(),u=U(),s=z("div"),f=z("input"),a=U(),m=z("div"),I(_.$$.fragment),w=U(),I(A.$$.fragment),b=U(),I(B.$$.fragment),g(f,"class","border-none svelte-c0u3f0"),f.disabled=n[4],g(f,"autocomplete","off"),E(f,"subdued",n[0]!==n[8]&&!n[7]),g(m,"class","token-remove remove-all svelte-c0u3f0"),g(m,"title","Clear"),E(m,"hide",!n[3]||!n[0]?.length||n[4]),g(s,"class","secondary-wrap svelte-c0u3f0"),g(d,"class","wrap-inner svelte-c0u3f0"),E(d,"showOptions",n[11]),g(o,"class","wrap 
svelte-c0u3f0"),g(e,"class","svelte-c0u3f0"),E(e,"container",n[6])},m(h,p){L(h,e,p),K(l,e,null),C(e,i),C(e,o),C(o,d),D&&D.m(d,null),C(d,u),C(d,s),C(s,f),ue(f,n[8]),n[23](f),C(s,a),C(s,m),K(_,m,null),C(s,w),K(A,s,null),C(o,b),K(B,o,null),k=!0,S||(y=[q(f,"input",n[22]),q(f,"focus",n[24]),q(f,"keydown",n[16]),q(f,"keyup",n[25]),q(f,"blur",n[26]),q(m,"click",n[14])],S=!0)},p(h,p){const r={};p[0]&32&&(r.show_label=h[5]),p[0]&4&&(r.info=h[2]),p[0]&2|p[1]&8&&(r.$$scope={dirty:p,ctx:h}),l.$set(r),p[0]&9&&(t=h[3]&&Array.isArray(h[0])),t?D?(D.p(h,p),p[0]&9&&N(D,1)):(D=de(h),D.c(),N(D,1),D.m(d,u)):D&&(ne(),R(D,1,1,()=>{D=null}),te()),(!k||p[0]&16)&&(f.disabled=h[4]),p[0]&256&&f.value!==h[8]&&ue(f,h[8]),(!k||p[0]&385)&&E(f,"subdued",h[0]!==h[8]&&!h[7]),(!k||p[0]&25)&&E(m,"hide",!h[3]||!h[0]?.length||h[4]),(!k||p[0]&2048)&&E(d,"showOptions",h[11]);const O={};p[0]&2048&&(O.showOptions=h[11]),p[0]&1024&&(O.filtered=h[10]),p[0]&512&&(O.activeOption=h[9]),p[0]&16&&(O.disabled=h[4]),!v&&p[0]&1&&(v=!0,O.value=h[0],le(()=>v=!1)),B.$set(O),(!k||p[0]&64)&&E(e,"container",h[6])},i(h){k||(N(l.$$.fragment,h),N(D),N(_.$$.fragment,h),N(A.$$.fragment,h),N(B.$$.fragment,h),k=!0)},o(h){R(l.$$.fragment,h),R(D),R(_.$$.fragment,h),R(A.$$.fragment,h),R(B.$$.fragment,h),k=!1},d(h){h&&M(e),V(l),D&&D.d(),n[23](null),V(_),V(A),V(B),S=!1,we(y)}}}function Ke(n,e,l){let i,{label:o}=e,{info:d=void 0}=e,{value:t}=e,u=Array.isArray(t)?t.slice():t,{value_is_output:s=!1}=e,{multiselect:f=!1}=e,{max_choices:a}=e,{choices:m}=e,{disabled:_=!1}=e,{show_label:w}=e,{container:A=!0}=e,{allow_custom_value:b=!1}=e;const B=be();let v,k,S=!1,y;function D(){B("change",t),s||B("input")}Se(()=>{l(17,s=!1)});function j(c){l(0,t),(!a||t.lengthT!==c)),B("select",{index:m.indexOf(c),value:c,selected:!1})}function h(c){l(0,t=[]),l(8,v=""),c.preventDefault()}function p(c){const T=c.detail.target.dataset.value;if(b&&l(8,v=T),T!==void 0)if(f)t?.includes(T)?J(T):j(T),l(8,v="");else{l(0,t=T),l(8,v=T),l(11,S=!1),B("select",{index:m.indexOf(T),value:T,selected:!0});return}}function r(c){if(c.key==="Enter"&&k!=null)f?f&&Array.isArray(t)&&(t.includes(k)?J(k):j(k),l(8,v="")):(t!==k&&(l(0,t=k),B("select",{index:m.indexOf(t),value:t,selected:!0})),l(8,v=k),l(11,S=!1));else if(l(11,S=!0),c.key==="ArrowUp"||c.key==="ArrowDown"){k===null&&l(9,k=i[0]);const T=c.key==="ArrowUp"?-1:1,$=i.indexOf(k)+T;l(9,k=$<0?i[i.length-1]:$===i.length?i[0]:i[$]),c.preventDefault()}else c.key==="Escape"?l(11,S=!1):c.key==="Backspace"?f&&(!v||v==="")&&Array.isArray(t)&&t.length>0&&(J(t[t.length-1]),l(8,v="")):l(11,S=!0)}const O=c=>J(c);function F(){v=this.value,l(8,v),l(0,t)}function Ae(c){P[c?"unshift":"push"](()=>{y=c,l(12,y)})}const pe=()=>{l(11,S=!S),S?l(8,v=""):y.blur()},Be=()=>{b&&l(0,t=v)},De=()=>{f?l(8,v=""):b||t!==v&&(typeof t=="string"&&v==""?l(8,v=t):(l(0,t=void 0),l(8,v=""))),l(11,S=!1)};function Ce(c){t=c,l(0,t)}return n.$$set=c=>{"label"in c&&l(1,o=c.label),"info"in c&&l(2,d=c.info),"value"in c&&l(0,t=c.value),"value_is_output"in c&&l(17,s=c.value_is_output),"multiselect"in c&&l(3,f=c.multiselect),"max_choices"in c&&l(18,a=c.max_choices),"choices"in c&&l(19,m=c.choices),"disabled"in c&&l(4,_=c.disabled),"show_label"in c&&l(5,w=c.show_label),"container"in c&&l(6,A=c.container),"allow_custom_value"in c&&l(7,b=c.allow_custom_value)},n.$$.update=()=>{n.$$.dirty[0]&1&&(typeof 
t=="string"||t===null)&&l(8,v=t),n.$$.dirty[0]&524544&&l(10,i=m.filter(c=>v?c.toLowerCase().includes(v.toLowerCase()):c)),n.$$.dirty[0]&1536&&(!k||!i.includes(k))&&l(9,k=i.length?i[0]:null),n.$$.dirty[0]&1048577&&JSON.stringify(t)!=JSON.stringify(u)&&(l(20,u=Array.isArray(t)?t.slice():t),D()),n.$$.dirty[0]&1048577&&JSON.stringify(t)!=JSON.stringify(u)&&(B("change",t),l(20,u=Array.isArray(t)?t.slice():t))},[t,o,d,f,_,w,A,b,v,k,i,S,y,J,h,p,r,s,a,m,u,O,F,Ae,pe,Be,De,Ce]}class Ve extends Q{constructor(e){super(),Z(this,e,Ke,Ie,W,{label:1,info:2,value:0,value_is_output:17,multiselect:3,max_choices:18,choices:19,disabled:4,show_label:5,container:6,allow_custom_value:7},null,[-1,-1])}}function Fe(n){let e,l,i,o,d,t;const u=[n[14]];let s={};for(let _=0;_ee(i,"value",f)),P.push(()=>ee(i,"value_is_output",a)),i.$on("change",n[19]),i.$on("input",n[20]),i.$on("select",n[21]),i.$on("blur",n[22]),{c(){I(e.$$.fragment),l=U(),I(i.$$.fragment)},m(_,w){K(e,_,w),L(_,l,w),K(i,_,w),t=!0},p(_,w){const A=w&16384?ye(u,[Re(_[14])]):{};e.$set(A);const b={};w&512&&(b.choices=_[9]),w&128&&(b.multiselect=_[7]),w&256&&(b.max_choices=_[8]),w&4&&(b.label=_[2]),w&8&&(b.info=_[3]),w&1024&&(b.show_label=_[10]),w&32768&&(b.allow_custom_value=_[15]),w&2048&&(b.container=_[11]),w&65536&&(b.disabled=_[16]==="static"),!o&&w&1&&(o=!0,b.value=_[0],le(()=>o=!1)),!d&&w&2&&(d=!0,b.value_is_output=_[1],le(()=>d=!1)),i.$set(b)},i(_){t||(N(e.$$.fragment,_),N(i.$$.fragment,_),t=!0)},o(_){R(e.$$.fragment,_),R(i.$$.fragment,_),t=!1},d(_){_&&M(l),V(e,_),V(i,_)}}}function Ge(n){let e,l;return e=new Te({props:{visible:n[6],elem_id:n[4],elem_classes:n[5],padding:n[11],allow_overflow:!1,scale:n[12],min_width:n[13],$$slots:{default:[Fe]},$$scope:{ctx:n}}}),{c(){I(e.$$.fragment)},m(i,o){K(e,i,o),l=!0},p(i,[o]){const d={};o&64&&(d.visible=i[6]),o&16&&(d.elem_id=i[4]),o&32&&(d.elem_classes=i[5]),o&2048&&(d.padding=i[11]),o&4096&&(d.scale=i[12]),o&8192&&(d.min_width=i[13]),o&8507279&&(d.$$scope={dirty:o,ctx:i}),e.$set(d)},i(i){l||(N(e.$$.fragment,i),l=!0)},o(i){R(e.$$.fragment,i),l=!1},d(i){V(e,i)}}}function Pe(n,e,l){let{label:i="Dropdown"}=e,{info:o=void 0}=e,{elem_id:d=""}=e,{elem_classes:t=[]}=e,{visible:u=!0}=e,{value:s}=e,{value_is_output:f=!1}=e,{multiselect:a=!1}=e,{max_choices:m}=e,{choices:_}=e,{show_label:w}=e,{container:A=!0}=e,{scale:b=null}=e,{min_width:B=void 0}=e,{loading_status:v}=e,{allow_custom_value:k=!1}=e,{mode:S}=e;a&&!s?s=[]:s||(s="");function y(r){s=r,l(0,s)}function D(r){f=r,l(1,f)}function j(r){X.call(this,n,r)}function J(r){X.call(this,n,r)}function h(r){X.call(this,n,r)}function p(r){X.call(this,n,r)}return n.$$set=r=>{"label"in r&&l(2,i=r.label),"info"in r&&l(3,o=r.info),"elem_id"in r&&l(4,d=r.elem_id),"elem_classes"in r&&l(5,t=r.elem_classes),"visible"in r&&l(6,u=r.visible),"value"in r&&l(0,s=r.value),"value_is_output"in r&&l(1,f=r.value_is_output),"multiselect"in r&&l(7,a=r.multiselect),"max_choices"in r&&l(8,m=r.max_choices),"choices"in r&&l(9,_=r.choices),"show_label"in r&&l(10,w=r.show_label),"container"in r&&l(11,A=r.container),"scale"in r&&l(12,b=r.scale),"min_width"in r&&l(13,B=r.min_width),"loading_status"in r&&l(14,v=r.loading_status),"allow_custom_value"in r&&l(15,k=r.allow_custom_value),"mode"in r&&l(16,S=r.mode)},[s,f,i,o,d,t,u,a,m,_,w,A,b,B,v,k,S,y,D,j,J,h,p]}class Qe extends 
Q{constructor(e){super(),Z(this,e,Pe,Ge,W,{label:2,info:3,elem_id:4,elem_classes:5,visible:6,value:0,value_is_output:1,multiselect:7,max_choices:8,choices:9,show_label:10,container:11,scale:12,min_width:13,loading_status:14,allow_custom_value:15,mode:16})}}const $e=Qe,el=["static","dynamic"],ll=n=>({type:{payload:"string"},description:{payload:"selected choice"},example_data:n.choices.length?n.choices[0]:""});export{$e as Component,ll as document,el as modes}; -//# sourceMappingURL=index-711d7bc4.js.map diff --git a/spaces/DavidLijun/FI/app.py b/spaces/DavidLijun/FI/app.py deleted file mode 100644 index 417c30fed08557a32b544709f1e6b5974349b717..0000000000000000000000000000000000000000 --- a/spaces/DavidLijun/FI/app.py +++ /dev/null @@ -1,8 +0,0 @@ -import os -from dotenv import load_dotenv -from subprocess import Popen -load_dotenv() - -command = ["mercury", "run", f"0.0.0.0:{os.environ.get('PORT', 7860)}"] -worker = Popen(command) -worker.wait() \ No newline at end of file diff --git a/spaces/DenniSciFi/IconAutomation/icon_automation_interface_copy.py b/spaces/DenniSciFi/IconAutomation/icon_automation_interface_copy.py deleted file mode 100644 index 236ed6dc216bf02f066a73653b17b6ddd10054b1..0000000000000000000000000000000000000000 --- a/spaces/DenniSciFi/IconAutomation/icon_automation_interface_copy.py +++ /dev/null @@ -1,27 +0,0 @@ -import gradio as gr -from icon_automation_copy import update_related_opportunities_icons - -# Defining the interface function -def interface(notion_url, image_url): - # Calling the update_related_opportunities_icons function with the provided URLs - update_related_opportunities_icons(notion_url, image_url) - return "Operation Completed" - -# Creating an instance of the gr.Interface class -iface = gr.Interface( - # Setting the interface function as the function to be called when inputs are provided - fn=interface, - # Defining the inputs for the interface as two text boxes: Notion URL and Image URL - inputs=[ - gr.inputs.Textbox(label="Notion URL"), - gr.inputs.Textbox(label="Image URL"), - ], - # Setting the output of the interface as a single text box - outputs=gr.outputs.Textbox(), - # Setting the title and description of the interface - title="Icon Automation for Notion", - description="Enter the Notion URL of the organization page and the image URL you want to set as the icon. 
This will update the icons of all related opportunities.", -) - -# Launching the interface, making it accessible for user interaction -iface.launch() \ No newline at end of file diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/stylegan2/op/fused_act.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/stylegan2/op/fused_act.py deleted file mode 100644 index bd4aedd84977884683fb213e11f33ca493aef583..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/stylegan2/op/fused_act.py +++ /dev/null @@ -1,32 +0,0 @@ -import os - -import torch -from torch import nn -from torch.nn import functional as F -from torch.autograd import Function - - -module_path = os.path.dirname(__file__) - - -class FusedLeakyReLU(nn.Module): - def __init__(self, channel, negative_slope=0.2, scale=2 ** 0.5): - super().__init__() - - self.bias = nn.Parameter(torch.zeros(channel)) - self.negative_slope = negative_slope - self.scale = scale - - def forward(self, input): - return fused_leaky_relu(input, self.bias, self.negative_slope, self.scale) - - -def fused_leaky_relu(input, bias, negative_slope=0.2, scale=2 ** 0.5): - rest_dim = [1] * (input.ndim - bias.ndim - 1) - input = input.cuda() - return ( - F.leaky_relu( - input + bias.view(1, bias.shape[0], *rest_dim), negative_slope=negative_slope - ) - * scale - ) diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/op_edit/upfirdn2d.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/op_edit/upfirdn2d.py deleted file mode 100644 index ecdcabbe20d2405b71d049d0bf94ae576fe58493..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/op_edit/upfirdn2d.py +++ /dev/null @@ -1,206 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. 
- -import os - -import torch -from torch.nn import functional as F -from torch.autograd import Function -from torch.utils.cpp_extension import load - - -module_path = os.path.dirname(__file__) -upfirdn2d_op = load( - "upfirdn2d", - sources=[ - os.path.join(module_path, "upfirdn2d.cpp"), - os.path.join(module_path, "upfirdn2d_kernel.cu"), - ], -) - - -class UpFirDn2dBackward(Function): - @staticmethod - def forward( - ctx, grad_output, kernel, grad_kernel, up, down, pad, g_pad, in_size, out_size - ): - - up_x, up_y = up - down_x, down_y = down - g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1 = g_pad - - grad_output = grad_output.reshape(-1, out_size[0], out_size[1], 1) - - grad_input = upfirdn2d_op.upfirdn2d( - grad_output, - grad_kernel, - down_x, - down_y, - up_x, - up_y, - g_pad_x0, - g_pad_x1, - g_pad_y0, - g_pad_y1, - ) - grad_input = grad_input.view( - in_size[0], in_size[1], in_size[2], in_size[3]) - - ctx.save_for_backward(kernel) - - pad_x0, pad_x1, pad_y0, pad_y1 = pad - - ctx.up_x = up_x - ctx.up_y = up_y - ctx.down_x = down_x - ctx.down_y = down_y - ctx.pad_x0 = pad_x0 - ctx.pad_x1 = pad_x1 - ctx.pad_y0 = pad_y0 - ctx.pad_y1 = pad_y1 - ctx.in_size = in_size - ctx.out_size = out_size - - return grad_input - - @staticmethod - def backward(ctx, gradgrad_input): - (kernel,) = ctx.saved_tensors - - gradgrad_input = gradgrad_input.reshape(-1, - ctx.in_size[2], ctx.in_size[3], 1) - - gradgrad_out = upfirdn2d_op.upfirdn2d( - gradgrad_input, - kernel, - ctx.up_x, - ctx.up_y, - ctx.down_x, - ctx.down_y, - ctx.pad_x0, - ctx.pad_x1, - ctx.pad_y0, - ctx.pad_y1, - ) - # gradgrad_out = gradgrad_out.view(ctx.in_size[0], ctx.out_size[0], ctx.out_size[1], ctx.in_size[3]) - gradgrad_out = gradgrad_out.view( - ctx.in_size[0], ctx.in_size[1], ctx.out_size[0], ctx.out_size[1] - ) - - return gradgrad_out, None, None, None, None, None, None, None, None - - -class UpFirDn2d(Function): - @staticmethod - def forward(ctx, input, kernel, up, down, pad): - up_x, up_y = up - down_x, down_y = down - pad_x0, pad_x1, pad_y0, pad_y1 = pad - - kernel_h, kernel_w = kernel.shape - batch, channel, in_h, in_w = input.shape - ctx.in_size = input.shape - - input = input.reshape(-1, in_h, in_w, 1) - - ctx.save_for_backward(kernel, torch.flip(kernel, [0, 1])) - - out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1 - out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1 - ctx.out_size = (out_h, out_w) - - ctx.up = (up_x, up_y) - ctx.down = (down_x, down_y) - ctx.pad = (pad_x0, pad_x1, pad_y0, pad_y1) - - g_pad_x0 = kernel_w - pad_x0 - 1 - g_pad_y0 = kernel_h - pad_y0 - 1 - g_pad_x1 = in_w * up_x - out_w * down_x + pad_x0 - up_x + 1 - g_pad_y1 = in_h * up_y - out_h * down_y + pad_y0 - up_y + 1 - - ctx.g_pad = (g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1) - - out = upfirdn2d_op.upfirdn2d( - input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1 - ) - # out = out.view(major, out_h, out_w, minor) - out = out.view(-1, channel, out_h, out_w) - - return out - - @staticmethod - def backward(ctx, grad_output): - kernel, grad_kernel = ctx.saved_tensors - - grad_input = UpFirDn2dBackward.apply( - grad_output, - kernel, - grad_kernel, - ctx.up, - ctx.down, - ctx.pad, - ctx.g_pad, - ctx.in_size, - ctx.out_size, - ) - - return grad_input, None, None, None, None - - -def upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0)): - if input.device.type == "cpu": - out = upfirdn2d_native( - input, kernel, up, up, down, down, pad[0], pad[1], pad[0], pad[1] - ) - - else: - out = UpFirDn2d.apply( - input, kernel, (up, 
up), (down, - down), (pad[0], pad[1], pad[0], pad[1]) - ) - - return out - - -def upfirdn2d_native( - input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1 -): - _, channel, in_h, in_w = input.shape - input = input.reshape(-1, in_h, in_w, 1) - - _, in_h, in_w, minor = input.shape - kernel_h, kernel_w = kernel.shape - - out = input.view(-1, in_h, 1, in_w, 1, minor) - out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1]) - out = out.view(-1, in_h * up_y, in_w * up_x, minor) - - out = F.pad( - out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), - max(pad_y0, 0), max(pad_y1, 0)] - ) - out = out[ - :, - max(-pad_y0, 0): out.shape[1] - max(-pad_y1, 0), - max(-pad_x0, 0): out.shape[2] - max(-pad_x1, 0), - :, - ] - - out = out.permute(0, 3, 1, 2) - out = out.reshape( - [-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1] - ) - w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w) - out = F.conv2d(out, w) - out = out.reshape( - -1, - minor, - in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1, - in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1, - ) - out = out.permute(0, 2, 3, 1) - out = out[:, ::down_y, ::down_x, :] - - out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1 - out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1 - - return out.view(-1, channel, out_h, out_w) diff --git a/spaces/Duskfallcrew/Duskfallcrew-Osenayan_Mix/README.md b/spaces/Duskfallcrew/Duskfallcrew-Osenayan_Mix/README.md deleted file mode 100644 index e3c34268f57c6bd75eaf69618892d1aafd713fcb..0000000000000000000000000000000000000000 --- a/spaces/Duskfallcrew/Duskfallcrew-Osenayan_Mix/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Duskfallcrew-Osenayan Mix -emoji: 📚 -colorFrom: purple -colorTo: purple -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: creativeml-openrail-m ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Eddycrack864/Applio-Inference/infer/lib/uvr5_pack/lib_v5/layers_123821KB.py b/spaces/Eddycrack864/Applio-Inference/infer/lib/uvr5_pack/lib_v5/layers_123821KB.py deleted file mode 100644 index 4fc1b5cb85a3327f60cbb9f5deffbeeaaac516ad..0000000000000000000000000000000000000000 --- a/spaces/Eddycrack864/Applio-Inference/infer/lib/uvr5_pack/lib_v5/layers_123821KB.py +++ /dev/null @@ -1,118 +0,0 @@ -import torch -import torch.nn.functional as F -from torch import nn - -from . 
import spec_utils - - -class Conv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(Conv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nout, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - bias=False, - ), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class SeperableConv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(SeperableConv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nin, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - groups=nin, - bias=False, - ), - nn.Conv2d(nin, nout, kernel_size=1, bias=False), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class Encoder(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU): - super(Encoder, self).__init__() - self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ) - - def __call__(self, x): - skip = self.conv1(x) - h = self.conv2(skip) - - return h, skip - - -class Decoder(nn.Module): - def __init__( - self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False - ): - super(Decoder, self).__init__() - self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.dropout = nn.Dropout2d(0.1) if dropout else None - - def __call__(self, x, skip=None): - x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True) - if skip is not None: - skip = spec_utils.crop_center(skip, x) - x = torch.cat([x, skip], dim=1) - h = self.conv(x) - - if self.dropout is not None: - h = self.dropout(h) - - return h - - -class ASPPModule(nn.Module): - def __init__(self, nin, nout, dilations=(4, 8, 16), activ=nn.ReLU): - super(ASPPModule, self).__init__() - self.conv1 = nn.Sequential( - nn.AdaptiveAvgPool2d((1, None)), - Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ), - ) - self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ) - self.conv3 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[0], dilations[0], activ=activ - ) - self.conv4 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[1], dilations[1], activ=activ - ) - self.conv5 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.bottleneck = nn.Sequential( - Conv2DBNActiv(nin * 5, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1) - ) - - def forward(self, x): - _, _, h, w = x.size() - feat1 = F.interpolate( - self.conv1(x), size=(h, w), mode="bilinear", align_corners=True - ) - feat2 = self.conv2(x) - feat3 = self.conv3(x) - feat4 = self.conv4(x) - feat5 = self.conv5(x) - out = torch.cat((feat1, feat2, feat3, feat4, feat5), dim=1) - bottle = self.bottleneck(out) - return bottle diff --git a/spaces/Eddycrack864/Applio-Inference/tools/dlmodels.sh b/spaces/Eddycrack864/Applio-Inference/tools/dlmodels.sh deleted file mode 100644 index 5fba0edef345c0a4384aa9402cfd5e93e29efdc3..0000000000000000000000000000000000000000 --- a/spaces/Eddycrack864/Applio-Inference/tools/dlmodels.sh +++ /dev/null @@ -1,566 +0,0 @@ -#!/bin/bash - -echo working dir is $(pwd) -echo downloading requirement aria2 check. - -if command -v aria2c &> /dev/null -then - echo "aria2c command found" -else - echo failed. 
please install aria2 - sleep 5 - exit 1 -fi - -d32="f0D32k.pth" -d40="f0D40k.pth" -d48="f0D48k.pth" -g32="f0G32k.pth" -g40="f0G40k.pth" -g48="f0G48k.pth" - -d40v2="f0D40k.pth" -g40v2="f0G40k.pth" - -dld32="https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D32k.pth" -dld40="https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D40k.pth" -dld48="https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D48k.pth" -dlg32="https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G32k.pth" -dlg40="https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G40k.pth" -dlg48="https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G48k.pth" - -dld40v2="https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0D40k.pth" -dlg40v2="https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0G40k.pth" - -hp2_all="HP2_all_vocals.pth" -hp3_all="HP3_all_vocals.pth" -hp5_only="HP5_only_main_vocal.pth" -VR_DeEchoAggressive="VR-DeEchoAggressive.pth" -VR_DeEchoDeReverb="VR-DeEchoDeReverb.pth" -VR_DeEchoNormal="VR-DeEchoNormal.pth" -onnx_dereverb="vocals.onnx" -rmvpe="rmvpe.pt" - -dlhp2_all="https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP2_all_vocals.pth" -dlhp3_all="https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP3_all_vocals.pth" -dlhp5_only="https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP5_only_main_vocal.pth" -dlVR_DeEchoAggressive="https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/VR-DeEchoAggressive.pth" -dlVR_DeEchoDeReverb="https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/VR-DeEchoDeReverb.pth" -dlVR_DeEchoNormal="https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/VR-DeEchoNormal.pth" -dlonnx_dereverb="https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/onnx_dereverb_By_FoxJoy/vocals.onnx" -dlrmvpe="https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/rmvpe.pt" - -hb="hubert_base.pt" - -dlhb="https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt" - -echo dir check start. - -if [ -d "./assets/pretrained" ]; then - echo dir ./assets/pretrained checked. -else - echo failed. generating dir ./assets/pretrained. - mkdir pretrained -fi - -if [ -d "./assets/pretrained_v2" ]; then - echo dir ./assets/pretrained_v2 checked. -else - echo failed. generating dir ./assets/pretrained_v2. - mkdir pretrained_v2 -fi - -if [ -d "./assets/uvr5_weights" ]; then - echo dir ./assets/uvr5_weights checked. -else - echo failed. generating dir ./assets/uvr5_weights. - mkdir uvr5_weights -fi - -if [ -d "./assets/uvr5_weights/onnx_dereverb_By_FoxJoy" ]; then - echo dir ./assets/uvr5_weights/onnx_dereverb_By_FoxJoy checked. -else - echo failed. generating dir ./assets/uvr5_weights/onnx_dereverb_By_FoxJoy. - mkdir uvr5_weights/onnx_dereverb_By_FoxJoy -fi - -echo dir check finished. - -echo required files check start. - -echo checking D32k.pth -if [ -f "./assets/pretrained/D32k.pth" ]; then - echo D32k.pth in ./assets/pretrained checked. -else - echo failed. starting download from huggingface. 
- if command -v aria2c &> /dev/null; then - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D32k.pth -d ./assets/pretrained -o D32k.pth - if [ -f "./assets/pretrained/D32k.pth" ]; then - echo download successful. - else - echo please try again! - exit 1 - fi - else - echo aria2c command not found. Please install aria2c and try again. - exit 1 - fi -fi - -echo checking D40k.pth -if [ -f "./assets/pretrained/D40k.pth" ]; then - echo D40k.pth in ./assets/pretrained checked. -else - echo failed. starting download from huggingface. - if command -v aria2c &> /dev/null; then - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D40k.pth -d ./assets/pretrained -o D40k.pth - if [ -f "./assets/pretrained/D40k.pth" ]; then - echo download successful. - else - echo please try again! - exit 1 - fi - else - echo aria2c command not found. Please install aria2c and try again. - exit 1 - fi -fi - -echo checking D40k.pth -if [ -f "./assets/pretrained_v2/D40k.pth" ]; then - echo D40k.pth in ./assets/pretrained_v2 checked. -else - echo failed. starting download from huggingface. - if command -v aria2c &> /dev/null; then - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/D40k.pth -d ./assets/pretrained_v2 -o D40k.pth - if [ -f "./assets/pretrained_v2/D40k.pth" ]; then - echo download successful. - else - echo please try again! - exit 1 - fi - else - echo aria2c command not found. Please install aria2c and try again. - exit 1 - fi -fi - -echo checking D48k.pth -if [ -f "./assets/pretrained/D48k.pth" ]; then - echo D48k.pth in ./assets/pretrained checked. -else - echo failed. starting download from huggingface. - if command -v aria2c &> /dev/null; then - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D48k.pth -d ./assets/pretrained -o D48k.pth - if [ -f "./assets/pretrained/D48k.pth" ]; then - echo download successful. - else - echo please try again! - exit 1 - fi - else - echo aria2c command not found. Please install aria2c and try again. - exit 1 - fi -fi - -echo checking G32k.pth -if [ -f "./assets/pretrained/G32k.pth" ]; then - echo G32k.pth in ./assets/pretrained checked. -else - echo failed. starting download from huggingface. - if command -v aria2c &> /dev/null; then - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G32k.pth -d ./assets/pretrained -o G32k.pth - if [ -f "./assets/pretrained/G32k.pth" ]; then - echo download successful. - else - echo please try again! - exit 1 - fi - else - echo aria2c command not found. Please install aria2c and try again. - exit 1 - fi -fi - -echo checking G40k.pth -if [ -f "./assets/pretrained/G40k.pth" ]; then - echo G40k.pth in ./assets/pretrained checked. -else - echo failed. starting download from huggingface. - if command -v aria2c &> /dev/null; then - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G40k.pth -d ./assets/pretrained -o G40k.pth - if [ -f "./assets/pretrained/G40k.pth" ]; then - echo download successful. - else - echo please try again! - exit 1 - fi - else - echo aria2c command not found. Please install aria2c and try again. 
- exit 1 - fi -fi - -echo checking G40k.pth -if [ -f "./assets/pretrained_v2/G40k.pth" ]; then - echo G40k.pth in ./assets/pretrained_v2 checked. -else - echo failed. starting download from huggingface. - if command -v aria2c &> /dev/null; then - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/G40k.pth -d ./assets/pretrained_v2 -o G40k.pth - if [ -f "./assets/pretrained_v2/G40k.pth" ]; then - echo download successful. - else - echo please try again! - exit 1 - fi - else - echo aria2c command not found. Please install aria2c and try again. - exit 1 - fi -fi - -echo checking G48k.pth -if [ -f "./assets/pretrained/G48k.pth" ]; then - echo G48k.pth in ./assets/pretrained checked. -else - echo failed. starting download from huggingface. - if command -v aria2c &> /dev/null; then - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G48k.pth -d ./assets/pretrained -o G48k.pth - if [ -f "./assets/pretrained/G48k.pth" ]; then - echo download successful. - else - echo please try again! - exit 1 - fi - else - echo aria2c command not found. Please install aria2c and try again. - exit 1 - fi -fi - -echo checking $d32 -if [ -f "./assets/pretrained/$d32" ]; then - echo $d32 in ./assets/pretrained checked. -else - echo failed. starting download from huggingface. - if command -v aria2c &> /dev/null; then - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $dld32 -d ./assets/pretrained -o $d32 - if [ -f "./assets/pretrained/$d32" ]; then - echo download successful. - else - echo please try again! - exit 1 - fi - else - echo aria2c command not found. Please install aria2c and try again. - exit 1 - fi -fi - -echo checking $d40 -if [ -f "./assets/pretrained/$d40" ]; then - echo $d40 in ./assets/pretrained checked. -else - echo failed. starting download from huggingface. - if command -v aria2c &> /dev/null; then - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $dld40 -d ./assets/pretrained -o $d40 - if [ -f "./assets/pretrained/$d40" ]; then - echo download successful. - else - echo please try again! - exit 1 - fi - else - echo aria2c command not found. Please install aria2c and try again. - exit 1 - fi -fi - -echo checking $d40v2 -if [ -f "./assets/pretrained_v2/$d40v2" ]; then - echo $d40v2 in ./assets/pretrained_v2 checked. -else - echo failed. starting download from huggingface. - if command -v aria2c &> /dev/null; then - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $dld40v2 -d ./assets/pretrained_v2 -o $d40v2 - if [ -f "./assets/pretrained_v2/$d40v2" ]; then - echo download successful. - else - echo please try again! - exit 1 - fi - else - echo aria2c command not found. Please install aria2c and try again. - exit 1 - fi -fi - -echo checking $d48 -if [ -f "./assets/pretrained/$d48" ]; then - echo $d48 in ./assets/pretrained checked. -else - echo failed. starting download from huggingface. - if command -v aria2c &> /dev/null; then - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $dld48 -d ./assets/pretrained -o $d48 - if [ -f "./assets/pretrained/$d48" ]; then - echo download successful. - else - echo please try again! - exit 1 - fi - else - echo aria2c command not found. Please install aria2c and try again. - exit 1 - fi -fi - -echo checking $g32 -if [ -f "./assets/pretrained/$g32" ]; then - echo $g32 in ./assets/pretrained checked. -else - echo failed. starting download from huggingface. 
- if command -v aria2c &> /dev/null; then - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $dlg32 -d ./assets/pretrained -o $g32 - if [ -f "./assets/pretrained/$g32" ]; then - echo download successful. - else - echo please try again! - exit 1 - fi - else - echo aria2c command not found. Please install aria2c and try again. - exit 1 - fi -fi - -echo checking $g40 -if [ -f "./assets/pretrained/$g40" ]; then - echo $g40 in ./assets/pretrained checked. -else - echo failed. starting download from huggingface. - if command -v aria2c &> /dev/null; then - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $dlg40 -d ./assets/pretrained -o $g40 - if [ -f "./assets/pretrained/$g40" ]; then - echo download successful. - else - echo please try again! - exit 1 - fi - else - echo aria2c command not found. Please install aria2c and try again. - exit 1 - fi -fi - -echo checking $g40v2 -if [ -f "./assets/pretrained_v2/$g40v2" ]; then - echo $g40v2 in ./assets/pretrained_v2 checked. -else - echo failed. starting download from huggingface. - if command -v aria2c &> /dev/null; then - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $dlg40v2 -d ./assets/pretrained_v2 -o $g40v2 - if [ -f "./assets/pretrained_v2/$g40v2" ]; then - echo download successful. - else - echo please try again! - exit 1 - fi - else - echo aria2c command not found. Please install aria2c and try again. - exit 1 - fi -fi - -echo checking $g48 -if [ -f "./assets/pretrained/$g48" ]; then - echo $g48 in ./assets/pretrained checked. -else - echo failed. starting download from huggingface. - if command -v aria2c &> /dev/null; then - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $dlg48 -d ./assets/pretrained -o $g48 - if [ -f "./assets/pretrained/$g48" ]; then - echo download successful. - else - echo please try again! - exit 1 - fi - else - echo aria2c command not found. Please install aria2c and try again. - exit 1 - fi -fi - -echo checking $hp2_all -if [ -f "./assets/uvr5_weights/$hp2_all" ]; then - echo $hp2_all in ./assets/uvr5_weights checked. -else - echo failed. starting download from huggingface. - if command -v aria2c &> /dev/null; then - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $dlhp2_all -d ./assets/uvr5_weights -o $hp2_all - if [ -f "./assets/uvr5_weights/$hp2_all" ]; then - echo download successful. - else - echo please try again! - exit 1 - fi - else - echo aria2c command not found. Please install aria2c and try again. - exit 1 - fi -fi - -echo checking $hp3_all -if [ -f "./assets/uvr5_weights/$hp3_all" ]; then - echo $hp3_all in ./assets/uvr5_weights checked. -else - echo failed. starting download from huggingface. - if command -v aria2c &> /dev/null; then - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $dlhp3_all -d ./assets/uvr5_weights -o $hp3_all - if [ -f "./assets/uvr5_weights/$hp3_all" ]; then - echo download successful. - else - echo please try again! - exit 1 - fi - else - echo aria2c command not found. Please install aria2c and try again. - exit 1 - fi -fi - -echo checking $hp5_only -if [ -f "./assets/uvr5_weights/$hp5_only" ]; then - echo $hp5_only in ./assets/uvr5_weights checked. -else - echo failed. starting download from huggingface. - if command -v aria2c &> /dev/null; then - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $dlhp5_only -d ./assets/uvr5_weights -o $hp5_only - if [ -f "./assets/uvr5_weights/$hp5_only" ]; then - echo download successful. - else - echo please try again! - exit 1 - fi - else - echo aria2c command not found. 
Please install aria2c and try again. - exit 1 - fi -fi - -echo checking $VR_DeEchoAggressive -if [ -f "./assets/uvr5_weights/$VR_DeEchoAggressive" ]; then - echo $VR_DeEchoAggressive in ./assets/uvr5_weights checked. -else - echo failed. starting download from huggingface. - if command -v aria2c &> /dev/null; then - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $dlVR_DeEchoAggressive -d ./assets/uvr5_weights -o $VR_DeEchoAggressive - if [ -f "./assets/uvr5_weights/$VR_DeEchoAggressive" ]; then - echo download successful. - else - echo please try again! - exit 1 - fi - else - echo aria2c command not found. Please install aria2c and try again. - exit 1 - fi -fi - -echo checking $VR_DeEchoDeReverb -if [ -f "./assets/uvr5_weights/$VR_DeEchoDeReverb" ]; then - echo $VR_DeEchoDeReverb in ./assets/uvr5_weights checked. -else - echo failed. starting download from huggingface. - if command -v aria2c &> /dev/null; then - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $dlVR_DeEchoDeReverb -d ./assets/uvr5_weights -o $VR_DeEchoDeReverb - if [ -f "./assets/uvr5_weights/$VR_DeEchoDeReverb" ]; then - echo download successful. - else - echo please try again! - exit 1 - fi - else - echo aria2c command not found. Please install aria2c and try again. - exit 1 - fi -fi - -echo checking $VR_DeEchoNormal -if [ -f "./assets/uvr5_weights/$VR_DeEchoNormal" ]; then - echo $VR_DeEchoNormal in ./assets/uvr5_weights checked. -else - echo failed. starting download from huggingface. - if command -v aria2c &> /dev/null; then - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $dlVR_DeEchoNormal -d ./assets/uvr5_weights -o $VR_DeEchoNormal - if [ -f "./assets/uvr5_weights/$VR_DeEchoNormal" ]; then - echo download successful. - else - echo please try again! - exit 1 - fi - else - echo aria2c command not found. Please install aria2c and try again. - exit 1 - fi -fi - -echo checking $onnx_dereverb -if [ -f "./assets/uvr5_weights/onnx_dereverb_By_FoxJoy/$onnx_dereverb" ]; then - echo $onnx_dereverb in ./assets/uvr5_weights/onnx_dereverb_By_FoxJoy checked. -else - echo failed. starting download from huggingface. - if command -v aria2c &> /dev/null; then - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $dlonnx_dereverb -d ./assets/uvr5_weights/onnx_dereverb_By_FoxJoy -o $onnx_dereverb - if [ -f "./assets/uvr5_weights/onnx_dereverb_By_FoxJoy/$onnx_dereverb" ]; then - echo download successful. - else - echo please try again! - exit 1 - fi - else - echo aria2c command not found. Please install aria2c and try again. - exit 1 - fi -fi - -echo checking $rmvpe -if [ -f "./assets/rmvpe/$rmvpe" ]; then - echo $rmvpe in ./assets/rmvpe checked. -else - echo failed. starting download from huggingface. - if command -v aria2c &> /dev/null; then - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $dlrmvpe -d ./assets/rmvpe -o $rmvpe - if [ -f "./assets/rmvpe/$rmvpe" ]; then - echo download successful. - else - echo please try again! - exit 1 - fi - else - echo aria2c command not found. Please install aria2c and try again. - exit 1 - fi -fi - -echo checking $hb -if [ -f "./assets/hubert/$hb" ]; then - echo $hb in ./assets/hubert/pretrained checked. -else - echo failed. starting download from huggingface. - if command -v aria2c &> /dev/null; then - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M $dlhb -d ./assets/hubert/ -o $hb - if [ -f "./assets/hubert/$hb" ]; then - echo download successful. - else - echo please try again! - exit 1 - fi - else - echo aria2c command not found. 
Please install aria2c and try again. - exit 1 - fi -fi - -echo required files check finished. diff --git a/spaces/EuroSciPy2022/arxiv-cards/htmlcard.html b/spaces/EuroSciPy2022/arxiv-cards/htmlcard.html deleted file mode 100644 index 575f1d0b883fbe9bf6306cb1326b681482be22a8..0000000000000000000000000000000000000000 --- a/spaces/EuroSciPy2022/arxiv-cards/htmlcard.html +++ /dev/null @@ -1,36 +0,0 @@ - - - - - - - - {% for url, paper in paper_details.items() %} -
    - arxiv logo -
    [{{ paper.arxiv_id}}]
    {{ paper.title }}
    {{ paper.abstract }}
    - {% endfor %} - \ No newline at end of file diff --git a/spaces/Fantasy-Studio/Paint-by-Example/share_btn.py b/spaces/Fantasy-Studio/Paint-by-Example/share_btn.py deleted file mode 100644 index 5bce98ad54d491f9d5691fea427efeccc77690cc..0000000000000000000000000000000000000000 --- a/spaces/Fantasy-Studio/Paint-by-Example/share_btn.py +++ /dev/null @@ -1,93 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = """async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - - async function getInputImgFile(imgCanvas){ - const blob = await new Promise(resolve => imgCanvas.toBlob(resolve)); - const imgId = Date.now() % 200; - const fileName = `sd-inpainting-${{imgId}}.png`; - return new File([blob], fileName, { type: 'image/png' }); - } - - async function getOutoutImgFile(imgEl){ - const res = await fetch(imgEl.src); - const blob = await res.blob(); - const imgId = Date.now() % 200; - const fileName = `sd-inpainting-${{imgId}}.png`; - return new File([blob], fileName, { type: 'image/png' }); - } - - const gradioEl = document.querySelector('body > gradio-app'); - // const gradioEl = document.querySelector("gradio-app").shadowRoot; - const inputImgCanvas = gradioEl.querySelector('canvas[key="drawing"]'); - const outputImgEl = gradioEl.querySelector('#output-img img'); - const promptTxt = gradioEl.querySelector('#input-text textarea').value; - let titleTxt = promptTxt; - if(titleTxt.length > 100){ - titleTxt = titleTxt.slice(0, 100) + ' ...'; - } - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - - if(!outputImgEl){ - return; - }; - - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - - const inputImgFile = await getInputImgFile(inputImgCanvas); - const outputImgFile = await getOutoutImgFile(outputImgEl); - const files = [inputImgFile, outputImgFile]; - - const urls = await Promise.all(files.map((f) => uploadFile(f))); - - const htmlImgs = urls.map(url => ``); - const [inputImgUrl, outputImgUrl] = htmlImgs; - - const descriptionMd = `
    -${inputImgUrl} - -${promptTxt} -
    -${outputImgUrl} -
    `; - - const params = new URLSearchParams({ - title: titleTxt, - description: descriptionMd, - }); - - const paramsStr = params.toString(); - window.open(`${window.location.href}/discussions/new?${paramsStr}`, '_blank'); - - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" \ No newline at end of file diff --git a/spaces/Faridmaruf/rvc-Blue-archives/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py b/spaces/Faridmaruf/rvc-Blue-archives/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py deleted file mode 100644 index ee3171bcb7c4a5066560723108b56e055f18be45..0000000000000000000000000000000000000000 --- a/spaces/Faridmaruf/rvc-Blue-archives/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py +++ /dev/null @@ -1,90 +0,0 @@ -from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor -import pyworld -import numpy as np - - -class DioF0Predictor(F0Predictor): - def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100): - self.hop_length = hop_length - self.f0_min = f0_min - self.f0_max = f0_max - self.sampling_rate = sampling_rate - - def interpolate_f0(self, f0): - """ - 对F0进行插值处理 - """ - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] # 这里可能存在一个没有必要的拷贝 - last_value = data[i] - - return ip_data[:, 0], vuv_vector[:, 0] - - def resize_f0(self, x, target_len): - source = np.array(x) - source[source < 0.001] = np.nan - target = np.interp( - np.arange(0, len(source) * target_len, len(source)) / target_len, - np.arange(0, len(source)), - source, - ) - res = np.nan_to_num(target) - return res - - def compute_f0(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.dio( - wav.astype(np.double), - fs=self.sampling_rate, - f0_floor=self.f0_min, - f0_ceil=self.f0_max, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - for index, pitch in enumerate(f0): - f0[index] = round(pitch, 1) - return self.interpolate_f0(self.resize_f0(f0, p_len))[0] - - def compute_f0_uv(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.dio( - wav.astype(np.double), - fs=self.sampling_rate, - f0_floor=self.f0_min, - f0_ceil=self.f0_max, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - for index, pitch in enumerate(f0): - f0[index] = round(pitch, 1) - return self.interpolate_f0(self.resize_f0(f0, p_len)) diff --git a/spaces/Freiburg-AI-Research/dermoscopic_image_generation/setup.py b/spaces/Freiburg-AI-Research/dermoscopic_image_generation/setup.py deleted file mode 100644 index 76a029ee52d2847fe7f54dcafc7f5edc86005409..0000000000000000000000000000000000000000 --- 
a/spaces/Freiburg-AI-Research/dermoscopic_image_generation/setup.py +++ /dev/null @@ -1,15 +0,0 @@ -from setuptools import setup - -setup( - name="glide-text2im", - packages=["glide_text2im"], - install_requires=[ - "Pillow", - "attrs", - "torch", - "filelock", - "requests", - "tqdm", - ], - author="OpenAI", -) diff --git a/spaces/FridaZuley/RVC_HFKawaii/Applio-RVC-Fork/utils/clonerepo_experimental.py b/spaces/FridaZuley/RVC_HFKawaii/Applio-RVC-Fork/utils/clonerepo_experimental.py deleted file mode 100644 index b0ae02648c1307562cf48033908edcf2996db5e2..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/Applio-RVC-Fork/utils/clonerepo_experimental.py +++ /dev/null @@ -1,253 +0,0 @@ -import os -import subprocess -import shutil -from concurrent.futures import ThreadPoolExecutor, as_completed -from tqdm.notebook import tqdm -from pathlib import Path -import requests - -def run_script(): - def run_cmd(cmd): - process = subprocess.run(cmd, shell=True, check=True, text=True) - return process.stdout - - # Change the current directory to /content/ - os.chdir('/content/') - print("Changing dir to /content/") - - # Your function to edit the file - def edit_file(file_path): - temp_file_path = "/tmp/temp_file.py" - changes_made = False - with open(file_path, "r") as file, open(temp_file_path, "w") as temp_file: - previous_line = "" - second_previous_line = "" - for line in file: - new_line = line.replace("value=160", "value=128") - if new_line != line: - print("Replaced 'value=160' with 'value=128'") - changes_made = True - line = new_line - - new_line = line.replace("crepe hop length: 160", "crepe hop length: 128") - if new_line != line: - print("Replaced 'crepe hop length: 160' with 'crepe hop length: 128'") - changes_made = True - line = new_line - - new_line = line.replace("value=0.88", "value=0.75") - if new_line != line: - print("Replaced 'value=0.88' with 'value=0.75'") - changes_made = True - line = new_line - - if "label=i18n(\"输入源音量包络替换输出音量包络融合比例,越靠近1越使用输出包络\")" in previous_line and "value=1," in line: - new_line = line.replace("value=1,", "value=0.25,") - if new_line != line: - print("Replaced 'value=1,' with 'value=0.25,' based on the condition") - changes_made = True - line = new_line - - if "label=i18n(\"总训练轮数total_epoch\")" in previous_line and "value=20," in line: - new_line = line.replace("value=20,", "value=500,") - if new_line != line: - print("Replaced 'value=20,' with 'value=500,' based on the condition for DEFAULT EPOCH") - changes_made = True - line = new_line - - if 'choices=["pm", "harvest", "dio", "crepe", "crepe-tiny", "mangio-crepe", "mangio-crepe-tiny"], # Fork Feature. 
Add Crepe-Tiny' in previous_line: - if 'value="pm",' in line: - new_line = line.replace('value="pm",', 'value="mangio-crepe",') - if new_line != line: - print("Replaced 'value=\"pm\",' with 'value=\"mangio-crepe\",' based on the condition") - changes_made = True - line = new_line - - new_line = line.replace('label=i18n("输入训练文件夹路径"), value="E:\\\\语音音频+标注\\\\米津玄师\\\\src"', 'label=i18n("输入训练文件夹路径"), value="/content/dataset/"') - if new_line != line: - print("Replaced 'label=i18n(\"输入训练文件夹路径\"), value=\"E:\\\\语音音频+标注\\\\米津玄师\\\\src\"' with 'label=i18n(\"输入训练文件夹路径\"), value=\"/content/dataset/\"'") - changes_made = True - line = new_line - - if 'label=i18n("是否仅保存最新的ckpt文件以节省硬盘空间"),' in second_previous_line: - if 'value=i18n("否"),' in line: - new_line = line.replace('value=i18n("否"),', 'value=i18n("是"),') - if new_line != line: - print("Replaced 'value=i18n(\"否\"),' with 'value=i18n(\"是\"),' based on the condition for SAVE ONLY LATEST") - changes_made = True - line = new_line - - if 'label=i18n("是否在每次保存时间点将最终小模型保存至weights文件夹"),' in second_previous_line: - if 'value=i18n("否"),' in line: - new_line = line.replace('value=i18n("否"),', 'value=i18n("是"),') - if new_line != line: - print("Replaced 'value=i18n(\"否\"),' with 'value=i18n(\"是\"),' based on the condition for SAVE SMALL WEIGHTS") - changes_made = True - line = new_line - - temp_file.write(line) - second_previous_line = previous_line - previous_line = line - - # After finished, we replace the original file with the temp one - import shutil - shutil.move(temp_file_path, file_path) - - if changes_made: - print("Changes made and file saved successfully.") - else: - print("No changes were needed.") - - # Define the repo path - repo_path = '/content/Applio-RVC-Fork' - - def copy_all_files_in_directory(src_dir, dest_dir): - # Iterate over all files in source directory - for item in Path(src_dir).glob('*'): - if item.is_file(): - # Copy each file to destination directory - shutil.copy(item, dest_dir) - else: - # If it's a directory, make a new directory in the destination and copy the files recursively - new_dest = Path(dest_dir) / item.name - new_dest.mkdir(exist_ok=True) - copy_all_files_in_directory(str(item), str(new_dest)) - - def clone_and_copy_repo(repo_path): - # New repository link - new_repo_link = "https://github.com/IAHispano/Applio-RVC-Fork/" - # Temporary path to clone the repository - temp_repo_path = "/content/temp_Applio-RVC-Fork" - # New folder name - new_folder_name = "Applio-RVC-Fork" - - # Clone the latest code from the new repository to a temporary location - run_cmd(f"git clone {new_repo_link} {temp_repo_path}") - os.chdir(temp_repo_path) - - run_cmd(f"git checkout 3fa4dad3d8961e5ca2522e9e12c0b4ddb71ad402") - run_cmd(f"git checkout f9e606c279cb49420597519b0a83b92be81e42e4") - run_cmd(f"git checkout 9e305588844c5442d58add1061b29beeca89d679") - run_cmd(f"git checkout bf92dc1eb54b4f28d6396a4d1820a25896cc9af8") - run_cmd(f"git checkout c3810e197d3cb98039973b2f723edf967ecd9e61") - run_cmd(f"git checkout a33159efd134c2413b0afe26a76b7dc87926d2de") - run_cmd(f"git checkout 24e251fb62c662e39ac5cf9253cc65deb9be94ec") - run_cmd(f"git checkout ad5667d3017e93232dba85969cddac1322ba2902") - run_cmd(f"git checkout ce9715392cf52dd5a0e18e00d1b5e408f08dbf27") - run_cmd(f"git checkout 7c7da3f2ac68f3bd8f3ad5ca5c700f18ab9f90eb") - run_cmd(f"git checkout 4ac395eab101955e8960b50d772c26f592161764") - run_cmd(f"git checkout b15b358702294c7375761584e5276c811ffab5e8") - run_cmd(f"git checkout 1501793dc490982db9aca84a50647764caa66e51") - run_cmd(f"git 
checkout 21f7faf57219c75e6ba837062350391a803e9ae2") - run_cmd(f"git checkout b5eb689fbc409b49f065a431817f822f554cebe7") - run_cmd(f"git checkout 7e02fae1ebf24cb151bf6cbe787d06734aa65862") - run_cmd(f"git checkout 6aea5ea18ed0b9a1e03fa5d268d6bc3c616672a9") - run_cmd(f"git checkout f0f9b25717e59116473fb42bd7f9252cfc32b398") - run_cmd(f"git checkout b394de424088a81fc081224bc27338a8651ad3b2") - run_cmd(f"git checkout f1999406a88b80c965d2082340f5ea2bfa9ab67a") - run_cmd(f"git checkout d98a0fa8dc715308dfc73eac5c553b69c6ee072b") - run_cmd(f"git checkout d73267a415fb0eba98477afa43ef71ffd82a7157") - run_cmd(f"git checkout 1a03d01356ae79179e1fb8d8915dc9cc79925742") - run_cmd(f"git checkout 81497bb3115e92c754300c9b3992df428886a3e9") - run_cmd(f"git checkout c5af1f8edcf79cb70f065c0110e279e78e48caf9") - run_cmd(f"git checkout cdb3c90109387fa4dfa92f53c3864c71170ffc77") - - # Edit the file here, before copying - #edit_file(f"{temp_repo_path}/infer-web.py") - - # Copy all files from the cloned repository to the existing path - copy_all_files_in_directory(temp_repo_path, repo_path) - print(f"Copying all {new_folder_name} files from GitHub.") - - # Change working directory back to /content/ - os.chdir('/content/') - print("Changed path back to /content/") - - # Remove the temporary cloned repository - shutil.rmtree(temp_repo_path) - - # Call the function - clone_and_copy_repo(repo_path) - - # Download the credentials file for RVC archive sheet - os.makedirs('/content/Applio-RVC-Fork/stats/', exist_ok=True) - run_cmd("wget -q https://cdn.discordapp.com/attachments/945486970883285045/1114717554481569802/peppy-generator-388800-07722f17a188.json -O /content/Applio-RVC-Fork/stats/peppy-generator-388800-07722f17a188.json") - - # Forcefully delete any existing torchcrepe dependencies downloaded from an earlier run just in case - shutil.rmtree('/content/Applio-RVC-Fork/torchcrepe', ignore_errors=True) - shutil.rmtree('/content/torchcrepe', ignore_errors=True) - - # Download the torchcrepe folder from the maxrmorrison/torchcrepe repository - run_cmd("git clone https://github.com/maxrmorrison/torchcrepe.git") - shutil.move('/content/torchcrepe/torchcrepe', '/content/Applio-RVC-Fork/') - shutil.rmtree('/content/torchcrepe', ignore_errors=True) # Delete the torchcrepe repository folder - - # Change the current directory to /content/Applio-RVC-Fork - os.chdir('/content/Applio-RVC-Fork') - os.makedirs('pretrained', exist_ok=True) - os.makedirs('uvr5_weights', exist_ok=True) - -def download_file(url, filepath): - response = requests.get(url, stream=True) - response.raise_for_status() - - with open(filepath, "wb") as file: - for chunk in response.iter_content(chunk_size=8192): - if chunk: - file.write(chunk) - -def download_pretrained_models(): - pretrained_models = { - "pretrained": [ - "D40k.pth", - "G40k.pth", - "f0D40k.pth", - "f0G40k.pth" - ], - "pretrained_v2": [ - "D40k.pth", - "G40k.pth", - "f0D40k.pth", - "f0G40k.pth", - "f0G48k.pth", - "f0D48k.pth" - ], - "uvr5_weights": [ - "HP2-人声vocals+非人声instrumentals.pth", - "HP5-主旋律人声vocals+其他instrumentals.pth", - "VR-DeEchoNormal.pth", - "VR-DeEchoDeReverb.pth", - "VR-DeEchoAggressive.pth", - "HP5_only_main_vocal.pth", - "HP3_all_vocals.pth", - "HP2_all_vocals.pth" - ] - } - part2 = "I" - base_url = "https://huggingface.co/lj1995/VoiceConversionWebU" + part2 + "/resolve/main/" - base_path = "/content/Applio-RVC-Fork/" - base_pathm = base_path - - # Calculate total number of files to download - total_files = sum(len(files) for files in pretrained_models.values()) + 1 # +1 
for hubert_base.pt - - with tqdm(total=total_files, desc="Downloading files") as pbar: - for folder, models in pretrained_models.items(): - folder_path = os.path.join(base_path, folder) - os.makedirs(folder_path, exist_ok=True) - for model in models: - url = base_url + folder + "/" + model - filepath = os.path.join(folder_path, model) - download_file(url, filepath) - pbar.update() - - # Download hubert_base.pt to the base path - hubert_url = base_url + "hubert_base.pt" - hubert_filepath = os.path.join(base_pathm, "hubert_base.pt") - download_file(hubert_url, hubert_filepath) - pbar.update() -def clone_repository(run_download): - with ThreadPoolExecutor(max_workers=2) as executor: - executor.submit(run_script) - if run_download: - executor.submit(download_pretrained_models) diff --git a/spaces/FridaZuley/RVC_HFKawaii/LazyImport.py b/spaces/FridaZuley/RVC_HFKawaii/LazyImport.py deleted file mode 100644 index 5bdb05ddd5a546a43adba7274b4c3465bb77f2f5..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/LazyImport.py +++ /dev/null @@ -1,13 +0,0 @@ -from importlib.util import find_spec, LazyLoader, module_from_spec -from sys import modules - -def lazyload(name): - if name in modules: - return modules[name] - else: - spec = find_spec(name) - loader = LazyLoader(spec.loader) - module = module_from_spec(spec) - modules[name] = module - loader.exec_module(module) - return module \ No newline at end of file diff --git a/spaces/FritsLyneborg/kunstnerfrits/src/dalle_mini/model/tokenizer.py b/spaces/FritsLyneborg/kunstnerfrits/src/dalle_mini/model/tokenizer.py deleted file mode 100644 index 1e6e84aefb199b5086a22ea5e52f0e5eef4f9ab5..0000000000000000000000000000000000000000 --- a/spaces/FritsLyneborg/kunstnerfrits/src/dalle_mini/model/tokenizer.py +++ /dev/null @@ -1,8 +0,0 @@ -""" DalleBart tokenizer """ -from transformers import BartTokenizerFast - -from .utils import PretrainedFromWandbMixin - - -class DalleBartTokenizer(PretrainedFromWandbMixin, BartTokenizerFast): - pass diff --git a/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/train5_gptmixcliport2_new_pickplace_demo10.sh b/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/train5_gptmixcliport2_new_pickplace_demo10.sh deleted file mode 100644 index 96a11bace36182223f89daba2963c4966c4debae..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/train5_gptmixcliport2_new_pickplace_demo10.sh +++ /dev/null @@ -1,12 +0,0 @@ -#!/bin/bash -#SBATCH -c 10 -#SBATCH -n 1 -#SBATCH -o logs/%j.out -#SBATCH --exclusive -STEPS=${1-'50000'} - - -sh scripts/traintest_scripts/train_test_multi_task_goal_demo10.sh data \ - "[stack-block-pyramid,put-block-in-bowl,color-coordinated-sphere-insertion,rainbow-stack,vertical-insertion-blocks,stack-blocks-in-container]" \ - "[stack-block-pyramid,put-block-in-bowl]" \ - gpt5_mixcliport2_task_new diff --git a/spaces/GitHunter0/100_prisoners_problem_app/pages/02_Riddle_Description.py b/spaces/GitHunter0/100_prisoners_problem_app/pages/02_Riddle_Description.py deleted file mode 100644 index c27abb4a6c6a3962564790eb5b42fe300f39646b..0000000000000000000000000000000000000000 --- a/spaces/GitHunter0/100_prisoners_problem_app/pages/02_Riddle_Description.py +++ /dev/null @@ -1,40 +0,0 @@ -import streamlit as st -import streamlit.components.v1 as components -import requests -import json - -from functions.module_project_specific_functions import ( - f_streamlit_hide_menu_and_marks, - f_streamlit_customize_page, -) - -st.set_page_config( - page_title = "100 Prisoners Game Riddle", - 
page_icon='www/100_prisoners_problem_favicon_1.jpg', # None, ":memo:", ... - layout='wide', # centered, wide - initial_sidebar_state='auto' # auto, expanded, collapsed -) - -# Hide Hamburger Menu and Streamlit logo 'Made with Streamlit' -f_streamlit_hide_menu_and_marks() - -# -f_streamlit_customize_page(padding_left="10px", margin_left="0px", - padding_top="10px", margin_top="10px") - - -# wiki_problem_url = "https://en.wikipedia.org/wiki/100_prisoners_problem#Problem" -# components.iframe(wiki_problem_url, height=500, width= 550, scrolling=True) - -cols = st.columns([5,2,5]) - -with cols[0]: - - wiki_problem_url = "https://en.wikipedia.org/wiki/100_prisoners_problem#Problem" - components.iframe(wiki_problem_url, height=500, width= 550, scrolling=True) - -with cols[2]: - - wiki_solution_url = "https://en.wikipedia.org/wiki/100_prisoners_problem#Solution" - components.iframe(wiki_solution_url, height=500, width= 450, scrolling=True) - diff --git a/spaces/Goodsea/deprem-ocr-paddleocr/README.md b/spaces/Goodsea/deprem-ocr-paddleocr/README.md deleted file mode 100644 index edb3e5062bc85fa65aa6550dc3ce441461637319..0000000000000000000000000000000000000000 --- a/spaces/Goodsea/deprem-ocr-paddleocr/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Deprem Ocr 2 -emoji: 👀 -colorFrom: green -colorTo: blue -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false -duplicated_from: deprem-ml/deprem-ocr ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/foveabox/fovea_r101_fpn_4x4_2x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/foveabox/fovea_r101_fpn_4x4_2x_coco.py deleted file mode 100644 index 92963935466ab2db968a8f241420c9795ab2b1b0..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/foveabox/fovea_r101_fpn_4x4_2x_coco.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './fovea_r50_fpn_4x4_2x_coco.py' -model = dict(pretrained='torchvision://resnet101', backbone=dict(depth=101)) diff --git a/spaces/Hackathon2022/BigColumnDiabetes/README.md b/spaces/Hackathon2022/BigColumnDiabetes/README.md deleted file mode 100644 index 342ca48e56b00ab60434934f754a01fccee3b6b0..0000000000000000000000000000000000000000 --- a/spaces/Hackathon2022/BigColumnDiabetes/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: BigColumnDiabetes -emoji: 👀 -colorFrom: blue -colorTo: indigo -sdk: gradio -sdk_version: 3.1.3 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/HaloMaster/chinesesummary/fengshen/data/task_dataloader/medicalQADataset.py b/spaces/HaloMaster/chinesesummary/fengshen/data/task_dataloader/medicalQADataset.py deleted file mode 100644 index 3d76ed583c7d150769c81d830293909e1c110485..0000000000000000000000000000000000000000 --- a/spaces/HaloMaster/chinesesummary/fengshen/data/task_dataloader/medicalQADataset.py +++ /dev/null @@ -1,137 +0,0 @@ -# coding=utf8 -import os -import pytorch_lightning as pl -from torch.utils.data import DataLoader, Dataset -from tqdm import tqdm -from transformers import AutoTokenizer - - -class GPT2QADataset(Dataset): - ''' - Dataset Used for yuyuan medical qa task. - Just surpport small datasets, when deal with large datasets it may be slowly. 
- for large datasets please use mmapdatasets(doing) - ''' - - def __init__(self, data_path, name, args): - super().__init__() - self.tokenizer = AutoTokenizer.from_pretrained( - args.pretrained_model_path) - if self.tokenizer.pad_token is None: - self.tokenizer.add_special_tokens({'pad_token': '<|endoftext|>'}) - self.data_size = os.path.getsize(data_path)/1024/1024/1024 - self.data_type_name = name - self.data = self.load_data(data_path) - self.max_seq_length = args.max_seq_length - - def __len__(self): - return len(self.data) - - def __getitem__(self, index): - return self.encode(self.data[index]) - - def load_data(self, data_path): - # 有进度条展示 - if self.data_size <= 5: - with open(data_path, "rt", encoding='utf8') as f: - lines = f.readlines() - total_num = len(lines) - data_gen = lines - else: - data_gen = open(data_path, "rt", encoding='utf8') - total_num = None - - data = [] - with tqdm(total=total_num, desc=f'{self.data_type_name}处理进度', mininterval=0.3) as bar: - for idx, line in enumerate(data_gen): - data.append(self.data_parse(line)) - bar.update() - - if self.data_size > 5: - data_gen.close() - return data - - def data_parse(self, line): - """ - 解析不同格式的数据 - """ - dic = eval(line.strip()) - return dic - - def encode(self, item): - """ - 将数据转换成模型训练的输入 - """ - inputs_dict = self.tokenizer.encode_plus(item['Question']+item['answer'], - max_length=self.max_seq_length, padding='max_length', - truncation=True, return_tensors='pt') - target = inputs_dict['input_ids'] - labels = target.clone().detach() - labels[target == self.tokenizer.pad_token_id] = -100 - return { - "input_ids": inputs_dict['input_ids'].squeeze(), - "attention_mask": inputs_dict['attention_mask'].squeeze(), - "labels": labels.squeeze(), - "question": item['Question'], - "answer": item['answer'] - } - - -class GPT2QADataModel(pl.LightningDataModule): - @staticmethod - def add_data_specific_args(parent_args): - parser = parent_args.add_argument_group('GPT2QADataModel') - parser.add_argument('--data_dir', type=str, required=True) - parser.add_argument('--num_workers', default=2, type=int) - parser.add_argument('--train_data', default='train.txt', type=str) - parser.add_argument('--valid_data', default='valid.txt', type=str) - parser.add_argument('--test_data', default='test.txt', type=str) - parser.add_argument('--train_batchsize', type=int, required=True) - parser.add_argument('--valid_batchsize', type=int, required=True) - parser.add_argument('--max_seq_length', default=1024, type=int) - return parent_args - - def __init__(self, args): - super().__init__() - self.args = args - self.train_batchsize = args.train_batchsize - self.valid_batchsize = args.valid_batchsize - if not args.do_eval_only: - self.train_data = GPT2QADataset(os.path.join( - args.data_dir, args.train_data), '训练集', args) - self.valid_data = GPT2QADataset(os.path.join( - args.data_dir, args.valid_data), '验证集', args) - self.test_data = GPT2QADataset(os.path.join( - args.data_dir, args.test_data), '测试集', args) - - def train_dataloader(self): - return DataLoader( - self.train_data, shuffle=True, - batch_size=self.train_batchsize, - pin_memory=False, num_workers=self.args.num_workers) - - def val_dataloader(self): - return DataLoader(self.valid_data, shuffle=False, - batch_size=self.valid_batchsize, - pin_memory=False, num_workers=self.args.num_workers) - - def predict_dataloader(self): - return DataLoader(self.test_data, shuffle=False, - batch_size=self.valid_batchsize, pin_memory=False, - num_workers=self.args.num_workers) - - -if __name__ == '__main__': - 
import argparse - modelfile = '/cognitive_comp/wuziwei/pretrained_model_hf/medical_v2' - datafile = '/cognitive_comp/wuziwei/task-data/medical_qa/medical_qa_train.txt' - parser = argparse.ArgumentParser(description='hf test', allow_abbrev=False) - group = parser.add_argument_group(title='test args') - group.add_argument('--pretrained-model-path', type=str, default=modelfile, - help='Number of transformer layers.') - group.add_argument('--max-seq-length', type=int, default=1024) - args = parser.parse_args() - - testml = GPT2QADataset(datafile, 'medical_qa', args=args) - - print(testml[10]) diff --git a/spaces/Harsh12/Netflix-Movie-Recommender/app.py b/spaces/Harsh12/Netflix-Movie-Recommender/app.py deleted file mode 100644 index 4f80295cbc9bec1d19d59c70f383b2478a1d6aca..0000000000000000000000000000000000000000 --- a/spaces/Harsh12/Netflix-Movie-Recommender/app.py +++ /dev/null @@ -1,100 +0,0 @@ -import streamlit as st -import pickle -import pandas as pd -from sklearn.metrics.pairwise import cosine_similarity -from sklearn.feature_extraction.text import CountVectorizer -from imdb import IMDb - - -similarity = pickle.load(open('cosine_sim.pkl', 'rb')) -movie_dict = pickle.load(open('movie_dict.pkl', 'rb')) -movies = pd.DataFrame(movie_dict) - -programme_list=movies['title'].to_list() - -imdb = IMDb() -def get_movie_id(movie_title): - """Get the IMDb ID of the movie using the IMDbPY library.""" - try: - - movies = imdb.search_movie(movie_title) - movie_id = movies[0].getID() # get the ID of the first search result - return movie_id - - except Exception as e: - st.error("Error: Failed to retrieve IMDb ID for the selected movie. Please try again with a different movie.") - st.stop() - - - -def get_poster_url(imdb_id): - """Get the URL of the poster image of the movie using the IMDbPY library.""" - try: - - movie = imdb.get_movie(imdb_id) - poster_url = movie['full-size cover url'] - return poster_url - - except Exception as e: - st.error("Error: Failed to retrieve poster URL for the selected movie. Please try again with a different movie.") - st.stop() - - - -def recommend(movie): - index = programme_list.index(movie) - sim_score = list(enumerate(similarity[index])) #creates a list of tuples containing the similarity score and index between the input title and all other programmes in the dataset. - - #position 0 is the movie itself, thus exclude - sim_score = sorted(sim_score, key= lambda x: x[1], reverse=True)[1:6] #sorts the list of tuples by similarity score in descending order. 
- recommend_index = [i[0] for i in sim_score] - rec_movie = movies['title'].iloc[recommend_index] - rec_movie_ids = [get_movie_id(title) for title in rec_movie] - return rec_movie, rec_movie_ids - -st.set_page_config(page_title='Movie Recommender System', page_icon=':clapper:', layout='wide') -st.title('Movie Recommender System') - - -selected_movie_name = st.selectbox('Please select a Movie', -sorted(movies['title'].values)) - -if st.button('Recommend Me'): - try: - - recommendations, rec_movie_ids = recommend(selected_movie_name) - # st.write(recommendations, rec_movie_ids) - # st.write(recommendations[6195]) - final_movie_names = [] - for i, rec_id in zip(recommendations, rec_movie_ids): - final_movie_names.append(i) - # st.write(i) - # poster_url = get_poster_url(rec_id) - # st.image(poster_url) - - - col1, col2, col3, col4, col5 = st.columns(5) - cols = [col1, col2, col3, col4, col5] - with col1: - st.text(final_movie_names[0]) - poster_url = get_poster_url(rec_movie_ids[0]) - st.image(poster_url) - with col2: - st.text(final_movie_names[1]) - poster_url = get_poster_url(rec_movie_ids[1]) - st.image(poster_url) - with col3: - st.text(final_movie_names[2]) - poster_url = get_poster_url(rec_movie_ids[2]) - st.image(poster_url) - with col4: - st.text(final_movie_names[3]) - poster_url = get_poster_url(rec_movie_ids[3]) - st.image(poster_url) - with col5: - st.text(final_movie_names[4]) - poster_url = get_poster_url(rec_movie_ids[4]) - st.image(poster_url) - except Exception as e: - st.write('An error occurred while generating recommendations:', e) - diff --git a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/src/glow_tts/processtext/numbers.py b/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/src/glow_tts/processtext/numbers.py deleted file mode 100644 index 5c30252e1c96fd9d3c762491e3107f0b5e811041..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/src/glow_tts/processtext/numbers.py +++ /dev/null @@ -1,73 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -import inflect -import re - - -_inflect = inflect.engine() -_comma_number_re = re.compile(r"([0-9][0-9\,]+[0-9])") -_decimal_number_re = re.compile(r"([0-9]+\.[0-9]+)") -_pounds_re = re.compile(r"£([0-9\,]*[0-9]+)") -_dollars_re = re.compile(r"\$([0-9\.\,]*[0-9]+)") -_ordinal_re = re.compile(r"[0-9]+(st|nd|rd|th)") -_number_re = re.compile(r"[0-9]+") - - -def _remove_commas(m): - return m.group(1).replace(",", "") - - -def _expand_decimal_point(m): - return m.group(1).replace(".", " point ") - - -def _expand_dollars(m): - match = m.group(1) - parts = match.split(".") - if len(parts) > 2: - return match + " dollars" # Unexpected format - dollars = int(parts[0]) if parts[0] else 0 - cents = int(parts[1]) if len(parts) > 1 and parts[1] else 0 - if dollars and cents: - dollar_unit = "dollar" if dollars == 1 else "dollars" - cent_unit = "cent" if cents == 1 else "cents" - return "%s %s, %s %s" % (dollars, dollar_unit, cents, cent_unit) - elif dollars: - dollar_unit = "dollar" if dollars == 1 else "dollars" - return "%s %s" % (dollars, dollar_unit) - elif cents: - cent_unit = "cent" if cents == 1 else "cents" - return "%s %s" % (cents, cent_unit) - else: - return "zero dollars" - - -def _expand_ordinal(m): - return _inflect.number_to_words(m.group(0)) - - -def _expand_number(m): - num = int(m.group(0)) - if num > 1000 and num < 3000: - if num == 2000: - return "two thousand" - elif num > 2000 and num < 2010: - return "two thousand " + _inflect.number_to_words(num % 100) - elif num % 100 == 0: - 
return _inflect.number_to_words(num // 100) + " hundred" - else: - return _inflect.number_to_words( - num, andword="", zero="oh", group=2 - ).replace(", ", " ") - else: - return _inflect.number_to_words(num, andword="") - - -def normalize_numbers(text): - text = re.sub(_comma_number_re, _remove_commas, text) - text = re.sub(_pounds_re, r"\1 pounds", text) - text = re.sub(_dollars_re, _expand_dollars, text) - text = re.sub(_decimal_number_re, _expand_decimal_point, text) - text = re.sub(_ordinal_re, _expand_ordinal, text) - text = re.sub(_number_re, _expand_number, text) - return text diff --git a/spaces/Hina4867/bingo/src/lib/bots/bing/sr.ts b/spaces/Hina4867/bingo/src/lib/bots/bing/sr.ts deleted file mode 100644 index 7cae14da7362bd6cc1e234851c11ca67e5a99f0c..0000000000000000000000000000000000000000 --- a/spaces/Hina4867/bingo/src/lib/bots/bing/sr.ts +++ /dev/null @@ -1,106 +0,0 @@ -// @ts-ignore -const SpeechRecognitionPolyfill: typeof webkitSpeechRecognition = typeof window !== 'undefined' ? ( - // @ts-ignore - window.SpeechRecognition || - window.webkitSpeechRecognition || - // @ts-ignore - window.mozSpeechRecognition || - // @ts-ignore - window.msSpeechRecognition || - // @ts-ignore - window.oSpeechRecognition -) as typeof webkitSpeechRecognition : undefined - -type subscriber = (msg: string, command?: string) => void - -export class SR { - recognition?: SpeechRecognition - onchange?: subscriber - transcript: boolean = false - listening: boolean = false - private commandsRe?: RegExp - constructor(commands: string[]) { - this.recognition = SpeechRecognitionPolyfill ? new SpeechRecognitionPolyfill() : undefined - if (!this.recognition) { - return - } - this.configuration('zh-CN') - if (commands.length) { - this.commandsRe = new RegExp(`^(${commands.join('|')})。?$`) - } - this.recognition.onresult = this.speechRecognition - this.recognition.onerror = (err) => { - console.log('err', err.error) - this.stop() - } - this.recognition.onend = () => { - if (this.recognition && this.listening) { - this.recognition.start() - } - } - } - - speechRecognition = (event: SpeechRecognitionEvent) => { - if (!this.listening) return - for (var i = event.resultIndex; i < event.results.length; i++) { - let result = event.results[i] - if (result.isFinal) { - var alt = result[0] - const text = alt.transcript.trim() - if (this.commandsRe && this.commandsRe.test(text)) { - return this.onchange?.('', RegExp.$1) - } - if (!this.transcript) return - this.onchange?.(text) - } - } - } - - private configuration = async (lang: string = 'zh-CN') => { - return new Promise((resolve) => { - if (this.recognition) { - this.recognition.continuous = true - this.recognition.lang = lang - this.recognition.onstart = resolve - } - }) - } - - start = async () => { - if (this.recognition && !this.listening) { - await this.recognition.start() - this.transcript = true - this.listening = true - } - } - - stop = () => { - if (this.recognition) { - this.recognition.stop() - this.transcript = false - this.listening = false - } - } - - - pause = () => { - if (this.recognition) { - this.transcript = false - } - } - - resume = () => { - if (this.recognition) { - this.transcript = true - } - } - - abort = () => { - if (this.recognition && this.transcript) { - this.recognition.abort() - this.transcript = false - this.listening = false - } - } -} - diff --git a/spaces/Hoodady/3DFuse/ldm/models/diffusion/dpm_solver/sampler.py b/spaces/Hoodady/3DFuse/ldm/models/diffusion/dpm_solver/sampler.py deleted file mode 100644 index 
7d137b8cf36718c1c58faa09f9dd919e5fb2977b..0000000000000000000000000000000000000000 --- a/spaces/Hoodady/3DFuse/ldm/models/diffusion/dpm_solver/sampler.py +++ /dev/null @@ -1,87 +0,0 @@ -"""SAMPLING ONLY.""" -import torch - -from .dpm_solver import NoiseScheduleVP, model_wrapper, DPM_Solver - - -MODEL_TYPES = { - "eps": "noise", - "v": "v" -} - - -class DPMSolverSampler(object): - def __init__(self, model, **kwargs): - super().__init__() - self.model = model - to_torch = lambda x: x.clone().detach().to(torch.float32).to(model.device) - self.register_buffer('alphas_cumprod', to_torch(model.alphas_cumprod)) - - def register_buffer(self, name, attr): - if type(attr) == torch.Tensor: - if attr.device != torch.device("cuda"): - attr = attr.to(torch.device("cuda")) - setattr(self, name, attr) - - @torch.no_grad() - def sample(self, - S, - batch_size, - shape, - conditioning=None, - callback=None, - normals_sequence=None, - img_callback=None, - quantize_x0=False, - eta=0., - mask=None, - x0=None, - temperature=1., - noise_dropout=0., - score_corrector=None, - corrector_kwargs=None, - verbose=True, - x_T=None, - log_every_t=100, - unconditional_guidance_scale=1., - unconditional_conditioning=None, - # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ... - **kwargs - ): - if conditioning is not None: - if isinstance(conditioning, dict): - cbs = conditioning[list(conditioning.keys())[0]].shape[0] - if cbs != batch_size: - print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}") - else: - if conditioning.shape[0] != batch_size: - print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}") - - # sampling - C, H, W = shape - size = (batch_size, C, H, W) - - print(f'Data shape for DPM-Solver sampling is {size}, sampling steps {S}') - - device = self.model.betas.device - if x_T is None: - img = torch.randn(size, device=device) - else: - img = x_T - - ns = NoiseScheduleVP('discrete', alphas_cumprod=self.alphas_cumprod) - - model_fn = model_wrapper( - lambda x, t, c: self.model.apply_model(x, t, c), - ns, - model_type=MODEL_TYPES[self.model.parameterization], - guidance_type="classifier-free", - condition=conditioning, - unconditional_condition=unconditional_conditioning, - guidance_scale=unconditional_guidance_scale, - ) - - dpm_solver = DPM_Solver(model_fn, ns, predict_x0=True, thresholding=False) - x = dpm_solver.sample(img, steps=S, skip_type="time_uniform", method="multistep", order=2, lower_order_final=True) - - return x.to(device), None \ No newline at end of file diff --git a/spaces/ICML2022/OFA/fairseq/examples/laser/laser_src/multitask_data_utils.py b/spaces/ICML2022/OFA/fairseq/examples/laser/laser_src/multitask_data_utils.py deleted file mode 100644 index b05caea26793bf5112a7abc29d76225f578f3ebe..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/laser/laser_src/multitask_data_utils.py +++ /dev/null @@ -1,143 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from collections import OrderedDict - -import numpy as np - -from fairseq.data import BaseWrapperDataset, FairseqDataset, iterators - - -class MultiItr(object): - def __init__(self, itr): - self.itr = itr - self._counts = [0 for x in itr] - - def __len__(self): - return sum(len(itr) for itr in self.itr) - - def __iter__(self): - return self - - def __next__(self): - ratios = [count / len(itr) for count, itr in zip(self._counts, self.itr)] - idx = ratios.index(min(ratios)) - self._counts[idx] += 1 - return next(self.itr[idx]) - - -class MultidatasetEpochBatchIterator(iterators.EpochBatchIterating): - """A wrapper around multiple epoch batch iterators.""" - - def __init__( - self, - dataset, - batch_sampler, - seed=1, - num_shards=1, - shard_id=0, - num_workers=0, - epoch=1, - ): - - assert isinstance(dataset, OrderedDict) - assert len(dataset) - assert isinstance(dataset[next(iter(dataset))], FairseqDataset) - - self.iterators = [] - - self.epoch = epoch - for key, dt in dataset.items(): - epoch_iter = iterators.EpochBatchIterator( - dataset=dt, - collate_fn=dt.collater, - batch_sampler=batch_sampler[key], - seed=seed, - num_shards=num_shards, - shard_id=shard_id, - num_workers=0, - epoch=epoch, - ) - self.iterators.append(epoch_iter) - - def __len__(self): - return sum(len(itr) for itr in self.iterators) - - def next_epoch_itr(self, shuffle=True, fix_batches_to_gpus=False): - # `self.epoch += 1` should be handled by underlying `EpochBatchIterator`s. - return MultiItr( - [ - itr.next_epoch_itr( - shuffle=shuffle, fix_batches_to_gpus=fix_batches_to_gpus - ) - for itr in self.iterators - ] - ) - - def end_of_epoch(self): - return all(itr.end_of_epoch() for itr in self.iterators) - - @property - def next_epoch_idx(self): - """Return the epoch index after *next_epoch_itr* is called.""" - - epochs = [itr.next_epoch_idx for itr in self.iterators] - self.epoch = epochs[0] - assert all(epoch == self.epoch for epoch in epochs) - - return self.epoch - - @property - def iterations_in_epoch(self): - return sum(itr.iterations_in_epoch for itr in self.iterators) - - def state_dict(self): - return { - "iterators": [it.state_dict() for it in self.iterators], - "epoch": self.epoch, - } - - def load_state_dict(self, state_dict): - self.epoch = state_dict["epoch"] - for it, d in zip(self.iterators, state_dict["iterators"]): - it.load_state_dict(d) - - -class MultitaskDatasetWrapper(BaseWrapperDataset): - """A wrapper for a multitask dataset.""" - - def __init__(self, dataset, target_language_id, sample=1.0, name=""): - super().__init__(dataset) - self.target_language_id = target_language_id - self.sample = sample - self.name = name - - def collater(self, *args, **kwargs): - ans = self.dataset.collater(*args, **kwargs) - if "net_input" in ans: - ans["net_input"]["target_language_id"] = self.target_language_id - ans["net_input"]["dataset_name"] = self.name - return ans - - def num_tokens(self, *args, **kwargs): - return self.dataset.num_tokens(*args, **kwargs) - - def ordered_indices(self, *args, **kwargs): - indices = self.dataset.ordered_indices(*args, **kwargs) - # Hacky solution for sampling - size = int(self.sample * indices.shape[0]) - - return indices.take(np.sort(np.random.permutation(indices.shape[0])[:size])) - - def size(self, index: int): - return self.dataset.size(index) - - @property - def supports_prefetch(self): - """Whether this dataset supports prefetching.""" - return getattr(self.dataset, "supports_prefetch", False) - - def prefetch(self, indices): - return self.dataset.prefetch(indices) 
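Note on the MultiItr logic in the hunk above: at every step it advances the sub-iterator whose consumed fraction (count / length) is currently smallest, so all underlying datasets are drained at the same relative rate. The standalone Python sketch below illustrates that interleaving strategy; the function name and the example data are illustrative only and do not appear in the repository.

def interleave(seqs):
    # Advance the sequence whose consumed fraction is smallest,
    # mirroring the ratio selection in MultiItr.__next__ above.
    iters = [iter(s) for s in seqs]
    lengths = [len(s) for s in seqs]
    counts = [0] * len(seqs)
    for _ in range(sum(lengths)):
        ratios = [c / n for c, n in zip(counts, lengths)]
        idx = ratios.index(min(ratios))
        counts[idx] += 1
        yield next(iters[idx])

# Example: the longer sequence contributes proportionally more items.
print(list(interleave([["a1", "a2", "a3", "a4"], ["b1", "b2"]])))
# -> ['a1', 'b1', 'a2', 'a3', 'b2', 'a4']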
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/modules/dynamicconv_layer/dynamiconv_cpu.cpp b/spaces/ICML2022/OFA/fairseq/fairseq/modules/dynamicconv_layer/dynamiconv_cpu.cpp deleted file mode 100644 index d7e57c859085f98ec10960330ca763ae2764585a..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/modules/dynamicconv_layer/dynamiconv_cpu.cpp +++ /dev/null @@ -1,29 +0,0 @@ -#include -#include - -std::vector -dynamicconv_cpu_forward(float* input, float* filters, int padding_l); - -std::vector dynamicconv_cpu_backward( - float* gradOutput, - int padding_l, - float* input, - float* filters); - -std::vector -dynamicconv_forward(float* input, float* filters, int padding_l) { - return dynamicconv_cpu_forward(input, filters, padding_l); -} - -std::vector dynamicconv_backward( - float* gradOutput, - int padding_l, - float* input, - float* filters) { - return dynamicconv_cpu_backward(gradOutput, padding_l, input, filters); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("forward", &dynamicconv_forward, "dynamicconv forward (CPU)"); - m.def("backward", &dynamicconv_backward, "dynamicconv backward (CPU)"); -} diff --git a/spaces/IDEA-Research/Grounded-SAM/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.h b/spaces/IDEA-Research/Grounded-SAM/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.h deleted file mode 100644 index b2b88e8c46f19b6db0933163e57ccdb51180f517..0000000000000000000000000000000000000000 --- a/spaces/IDEA-Research/Grounded-SAM/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.h +++ /dev/null @@ -1,35 +0,0 @@ -/*! -************************************************************************************************** -* Deformable DETR -* Copyright (c) 2020 SenseTime. All Rights Reserved. 
-* Licensed under the Apache License, Version 2.0 [see LICENSE for details] -************************************************************************************************** -* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -************************************************************************************************** -*/ - -#pragma once -#include - -namespace groundingdino { - -at::Tensor -ms_deform_attn_cpu_forward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const int im2col_step); - -std::vector -ms_deform_attn_cpu_backward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const at::Tensor &grad_output, - const int im2col_step); - -} // namespace groundingdino diff --git a/spaces/Iceclear/StableSR/StableSR/basicsr/utils/dist_util.py b/spaces/Iceclear/StableSR/StableSR/basicsr/utils/dist_util.py deleted file mode 100644 index 0fab887b2cb1ce8533d2e8fdee72ae0c24f68fd0..0000000000000000000000000000000000000000 --- a/spaces/Iceclear/StableSR/StableSR/basicsr/utils/dist_util.py +++ /dev/null @@ -1,82 +0,0 @@ -# Modified from https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/dist_utils.py # noqa: E501 -import functools -import os -import subprocess -import torch -import torch.distributed as dist -import torch.multiprocessing as mp - - -def init_dist(launcher, backend='nccl', **kwargs): - if mp.get_start_method(allow_none=True) is None: - mp.set_start_method('spawn') - if launcher == 'pytorch': - _init_dist_pytorch(backend, **kwargs) - elif launcher == 'slurm': - _init_dist_slurm(backend, **kwargs) - else: - raise ValueError(f'Invalid launcher type: {launcher}') - - -def _init_dist_pytorch(backend, **kwargs): - rank = int(os.environ['RANK']) - num_gpus = torch.cuda.device_count() - torch.cuda.set_device(rank % num_gpus) - dist.init_process_group(backend=backend, **kwargs) - - -def _init_dist_slurm(backend, port=None): - """Initialize slurm distributed training environment. - - If argument ``port`` is not specified, then the master port will be system - environment variable ``MASTER_PORT``. If ``MASTER_PORT`` is not in system - environment variable, then a default port ``29500`` will be used. - - Args: - backend (str): Backend of torch.distributed. - port (int, optional): Master port. Defaults to None. 
- """ - proc_id = int(os.environ['SLURM_PROCID']) - ntasks = int(os.environ['SLURM_NTASKS']) - node_list = os.environ['SLURM_NODELIST'] - num_gpus = torch.cuda.device_count() - torch.cuda.set_device(proc_id % num_gpus) - addr = subprocess.getoutput(f'scontrol show hostname {node_list} | head -n1') - # specify master port - if port is not None: - os.environ['MASTER_PORT'] = str(port) - elif 'MASTER_PORT' in os.environ: - pass # use MASTER_PORT in the environment variable - else: - # 29500 is torch.distributed default port - os.environ['MASTER_PORT'] = '29500' - os.environ['MASTER_ADDR'] = addr - os.environ['WORLD_SIZE'] = str(ntasks) - os.environ['LOCAL_RANK'] = str(proc_id % num_gpus) - os.environ['RANK'] = str(proc_id) - dist.init_process_group(backend=backend) - - -def get_dist_info(): - if dist.is_available(): - initialized = dist.is_initialized() - else: - initialized = False - if initialized: - rank = dist.get_rank() - world_size = dist.get_world_size() - else: - rank = 0 - world_size = 1 - return rank, world_size - - -def master_only(func): - - @functools.wraps(func) - def wrapper(*args, **kwargs): - rank, _ = get_dist_info() - if rank == 0: - return func(*args, **kwargs) - - return wrapper diff --git a/spaces/Iceclear/StableSR/StableSR/taming/modules/vqvae/quantize.py b/spaces/Iceclear/StableSR/StableSR/taming/modules/vqvae/quantize.py deleted file mode 100644 index d75544e41fa01bce49dd822b1037963d62f79b51..0000000000000000000000000000000000000000 --- a/spaces/Iceclear/StableSR/StableSR/taming/modules/vqvae/quantize.py +++ /dev/null @@ -1,445 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -import numpy as np -from torch import einsum -from einops import rearrange - - -class VectorQuantizer(nn.Module): - """ - see https://github.com/MishaLaskin/vqvae/blob/d761a999e2267766400dc646d82d3ac3657771d4/models/quantizer.py - ____________________________________________ - Discretization bottleneck part of the VQ-VAE. - Inputs: - - n_e : number of embeddings - - e_dim : dimension of embedding - - beta : commitment cost used in loss term, beta * ||z_e(x)-sg[e]||^2 - _____________________________________________ - """ - - # NOTE: this class contains a bug regarding beta; see VectorQuantizer2 for - # a fix and use legacy=False to apply that fix. VectorQuantizer2 can be - # used wherever VectorQuantizer has been used before and is additionally - # more efficient. - def __init__(self, n_e, e_dim, beta): - super(VectorQuantizer, self).__init__() - self.n_e = n_e - self.e_dim = e_dim - self.beta = beta - - self.embedding = nn.Embedding(self.n_e, self.e_dim) - self.embedding.weight.data.uniform_(-1.0 / self.n_e, 1.0 / self.n_e) - - def forward(self, z): - """ - Inputs the output of the encoder network z and maps it to a discrete - one-hot vector that is the index of the closest embedding vector e_j - z (continuous) -> z_q (discrete) - z.shape = (batch, channel, height, width) - quantization pipeline: - 1. get encoder input (B,C,H,W) - 2. flatten input to (B*H*W,C) - """ - # reshape z -> (batch, height, width, channel) and flatten - z = z.permute(0, 2, 3, 1).contiguous() - z_flattened = z.view(-1, self.e_dim) - # distances from z to embeddings e_j (z - e)^2 = z^2 + e^2 - 2 e * z - - d = torch.sum(z_flattened ** 2, dim=1, keepdim=True) + \ - torch.sum(self.embedding.weight**2, dim=1) - 2 * \ - torch.matmul(z_flattened, self.embedding.weight.t()) - - ## could possible replace this here - # #\start... 
- # find closest encodings - min_encoding_indices = torch.argmin(d, dim=1).unsqueeze(1) - - min_encodings = torch.zeros( - min_encoding_indices.shape[0], self.n_e).to(z) - min_encodings.scatter_(1, min_encoding_indices, 1) - - # dtype min encodings: torch.float32 - # min_encodings shape: torch.Size([2048, 512]) - # min_encoding_indices.shape: torch.Size([2048, 1]) - - # get quantized latent vectors - z_q = torch.matmul(min_encodings, self.embedding.weight).view(z.shape) - #.........\end - - # with: - # .........\start - #min_encoding_indices = torch.argmin(d, dim=1) - #z_q = self.embedding(min_encoding_indices) - # ......\end......... (TODO) - - # compute loss for embedding - loss = torch.mean((z_q.detach()-z)**2) + self.beta * \ - torch.mean((z_q - z.detach()) ** 2) - - # preserve gradients - z_q = z + (z_q - z).detach() - - # perplexity - e_mean = torch.mean(min_encodings, dim=0) - perplexity = torch.exp(-torch.sum(e_mean * torch.log(e_mean + 1e-10))) - - # reshape back to match original input shape - z_q = z_q.permute(0, 3, 1, 2).contiguous() - - return z_q, loss, (perplexity, min_encodings, min_encoding_indices) - - def get_codebook_entry(self, indices, shape): - # shape specifying (batch, height, width, channel) - # TODO: check for more easy handling with nn.Embedding - min_encodings = torch.zeros(indices.shape[0], self.n_e).to(indices) - min_encodings.scatter_(1, indices[:,None], 1) - - # get quantized latent vectors - z_q = torch.matmul(min_encodings.float(), self.embedding.weight) - - if shape is not None: - z_q = z_q.view(shape) - - # reshape back to match original input shape - z_q = z_q.permute(0, 3, 1, 2).contiguous() - - return z_q - - -class GumbelQuantize(nn.Module): - """ - credit to @karpathy: https://github.com/karpathy/deep-vector-quantization/blob/main/model.py (thanks!) - Gumbel Softmax trick quantizer - Categorical Reparameterization with Gumbel-Softmax, Jang et al. 2016 - https://arxiv.org/abs/1611.01144 - """ - def __init__(self, num_hiddens, embedding_dim, n_embed, straight_through=True, - kl_weight=5e-4, temp_init=1.0, use_vqinterface=True, - remap=None, unknown_index="random"): - super().__init__() - - self.embedding_dim = embedding_dim - self.n_embed = n_embed - - self.straight_through = straight_through - self.temperature = temp_init - self.kl_weight = kl_weight - - self.proj = nn.Conv2d(num_hiddens, n_embed, 1) - self.embed = nn.Embedding(n_embed, embedding_dim) - - self.use_vqinterface = use_vqinterface - - self.remap = remap - if self.remap is not None: - self.register_buffer("used", torch.tensor(np.load(self.remap))) - self.re_embed = self.used.shape[0] - self.unknown_index = unknown_index # "random" or "extra" or integer - if self.unknown_index == "extra": - self.unknown_index = self.re_embed - self.re_embed = self.re_embed+1 - print(f"Remapping {self.n_embed} indices to {self.re_embed} indices. 
" - f"Using {self.unknown_index} for unknown indices.") - else: - self.re_embed = n_embed - - def remap_to_used(self, inds): - ishape = inds.shape - assert len(ishape)>1 - inds = inds.reshape(ishape[0],-1) - used = self.used.to(inds) - match = (inds[:,:,None]==used[None,None,...]).long() - new = match.argmax(-1) - unknown = match.sum(2)<1 - if self.unknown_index == "random": - new[unknown]=torch.randint(0,self.re_embed,size=new[unknown].shape).to(device=new.device) - else: - new[unknown] = self.unknown_index - return new.reshape(ishape) - - def unmap_to_all(self, inds): - ishape = inds.shape - assert len(ishape)>1 - inds = inds.reshape(ishape[0],-1) - used = self.used.to(inds) - if self.re_embed > self.used.shape[0]: # extra token - inds[inds>=self.used.shape[0]] = 0 # simply set to zero - back=torch.gather(used[None,:][inds.shape[0]*[0],:], 1, inds) - return back.reshape(ishape) - - def forward(self, z, temp=None, return_logits=False): - # force hard = True when we are in eval mode, as we must quantize. actually, always true seems to work - hard = self.straight_through if self.training else True - temp = self.temperature if temp is None else temp - - logits = self.proj(z) - if self.remap is not None: - # continue only with used logits - full_zeros = torch.zeros_like(logits) - logits = logits[:,self.used,...] - - soft_one_hot = F.gumbel_softmax(logits, tau=temp, dim=1, hard=hard) - if self.remap is not None: - # go back to all entries but unused set to zero - full_zeros[:,self.used,...] = soft_one_hot - soft_one_hot = full_zeros - z_q = einsum('b n h w, n d -> b d h w', soft_one_hot, self.embed.weight) - - # + kl divergence to the prior loss - qy = F.softmax(logits, dim=1) - diff = self.kl_weight * torch.sum(qy * torch.log(qy * self.n_embed + 1e-10), dim=1).mean() - - ind = soft_one_hot.argmax(dim=1) - if self.remap is not None: - ind = self.remap_to_used(ind) - if self.use_vqinterface: - if return_logits: - return z_q, diff, (None, None, ind), logits - return z_q, diff, (None, None, ind) - return z_q, diff, ind - - def get_codebook_entry(self, indices, shape): - b, h, w, c = shape - assert b*h*w == indices.shape[0] - indices = rearrange(indices, '(b h w) -> b h w', b=b, h=h, w=w) - if self.remap is not None: - indices = self.unmap_to_all(indices) - one_hot = F.one_hot(indices, num_classes=self.n_embed).permute(0, 3, 1, 2).float() - z_q = einsum('b n h w, n d -> b d h w', one_hot, self.embed.weight) - return z_q - - -class VectorQuantizer2(nn.Module): - """ - Improved version over VectorQuantizer, can be used as a drop-in replacement. Mostly - avoids costly matrix multiplications and allows for post-hoc remapping of indices. - """ - # NOTE: due to a bug the beta term was applied to the wrong term. for - # backwards compatibility we use the buggy version by default, but you can - # specify legacy=False to fix it. 
- def __init__(self, n_e, e_dim, beta, remap=None, unknown_index="random", - sane_index_shape=False, legacy=True): - super().__init__() - self.n_e = n_e - self.e_dim = e_dim - self.beta = beta - self.legacy = legacy - - self.embedding = nn.Embedding(self.n_e, self.e_dim) - self.embedding.weight.data.uniform_(-1.0 / self.n_e, 1.0 / self.n_e) - - self.remap = remap - if self.remap is not None: - self.register_buffer("used", torch.tensor(np.load(self.remap))) - self.re_embed = self.used.shape[0] - self.unknown_index = unknown_index # "random" or "extra" or integer - if self.unknown_index == "extra": - self.unknown_index = self.re_embed - self.re_embed = self.re_embed+1 - print(f"Remapping {self.n_e} indices to {self.re_embed} indices. " - f"Using {self.unknown_index} for unknown indices.") - else: - self.re_embed = n_e - - self.sane_index_shape = sane_index_shape - - def remap_to_used(self, inds): - ishape = inds.shape - assert len(ishape)>1 - inds = inds.reshape(ishape[0],-1) - used = self.used.to(inds) - match = (inds[:,:,None]==used[None,None,...]).long() - new = match.argmax(-1) - unknown = match.sum(2)<1 - if self.unknown_index == "random": - new[unknown]=torch.randint(0,self.re_embed,size=new[unknown].shape).to(device=new.device) - else: - new[unknown] = self.unknown_index - return new.reshape(ishape) - - def unmap_to_all(self, inds): - ishape = inds.shape - assert len(ishape)>1 - inds = inds.reshape(ishape[0],-1) - used = self.used.to(inds) - if self.re_embed > self.used.shape[0]: # extra token - inds[inds>=self.used.shape[0]] = 0 # simply set to zero - back=torch.gather(used[None,:][inds.shape[0]*[0],:], 1, inds) - return back.reshape(ishape) - - def forward(self, z, temp=None, rescale_logits=False, return_logits=False): - assert temp is None or temp==1.0, "Only for interface compatible with Gumbel" - assert rescale_logits==False, "Only for interface compatible with Gumbel" - assert return_logits==False, "Only for interface compatible with Gumbel" - # reshape z -> (batch, height, width, channel) and flatten - z = rearrange(z, 'b c h w -> b h w c').contiguous() - z_flattened = z.view(-1, self.e_dim) - # distances from z to embeddings e_j (z - e)^2 = z^2 + e^2 - 2 e * z - - d = torch.sum(z_flattened ** 2, dim=1, keepdim=True) + \ - torch.sum(self.embedding.weight**2, dim=1) - 2 * \ - torch.einsum('bd,dn->bn', z_flattened, rearrange(self.embedding.weight, 'n d -> d n')) - - min_encoding_indices = torch.argmin(d, dim=1) - z_q = self.embedding(min_encoding_indices).view(z.shape) - perplexity = None - min_encodings = None - - # compute loss for embedding - if not self.legacy: - loss = self.beta * torch.mean((z_q.detach()-z)**2) + \ - torch.mean((z_q - z.detach()) ** 2) - else: - loss = torch.mean((z_q.detach()-z)**2) + self.beta * \ - torch.mean((z_q - z.detach()) ** 2) - - # preserve gradients - z_q = z + (z_q - z).detach() - - # reshape back to match original input shape - z_q = rearrange(z_q, 'b h w c -> b c h w').contiguous() - - if self.remap is not None: - min_encoding_indices = min_encoding_indices.reshape(z.shape[0],-1) # add batch axis - min_encoding_indices = self.remap_to_used(min_encoding_indices) - min_encoding_indices = min_encoding_indices.reshape(-1,1) # flatten - - if self.sane_index_shape: - min_encoding_indices = min_encoding_indices.reshape( - z_q.shape[0], z_q.shape[2], z_q.shape[3]) - - return z_q, loss, (perplexity, min_encodings, min_encoding_indices) - - def get_codebook_entry(self, indices, shape): - # shape specifying (batch, height, width, channel) - if 
self.remap is not None: - indices = indices.reshape(shape[0],-1) # add batch axis - indices = self.unmap_to_all(indices) - indices = indices.reshape(-1) # flatten again - - # get quantized latent vectors - z_q = self.embedding(indices) - - if shape is not None: - z_q = z_q.view(shape) - # reshape back to match original input shape - z_q = z_q.permute(0, 3, 1, 2).contiguous() - - return z_q - -class EmbeddingEMA(nn.Module): - def __init__(self, num_tokens, codebook_dim, decay=0.99, eps=1e-5): - super().__init__() - self.decay = decay - self.eps = eps - weight = torch.randn(num_tokens, codebook_dim) - self.weight = nn.Parameter(weight, requires_grad = False) - self.cluster_size = nn.Parameter(torch.zeros(num_tokens), requires_grad = False) - self.embed_avg = nn.Parameter(weight.clone(), requires_grad = False) - self.update = True - - def forward(self, embed_id): - return F.embedding(embed_id, self.weight) - - def cluster_size_ema_update(self, new_cluster_size): - self.cluster_size.data.mul_(self.decay).add_(new_cluster_size, alpha=1 - self.decay) - - def embed_avg_ema_update(self, new_embed_avg): - self.embed_avg.data.mul_(self.decay).add_(new_embed_avg, alpha=1 - self.decay) - - def weight_update(self, num_tokens): - n = self.cluster_size.sum() - smoothed_cluster_size = ( - (self.cluster_size + self.eps) / (n + num_tokens * self.eps) * n - ) - #normalize embedding average with smoothed cluster size - embed_normalized = self.embed_avg / smoothed_cluster_size.unsqueeze(1) - self.weight.data.copy_(embed_normalized) - - -class EMAVectorQuantizer(nn.Module): - def __init__(self, n_embed, embedding_dim, beta, decay=0.99, eps=1e-5, - remap=None, unknown_index="random"): - super().__init__() - self.codebook_dim = codebook_dim - self.num_tokens = num_tokens - self.beta = beta - self.embedding = EmbeddingEMA(self.num_tokens, self.codebook_dim, decay, eps) - - self.remap = remap - if self.remap is not None: - self.register_buffer("used", torch.tensor(np.load(self.remap))) - self.re_embed = self.used.shape[0] - self.unknown_index = unknown_index # "random" or "extra" or integer - if self.unknown_index == "extra": - self.unknown_index = self.re_embed - self.re_embed = self.re_embed+1 - print(f"Remapping {self.n_embed} indices to {self.re_embed} indices. 
" - f"Using {self.unknown_index} for unknown indices.") - else: - self.re_embed = n_embed - - def remap_to_used(self, inds): - ishape = inds.shape - assert len(ishape)>1 - inds = inds.reshape(ishape[0],-1) - used = self.used.to(inds) - match = (inds[:,:,None]==used[None,None,...]).long() - new = match.argmax(-1) - unknown = match.sum(2)<1 - if self.unknown_index == "random": - new[unknown]=torch.randint(0,self.re_embed,size=new[unknown].shape).to(device=new.device) - else: - new[unknown] = self.unknown_index - return new.reshape(ishape) - - def unmap_to_all(self, inds): - ishape = inds.shape - assert len(ishape)>1 - inds = inds.reshape(ishape[0],-1) - used = self.used.to(inds) - if self.re_embed > self.used.shape[0]: # extra token - inds[inds>=self.used.shape[0]] = 0 # simply set to zero - back=torch.gather(used[None,:][inds.shape[0]*[0],:], 1, inds) - return back.reshape(ishape) - - def forward(self, z): - # reshape z -> (batch, height, width, channel) and flatten - #z, 'b c h w -> b h w c' - z = rearrange(z, 'b c h w -> b h w c') - z_flattened = z.reshape(-1, self.codebook_dim) - - # distances from z to embeddings e_j (z - e)^2 = z^2 + e^2 - 2 e * z - d = z_flattened.pow(2).sum(dim=1, keepdim=True) + \ - self.embedding.weight.pow(2).sum(dim=1) - 2 * \ - torch.einsum('bd,nd->bn', z_flattened, self.embedding.weight) # 'n d -> d n' - - - encoding_indices = torch.argmin(d, dim=1) - - z_q = self.embedding(encoding_indices).view(z.shape) - encodings = F.one_hot(encoding_indices, self.num_tokens).type(z.dtype) - avg_probs = torch.mean(encodings, dim=0) - perplexity = torch.exp(-torch.sum(avg_probs * torch.log(avg_probs + 1e-10))) - - if self.training and self.embedding.update: - #EMA cluster size - encodings_sum = encodings.sum(0) - self.embedding.cluster_size_ema_update(encodings_sum) - #EMA embedding average - embed_sum = encodings.transpose(0,1) @ z_flattened - self.embedding.embed_avg_ema_update(embed_sum) - #normalize embed_avg and update weight - self.embedding.weight_update(self.num_tokens) - - # compute loss for embedding - loss = self.beta * F.mse_loss(z_q.detach(), z) - - # preserve gradients - z_q = z + (z_q - z).detach() - - # reshape back to match original input shape - #z_q, 'b h w c -> b c h w' - z_q = rearrange(z_q, 'b h w c -> b c h w') - return z_q, loss, (perplexity, encodings, encoding_indices) diff --git a/spaces/Illumotion/Koboldcpp/examples/embd-input/embd-input-test.cpp b/spaces/Illumotion/Koboldcpp/examples/embd-input/embd-input-test.cpp deleted file mode 100644 index dc4a0e48854adce86d44a3ba8ce7d2a67996b125..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/examples/embd-input/embd-input-test.cpp +++ /dev/null @@ -1,35 +0,0 @@ -#include "embd-input.h" -#include -#include -#include - -int main(int argc, char** argv) { - - auto mymodel = create_mymodel(argc, argv); - int N = 10; - int max_tgt_len = 500; - int n_embd = llama_n_embd(llama_get_model(mymodel->ctx)); - - // add random float embd to test evaluation - float * data = new float[N*n_embd]; - std::default_random_engine e; - std::uniform_real_distribution u(0,1); - for (int i=0;iparams.prompt.c_str()); - const char* tmp; - for (int i=0; i")==0) break; - printf("%s", tmp); - fflush(stdout); - } - printf("\n"); - free_mymodel(mymodel); - return 0; -} diff --git a/spaces/Illumotion/Koboldcpp/examples/finetune/finetune.cpp b/spaces/Illumotion/Koboldcpp/examples/finetune/finetune.cpp deleted file mode 100644 index 8ca1874dafc7e98c99c1c536b18b3577468516a1..0000000000000000000000000000000000000000 
--- a/spaces/Illumotion/Koboldcpp/examples/finetune/finetune.cpp +++ /dev/null @@ -1,1940 +0,0 @@ -#include "ggml.h" -#include "ggml-alloc.h" -#include "llama.h" -#include "common.h" -#include "train.h" -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#if defined(_MSC_VER) -#pragma warning(disable: 4244 4267) // possible loss of data -#endif - -static const size_t tensor_alignment = 32; - -struct my_llama_hparams { - uint32_t n_vocab = 32000; - uint32_t n_ctx = 512; - uint32_t n_embd = 4096; - uint32_t n_ff = 11008; - uint32_t n_head = 32; - uint32_t n_head_kv = 32; - uint32_t n_layer = 32; - - // float f_norm_eps = 1e-5f; // falcon - float f_norm_rms_eps = 1e-5f; // llama - - float rope_freq_base = 10000.0f; - float rope_freq_scale = 1.0f; - - uint32_t n_gqa() const { - return n_head/n_head_kv; - } - - uint32_t n_embd_head() const { - return n_embd/n_head; - } - - uint32_t n_embd_gqa() const { - return n_embd/n_gqa(); - } - - bool operator!=(const my_llama_hparams& other) const { - return memcmp(this, &other, sizeof(other)); - } -}; - -struct my_llama_layer { - // normalization - struct ggml_tensor * attention_norm; - - // attention - struct ggml_tensor * wq; - struct ggml_tensor * wk; - struct ggml_tensor * wv; - struct ggml_tensor * wo; - - // normalization - struct ggml_tensor * ffn_norm; - - // ff - struct ggml_tensor * w1; - struct ggml_tensor * w2; - struct ggml_tensor * w3; -}; - -struct my_llama_model { - struct my_llama_hparams hparams; - - struct ggml_tensor * tok_embeddings; - - struct ggml_tensor * norm; - struct ggml_tensor * output; - - std::vector layers; -}; - -struct my_llama_lora_hparams { - uint32_t lora_r = 1; - uint32_t lora_alpha = 1; - uint32_t n_rank_attention_norm = 1; - uint32_t n_rank_wq = 4; - uint32_t n_rank_wk = 4; - uint32_t n_rank_wv = 4; - uint32_t n_rank_wo = 4; - uint32_t n_rank_ffn_norm = 1; - uint32_t n_rank_w1 = 4; - uint32_t n_rank_w2 = 4; - uint32_t n_rank_w3 = 4; - uint32_t n_rank_tok_embeddings = 4; - uint32_t n_rank_norm = 1; - uint32_t n_rank_output = 4; - - bool operator!=(const my_llama_lora_hparams& other) const { - return memcmp(this, &other, sizeof(other)); - } -}; - -struct my_llama_lora_layer { - // normalization - struct ggml_tensor * attention_norm_a; - struct ggml_tensor * attention_norm_b; - - // attention - struct ggml_tensor * wq_a; - struct ggml_tensor * wq_b; - struct ggml_tensor * wk_a; - struct ggml_tensor * wk_b; - struct ggml_tensor * wv_a; - struct ggml_tensor * wv_b; - struct ggml_tensor * wo_a; - struct ggml_tensor * wo_b; - - // normalization - struct ggml_tensor * ffn_norm_a; - struct ggml_tensor * ffn_norm_b; - - // ff - struct ggml_tensor * w1_a; - struct ggml_tensor * w1_b; - struct ggml_tensor * w2_a; - struct ggml_tensor * w2_b; - struct ggml_tensor * w3_a; - struct ggml_tensor * w3_b; -}; - -struct my_llama_lora { - struct ggml_context * ctx = NULL; - std::vector data; - - my_llama_lora_hparams hparams; - - struct ggml_tensor * tok_embeddings_a; - struct ggml_tensor * tok_embeddings_b; - - struct ggml_tensor * norm_a; - struct ggml_tensor * norm_b; - struct ggml_tensor * output_a; - struct ggml_tensor * output_b; - - std::vector layers; -}; - -// gguf constants -static const char * LLM_KV_TRAINING_TYPE_FINETUNE_LORA = "finetune_lora"; -static const char * LLM_KV_TRAINING_TYPE = "training.type"; - -static const char * LLM_KV_TRAINING_LORA_RANK_TOKEN_EMBD = "training.lora.rank.token_embd"; -static const char * LLM_KV_TRAINING_LORA_RANK_OUTPUT_NORM = 
"training.lora.rank.output_norm"; -static const char * LLM_KV_TRAINING_LORA_RANK_OUTPUT = "training.lora.rank.output"; -static const char * LLM_KV_TRAINING_LORA_RANK_ATTN_NORM = "training.lora.rank.attn_norm"; -static const char * LLM_KV_TRAINING_LORA_RANK_ATTN_Q = "training.lora.rank.attn_q"; -static const char * LLM_KV_TRAINING_LORA_RANK_ATTN_K = "training.lora.rank.attn_k"; -static const char * LLM_KV_TRAINING_LORA_RANK_ATTN_V = "training.lora.rank.attn_v"; -static const char * LLM_KV_TRAINING_LORA_RANK_ATTN_OUT = "training.lora.rank.attn_output"; -static const char * LLM_KV_TRAINING_LORA_RANK_FFN_NORM = "training.lora.rank.ffn_norm"; -static const char * LLM_KV_TRAINING_LORA_RANK_FFN_GATE = "training.lora.rank.ffn_gate"; -static const char * LLM_KV_TRAINING_LORA_RANK_FFN_DOWN = "training.lora.rank.ffn_down"; -static const char * LLM_KV_TRAINING_LORA_RANK_FFN_UP = "training.lora.rank.ffn_up"; - -// gguf constants (sync with gguf.py) - -static const char * LLM_KV_GENERAL_ARCHITECTURE = "general.architecture"; -static const char * LLM_KV_GENERAL_FILE_TYPE = "general.file_type"; - -static const char * LLM_KV_CONTEXT_LENGTH = "%s.context_length"; -static const char * LLM_KV_EMBEDDING_LENGTH = "%s.embedding_length"; -static const char * LLM_KV_BLOCK_COUNT = "%s.block_count"; -static const char * LLM_KV_FEED_FORWARD_LENGTH = "%s.feed_forward_length"; -static const char * LLM_KV_ATTENTION_HEAD_COUNT = "%s.attention.head_count"; -static const char * LLM_KV_ATTENTION_HEAD_COUNT_KV = "%s.attention.head_count_kv"; -static const char * LLM_KV_ATTENTION_LAYERNORM_RMS_EPS = "%s.attention.layer_norm_rms_epsilon"; -static const char * LLM_KV_ROPE_DIMENSION_COUNT = "%s.rope.dimension_count"; -static const char * LLM_KV_ROPE_FREQ_BASE = "%s.rope.freq_base"; // TODO load in llama.cpp -static const char * LLM_KV_ROPE_SCALE_LINEAR = "%s.rope.scale_linear"; - -static const char * LLM_TENSOR_TOKEN_EMBD = "token_embd"; -static const char * LLM_TENSOR_OUTPUT_NORM = "output_norm"; -static const char * LLM_TENSOR_OUTPUT = "output"; -static const char * LLM_TENSOR_ATTN_NORM = "blk.%d.attn_norm"; -static const char * LLM_TENSOR_ATTN_Q = "blk.%d.attn_q"; -static const char * LLM_TENSOR_ATTN_K = "blk.%d.attn_k"; -static const char * LLM_TENSOR_ATTN_V = "blk.%d.attn_v"; -static const char * LLM_TENSOR_ATTN_OUT = "blk.%d.attn_output"; -static const char * LLM_TENSOR_FFN_NORM = "blk.%d.ffn_norm"; -static const char * LLM_TENSOR_FFN_GATE = "blk.%d.ffn_gate"; -static const char * LLM_TENSOR_FFN_DOWN = "blk.%d.ffn_down"; -static const char * LLM_TENSOR_FFN_UP = "blk.%d.ffn_up"; - -static void print_params(struct my_llama_hparams * params) { - printf("%s: n_vocab: %u\n", __func__, params->n_vocab); - printf("%s: n_ctx: %u\n", __func__, params->n_ctx); - printf("%s: n_embd: %u\n", __func__, params->n_embd); - printf("%s: n_ff: %u\n", __func__, params->n_ff); - printf("%s: n_head: %u\n", __func__, params->n_head); - printf("%s: n_head_kv: %u\n", __func__, params->n_head_kv); - printf("%s: n_layer: %u\n", __func__, params->n_layer); - printf("%s: norm_rms_eps : %f\n", __func__, params->f_norm_rms_eps); - printf("%s: rope_freq_base : %f\n", __func__, params->rope_freq_base); - printf("%s: rope_freq_scale : %f\n", __func__, params->rope_freq_scale); -} - -static void print_lora_params(struct my_llama_lora_hparams * params) { - printf("%s: n_rank_attention_norm : %u\n", __func__, params->n_rank_attention_norm); - printf("%s: n_rank_wq : %u\n", __func__, params->n_rank_wq); - printf("%s: n_rank_wk : %u\n", __func__, 
params->n_rank_wk); - printf("%s: n_rank_wv : %u\n", __func__, params->n_rank_wv); - printf("%s: n_rank_wo : %u\n", __func__, params->n_rank_wo); - printf("%s: n_rank_ffn_norm : %u\n", __func__, params->n_rank_ffn_norm); - printf("%s: n_rank_w1 : %u\n", __func__, params->n_rank_w1); - printf("%s: n_rank_w2 : %u\n", __func__, params->n_rank_w2); - printf("%s: n_rank_w3 : %u\n", __func__, params->n_rank_w3); - printf("%s: n_rank_tok_embeddings : %u\n", __func__, params->n_rank_tok_embeddings); - printf("%s: n_rank_norm : %u\n", __func__, params->n_rank_norm); - printf("%s: n_rank_output : %u\n", __func__, params->n_rank_output); -} - -#define GGUF_GET_KEY(ctx, dst, func, type, req, key) \ -{ \ - const std::string skey(key); \ - const int kid = gguf_find_key(ctx, skey.c_str()); \ - if (kid >= 0) { \ - enum gguf_type ktype = gguf_get_kv_type(ctx, kid); \ - if (ktype != (type)) { \ - die_fmt("key %s has wrong type: %s", skey.c_str(), gguf_type_name(ktype)); \ - } \ - (dst) = func(ctx, kid); \ - } else if (req) { \ - die_fmt("key not found in model: %s", skey.c_str()); \ - } \ -} - -static void load_model_hparams_gguf(struct gguf_context * ctx, struct my_llama_hparams * hparams, const char * expected_arch) { - std::string arch; - - GGUF_GET_KEY(ctx, arch, gguf_get_val_str, GGUF_TYPE_STRING, true, LLM_KV_GENERAL_ARCHITECTURE); - if (expected_arch != NULL) { - if (arch != expected_arch) { - printf("%s: arch=%s expected_arch=%s\n", __func__, arch.c_str(), expected_arch); - } - GGML_ASSERT(arch == expected_arch); - } - - std::vector keybuf; - keybuf.resize(512); - auto kv = [&arch, &keybuf](const char * key) -> const char * { - snprintf(keybuf.data(), keybuf.size(), key, arch.c_str()); - return keybuf.data(); - }; - - GGUF_GET_KEY(ctx, hparams->n_embd, gguf_get_val_u32, GGUF_TYPE_UINT32, true, kv(LLM_KV_EMBEDDING_LENGTH)); - GGUF_GET_KEY(ctx, hparams->n_ctx, gguf_get_val_u32, GGUF_TYPE_UINT32, false, kv(LLM_KV_CONTEXT_LENGTH)); - GGUF_GET_KEY(ctx, hparams->n_ff, gguf_get_val_u32, GGUF_TYPE_UINT32, true, kv(LLM_KV_FEED_FORWARD_LENGTH)); - GGUF_GET_KEY(ctx, hparams->n_head, gguf_get_val_u32, GGUF_TYPE_UINT32, true, kv(LLM_KV_ATTENTION_HEAD_COUNT)); - GGUF_GET_KEY(ctx, hparams->n_layer, gguf_get_val_u32, GGUF_TYPE_UINT32, true, kv(LLM_KV_BLOCK_COUNT)); - - // n_head_kv is optional, default to n_head - hparams->n_head_kv = hparams->n_head; - GGUF_GET_KEY(ctx, hparams->n_head_kv, gguf_get_val_u32, GGUF_TYPE_UINT32, false, kv(LLM_KV_ATTENTION_HEAD_COUNT_KV)); - - float rope_freq_scale = 1.0f; - GGUF_GET_KEY(ctx, hparams->f_norm_rms_eps, gguf_get_val_f32, GGUF_TYPE_FLOAT32, false, kv(LLM_KV_ATTENTION_LAYERNORM_RMS_EPS)); - GGUF_GET_KEY(ctx, hparams->rope_freq_base, gguf_get_val_f32, GGUF_TYPE_FLOAT32, false, kv(LLM_KV_ROPE_FREQ_BASE)); - GGUF_GET_KEY(ctx, rope_freq_scale, gguf_get_val_f32, GGUF_TYPE_FLOAT32, false, kv(LLM_KV_ROPE_SCALE_LINEAR)); - if (rope_freq_scale != 1.0f) { - hparams->rope_freq_scale = 1.0f / rope_freq_scale; - } -} - -static void init_model(struct llama_model * input, struct my_llama_model * model, const char * fn_model, uint32_t n_ctx) { - auto & hparams = model->hparams; - - std::vector tn_buf; - tn_buf.resize(GGML_MAX_NAME); - auto tn = [&tn_buf](const char * key) -> const char * { - snprintf(tn_buf.data(), tn_buf.size(), "%s.weight", key); - return tn_buf.data(); - }; - auto tni = [&tn_buf](const char * key, int bid) -> const char * { - snprintf(tn_buf.data(), tn_buf.size(), key, bid); - std::string s = tn_buf.data(); - snprintf(tn_buf.data(), tn_buf.size(), "%s.weight", 
s.c_str()); - return tn_buf.data(); - }; - - - // get parameters directly from gguf file - { - struct gguf_init_params params = { - /*.no_alloc = */ false, - /*.ctx = */ NULL, - }; - struct gguf_context * mctx = gguf_init_from_file(fn_model, params); - - load_model_hparams_gguf(mctx, &hparams, "llama"); - - gguf_free(mctx); - } - hparams.n_vocab = llama_n_vocab(input); - hparams.n_ctx = n_ctx; - - // get tensors from llama_model (possibly mmapped) - model->tok_embeddings = llama_get_model_tensor(input, tn(LLM_TENSOR_TOKEN_EMBD)); - model->norm = llama_get_model_tensor(input, tn(LLM_TENSOR_OUTPUT_NORM)); - model->output = llama_get_model_tensor(input, tn(LLM_TENSOR_OUTPUT)); - - assert_shape_2d(model->tok_embeddings, hparams.n_embd, hparams.n_vocab); - assert_shape_1d(model->norm, hparams.n_embd); - assert_shape_2d(model->output, hparams.n_embd, hparams.n_vocab); - - model->layers.resize(hparams.n_layer); - for (uint32_t i = 0; i < hparams.n_layer; ++i) { - auto & layer = model->layers[i]; - - layer.attention_norm = llama_get_model_tensor(input, tni(LLM_TENSOR_ATTN_NORM, i)); - layer.wq = llama_get_model_tensor(input, tni(LLM_TENSOR_ATTN_Q, i)); - layer.wk = llama_get_model_tensor(input, tni(LLM_TENSOR_ATTN_K, i)); - layer.wv = llama_get_model_tensor(input, tni(LLM_TENSOR_ATTN_V, i)); - layer.wo = llama_get_model_tensor(input, tni(LLM_TENSOR_ATTN_OUT, i)); - layer.ffn_norm = llama_get_model_tensor(input, tni(LLM_TENSOR_FFN_NORM, i)); - layer.w1 = llama_get_model_tensor(input, tni(LLM_TENSOR_FFN_GATE, i)); - layer.w2 = llama_get_model_tensor(input, tni(LLM_TENSOR_FFN_DOWN, i)); - layer.w3 = llama_get_model_tensor(input, tni(LLM_TENSOR_FFN_UP, i)); - - assert_shape_1d(layer.attention_norm, hparams.n_embd); - assert_shape_2d(layer.wq, hparams.n_embd, hparams.n_embd); - assert_shape_2d(layer.wk, hparams.n_embd, hparams.n_embd); - assert_shape_2d(layer.wv, hparams.n_embd, hparams.n_embd); - assert_shape_2d(layer.wo, hparams.n_embd, hparams.n_embd); - assert_shape_1d(layer.ffn_norm, hparams.n_embd); - assert_shape_2d(layer.w1, hparams.n_embd, hparams.n_ff); - assert_shape_2d(layer.w2, hparams.n_ff, hparams.n_embd); - assert_shape_2d(layer.w3, hparams.n_embd, hparams.n_ff); - } -} - -static void set_param_lora(struct my_llama_lora * lora) { - const uint32_t n_layer = lora->layers.size(); - - struct ggml_context* ctx = lora->ctx; - - ggml_set_param(ctx, lora->tok_embeddings_a); - ggml_set_param(ctx, lora->tok_embeddings_b); - ggml_set_param(ctx, lora->norm_a); - ggml_set_param(ctx, lora->norm_b); - ggml_set_param(ctx, lora->output_a); - ggml_set_param(ctx, lora->output_b); - - for (uint32_t i = 0; i < n_layer; ++i) { - auto & layer = lora->layers[i]; - - ggml_set_param(ctx, layer.attention_norm_a); - ggml_set_param(ctx, layer.attention_norm_b); - ggml_set_param(ctx, layer.wq_a); - ggml_set_param(ctx, layer.wq_b); - ggml_set_param(ctx, layer.wk_a); - ggml_set_param(ctx, layer.wk_b); - ggml_set_param(ctx, layer.wv_a); - ggml_set_param(ctx, layer.wv_b); - ggml_set_param(ctx, layer.wo_a); - ggml_set_param(ctx, layer.wo_b); - ggml_set_param(ctx, layer.ffn_norm_a); - ggml_set_param(ctx, layer.ffn_norm_b); - ggml_set_param(ctx, layer.w1_a); - ggml_set_param(ctx, layer.w1_b); - ggml_set_param(ctx, layer.w2_a); - ggml_set_param(ctx, layer.w2_b); - ggml_set_param(ctx, layer.w3_a); - ggml_set_param(ctx, layer.w3_b); - } -} - -static void alloc_lora(struct ggml_allocr * alloc, struct my_llama_lora * lora) { - ggml_allocr_alloc(alloc, lora->tok_embeddings_a); - ggml_allocr_alloc(alloc, lora->tok_embeddings_b); 
- ggml_allocr_alloc(alloc, lora->norm_a); - ggml_allocr_alloc(alloc, lora->norm_b); - ggml_allocr_alloc(alloc, lora->output_a); - ggml_allocr_alloc(alloc, lora->output_b); - for (uint32_t i = 0; i < lora->layers.size(); ++i) { - auto & layer = lora->layers[i]; - ggml_allocr_alloc(alloc, layer.attention_norm_a); - ggml_allocr_alloc(alloc, layer.attention_norm_b); - ggml_allocr_alloc(alloc, layer.wq_a); - ggml_allocr_alloc(alloc, layer.wq_b); - ggml_allocr_alloc(alloc, layer.wk_a); - ggml_allocr_alloc(alloc, layer.wk_b); - ggml_allocr_alloc(alloc, layer.wv_a); - ggml_allocr_alloc(alloc, layer.wv_b); - ggml_allocr_alloc(alloc, layer.wo_a); - ggml_allocr_alloc(alloc, layer.wo_b); - ggml_allocr_alloc(alloc, layer.ffn_norm_a); - ggml_allocr_alloc(alloc, layer.ffn_norm_b); - ggml_allocr_alloc(alloc, layer.w1_a); - ggml_allocr_alloc(alloc, layer.w1_b); - ggml_allocr_alloc(alloc, layer.w2_a); - ggml_allocr_alloc(alloc, layer.w2_b); - ggml_allocr_alloc(alloc, layer.w3_a); - ggml_allocr_alloc(alloc, layer.w3_b); - } - ggml_allocr_alloc(alloc, lora->tok_embeddings_a->grad); - ggml_allocr_alloc(alloc, lora->tok_embeddings_b->grad); - ggml_allocr_alloc(alloc, lora->norm_a->grad); - ggml_allocr_alloc(alloc, lora->norm_b->grad); - ggml_allocr_alloc(alloc, lora->output_a->grad); - ggml_allocr_alloc(alloc, lora->output_b->grad); - for (uint32_t i = 0; i < lora->layers.size(); ++i) { - auto & layer = lora->layers[i]; - ggml_allocr_alloc(alloc, layer.attention_norm_a->grad); - ggml_allocr_alloc(alloc, layer.attention_norm_b->grad); - ggml_allocr_alloc(alloc, layer.wq_a->grad); - ggml_allocr_alloc(alloc, layer.wq_b->grad); - ggml_allocr_alloc(alloc, layer.wk_a->grad); - ggml_allocr_alloc(alloc, layer.wk_b->grad); - ggml_allocr_alloc(alloc, layer.wv_a->grad); - ggml_allocr_alloc(alloc, layer.wv_b->grad); - ggml_allocr_alloc(alloc, layer.wo_a->grad); - ggml_allocr_alloc(alloc, layer.wo_b->grad); - ggml_allocr_alloc(alloc, layer.ffn_norm_a->grad); - ggml_allocr_alloc(alloc, layer.ffn_norm_b->grad); - ggml_allocr_alloc(alloc, layer.w1_a->grad); - ggml_allocr_alloc(alloc, layer.w1_b->grad); - ggml_allocr_alloc(alloc, layer.w2_a->grad); - ggml_allocr_alloc(alloc, layer.w2_b->grad); - ggml_allocr_alloc(alloc, layer.w3_a->grad); - ggml_allocr_alloc(alloc, layer.w3_b->grad); - } -} - -static void init_lora(const struct my_llama_model * model, struct my_llama_lora * lora) { - const auto & lparams = lora->hparams; - - const uint32_t n_embd = model->hparams.n_embd; - const uint32_t n_embd_gqa = model->hparams.n_embd_gqa(); - const uint32_t n_layer = model->hparams.n_layer; - const uint32_t n_vocab = model->hparams.n_vocab; - const uint32_t n_ff = model->hparams.n_ff; - - std::vector tn_buf; - tn_buf.resize(GGML_MAX_NAME); - auto tn = [&tn_buf](const char * key, const char * suffix) -> const char * { - snprintf(tn_buf.data(), tn_buf.size(), "%s%s", key, suffix); - return tn_buf.data(); - }; - auto tni = [&tn_buf](const char * key, const char * suffix, int bid) -> const char * { - snprintf(tn_buf.data(), tn_buf.size(), key, bid); - std::string s = tn_buf.data(); - snprintf(tn_buf.data(), tn_buf.size(), "%s%s", s.c_str(), suffix); - return tn_buf.data(); - }; - - // context for lora tensors without their data - struct ggml_init_params ctx_lora_params; - ctx_lora_params.mem_size = ggml_tensor_overhead()*2*(6 + n_layer*18); - ctx_lora_params.mem_buffer = NULL; - ctx_lora_params.no_alloc = true; - - struct ggml_context * ctx = ggml_init(ctx_lora_params); - lora->ctx = ctx; - - lora->tok_embeddings_a = ggml_new_tensor_2d(ctx, 
GGML_TYPE_F32, lparams.n_rank_tok_embeddings, n_embd); - lora->tok_embeddings_b = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, lparams.n_rank_tok_embeddings, n_vocab); - lora->norm_a = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, lparams.n_rank_norm, n_embd); - lora->norm_b = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, lparams.n_rank_norm, 1); - lora->output_a = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, lparams.n_rank_output, n_embd); - lora->output_b = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, lparams.n_rank_output, n_vocab); - - ggml_set_name(lora->tok_embeddings_a, tn(LLM_TENSOR_TOKEN_EMBD, ".weight.lora_a")); - ggml_set_name(lora->tok_embeddings_b, tn(LLM_TENSOR_TOKEN_EMBD, ".weight.lora_b")); - ggml_set_name(lora->norm_a, tn(LLM_TENSOR_OUTPUT_NORM, ".weight.lora_a")); - ggml_set_name(lora->norm_b, tn(LLM_TENSOR_OUTPUT_NORM, ".weight.lora_b")); - ggml_set_name(lora->output_a, tn(LLM_TENSOR_OUTPUT, ".weight.lora_a")); - ggml_set_name(lora->output_b, tn(LLM_TENSOR_OUTPUT, ".weight.lora_b")); - - lora->layers.resize(n_layer); - for (uint32_t i = 0; i < n_layer; ++i) { - auto & layer = lora->layers[i]; - - layer.attention_norm_a = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, lparams.n_rank_attention_norm, n_embd); - layer.attention_norm_b = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, lparams.n_rank_attention_norm, 1); - - layer.wq_a = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, lparams.n_rank_wq, n_embd); - layer.wq_b = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, lparams.n_rank_wq, n_embd); - layer.wk_a = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, lparams.n_rank_wk, n_embd); - layer.wk_b = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, lparams.n_rank_wk, n_embd_gqa); - layer.wv_a = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, lparams.n_rank_wv, n_embd); - layer.wv_b = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, lparams.n_rank_wv, n_embd_gqa); - layer.wo_a = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, lparams.n_rank_wo, n_embd); - layer.wo_b = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, lparams.n_rank_wo, n_embd); - - layer.ffn_norm_a = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, lparams.n_rank_ffn_norm, n_embd); - layer.ffn_norm_b = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, lparams.n_rank_ffn_norm, 1); - - layer.w1_a = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, lparams.n_rank_w1, n_embd); - layer.w1_b = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, lparams.n_rank_w1, n_ff); - layer.w2_a = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, lparams.n_rank_w2, n_ff); - layer.w2_b = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, lparams.n_rank_w2, n_embd); - layer.w3_a = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, lparams.n_rank_w3, n_embd); - layer.w3_b = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, lparams.n_rank_w3, n_ff); - - ggml_set_name(layer.attention_norm_a, tni(LLM_TENSOR_ATTN_NORM, ".weight.lora_a", i)); - ggml_set_name(layer.attention_norm_b, tni(LLM_TENSOR_ATTN_NORM, ".weight.lora_b", i)); - ggml_set_name(layer.wq_a, tni(LLM_TENSOR_ATTN_Q, ".weight.lora_a", i)); - ggml_set_name(layer.wq_b, tni(LLM_TENSOR_ATTN_Q, ".weight.lora_b", i)); - ggml_set_name(layer.wk_a, tni(LLM_TENSOR_ATTN_K, ".weight.lora_a", i)); - ggml_set_name(layer.wk_b, tni(LLM_TENSOR_ATTN_K, ".weight.lora_b", i)); - ggml_set_name(layer.wv_a, tni(LLM_TENSOR_ATTN_V, ".weight.lora_a", i)); - ggml_set_name(layer.wv_b, tni(LLM_TENSOR_ATTN_V, ".weight.lora_b", i)); - ggml_set_name(layer.wo_a, tni(LLM_TENSOR_ATTN_OUT, ".weight.lora_a", i)); - ggml_set_name(layer.wo_b, tni(LLM_TENSOR_ATTN_OUT, ".weight.lora_b", i)); - ggml_set_name(layer.ffn_norm_a, tni(LLM_TENSOR_FFN_NORM, ".weight.lora_a", i)); - ggml_set_name(layer.ffn_norm_b, 
tni(LLM_TENSOR_FFN_NORM, ".weight.lora_b", i)); - ggml_set_name(layer.w1_a, tni(LLM_TENSOR_FFN_GATE, ".weight.lora_a", i)); - ggml_set_name(layer.w1_b, tni(LLM_TENSOR_FFN_GATE, ".weight.lora_b", i)); - ggml_set_name(layer.w2_a, tni(LLM_TENSOR_FFN_DOWN, ".weight.lora_a", i)); - ggml_set_name(layer.w2_b, tni(LLM_TENSOR_FFN_DOWN, ".weight.lora_b", i)); - ggml_set_name(layer.w3_a, tni(LLM_TENSOR_FFN_UP, ".weight.lora_a", i)); - ggml_set_name(layer.w3_b, tni(LLM_TENSOR_FFN_UP, ".weight.lora_b", i)); - } - - set_param_lora(lora); - - // measure data size - struct ggml_allocr * alloc = NULL; - alloc = ggml_allocr_new_measure(tensor_alignment); - alloc_lora(alloc, lora); - - // allocate data - lora->data.resize(ggml_allocr_max_size(alloc) + tensor_alignment); - ggml_allocr_free(alloc); - alloc = ggml_allocr_new(lora->data.data(), lora->data.size(), tensor_alignment); - alloc_lora(alloc, lora); - ggml_allocr_free(alloc); -} - -static void randomize_lora(struct my_llama_lora * lora, int seed, float mean, float std, float min, float max) { - const uint32_t n_layer = lora->layers.size(); - - struct random_normal_distribution * rnd = init_random_normal_distribution(seed, mean, std, min, max); - - randomize_tensor_normal(lora->tok_embeddings_a, rnd); - randomize_tensor_normal(lora->tok_embeddings_b, rnd); - randomize_tensor_normal(lora->norm_a, rnd); - randomize_tensor_normal(lora->norm_b, rnd); - randomize_tensor_normal(lora->output_a, rnd); - randomize_tensor_normal(lora->output_b, rnd); - - for (uint32_t i = 0; i < n_layer; ++i) { - auto & layer = lora->layers[i]; - randomize_tensor_normal(layer.attention_norm_a, rnd); - randomize_tensor_normal(layer.attention_norm_b, rnd); - - randomize_tensor_normal(layer.wq_a, rnd); - randomize_tensor_normal(layer.wq_b, rnd); - randomize_tensor_normal(layer.wk_a, rnd); - randomize_tensor_normal(layer.wk_b, rnd); - randomize_tensor_normal(layer.wv_a, rnd); - randomize_tensor_normal(layer.wv_b, rnd); - randomize_tensor_normal(layer.wo_a, rnd); - randomize_tensor_normal(layer.wo_b, rnd); - - randomize_tensor_normal(layer.ffn_norm_a, rnd); - randomize_tensor_normal(layer.ffn_norm_b, rnd); - - randomize_tensor_normal(layer.w1_a, rnd); - randomize_tensor_normal(layer.w1_b, rnd); - randomize_tensor_normal(layer.w2_a, rnd); - randomize_tensor_normal(layer.w2_b, rnd); - randomize_tensor_normal(layer.w3_a, rnd); - randomize_tensor_normal(layer.w3_b, rnd); - } - - free_random_normal_distribution(rnd); -} - -static struct ggml_tensor * llama_build_lora_finetune_graphs( - struct my_llama_model * model, - struct my_llama_lora * lora, - struct ggml_allocr * alloc, - struct ggml_context * ctx, - struct ggml_cgraph * gf, - struct ggml_cgraph * gb, - struct ggml_cgraph * gb_tmp, - struct ggml_tensor * * logits, - struct ggml_tensor * tokens_input, - struct ggml_tensor * targets, - const int n_tokens, - const int n_batch, - const bool enable_flash_attn, - const bool enable_checkpointing) { - - ggml_set_scratch(ctx, { 0, 0, nullptr, }); - const int n_past = 0; - const int N = n_tokens; - const auto & hparams = model->hparams; - const int n_ctx = hparams.n_ctx; - const int n_vocab = hparams.n_vocab; - const int n_embd = hparams.n_embd; - const int n_layer = hparams.n_layer; - const int n_head = hparams.n_head; - const int n_head_kv = hparams.n_head_kv; - const int n_ff = hparams.n_ff; - const int n_rot = hparams.n_embd_head(); - const int n_embd_head = hparams.n_embd_head(); - const int n_embd_gqa = hparams.n_embd_gqa(); - const float rms_norm_eps = hparams.f_norm_rms_eps; - const 
float rope_freq_base = hparams.rope_freq_base; - const float rope_freq_scale = hparams.rope_freq_scale; - - GGML_ASSERT((size_t) n_layer == lora->layers.size()); - - auto set_name = [](struct ggml_tensor * t, const char * n) { - ggml_set_name(t, n); - if (t->grad) { - ggml_format_name(t->grad, "%s->grad", n); - } - }; - - // KQ_pos - contains the positions - struct ggml_tensor * KQ_pos = ggml_new_tensor_1d(ctx, GGML_TYPE_I32, N); - ggml_allocr_alloc(alloc, KQ_pos); - if (!ggml_allocr_is_measure(alloc)) { - int * data = (int *) KQ_pos->data; - for (int i = 0; i < N; ++i) { - data[i] = n_past + i; - } - } - - // rope has so much parameters that we make a custom function for it - auto rope = [ctx, KQ_pos, n_rot, n_ctx, rope_freq_base, rope_freq_scale] - (struct ggml_tensor * t) -> struct ggml_tensor * { - // not capturing these, to silcence warnings - const int rope_mode = 0; - - return ggml_rope_custom(ctx, - t, KQ_pos, n_rot, rope_mode, n_ctx, - rope_freq_base, rope_freq_scale); - }; - - set_name(tokens_input, "tokens_input"); - set_name(targets, "targets"); - - GGML_ASSERT(tokens_input->type == GGML_TYPE_I32); - - auto add_to_f32 = [] (struct ggml_context * ctx, struct ggml_tensor * a, struct ggml_tensor * b) { - if (ggml_is_quantized(a->type)) { - return ggml_add_cast(ctx, a, b, GGML_TYPE_F32); - } else if (a->type == GGML_TYPE_F32) { - return ggml_add(ctx, a, b); - } else { - die_fmt("%s: Finetuning on tensors with type '%s' is not yet supported.\n", - __func__, ggml_type_name(a->type)); - } - }; - - struct ggml_tensor * tok_embeddings = add_to_f32(ctx, model->tok_embeddings, ggml_mul_mat(ctx, lora->tok_embeddings_a, lora->tok_embeddings_b)); - struct ggml_tensor * norm = add_to_f32(ctx, model->norm, ggml_mul_mat(ctx, lora->norm_a, lora->norm_b)); - struct ggml_tensor * output = add_to_f32(ctx, model->output, ggml_mul_mat(ctx, lora->output_a, lora->output_b)); - - struct ggml_tensor * t00 = ggml_reshape_1d(ctx, tokens_input, N*n_batch); set_name(t00, "t00"); assert_shape_1d(t00, N*n_batch); - struct ggml_tensor * t01 = ggml_get_rows(ctx, tok_embeddings, t00); set_name(t01, "t01"); assert_shape_2d(t01, n_embd, N*n_batch); - - struct ggml_tensor * cur = t01; - - std::vector checkpoints; - if (enable_checkpointing) { - checkpoints.push_back(tokens_input); - checkpoints.push_back(targets); - checkpoints.push_back(t00); - checkpoints.push_back(t01); - } - - struct ggml_tensor * kv_scale = NULL; - if (!enable_flash_attn) { - kv_scale = ggml_new_f32(ctx, 1.0f/sqrtf(float(n_embd)/n_head)); - } - - for (int il = 0; il < n_layer; ++il) { - struct my_llama_layer & layer = model->layers[il]; - struct my_llama_lora_layer & llayer = lora->layers[il]; - - struct ggml_tensor * attention_norm = add_to_f32(ctx, layer.attention_norm, ggml_mul_mat(ctx, llayer.attention_norm_a, llayer.attention_norm_b)); - struct ggml_tensor * ffn_norm = add_to_f32(ctx, layer.ffn_norm, ggml_mul_mat(ctx, llayer.ffn_norm_a, llayer.ffn_norm_b)); - struct ggml_tensor * wq = add_to_f32(ctx, layer.wq, ggml_mul_mat(ctx, llayer.wq_a, llayer.wq_b)); - struct ggml_tensor * wk = add_to_f32(ctx, layer.wk, ggml_mul_mat(ctx, llayer.wk_a, llayer.wk_b)); - struct ggml_tensor * wv = add_to_f32(ctx, layer.wv, ggml_mul_mat(ctx, llayer.wv_a, llayer.wv_b)); - struct ggml_tensor * wo = add_to_f32(ctx, layer.wo, ggml_mul_mat(ctx, llayer.wo_a, llayer.wo_b)); - struct ggml_tensor * w1 = add_to_f32(ctx, layer.w1, ggml_mul_mat(ctx, llayer.w1_a, llayer.w1_b)); - struct ggml_tensor * w2 = add_to_f32(ctx, layer.w2, ggml_mul_mat(ctx, llayer.w2_a, 
llayer.w2_b)); - struct ggml_tensor * w3 = add_to_f32(ctx, layer.w3, ggml_mul_mat(ctx, llayer.w3_a, llayer.w3_b)); - - struct ggml_tensor * t02 = ggml_rms_norm (ctx, cur, rms_norm_eps); set_name(t02, "t02"); assert_shape_2d(t02, n_embd, N*n_batch); - struct ggml_tensor * t03 = ggml_repeat (ctx, attention_norm, t02); set_name(t03, "t03"); assert_shape_2d(t03, n_embd, N*n_batch); - struct ggml_tensor * t04 = ggml_mul (ctx, t03, t02); set_name(t04, "t04"); assert_shape_2d(t04, n_embd, N*n_batch); - struct ggml_tensor * t05 = ggml_mul_mat (ctx, wq, t04); set_name(t05, "t05"); assert_shape_2d(t05, n_embd, N*n_batch); - struct ggml_tensor * t06 = ggml_reshape_4d (ctx, t05, n_embd_head, n_head, N, n_batch); set_name(t06, "t06"); assert_shape_4d(t06, n_embd_head, n_head, N, n_batch); - struct ggml_tensor * t07 = rope (t06); set_name(t07, "t07"); assert_shape_4d(t07, n_embd_head, n_head, N, n_batch); - struct ggml_tensor * t08 = ggml_mul_mat (ctx, wk, t04); set_name(t08, "t08"); assert_shape_2d(t08, n_embd_gqa, N*n_batch); - struct ggml_tensor * t09 = ggml_reshape_4d (ctx, t08, n_embd_head, n_head_kv, N, n_batch); set_name(t09, "t09"); assert_shape_4d(t09, n_embd_head, n_head_kv, N, n_batch); - struct ggml_tensor * t10 = rope (t09); set_name(t10, "t10"); assert_shape_4d(t10, n_embd_head, n_head_kv, N, n_batch); - - struct ggml_tensor * t11; - if (ggml_is_quantized(wv->type)) { - struct ggml_tensor * t11_1 = ggml_mul_mat (ctx, wv, t04); set_name(t11_1, "t11_1"); assert_shape_2d(t11_1, n_embd_gqa, N*n_batch); - struct ggml_tensor * t11_2 = ggml_transpose(ctx, t11_1); set_name(t11_2, "t11_2"); assert_shape_2d(t11_2, N*n_batch, n_embd_gqa); - t11 = ggml_cont (ctx, t11_2); set_name(t11, "t11"); assert_shape_2d(t11, N*n_batch, n_embd_gqa); - } else { - t11 = ggml_mul_mat (ctx, t04, wv); set_name(t11, "t11"); assert_shape_2d(t11, N*n_batch, n_embd_gqa); - } - - struct ggml_tensor * t12 = ggml_reshape_4d (ctx, t11, N, n_batch, n_embd_head, n_head_kv); set_name(t12, "t12"); assert_shape_4d(t12, N, n_batch, n_embd_head, n_head_kv); - struct ggml_tensor * t13 = ggml_permute (ctx, t07, 0, 2, 1, 3); set_name(t13, "t13"); assert_shape_4d(t13, n_embd_head, N, n_head, n_batch); - struct ggml_tensor * t14 = ggml_permute (ctx, t10, 0, 2, 1, 3); set_name(t14, "t14"); assert_shape_4d(t14, n_embd_head, N, n_head_kv, n_batch); - struct ggml_tensor * t15 = ggml_permute (ctx, t12, 0, 3, 1, 2); set_name(t15, "t15"); assert_shape_4d(t15, N, n_embd_head, n_head_kv, n_batch); - struct ggml_tensor * t16; - if (enable_flash_attn) { - t16 = ggml_flash_attn(ctx, t13, t14, t15, true); set_name(t16, "t16"); assert_shape_4d(t16, n_embd_head, N, n_head, n_batch); - } else { - struct ggml_tensor * t16_0 = ggml_mul_mat (ctx, t14, t13); set_name(t16_0, "t16_0"); assert_shape_4d(t16_0, N, N, n_head, n_batch); - struct ggml_tensor * t16_1 = ggml_scale_inplace (ctx, t16_0, kv_scale); set_name(t16_1, "t16_1"); assert_shape_4d(t16_1, N, N, n_head, n_batch); - struct ggml_tensor * t16_2 = ggml_diag_mask_inf_inplace(ctx, t16_1, n_past); set_name(t16_2, "t16_2"); assert_shape_4d(t16_2, N, N, n_head, n_batch); - struct ggml_tensor * t16_3 = ggml_soft_max_inplace (ctx, t16_2); set_name(t16_3, "t16_3"); assert_shape_4d(t16_3, N, N, n_head, n_batch); - t16 = ggml_mul_mat(ctx, t15, t16_3); set_name(t16, "t16"); assert_shape_4d(t16, n_embd_head, N, n_head, n_batch); - } - struct ggml_tensor * t17 = ggml_permute (ctx, t16, 0, 2, 1, 3); set_name(t17, "t17"); assert_shape_4d(t17, n_embd_head, n_head, N, n_batch); - struct ggml_tensor * t18 = ggml_cont 
(ctx, t17); set_name(t18, "t18"); assert_shape_4d(t18, n_embd_head, n_head, N, n_batch); - struct ggml_tensor * t19 = ggml_reshape_2d (ctx, t18, n_embd, N*n_batch); set_name(t19, "t19"); assert_shape_2d(t19, n_embd, N*n_batch); - struct ggml_tensor * t20 = ggml_mul_mat (ctx, wo, t19); set_name(t20, "t20"); assert_shape_2d(t20, n_embd, N*n_batch); - struct ggml_tensor * t21 = ggml_add (ctx, t20, cur); set_name(t21, "t21"); assert_shape_2d(t21, n_embd, N*n_batch); - struct ggml_tensor * t22 = ggml_rms_norm (ctx, t21, rms_norm_eps); set_name(t22, "t22"); assert_shape_2d(t22, n_embd, N*n_batch); - struct ggml_tensor * t23 = ggml_repeat (ctx, ffn_norm, t22); set_name(t23, "t23"); assert_shape_2d(t23, n_embd, N*n_batch); - struct ggml_tensor * t24 = ggml_mul (ctx, t23, t22); set_name(t24, "t24"); assert_shape_2d(t24, n_embd, N*n_batch); - struct ggml_tensor * t25 = ggml_mul_mat (ctx, w3, t24); set_name(t25, "t25"); assert_shape_2d(t25, n_ff, N*n_batch); - struct ggml_tensor * t26 = ggml_mul_mat (ctx, w1, t24); set_name(t26, "t26"); assert_shape_2d(t26, n_ff, N*n_batch); - struct ggml_tensor * t27 = ggml_silu (ctx, t26); set_name(t27, "t27"); assert_shape_2d(t27, n_ff, N*n_batch); - struct ggml_tensor * t28 = ggml_mul (ctx, t27, t25); set_name(t28, "t28"); assert_shape_2d(t28, n_ff, N*n_batch); - struct ggml_tensor * t29 = ggml_mul_mat (ctx, w2, t28); set_name(t29, "t29"); assert_shape_2d(t29, n_embd, N*n_batch); - struct ggml_tensor * t30 = ggml_add (ctx, t29, t21); set_name(t30, "t30"); assert_shape_2d(t30, n_embd, N*n_batch); - cur = t30; - if (enable_checkpointing) { - checkpoints.push_back(cur); - } - } - struct ggml_tensor * t31 = ggml_rms_norm (ctx, cur, rms_norm_eps); set_name(t31, "t31"); assert_shape_2d(t31, n_embd, N*n_batch); - struct ggml_tensor * t32 = ggml_repeat (ctx, norm, t31); set_name(t32, "t32"); assert_shape_2d(t32, n_embd, N*n_batch); - struct ggml_tensor * t33 = ggml_mul (ctx, t32, t31); set_name(t33, "t33"); assert_shape_2d(t33, n_embd, N*n_batch); - struct ggml_tensor * t34 = ggml_mul_mat (ctx, output, t33); set_name(t34, "t34"); assert_shape_2d(t34, n_vocab, N*n_batch); - struct ggml_tensor * t35 = ggml_reshape_3d (ctx, t34, n_vocab, N, n_batch); set_name(t35, "t35"); assert_shape_3d(t35, n_vocab, N, n_batch); - struct ggml_tensor * t36 = ggml_cross_entropy_loss(ctx, t35, targets); set_name(t36, "t36"); assert_shape_1d(t36, 1); - - if (enable_checkpointing) { - checkpoints.push_back(t31); - checkpoints.push_back(t32); - checkpoints.push_back(t33); - checkpoints.push_back(t34); - checkpoints.push_back(t35); - checkpoints.push_back(t36); - } - - ggml_build_forward_expand(gf, t36); - - if (enable_checkpointing) { - ggml_build_backward_gradient_checkpointing(ctx, gf, gb, gb_tmp, checkpoints.data(), (int) checkpoints.size()); - } else { - *gb = *gf; - ggml_build_backward_expand(ctx, gf, gb, true); - } - - GGML_ASSERT(alloc != NULL); - - // make sure some tensors are not reallocated by inserting new temporary nodes depending on them - int n_leafs_before = gb->n_leafs; - int n_nodes_before = gb->n_nodes; - struct ggml_tensor * one = ggml_new_f32(ctx, 1.0f); - // output tensors - ggml_build_forward_expand(gb, ggml_scale_inplace(ctx, t35, one)); - ggml_build_forward_expand(gb, ggml_scale_inplace(ctx, t36, one)); - // input gradient - ggml_build_forward_expand(gb, ggml_scale_inplace(ctx, t36->grad, one)); - GGML_ASSERT(t36->grad->data == NULL && t36->grad->view_src == NULL); - ggml_allocr_alloc(alloc, t36->grad); - // KQ_pos - ggml_build_forward_expand(gb, ggml_scale_inplace(ctx, 
KQ_pos, one)); - - // make sure base model tensors data cannot be used in viewable operations - ggml_build_forward_expand(gb, ggml_scale_inplace(ctx, model->tok_embeddings, one)); - ggml_build_forward_expand(gb, ggml_scale_inplace(ctx, model->norm, one)); - ggml_build_forward_expand(gb, ggml_scale_inplace(ctx, model->output, one)); - for (int il = 0; il < n_layer; ++il) { - struct my_llama_layer & layer = model->layers[il]; - ggml_build_forward_expand(gb, ggml_scale_inplace(ctx, layer.attention_norm, one)); - ggml_build_forward_expand(gb, ggml_scale_inplace(ctx, layer.ffn_norm, one)); - ggml_build_forward_expand(gb, ggml_scale_inplace(ctx, layer.wq, one)); - ggml_build_forward_expand(gb, ggml_scale_inplace(ctx, layer.wk, one)); - ggml_build_forward_expand(gb, ggml_scale_inplace(ctx, layer.wv, one)); - ggml_build_forward_expand(gb, ggml_scale_inplace(ctx, layer.wo, one)); - ggml_build_forward_expand(gb, ggml_scale_inplace(ctx, layer.w1, one)); - ggml_build_forward_expand(gb, ggml_scale_inplace(ctx, layer.w2, one)); - ggml_build_forward_expand(gb, ggml_scale_inplace(ctx, layer.w3, one)); - } - - // allocating checkpoints in one block to reduce memory fragmentation - // note: they will be freed in reverse order - for (unsigned int i = 0; i < checkpoints.size(); ++i) { - if (checkpoints[i]->data == NULL && checkpoints[i]->view_src == NULL) { - ggml_allocr_alloc(alloc, checkpoints[i]); - } - } - - ggml_allocr_alloc_graph(alloc, gb); - - // remove the additional nodes and leafs - for (int i = n_leafs_before; i < gb->n_leafs; ++i) { - gb->leafs[i] = NULL; - } - for (int i = n_nodes_before; i < gb->n_nodes; ++i) { - gb->nodes[i] = NULL; - } - gb->n_leafs = n_leafs_before; - gb->n_nodes = n_nodes_before; - - *logits = t35; - return t36; -} - -static void load_llama_lora_gguf(struct gguf_context * fctx, struct ggml_context * f_ggml_ctx, struct my_llama_model * model, struct my_llama_lora * lora) { - // NOTE: gguf_context must be initialized with f_ggml_ctx and no_alloc=false, otherwise tensor data can not be read - - std::string arch; - - std::vector keybuf; - keybuf.resize(512); - - GGUF_GET_KEY(fctx, arch, gguf_get_val_str, GGUF_TYPE_STRING, true, LLM_KV_GENERAL_ARCHITECTURE); - GGML_ASSERT(arch == "llama"); - - uint32_t ftype_u; - GGUF_GET_KEY(fctx, ftype_u, gguf_get_val_u32, GGUF_TYPE_UINT32, true, LLM_KV_GENERAL_FILE_TYPE); - GGML_ASSERT((enum llama_ftype) ftype_u == LLAMA_FTYPE_ALL_F32); - - struct my_llama_hparams hparams; - load_model_hparams_gguf(fctx, &hparams, arch.c_str()); - - // parameters that define tensor shapes must match - GGML_ASSERT(hparams.n_embd == model->hparams.n_embd); - GGML_ASSERT(hparams.n_ff == model->hparams.n_ff); - GGML_ASSERT(hparams.n_head == model->hparams.n_head); - GGML_ASSERT(hparams.n_head_kv == model->hparams.n_head_kv); - GGML_ASSERT(hparams.n_layer == model->hparams.n_layer); - - GGUF_GET_KEY(fctx, lora->hparams.n_rank_tok_embeddings, gguf_get_val_u32, GGUF_TYPE_UINT32, true, LLM_KV_TRAINING_LORA_RANK_TOKEN_EMBD); - GGUF_GET_KEY(fctx, lora->hparams.n_rank_norm, gguf_get_val_u32, GGUF_TYPE_UINT32, true, LLM_KV_TRAINING_LORA_RANK_OUTPUT_NORM); - GGUF_GET_KEY(fctx, lora->hparams.n_rank_output, gguf_get_val_u32, GGUF_TYPE_UINT32, true, LLM_KV_TRAINING_LORA_RANK_OUTPUT); - GGUF_GET_KEY(fctx, lora->hparams.n_rank_attention_norm, gguf_get_val_u32, GGUF_TYPE_UINT32, true, LLM_KV_TRAINING_LORA_RANK_ATTN_NORM); - GGUF_GET_KEY(fctx, lora->hparams.n_rank_wq, gguf_get_val_u32, GGUF_TYPE_UINT32, true, LLM_KV_TRAINING_LORA_RANK_ATTN_Q); - GGUF_GET_KEY(fctx, 
lora->hparams.n_rank_wk, gguf_get_val_u32, GGUF_TYPE_UINT32, true, LLM_KV_TRAINING_LORA_RANK_ATTN_K); - GGUF_GET_KEY(fctx, lora->hparams.n_rank_wv, gguf_get_val_u32, GGUF_TYPE_UINT32, true, LLM_KV_TRAINING_LORA_RANK_ATTN_V); - GGUF_GET_KEY(fctx, lora->hparams.n_rank_wo, gguf_get_val_u32, GGUF_TYPE_UINT32, true, LLM_KV_TRAINING_LORA_RANK_ATTN_OUT); - GGUF_GET_KEY(fctx, lora->hparams.n_rank_ffn_norm, gguf_get_val_u32, GGUF_TYPE_UINT32, true, LLM_KV_TRAINING_LORA_RANK_FFN_NORM); - GGUF_GET_KEY(fctx, lora->hparams.n_rank_w1, gguf_get_val_u32, GGUF_TYPE_UINT32, true, LLM_KV_TRAINING_LORA_RANK_FFN_GATE); - GGUF_GET_KEY(fctx, lora->hparams.n_rank_w2, gguf_get_val_u32, GGUF_TYPE_UINT32, true, LLM_KV_TRAINING_LORA_RANK_FFN_DOWN); - GGUF_GET_KEY(fctx, lora->hparams.n_rank_w3, gguf_get_val_u32, GGUF_TYPE_UINT32, true, LLM_KV_TRAINING_LORA_RANK_FFN_UP); - - init_lora(model, lora); - - copy_tensor_by_name(lora->tok_embeddings_a, f_ggml_ctx, ggml_get_name(lora->tok_embeddings_a)); - copy_tensor_by_name(lora->tok_embeddings_b, f_ggml_ctx, ggml_get_name(lora->tok_embeddings_b)); - copy_tensor_by_name(lora->norm_a, f_ggml_ctx, ggml_get_name(lora->norm_a)); - copy_tensor_by_name(lora->norm_b, f_ggml_ctx, ggml_get_name(lora->norm_b)); - copy_tensor_by_name(lora->output_a, f_ggml_ctx, ggml_get_name(lora->output_a)); - copy_tensor_by_name(lora->output_b, f_ggml_ctx, ggml_get_name(lora->output_b)); - - for (uint32_t i = 0; i < lora->layers.size(); ++i) { - auto & layer = lora->layers[i]; - copy_tensor_by_name(layer.attention_norm_a, f_ggml_ctx, ggml_get_name(layer.attention_norm_a)); - copy_tensor_by_name(layer.attention_norm_b, f_ggml_ctx, ggml_get_name(layer.attention_norm_b)); - copy_tensor_by_name(layer.wq_a, f_ggml_ctx, ggml_get_name(layer.wq_a)); - copy_tensor_by_name(layer.wq_b, f_ggml_ctx, ggml_get_name(layer.wq_b)); - copy_tensor_by_name(layer.wk_a, f_ggml_ctx, ggml_get_name(layer.wk_a)); - copy_tensor_by_name(layer.wk_b, f_ggml_ctx, ggml_get_name(layer.wk_b)); - copy_tensor_by_name(layer.wv_a, f_ggml_ctx, ggml_get_name(layer.wv_a)); - copy_tensor_by_name(layer.wv_b, f_ggml_ctx, ggml_get_name(layer.wv_b)); - copy_tensor_by_name(layer.wo_a, f_ggml_ctx, ggml_get_name(layer.wo_a)); - copy_tensor_by_name(layer.wo_b, f_ggml_ctx, ggml_get_name(layer.wo_b)); - copy_tensor_by_name(layer.ffn_norm_a, f_ggml_ctx, ggml_get_name(layer.ffn_norm_a)); - copy_tensor_by_name(layer.ffn_norm_b, f_ggml_ctx, ggml_get_name(layer.ffn_norm_b)); - copy_tensor_by_name(layer.w1_a, f_ggml_ctx, ggml_get_name(layer.w1_a)); - copy_tensor_by_name(layer.w1_b, f_ggml_ctx, ggml_get_name(layer.w1_b)); - copy_tensor_by_name(layer.w2_a, f_ggml_ctx, ggml_get_name(layer.w2_a)); - copy_tensor_by_name(layer.w2_b, f_ggml_ctx, ggml_get_name(layer.w2_b)); - copy_tensor_by_name(layer.w3_a, f_ggml_ctx, ggml_get_name(layer.w3_a)); - copy_tensor_by_name(layer.w3_b, f_ggml_ctx, ggml_get_name(layer.w3_b)); - } -} - -static void save_llama_lora_gguf(struct gguf_context * fctx, struct my_llama_model * model, struct my_llama_lora * lora) { - const char * arch = "llama"; - enum llama_ftype ftype = LLAMA_FTYPE_ALL_F32; - - std::vector keybuf; - keybuf.resize(512); - auto kv = [arch, &keybuf](const char * key) -> const char * { - snprintf(keybuf.data(), keybuf.size(), key, arch); - return keybuf.data(); - }; - - gguf_set_val_str(fctx, LLM_KV_GENERAL_ARCHITECTURE, arch); - gguf_set_val_u32(fctx, LLM_KV_GENERAL_FILE_TYPE, ftype); - - gguf_set_val_u32(fctx, kv(LLM_KV_CONTEXT_LENGTH), model->hparams.n_ctx); - gguf_set_val_u32(fctx, kv(LLM_KV_EMBEDDING_LENGTH), 
model->hparams.n_embd); - gguf_set_val_u32(fctx, kv(LLM_KV_FEED_FORWARD_LENGTH), model->hparams.n_ff); - gguf_set_val_u32(fctx, kv(LLM_KV_ATTENTION_HEAD_COUNT), model->hparams.n_head); - gguf_set_val_u32(fctx, kv(LLM_KV_ATTENTION_HEAD_COUNT_KV), model->hparams.n_head_kv); - gguf_set_val_u32(fctx, kv(LLM_KV_BLOCK_COUNT), model->hparams.n_layer); - gguf_set_val_u32(fctx, kv(LLM_KV_ROPE_DIMENSION_COUNT), model->hparams.n_embd_head()); - gguf_set_val_f32(fctx, kv(LLM_KV_ATTENTION_LAYERNORM_RMS_EPS), model->hparams.f_norm_rms_eps); - gguf_set_val_f32(fctx, kv(LLM_KV_ROPE_FREQ_BASE), model->hparams.rope_freq_base); - gguf_set_val_f32(fctx, kv(LLM_KV_ROPE_SCALE_LINEAR), model->hparams.rope_freq_scale); - - gguf_set_val_u32(fctx, LLM_KV_TRAINING_LORA_RANK_TOKEN_EMBD, lora->hparams.n_rank_tok_embeddings); - gguf_set_val_u32(fctx, LLM_KV_TRAINING_LORA_RANK_OUTPUT_NORM, lora->hparams.n_rank_norm); - gguf_set_val_u32(fctx, LLM_KV_TRAINING_LORA_RANK_OUTPUT, lora->hparams.n_rank_output); - gguf_set_val_u32(fctx, LLM_KV_TRAINING_LORA_RANK_ATTN_NORM, lora->hparams.n_rank_attention_norm); - gguf_set_val_u32(fctx, LLM_KV_TRAINING_LORA_RANK_ATTN_Q, lora->hparams.n_rank_wq); - gguf_set_val_u32(fctx, LLM_KV_TRAINING_LORA_RANK_ATTN_K, lora->hparams.n_rank_wk); - gguf_set_val_u32(fctx, LLM_KV_TRAINING_LORA_RANK_ATTN_V, lora->hparams.n_rank_wv); - gguf_set_val_u32(fctx, LLM_KV_TRAINING_LORA_RANK_ATTN_OUT, lora->hparams.n_rank_wo); - gguf_set_val_u32(fctx, LLM_KV_TRAINING_LORA_RANK_FFN_NORM, lora->hparams.n_rank_ffn_norm); - gguf_set_val_u32(fctx, LLM_KV_TRAINING_LORA_RANK_FFN_GATE, lora->hparams.n_rank_w1); - gguf_set_val_u32(fctx, LLM_KV_TRAINING_LORA_RANK_FFN_DOWN, lora->hparams.n_rank_w2); - gguf_set_val_u32(fctx, LLM_KV_TRAINING_LORA_RANK_FFN_UP, lora->hparams.n_rank_w3); - - gguf_add_tensor(fctx, lora->tok_embeddings_a); - gguf_add_tensor(fctx, lora->tok_embeddings_b); - gguf_add_tensor(fctx, lora->norm_a); - gguf_add_tensor(fctx, lora->norm_b); - gguf_add_tensor(fctx, lora->output_a); - gguf_add_tensor(fctx, lora->output_b); - - for (uint32_t i = 0; i < lora->layers.size(); ++i) { - auto & layer = lora->layers[i]; - - gguf_add_tensor(fctx, layer.attention_norm_a); - gguf_add_tensor(fctx, layer.attention_norm_b); - gguf_add_tensor(fctx, layer.wq_a); - gguf_add_tensor(fctx, layer.wq_b); - gguf_add_tensor(fctx, layer.wk_a); - gguf_add_tensor(fctx, layer.wk_b); - gguf_add_tensor(fctx, layer.wv_a); - gguf_add_tensor(fctx, layer.wv_b); - gguf_add_tensor(fctx, layer.wo_a); - gguf_add_tensor(fctx, layer.wo_b); - gguf_add_tensor(fctx, layer.ffn_norm_a); - gguf_add_tensor(fctx, layer.ffn_norm_b); - gguf_add_tensor(fctx, layer.w1_a); - gguf_add_tensor(fctx, layer.w1_b); - gguf_add_tensor(fctx, layer.w2_a); - gguf_add_tensor(fctx, layer.w2_b); - gguf_add_tensor(fctx, layer.w3_a); - gguf_add_tensor(fctx, layer.w3_b); - } -} - -static void load_checkpoint_lora_gguf(struct gguf_context * fctx, struct ggml_context * f_ggml_ctx, struct my_llama_model * model, struct my_llama_lora * lora, struct train_state * train) { - std::string train_type = LLM_KV_TRAINING_TYPE_FINETUNE_LORA; - GGUF_GET_KEY(fctx, train_type, gguf_get_val_str, GGUF_TYPE_STRING, false, LLM_KV_TRAINING_TYPE); - GGML_ASSERT(train_type == LLM_KV_TRAINING_TYPE_FINETUNE_LORA); - - load_train_state_gguf(fctx, f_ggml_ctx, train); - load_llama_lora_gguf(fctx, f_ggml_ctx, model, lora); -} - -static void save_checkpoint_lora_gguf(struct gguf_context * fctx, struct my_llama_model * model, struct my_llama_lora * lora, struct train_state * train) { - 
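// Note: a training checkpoint is a superset of the LoRA adapter itself. Besides the
// low-rank tensors and their hyperparameters (written by save_llama_lora_gguf below),
// it stores the training-type marker and the optimizer/train state so a run can be
// resumed later; the standalone adapter file is written separately by save_as_llama_lora.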
gguf_set_val_str(fctx, LLM_KV_TRAINING_TYPE, LLM_KV_TRAINING_TYPE_FINETUNE_LORA); - save_llama_lora_gguf(fctx, model, lora); - save_train_state_gguf(fctx, train); -} - -static bool load_checkpoint_lora_file(const char * filename, struct my_llama_model * model, struct my_llama_lora * lora, struct train_state * train) { - struct ggml_context * f_ggml_ctx; - struct gguf_init_params params; - params.no_alloc = false; - params.ctx = &f_ggml_ctx; - struct gguf_context * fctx = gguf_init_from_file(filename, params); - if (fctx == NULL) { - return false; - } - - load_checkpoint_lora_gguf(fctx, f_ggml_ctx, model, lora, train); - - gguf_free(fctx); - return true; -} - -static void save_checkpoint_lora_file(const char * filename, struct my_llama_model * model, struct my_llama_lora * lora, struct train_state * train) { - printf("%s: saving to %s\n", __func__, filename); - struct gguf_context * fctx = gguf_init_empty(); - - save_checkpoint_lora_gguf(fctx, model, lora, train); - - // write file - const bool only_meta = false; - gguf_write_to_file(fctx, filename, only_meta); - gguf_free(fctx); -} - -struct llama_file { - // use FILE * so we don't have to re-open the file to mmap - FILE * fp; - size_t size; - - llama_file(const char * fname, const char * mode) { - fp = std::fopen(fname, mode); - if (fp == NULL) { - size = 0; - } else { - seek(0, SEEK_END); - size = tell(); - seek(0, SEEK_SET); - } - } - - size_t tell() const { -#ifdef _WIN32 - __int64 ret = _ftelli64(fp); -#else - long ret = std::ftell(fp); -#endif - GGML_ASSERT(ret != -1); // this really shouldn't fail - return (size_t) ret; - } - - void seek(size_t offset, int whence) { -#ifdef _WIN32 - int ret = _fseeki64(fp, (__int64) offset, whence); -#else - int ret = std::fseek(fp, (long) offset, whence); -#endif - GGML_ASSERT(ret == 0); // same - } - - void read_raw(void * ptr, size_t size) { - if (size == 0) { - return; - } - errno = 0; - std::size_t ret = std::fread(ptr, size, 1, fp); - if (ferror(fp)) { - die_fmt("read error: %s", strerror(errno)); - } - if (ret != 1) { - die("unexpectedly reached end of file"); - } - } - - std::uint32_t read_u32() { - std::uint32_t ret; - read_raw(&ret, sizeof(ret)); - return ret; - } - - std::string read_string(std::uint32_t len) { - std::vector chars(len); - read_raw(chars.data(), len); - return std::string(chars.data(), len); - } - - void write_raw(const void * ptr, size_t size) { - if (size == 0) { - return; - } - errno = 0; - size_t ret = std::fwrite(ptr, size, 1, fp); - if (ret != 1) { - die_fmt("write error: %s", strerror(errno)); - } - } - - void write_u32(std::uint32_t val) { - write_raw(&val, sizeof(val)); - } - - ~llama_file() { - if (fp) { - std::fclose(fp); - } - } -}; - -static void write_tensor(struct llama_file * file, struct ggml_tensor * tensor, const char * name) { - if (tensor == NULL) { - file->write_u32(0); - file->write_u32(0); - file->write_u32(GGML_TYPE_F32); - file->seek((0-file->tell()) & 31, SEEK_CUR); - return; - } - if (name == NULL) { - name = ggml_get_name(tensor); - } - uint32_t name_len = strlen(name); - uint32_t nd = tensor->n_dims; - uint32_t ne[4] = { (uint32_t)tensor->ne[0], - (uint32_t)tensor->ne[1], - (uint32_t)tensor->ne[2], - (uint32_t)tensor->ne[3] }; - file->write_u32(nd); - file->write_u32(name_len); - file->write_u32(tensor->type); - file->write_raw(ne, sizeof(ne[0]) * nd); - file->write_raw(name, name_len); - file->seek((0-file->tell()) & 31, SEEK_CUR); - file->write_raw(tensor->data, ggml_nbytes(tensor)); -} - -static void save_as_llama_lora(const char * 
filename, struct my_llama_lora * lora) { - printf("%s: saving to %s\n", __func__, filename); - struct llama_file file(filename, "wb"); - if (file.fp == NULL) { - return; - } - - std::vector tn_buf; - tn_buf.resize(GGML_MAX_NAME); - - auto tn = [&tn_buf](const char * key, const char * suffix) -> const char * { - snprintf(tn_buf.data(), tn_buf.size(), "%s%s", key, suffix); - return tn_buf.data(); - }; - - auto tni = [&tn_buf](const char * key, int bid, const char * suffix) -> const char * { - snprintf(tn_buf.data(), tn_buf.size(), key, bid); - std::string s = tn_buf.data(); - snprintf(tn_buf.data(), tn_buf.size(), "%s%s", s.c_str(), suffix); - return tn_buf.data(); - }; - - uint32_t LLAMA_FILE_MAGIC_LORA = 0x67676C61; // 'ggla' - // write_magic - file.write_u32(LLAMA_FILE_MAGIC_LORA); // magic - file.write_u32(1); // version - // write_hparams - file.write_u32(lora->hparams.lora_r); - file.write_u32(lora->hparams.lora_alpha); - // write tensors - write_tensor(&file, lora->tok_embeddings_a, tn(LLM_TENSOR_TOKEN_EMBD, ".weight.loraA")); - write_tensor(&file, lora->tok_embeddings_b, tn(LLM_TENSOR_TOKEN_EMBD, ".weight.loraB")); - write_tensor(&file, lora->norm_a, tn(LLM_TENSOR_OUTPUT_NORM, ".weight.loraA")); - write_tensor(&file, lora->norm_b, tn(LLM_TENSOR_OUTPUT_NORM, ".weight.loraB")); - write_tensor(&file, lora->output_a, tn(LLM_TENSOR_OUTPUT, ".weight.loraA")); - write_tensor(&file, lora->output_b, tn(LLM_TENSOR_OUTPUT, ".weight.loraB")); - for (uint32_t i = 0; i < lora->layers.size(); ++i) { - auto & layer = lora->layers[i]; - write_tensor(&file, layer.attention_norm_a, tni(LLM_TENSOR_ATTN_NORM, i, ".weight.loraA")); - write_tensor(&file, layer.attention_norm_b, tni(LLM_TENSOR_ATTN_NORM, i, ".weight.loraB")); - write_tensor(&file, layer.wq_a, tni(LLM_TENSOR_ATTN_Q, i, ".weight.loraA")); - write_tensor(&file, layer.wq_b, tni(LLM_TENSOR_ATTN_Q, i, ".weight.loraB")); - write_tensor(&file, layer.wk_a, tni(LLM_TENSOR_ATTN_K, i, ".weight.loraA")); - write_tensor(&file, layer.wk_b, tni(LLM_TENSOR_ATTN_K, i, ".weight.loraB")); - write_tensor(&file, layer.wv_a, tni(LLM_TENSOR_ATTN_V, i, ".weight.loraA")); - write_tensor(&file, layer.wv_b, tni(LLM_TENSOR_ATTN_V, i, ".weight.loraB")); - write_tensor(&file, layer.wo_a, tni(LLM_TENSOR_ATTN_OUT, i, ".weight.loraA")); - write_tensor(&file, layer.wo_b, tni(LLM_TENSOR_ATTN_OUT, i, ".weight.loraB")); - write_tensor(&file, layer.ffn_norm_a, tni(LLM_TENSOR_FFN_NORM, i, ".weight.loraA")); - write_tensor(&file, layer.ffn_norm_b, tni(LLM_TENSOR_FFN_NORM, i, ".weight.loraB")); - write_tensor(&file, layer.w1_a, tni(LLM_TENSOR_FFN_GATE, i, ".weight.loraA")); - write_tensor(&file, layer.w1_b, tni(LLM_TENSOR_FFN_GATE, i, ".weight.loraB")); - write_tensor(&file, layer.w2_a, tni(LLM_TENSOR_FFN_DOWN, i, ".weight.loraA")); - write_tensor(&file, layer.w2_b, tni(LLM_TENSOR_FFN_DOWN, i, ".weight.loraB")); - write_tensor(&file, layer.w3_a, tni(LLM_TENSOR_FFN_UP, i, ".weight.loraA")); - write_tensor(&file, layer.w3_b, tni(LLM_TENSOR_FFN_UP, i, ".weight.loraB")); - } -} - -struct train_params { - struct train_params_common common; - - const char * fn_model_base; - const char * fn_lora_out; - - bool only_write_lora; - - float f_norm_rms_eps; - float rope_freq_base; - float rope_freq_scale; - - bool custom_f_norm_rms_eps; - bool custom_rope_freq_base; - bool custom_rope_freq_scale; - - int32_t lora_r; - int32_t lora_alpha; - bool custom_lora_alpha; - - uint32_t n_rank_attention_norm; - uint32_t n_rank_wq; - uint32_t n_rank_wk; - uint32_t n_rank_wv; - uint32_t n_rank_wo; - 
uint32_t n_rank_ffn_norm; - uint32_t n_rank_w1; - uint32_t n_rank_w2; - uint32_t n_rank_w3; - uint32_t n_rank_tok_embeddings; - uint32_t n_rank_norm; - uint32_t n_rank_output; - - bool custom_n_rank_attention_norm; - bool custom_n_rank_wq; - bool custom_n_rank_wk; - bool custom_n_rank_wv; - bool custom_n_rank_wo; - bool custom_n_rank_ffn_norm; - bool custom_n_rank_w1; - bool custom_n_rank_w2; - bool custom_n_rank_w3; - bool custom_n_rank_tok_embeddings; - bool custom_n_rank_norm; - bool custom_n_rank_output; -}; - -static struct train_params get_default_train_params() { - struct train_params params; - params.common = get_default_train_params_common(); - params.fn_model_base = ""; - params.fn_lora_out = "ggml-lora-ITERATION-f32.gguf"; - - params.only_write_lora = false; - - params.f_norm_rms_eps = 1e-5f; - params.rope_freq_base = 10000.0f; - params.rope_freq_scale = 1.0f; - - params.custom_f_norm_rms_eps = false; - params.custom_rope_freq_base = false; - params.custom_rope_freq_scale = false; - - params.lora_r = 4; - params.lora_alpha = 4; - params.custom_lora_alpha = false; - - params.n_rank_attention_norm = 1; - params.n_rank_wq = 4; - params.n_rank_wk = 4; - params.n_rank_wv = 4; - params.n_rank_wo = 4; - params.n_rank_ffn_norm = 1; - params.n_rank_w1 = 4; - params.n_rank_w2 = 4; - params.n_rank_w3 = 4; - params.n_rank_tok_embeddings = 4; - params.n_rank_norm = 1; - params.n_rank_output = 4; - - params.custom_n_rank_attention_norm = false; - params.custom_n_rank_wq = false; - params.custom_n_rank_wk = false; - params.custom_n_rank_wv = false; - params.custom_n_rank_wo = false; - params.custom_n_rank_ffn_norm = false; - params.custom_n_rank_w1 = false; - params.custom_n_rank_w2 = false; - params.custom_n_rank_w3 = false; - params.custom_n_rank_tok_embeddings = false; - params.custom_n_rank_norm = false; - params.custom_n_rank_output = false; - - return params; -} - -static void train_print_usage(int argc, char ** argv, const struct train_params * params) { - fprintf(stderr, "usage: %s [options]\n", argv[0]); - fprintf(stderr, "\n"); - fprintf(stderr, "options:\n"); - fprintf(stderr, " -h, --help show this help message and exit\n"); - - fprintf(stderr, " --model-base FNAME model path from which to load base model (default '%s')\n", params->fn_model_base); - fprintf(stderr, " --lora-out FNAME path to save llama lora (default '%s')\n", params->fn_lora_out); - fprintf(stderr, " --only-write-lora only save llama lora, don't do any training. use this if you only want to convert a checkpoint to a lora adapter.\n"); - fprintf(stderr, " --norm-rms-eps F RMS-Norm epsilon value (default %f)\n", params->f_norm_rms_eps); - fprintf(stderr, " --rope-freq-base F Frequency base for ROPE (default %f)\n", params->rope_freq_base); - fprintf(stderr, " --rope-freq-scale F Frequency scale for ROPE (default %f)\n", params->rope_freq_scale); - fprintf(stderr, " --lora-alpha N LORA alpha : resulting LORA scaling is alpha/r. (default %d)\n", params->lora_alpha); - fprintf(stderr, " --lora-r N LORA r: default rank. Also specifies resulting scaling together with lora-alpha. (default %d)\n", params->lora_r); - fprintf(stderr, " --rank-att-norm N LORA rank for attention norm tensor, overrides default rank. Norm tensors should generally have rank 1.\n"); - fprintf(stderr, " --rank-ffn-norm N LORA rank for feed-forward norm tensor, overrides default rank. Norm tensors should generally have rank 1.\n"); - fprintf(stderr, " --rank-out-norm N LORA rank for output norm tensor, overrides default rank. 
Norm tensors should generally have rank 1.\n"); - fprintf(stderr, " --rank-tok-embd N LORA rank for token embeddings tensor, overrides default rank.\n"); - fprintf(stderr, " --rank-out N LORA rank for output tensor, overrides default rank.\n"); - fprintf(stderr, " --rank-wq N LORA rank for wq tensor, overrides default rank.\n"); - fprintf(stderr, " --rank-wk N LORA rank for wk tensor, overrides default rank.\n"); - fprintf(stderr, " --rank-wv N LORA rank for wv tensor, overrides default rank.\n"); - fprintf(stderr, " --rank-wo N LORA rank for wo tensor, overrides default rank.\n"); - fprintf(stderr, " --rank-w1 N LORA rank for w1 tensor, overrides default rank.\n"); - fprintf(stderr, " --rank-w2 N LORA rank for w2 tensor, overrides default rank.\n"); - fprintf(stderr, " --rank-w3 N LORA rank for w3 tensor, overrides default rank.\n"); - - print_common_train_usage(argc, argv, ¶ms->common); -} - -static bool train_params_parse(int argc, char ** argv, struct train_params * params) { - bool invalid_param = false; - std::string arg; - struct train_params default_params = get_default_train_params(); - const std::string arg_prefix = "--"; - - for (int i = 1; i < argc; i++) { - arg = argv[i]; - if (arg.compare(0, arg_prefix.size(), arg_prefix) == 0) { - std::replace(arg.begin(), arg.end(), '_', '-'); - } - - if (consume_common_train_arg(argc, argv, &i, ¶ms->common, &invalid_param)) { - if (invalid_param) { - break; - } else if (params->common.print_usage) { - train_print_usage(argc, argv, &default_params); - exit(0); - } - } else if (arg == "--model-base") { - if (++i >= argc) { - invalid_param = true; - break; - } - params->fn_model_base = argv[i]; - } else if (arg == "--lora-out") { - if (++i >= argc) { - invalid_param = true; - break; - } - params->fn_lora_out = argv[i]; - } else if (arg == "--only-write-lora") { - params->only_write_lora = true; - } else if (arg == "--norm-rms-eps") { - if (++i >= argc) { - invalid_param = true; - break; - } - params->f_norm_rms_eps = std::stof(argv[i]); - params->custom_f_norm_rms_eps = true; - } else if (arg == "--rope-freq-base") { - if (++i >= argc) { - invalid_param = true; - break; - } - params->rope_freq_base = std::stof(argv[i]); - params->custom_rope_freq_base = true; - } else if (arg == "--rope-freq-scale") { - if (++i >= argc) { - invalid_param = true; - break; - } - params->rope_freq_scale = std::stof(argv[i]); - params->custom_rope_freq_scale = true; - } else if (arg == "--lora-alpha") { - if (++i >= argc) { - invalid_param = true; - break; - } - params->lora_alpha = std::stoi(argv[i]); - params->custom_lora_alpha = true; - } else if (arg == "--lora-r") { - if (++i >= argc) { - invalid_param = true; - break; - } - params->lora_r = std::stoi(argv[i]); - } else if (arg == "--rank-att-norm") { - if (++i >= argc) { - invalid_param = true; - break; - } - params->n_rank_attention_norm = std::stoi(argv[i]); - params->custom_n_rank_attention_norm = true; - } else if (arg == "--rank-ffn-norm") { - if (++i >= argc) { - invalid_param = true; - break; - } - params->n_rank_ffn_norm = std::stoi(argv[i]); - params->custom_n_rank_ffn_norm = true; - } else if (arg == "--rank-out-norm") { - if (++i >= argc) { - invalid_param = true; - break; - } - params->n_rank_norm = std::stoi(argv[i]); - params->custom_n_rank_norm = true; - } else if (arg == "--rank-tok-embd") { - if (++i >= argc) { - invalid_param = true; - break; - } - params->n_rank_tok_embeddings = std::stoi(argv[i]); - params->custom_n_rank_tok_embeddings = true; - } else if (arg == "--rank-out") { - if 
(++i >= argc) { - invalid_param = true; - break; - } - params->n_rank_output = std::stoi(argv[i]); - params->custom_n_rank_output = true; - } else if (arg == "--rank-wq") { - if (++i >= argc) { - invalid_param = true; - break; - } - params->n_rank_wq = std::stoi(argv[i]); - params->custom_n_rank_wq = true; - } else if (arg == "--rank-wk") { - if (++i >= argc) { - invalid_param = true; - break; - } - params->n_rank_wk = std::stoi(argv[i]); - params->custom_n_rank_wk = true; - } else if (arg == "--rank-wv") { - if (++i >= argc) { - invalid_param = true; - break; - } - params->n_rank_wv = std::stoi(argv[i]); - params->custom_n_rank_wv = true; - } else if (arg == "--rank-wo") { - if (++i >= argc) { - invalid_param = true; - break; - } - params->n_rank_wo = std::stoi(argv[i]); - params->custom_n_rank_wo = true; - } else if (arg == "--rank-w1") { - if (++i >= argc) { - invalid_param = true; - break; - } - params->n_rank_w1 = std::stoi(argv[i]); - params->custom_n_rank_w1 = true; - } else if (arg == "--rank-w2") { - if (++i >= argc) { - invalid_param = true; - break; - } - params->n_rank_w2 = std::stoi(argv[i]); - params->custom_n_rank_w2 = true; - } else if (arg == "--rank-w3") { - if (++i >= argc) { - invalid_param = true; - break; - } - params->n_rank_w3 = std::stoi(argv[i]); - params->custom_n_rank_w3 = true; - } else { - fprintf(stderr, "error: unknown argument: %s\n", arg.c_str()); - train_print_usage(argc, argv, &default_params); - exit(1); - } - } - if (invalid_param) { - fprintf(stderr, "error: invalid parameter for argument: %s\n", arg.c_str()); - train_print_usage(argc, argv, &default_params); - exit(1); - } - finish_processing_train_args(¶ms->common); - return true; -} - -struct save_train_files_data { - const char * fn_checkpoint_out; - const char * fn_lora_out; - const char * pattern_fn_it; - const char * fn_latest; - struct my_llama_model * model; - struct my_llama_lora * lora; -}; - -static void save_train_files(void * vdata, struct train_state * train) { - struct save_train_files_data * data = (struct save_train_files_data *) vdata; - - int64_t iter = train->opt->iter; - - if (strlen(data->fn_checkpoint_out) > 0) { - save_checkpoint_lora_file(get_train_filename(data->fn_checkpoint_out, data->pattern_fn_it, data->fn_latest, iter).c_str(), data->model, data->lora, train); - save_checkpoint_lora_file(get_train_filename(data->fn_checkpoint_out, data->pattern_fn_it, data->fn_latest, -1 ).c_str(), data->model, data->lora, train); - } - if (strlen(data->fn_lora_out) > 0) { - save_as_llama_lora(get_train_filename(data->fn_lora_out, data->pattern_fn_it, data->fn_latest, iter).c_str(), data->lora); - save_as_llama_lora(get_train_filename(data->fn_lora_out, data->pattern_fn_it, data->fn_latest, -1 ).c_str(), data->lora); - } -} - -static int64_t get_parameter_count(struct my_llama_lora* lora) { - int64_t nx = 0; - nx += ggml_nelements(lora->tok_embeddings_a); - nx += ggml_nelements(lora->tok_embeddings_b); - nx += ggml_nelements(lora->norm_a); - nx += ggml_nelements(lora->norm_b); - nx += ggml_nelements(lora->output_a); - nx += ggml_nelements(lora->output_b); - - for (uint32_t i = 0; i < lora->layers.size(); ++i) { - auto & layer = lora->layers[i]; - nx += ggml_nelements(layer.attention_norm_a); - nx += ggml_nelements(layer.attention_norm_b); - nx += ggml_nelements(layer.wq_a); - nx += ggml_nelements(layer.wq_b); - nx += ggml_nelements(layer.wk_a); - nx += ggml_nelements(layer.wk_b); - nx += ggml_nelements(layer.wv_a); - nx += ggml_nelements(layer.wv_b); - nx += ggml_nelements(layer.wo_a); 
- nx += ggml_nelements(layer.wo_b); - nx += ggml_nelements(layer.ffn_norm_a); - nx += ggml_nelements(layer.ffn_norm_b); - nx += ggml_nelements(layer.w1_a); - nx += ggml_nelements(layer.w1_b); - nx += ggml_nelements(layer.w2_a); - nx += ggml_nelements(layer.w2_b); - nx += ggml_nelements(layer.w3_a); - nx += ggml_nelements(layer.w3_b); - } - return nx; -} - -int main(int argc, char ** argv) { - struct train_params params = get_default_train_params(); - - if (!train_params_parse(argc, argv, ¶ms)) { - return 1; - } - - if (params.common.seed == LLAMA_DEFAULT_SEED) { - params.common.seed = time(NULL); - } - printf("%s: seed: %u\n", __func__, params.common.seed); - srand(params.common.seed); - - struct llama_model_params llama_mparams = llama_model_default_params(); - llama_mparams.vocab_only = false; - - printf("%s: model base = '%s'\n", __func__, params.fn_model_base); - struct llama_model * lmodel = llama_load_model_from_file(params.fn_model_base, llama_mparams); - - struct llama_context_params llama_cparams = llama_context_default_params(); - struct llama_context * lctx = llama_new_context_with_model(lmodel, llama_cparams); - - struct my_llama_model model; - init_model(lmodel, &model, params.fn_model_base, params.common.n_ctx); - - struct my_llama_lora lora; - - struct train_state * train = init_train_state(); - struct ggml_opt_context * opt = train->opt; - - // set params from command line - if (params.custom_f_norm_rms_eps) { - model.hparams.f_norm_rms_eps = params.f_norm_rms_eps; - } - if (params.custom_rope_freq_base) { - model.hparams.rope_freq_base = params.rope_freq_base; - } - if (params.custom_rope_freq_scale) { - model.hparams.rope_freq_scale = params.rope_freq_scale; - } - lora.hparams.lora_r = params.lora_r; - lora.hparams.lora_alpha = params.custom_lora_alpha ? params.lora_alpha : params.lora_r; - uint32_t n_rank_attention_norm = params.custom_n_rank_attention_norm ? params.n_rank_attention_norm : 1; - uint32_t n_rank_wq = params.custom_n_rank_wq ? params.n_rank_wq : params.lora_r; - uint32_t n_rank_wk = params.custom_n_rank_wk ? params.n_rank_wk : params.lora_r; - uint32_t n_rank_wv = params.custom_n_rank_wv ? params.n_rank_wv : params.lora_r; - uint32_t n_rank_wo = params.custom_n_rank_wo ? params.n_rank_wo : params.lora_r; - uint32_t n_rank_ffn_norm = params.custom_n_rank_ffn_norm ? params.n_rank_ffn_norm : 1; - uint32_t n_rank_w1 = params.custom_n_rank_w1 ? params.n_rank_w1 : params.lora_r; - uint32_t n_rank_w2 = params.custom_n_rank_w2 ? params.n_rank_w2 : params.lora_r; - uint32_t n_rank_w3 = params.custom_n_rank_w3 ? params.n_rank_w3 : params.lora_r; - uint32_t n_rank_tok_embeddings = params.custom_n_rank_tok_embeddings ? params.n_rank_tok_embeddings : params.lora_r; - uint32_t n_rank_norm = params.custom_n_rank_norm ? params.n_rank_norm : 1; - uint32_t n_rank_output = params.custom_n_rank_output ? 
params.n_rank_output : params.lora_r; - lora.hparams.n_rank_attention_norm = n_rank_attention_norm; - lora.hparams.n_rank_wq = n_rank_wq; - lora.hparams.n_rank_wk = n_rank_wk; - lora.hparams.n_rank_wv = n_rank_wv; - lora.hparams.n_rank_wo = n_rank_wo; - lora.hparams.n_rank_ffn_norm = n_rank_ffn_norm; - lora.hparams.n_rank_w1 = n_rank_w1; - lora.hparams.n_rank_w2 = n_rank_w2; - lora.hparams.n_rank_w3 = n_rank_w3; - lora.hparams.n_rank_tok_embeddings = n_rank_tok_embeddings; - lora.hparams.n_rank_norm = n_rank_norm; - lora.hparams.n_rank_output = n_rank_output; - - // set opt params from command line - opt->params = ggml_opt_default_params(GGML_OPT_ADAM); - opt->params.print_forward_graph = false; - opt->params.print_backward_graph = false; - opt->params.n_threads = params.common.n_threads; - opt->params.past = params.common.opt_past; - opt->params.delta = params.common.opt_delta; - opt->params.max_no_improvement = params.common.opt_max_no_improvement; - opt->params.n_gradient_accumulation = params.common.n_gradient_accumulation; - opt->params.adam.n_iter = params.common.adam_n_iter; - opt->params.adam.sched = 1.0f; - opt->params.adam.alpha = params.common.adam_alpha; - opt->params.adam.decay = params.common.adam_decay; - opt->params.adam.decay_min_ndim = params.common.adam_decay_min_ndim; - opt->params.adam.beta1 = params.common.adam_beta1; - opt->params.adam.beta2 = params.common.adam_beta2; - opt->params.adam.gclip = params.common.adam_gclip; - opt->params.adam.eps_f = params.common.adam_eps_f; - - ggml_allocr * alloc = NULL; - - printf("%s: init model\n", __func__); - bool existed = load_checkpoint_lora_file(params.common.fn_checkpoint_in, &model, &lora, train); - - if (existed) { - // overwrite last n_ctx with user provided n_ctx - if (params.common.custom_n_ctx) { - model.hparams.n_ctx = params.common.n_ctx; - } - - const bool opt_param_count_changed = ( - (lora.hparams.n_rank_attention_norm != n_rank_attention_norm) - || (lora.hparams.n_rank_wq != n_rank_wq) - || (lora.hparams.n_rank_wk != n_rank_wk) - || (lora.hparams.n_rank_wv != n_rank_wv) - || (lora.hparams.n_rank_wo != n_rank_wo) - || (lora.hparams.n_rank_ffn_norm != n_rank_ffn_norm) - || (lora.hparams.n_rank_w1 != n_rank_w1) - || (lora.hparams.n_rank_w2 != n_rank_w2) - || (lora.hparams.n_rank_w3 != n_rank_w3) - || (lora.hparams.n_rank_tok_embeddings != n_rank_tok_embeddings) - || (lora.hparams.n_rank_norm != n_rank_norm) - || (lora.hparams.n_rank_output != n_rank_output) - ); - - const bool opt_past_changed = opt->params.past != params.common.opt_past; - - if (opt_param_count_changed) { - print_lora_params(&lora.hparams); - die("Provided rank differs from checkpoint file. To use different rank start finetune from scratch with empty input checkpoint, e.g --checkpoint-in ''. Aborting."); - // need to discard previous optimizer gradient statistics and opt_init with new shapes - // TODO - } - if (opt_past_changed) { - die("Optimizer parameter '--opt-past N' differs from checkpoint file. To use different value finetune from scratch with empty input checkpoint, e.g --checkpoint-in ''. 
Aborting"); - // need to discard previous optimizer past function value statistics and opt_init with new shapes - // TODO - } - } else { // existed == false - init_lora(&model, &lora); - randomize_lora(&lora, params.common.seed, 0.0f, 1.0f, -1.0f, +1.0f); - if (!params.only_write_lora) { - ggml_opt_init(opt->ctx, opt, opt->params, get_parameter_count(&lora)); - } - } - opt->iter = train->train_its; - - print_params(&model.hparams); - print_lora_params(&lora.hparams); - printf("%s: total train_iterations %llu\n", __func__, (long long unsigned) train->train_its); - printf("%s: seen train_samples %llu\n", __func__, (long long unsigned) train->train_samples); - printf("%s: seen train_tokens %llu\n", __func__, (long long unsigned) train->train_tokens); - printf("%s: completed train_epochs %llu\n", __func__, (long long unsigned) train->train_epochs); - printf("%s: lora_size = %zu bytes (%.1f MB)\n", __func__, (ggml_used_mem(lora.ctx) + lora.data.size()), (float) (ggml_used_mem(lora.ctx) + lora.data.size()) / (1024.0f*1024.0f)); - - if (params.only_write_lora) { - save_train_files_data save_data; - save_data.fn_checkpoint_out = ""; - save_data.fn_lora_out = params.fn_lora_out; - save_data.pattern_fn_it = params.common.pattern_fn_it; - save_data.fn_latest = params.common.fn_latest; - save_data.model = &model; - save_data.lora = &lora; - - save_train_files(&save_data, train); - - free_train_state(train); - ggml_free(lora.ctx); - llama_free(lctx); - llama_free_model(lmodel); - return 0; - } - - printf("%s: opt_size = %zu bytes (%.1f MB)\n", __func__, ggml_get_mem_size(opt->ctx), (float) ggml_get_mem_size(opt->ctx) / (1024.0f*1024.0f)); - printf("%s: opt iter %d\n", __func__, opt->iter); - - int n_tokens = model.hparams.n_ctx; - int n_vocab = model.hparams.n_vocab; - int n_batch = params.common.n_batch; - - - std::vector mem_input_data; - std::vector mem_compute_data; - - // context for input tensors without their data - struct ggml_init_params ctx_input_params = { - ggml_tensor_overhead() * 2, // mem_size - NULL, // mem_buffer - true, // no_alloc - }; - struct ggml_context * ctx_input = ggml_init(ctx_input_params); - - // the input tensors - struct ggml_tensor * tokens_input = ggml_new_tensor_2d(ctx_input, GGML_TYPE_I32, n_tokens, n_batch); - struct ggml_tensor * target_probs = ggml_new_tensor_3d(ctx_input, GGML_TYPE_F32, n_vocab, n_tokens, n_batch); - - // measure required memory for input tensors - alloc = ggml_allocr_new_measure(tensor_alignment); - ggml_allocr_alloc(alloc, tokens_input); - ggml_allocr_alloc(alloc, target_probs); - size_t max_input_size = ggml_allocr_max_size(alloc) + tensor_alignment; - ggml_allocr_free(alloc); - printf("%s: input_size = %zu bytes (%.1f MB)\n", __func__, max_input_size, (float) max_input_size / (1024.0f*1024.0f)); - - // allocate input tensors - mem_input_data.resize(max_input_size); - alloc = ggml_allocr_new(mem_input_data.data(), mem_input_data.size(), tensor_alignment); - ggml_allocr_alloc(alloc, tokens_input); - ggml_allocr_alloc(alloc, target_probs); - ggml_allocr_free(alloc); - - // context for compute tensors without their data - size_t estimated_compute_size_wo_data = ( - ggml_tensor_overhead()*GGML_MAX_NODES*2 - + (GGML_OBJECT_SIZE+GGML_GRAPH_SIZE)*( - params.common.use_checkpointing ? 
3 : 2 - ) - ); - struct ggml_init_params ctx_compute_params = { - estimated_compute_size_wo_data, // mem_size - NULL, // mem_buffer - true, // no_alloc - }; - struct ggml_context * ctx_compute = NULL; - - struct ggml_tensor * loss = NULL; - struct ggml_tensor * logits = NULL; - - struct ggml_cgraph * gf = NULL; - struct ggml_cgraph * gb = NULL; - struct ggml_cgraph * gb_tmp = NULL; - - // measure required memory for compute tensors - size_t best_compute_size = SIZE_MAX; - enum ggml_cgraph_eval_order best_order = GGML_CGRAPH_EVAL_ORDER_COUNT; - // find best evaluation order - for (unsigned order = 0; order < (unsigned) GGML_CGRAPH_EVAL_ORDER_COUNT; ++order) { - ctx_compute = ggml_init(ctx_compute_params); - alloc = ggml_allocr_new_measure(tensor_alignment); - gf = ggml_new_graph(ctx_compute); - gf->order = (enum ggml_cgraph_eval_order) order; - gb = ggml_new_graph(ctx_compute); - gb_tmp = params.common.use_checkpointing - ? ggml_new_graph(ctx_compute) - : NULL; - loss = llama_build_lora_finetune_graphs( - &model, &lora, alloc, ctx_compute, - gf, gb, gb_tmp, - &logits, tokens_input, target_probs, - n_tokens, n_batch, - params.common.use_flash, - params.common.use_checkpointing - ); - size_t max_compute_size = ggml_allocr_max_size(alloc) + tensor_alignment; - if (max_compute_size < best_compute_size) { - best_compute_size = max_compute_size; - best_order = gf->order; - } - ggml_allocr_free(alloc); - ggml_free(ctx_compute); - } - size_t max_compute_size = best_compute_size; - printf("%s: compute_size = %zu bytes (%.1f MB)\n", __func__, max_compute_size, (float) max_compute_size / (1024.0f*1024.0f)); - printf("%s: evaluation order = %s\n", __func__, - (best_order == GGML_CGRAPH_EVAL_ORDER_LEFT_TO_RIGHT) ? "LEFT_TO_RIGHT" : - (best_order == GGML_CGRAPH_EVAL_ORDER_RIGHT_TO_LEFT) ? "RIGHT_TO_LEFT" : - "invalid"); - - // allocate compute tensors - mem_compute_data.resize(max_compute_size); - ctx_compute = ggml_init(ctx_compute_params); - alloc = ggml_allocr_new(mem_compute_data.data(), mem_compute_data.size(), tensor_alignment); - gf = ggml_new_graph(ctx_compute); - gf->order = best_order; - gb = ggml_new_graph(ctx_compute); - gb_tmp = params.common.use_checkpointing - ? 
ggml_new_graph(ctx_compute) - : NULL; - loss = llama_build_lora_finetune_graphs( - &model, &lora, alloc, ctx_compute, - gf, gb, gb_tmp, - &logits, tokens_input, target_probs, - n_tokens, n_batch, - params.common.use_flash, - params.common.use_checkpointing - ); - ggml_allocr_free(alloc); - - // tokenize data - std::vector<llama_token> train_tokens; - std::vector<size_t> train_samples_begin; - std::vector<size_t> train_samples_size; - printf("%s: tokenize training data\n", __func__); - tokenize_file(lctx, - params.common.fn_train_data, - params.common.sample_start, - params.common.include_sample_start, - params.common.overlapping_samples, - n_tokens, - train_tokens, - train_samples_begin, - train_samples_size); - GGML_ASSERT(train_samples_begin.size() == train_samples_size.size()); - - printf("%s: number of training tokens: %zu\n", __func__, train_tokens.size()); - - std::vector<int64_t> token_noccurs; - token_noccurs.resize(model.hparams.n_vocab, 0); - for (unsigned int i = 0; i < train_tokens.size(); ++i) { - ++token_noccurs[train_tokens[i]]; - } - int n_unique_tokens = 0; - for (unsigned int i = 0; i < token_noccurs.size(); ++i) { - if (token_noccurs[i] == 0) continue; - ++n_unique_tokens; - } - printf("%s: number of unique tokens: %d\n", __func__, n_unique_tokens); - - size_t shuffle_samples_hash = compute_samples_hash(params.common.fn_train_data, train_samples_begin.data(), train_samples_size.data(), train_samples_size.size()); - const bool changed_train_data = (shuffle_samples_hash != train->shuffle_samples_hash) || (train->shuffle_sample_count != train_samples_size.size()); - if (changed_train_data) { - printf("%s: train data seems to have changed. restarting shuffled epoch.\n", __func__); - } - if (params.common.force_reshuffle) { - printf("%s: forced reshuffling of data. restarting with newly shuffled epoch.\n", __func__); - } - if ((train->shuffle_rng_state_current == "") || changed_train_data || params.common.force_reshuffle) { - train->shuffle_rng_state_current = mt19937_seed_to_state(params.common.seed); - train->shuffle_sample_count = train_samples_size.size(); - train->shuffle_next_sample = 0; - train->shuffle_samples_hash = shuffle_samples_hash; - } - std::vector<size_t> train_shuffled_samples_offs; - std::vector<size_t> train_shuffled_samples_begin; - std::vector<size_t> train_shuffled_samples_size; - train_shuffled_samples_offs.resize(train_samples_begin.size()); - train_shuffled_samples_begin.resize(train_samples_begin.size()); - train_shuffled_samples_size.resize(train_samples_size.size()); - train->shuffle_rng_state_next = shuffle_samples( - train->shuffle_rng_state_current, - train_shuffled_samples_offs.data(), - train_shuffled_samples_begin.data(), - train_shuffled_samples_size.data(), - train_samples_begin.data(), - train_samples_size.data(), - train_samples_size.size()); - - printf("%s: begin training\n", __func__); - - save_train_files_data save_data; - save_data.fn_checkpoint_out = params.common.fn_checkpoint_out; - save_data.fn_lora_out = params.fn_lora_out; - save_data.pattern_fn_it = params.common.pattern_fn_it; - save_data.fn_latest = params.common.fn_latest; - save_data.model = &model; - save_data.lora = &lora; - - struct train_opt_callback_data opt_cb_data; - opt_cb_data.params = &params.common; - opt_cb_data.train = train; - opt_cb_data.save_cb = &save_train_files; - opt_cb_data.save_data = &save_data; - opt_cb_data.lctx = lctx; - opt_cb_data.last_save_iter = opt->iter; - opt_cb_data.tokens_data = train_tokens.data(); - opt_cb_data.tokens_size = train_tokens.size(); - opt_cb_data.samples_begin = 
train_samples_begin.data(); - opt_cb_data.samples_size = train_samples_size.data(); - opt_cb_data.shuffled_samples_offs = train_shuffled_samples_offs.data(); - opt_cb_data.shuffled_samples_begin = train_shuffled_samples_begin.data(); - opt_cb_data.shuffled_samples_size = train_shuffled_samples_size.data(); - opt_cb_data.samples_count = train_samples_size.size(); - opt_cb_data.tokens_input = tokens_input; - opt_cb_data.target_probs = target_probs; - opt_cb_data.first_iter = opt->iter; - opt_cb_data.first_epoch = train->train_epochs; - opt_cb_data.iter_at_last_epoch = -1; - opt_cb_data.last_time = ggml_time_ms(); - opt_cb_data.millis_per_iter = 0.0; - - // measure required memory for work buffer - size_t max_work_size = ggml_graph_plan(gb, params.common.n_threads).work_size + GGML_OBJECT_SIZE; - printf("%s: work_size = %zu bytes (%.1f MB)\n", __func__, max_work_size, (float) max_work_size / (1024.0f*1024.0f)); - - // context for work buffer - struct ggml_init_params ctx_work_params = { - max_work_size, // mem_size - NULL, // mem_buffer - false, // no_alloc - }; - struct ggml_context * ctx_work = ggml_init(ctx_work_params); - - int64_t t0 = ggml_time_ms(); - - ggml_opt_resume_g(ctx_work, opt, loss, gf, gb, &train_opt_callback, (void *) &opt_cb_data); - - ggml_free(ctx_work); - ggml_free(ctx_compute); - ggml_free(ctx_input); - - int64_t t1 = ggml_time_ms(); - printf("%s: total training time: ", __func__); - print_duration((double) (t1 - t0)); - printf("\n"); - - int new_iters = opt->iter - opt_cb_data.last_save_iter; - if (new_iters > 0) { - train->train_its += new_iters; - train->train_tokens += new_iters * opt->params.n_gradient_accumulation * n_batch * n_tokens; - - save_train_files(&save_data, train); - opt_cb_data.last_save_iter = opt->iter; - } - - ggml_free(opt->ctx); - free_train_state(train); - ggml_free(lora.ctx); - llama_free(lctx); - llama_free_model(lmodel); - return 0; -} diff --git a/spaces/Isotonic/image-generator/app.py b/spaces/Isotonic/image-generator/app.py deleted file mode 100644 index 262436d8b50f87b0953c645576cc3184b3b27b43..0000000000000000000000000000000000000000 --- a/spaces/Isotonic/image-generator/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/Joeythemonster/anything-midjourney-v-4-1").launch() \ No newline at end of file diff --git a/spaces/Jacks2003/3D_Photo_Inpainting/bilateral_filtering.py b/spaces/Jacks2003/3D_Photo_Inpainting/bilateral_filtering.py deleted file mode 100644 index 28cc7dc79cc2f3c0b9065d6a1eb290b9554af879..0000000000000000000000000000000000000000 --- a/spaces/Jacks2003/3D_Photo_Inpainting/bilateral_filtering.py +++ /dev/null @@ -1,215 +0,0 @@ -import numpy as np -from functools import reduce - -def sparse_bilateral_filtering( - depth, image, config, HR=False, mask=None, gsHR=True, edge_id=None, num_iter=None, num_gs_iter=None, spdb=False -): - """ - config: - - filter_size - """ - import time - - save_images = [] - save_depths = [] - save_discontinuities = [] - vis_depth = depth.copy() - backup_vis_depth = vis_depth.copy() - - depth_max = vis_depth.max() - depth_min = vis_depth.min() - vis_image = image.copy() - for i in range(num_iter): - if isinstance(config["filter_size"], list): - window_size = config["filter_size"][i] - else: - window_size = config["filter_size"] - vis_image = image.copy() - save_images.append(vis_image) - save_depths.append(vis_depth) - u_over, b_over, l_over, r_over = vis_depth_discontinuity(vis_depth, config, mask=mask) - vis_image[u_over > 0] = np.array([0, 0, 0]) - 
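# likewise black out the bottom/left/right discontinuity pixels, so depth edges
# show up as black strokes in the visualization image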
vis_image[b_over > 0] = np.array([0, 0, 0]) - vis_image[l_over > 0] = np.array([0, 0, 0]) - vis_image[r_over > 0] = np.array([0, 0, 0]) - - discontinuity_map = (u_over + b_over + l_over + r_over).clip(0.0, 1.0) - discontinuity_map[depth == 0] = 1 - save_discontinuities.append(discontinuity_map) - if mask is not None: - discontinuity_map[mask == 0] = 0 - vis_depth = bilateral_filter( - vis_depth, config, discontinuity_map=discontinuity_map, HR=HR, mask=mask, window_size=window_size - ) - - return save_images, save_depths - - -def vis_depth_discontinuity(depth, config, vis_diff=False, label=False, mask=None): - """ - config: - - - """ - if label == False: - disp = 1./depth - u_diff = (disp[1:, :] - disp[:-1, :])[:-1, 1:-1] - b_diff = (disp[:-1, :] - disp[1:, :])[1:, 1:-1] - l_diff = (disp[:, 1:] - disp[:, :-1])[1:-1, :-1] - r_diff = (disp[:, :-1] - disp[:, 1:])[1:-1, 1:] - if mask is not None: - u_mask = (mask[1:, :] * mask[:-1, :])[:-1, 1:-1] - b_mask = (mask[:-1, :] * mask[1:, :])[1:, 1:-1] - l_mask = (mask[:, 1:] * mask[:, :-1])[1:-1, :-1] - r_mask = (mask[:, :-1] * mask[:, 1:])[1:-1, 1:] - u_diff = u_diff * u_mask - b_diff = b_diff * b_mask - l_diff = l_diff * l_mask - r_diff = r_diff * r_mask - u_over = (np.abs(u_diff) > config['depth_threshold']).astype(np.float32) - b_over = (np.abs(b_diff) > config['depth_threshold']).astype(np.float32) - l_over = (np.abs(l_diff) > config['depth_threshold']).astype(np.float32) - r_over = (np.abs(r_diff) > config['depth_threshold']).astype(np.float32) - else: - disp = depth - u_diff = (disp[1:, :] * disp[:-1, :])[:-1, 1:-1] - b_diff = (disp[:-1, :] * disp[1:, :])[1:, 1:-1] - l_diff = (disp[:, 1:] * disp[:, :-1])[1:-1, :-1] - r_diff = (disp[:, :-1] * disp[:, 1:])[1:-1, 1:] - if mask is not None: - u_mask = (mask[1:, :] * mask[:-1, :])[:-1, 1:-1] - b_mask = (mask[:-1, :] * mask[1:, :])[1:, 1:-1] - l_mask = (mask[:, 1:] * mask[:, :-1])[1:-1, :-1] - r_mask = (mask[:, :-1] * mask[:, 1:])[1:-1, 1:] - u_diff = u_diff * u_mask - b_diff = b_diff * b_mask - l_diff = l_diff * l_mask - r_diff = r_diff * r_mask - u_over = (np.abs(u_diff) > 0).astype(np.float32) - b_over = (np.abs(b_diff) > 0).astype(np.float32) - l_over = (np.abs(l_diff) > 0).astype(np.float32) - r_over = (np.abs(r_diff) > 0).astype(np.float32) - u_over = np.pad(u_over, 1, mode='constant') - b_over = np.pad(b_over, 1, mode='constant') - l_over = np.pad(l_over, 1, mode='constant') - r_over = np.pad(r_over, 1, mode='constant') - u_diff = np.pad(u_diff, 1, mode='constant') - b_diff = np.pad(b_diff, 1, mode='constant') - l_diff = np.pad(l_diff, 1, mode='constant') - r_diff = np.pad(r_diff, 1, mode='constant') - - if vis_diff: - return [u_over, b_over, l_over, r_over], [u_diff, b_diff, l_diff, r_diff] - else: - return [u_over, b_over, l_over, r_over] - -def bilateral_filter(depth, config, discontinuity_map=None, HR=False, mask=None, window_size=False): - sort_time = 0 - replace_time = 0 - filter_time = 0 - init_time = 0 - filtering_time = 0 - sigma_s = config['sigma_s'] - sigma_r = config['sigma_r'] - if window_size == False: - window_size = config['filter_size'] - midpt = window_size//2 - ax = np.arange(-midpt, midpt+1.) - xx, yy = np.meshgrid(ax, ax) - if discontinuity_map is not None: - spatial_term = np.exp(-(xx**2 + yy**2) / (2. 
* sigma_s**2)) - - # padding - depth = depth[1:-1, 1:-1] - depth = np.pad(depth, ((1,1), (1,1)), 'edge') - pad_depth = np.pad(depth, (midpt,midpt), 'edge') - if discontinuity_map is not None: - discontinuity_map = discontinuity_map[1:-1, 1:-1] - discontinuity_map = np.pad(discontinuity_map, ((1,1), (1,1)), 'edge') - pad_discontinuity_map = np.pad(discontinuity_map, (midpt,midpt), 'edge') - pad_discontinuity_hole = 1 - pad_discontinuity_map - # filtering - output = depth.copy() - pad_depth_patches = rolling_window(pad_depth, [window_size, window_size], [1,1]) - if discontinuity_map is not None: - pad_discontinuity_patches = rolling_window(pad_discontinuity_map, [window_size, window_size], [1,1]) - pad_discontinuity_hole_patches = rolling_window(pad_discontinuity_hole, [window_size, window_size], [1,1]) - - if mask is not None: - pad_mask = np.pad(mask, (midpt,midpt), 'constant') - pad_mask_patches = rolling_window(pad_mask, [window_size, window_size], [1,1]) - from itertools import product - if discontinuity_map is not None: - pH, pW = pad_depth_patches.shape[:2] - for pi in range(pH): - for pj in range(pW): - if mask is not None and mask[pi, pj] == 0: - continue - if discontinuity_map is not None: - if bool(pad_discontinuity_patches[pi, pj].any()) is False: - continue - discontinuity_patch = pad_discontinuity_patches[pi, pj] - discontinuity_holes = pad_discontinuity_hole_patches[pi, pj] - depth_patch = pad_depth_patches[pi, pj] - depth_order = depth_patch.ravel().argsort() - patch_midpt = depth_patch[window_size//2, window_size//2] - if discontinuity_map is not None: - coef = discontinuity_holes.astype(np.float32) - if mask is not None: - coef = coef * pad_mask_patches[pi, pj] - else: - range_term = np.exp(-(depth_patch-patch_midpt)**2 / (2. * sigma_r**2)) - coef = spatial_term * range_term - if coef.max() == 0: - output[pi, pj] = patch_midpt - continue - if discontinuity_map is not None and (coef.max() == 0): - output[pi, pj] = patch_midpt - else: - coef = coef/(coef.sum()) - coef_order = coef.ravel()[depth_order] - cum_coef = np.cumsum(coef_order) - ind = np.digitize(0.5, cum_coef) - output[pi, pj] = depth_patch.ravel()[depth_order][ind] - else: - pH, pW = pad_depth_patches.shape[:2] - for pi in range(pH): - for pj in range(pW): - if discontinuity_map is not None: - if pad_discontinuity_patches[pi, pj][window_size//2, window_size//2] == 1: - continue - discontinuity_patch = pad_discontinuity_patches[pi, pj] - discontinuity_holes = (1. - discontinuity_patch) - depth_patch = pad_depth_patches[pi, pj] - depth_order = depth_patch.ravel().argsort() - patch_midpt = depth_patch[window_size//2, window_size//2] - range_term = np.exp(-(depth_patch-patch_midpt)**2 / (2. 
* sigma_r**2)) - if discontinuity_map is not None: - coef = spatial_term * range_term * discontinuity_holes - else: - coef = spatial_term * range_term - if coef.sum() == 0: - output[pi, pj] = patch_midpt - continue - if discontinuity_map is not None and (coef.sum() == 0): - output[pi, pj] = patch_midpt - else: - coef = coef/(coef.sum()) - coef_order = coef.ravel()[depth_order] - cum_coef = np.cumsum(coef_order) - ind = np.digitize(0.5, cum_coef) - output[pi, pj] = depth_patch.ravel()[depth_order][ind] - - return output - -def rolling_window(a, window, strides): - assert len(a.shape)==len(window)==len(strides), "\'a\', \'window\', \'strides\' dimension mismatch" - shape_fn = lambda i,w,s: (a.shape[i]-w)//s + 1 - shape = [shape_fn(i,w,s) for i,(w,s) in enumerate(zip(window, strides))] + list(window) - def acc_shape(i): - if i+1>=len(a.shape): - return 1 - else: - return reduce(lambda x,y:x*y, a.shape[i+1:]) - _strides = [acc_shape(i)*s*a.itemsize for i,s in enumerate(strides)] + list(a.strides) - - return np.lib.stride_tricks.as_strided(a, shape=shape, strides=_strides) diff --git a/spaces/Juli08/janitorai/Dockerfile b/spaces/Juli08/janitorai/Dockerfile deleted file mode 100644 index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000 --- a/spaces/Juli08/janitorai/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM node:18-bullseye-slim - -RUN apt-get update && \ - -apt-get install -y git - -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app - -WORKDIR /app - -RUN npm install - -COPY Dockerfile greeting.md* .env* ./ - -RUN npm run build - -EXPOSE 7860 - -ENV NODE_ENV=production - -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/KdaiP/yolov8-deepsort-tracking/deep_sort/deep_sort/sort/detection.py b/spaces/KdaiP/yolov8-deepsort-tracking/deep_sort/deep_sort/sort/detection.py deleted file mode 100644 index dbdbc8b525747ffc2bd494f8ab0e93c035730ce7..0000000000000000000000000000000000000000 --- a/spaces/KdaiP/yolov8-deepsort-tracking/deep_sort/deep_sort/sort/detection.py +++ /dev/null @@ -1,49 +0,0 @@ -# vim: expandtab:ts=4:sw=4 -import numpy as np - - -class Detection(object): - """ - This class represents a bounding box detection in a single image. - - Parameters - ---------- - tlwh : array_like - Bounding box in format `(top left x, top left y, width, height)`. - confidence : float - Detector confidence score. - feature : array_like - A feature vector that describes the object contained in this image. - - Attributes - ---------- - tlwh : ndarray - Bounding box in format `(top left x, top left y, width, height)`. - confidence : ndarray - Detector confidence score. - feature : ndarray | NoneType - A feature vector that describes the object contained in this image. - - """ - - def __init__(self, tlwh, confidence, feature): - self.tlwh = np.asarray(tlwh, dtype=np.float32) - self.confidence = float(confidence) - self.feature = np.asarray(feature, dtype=np.float32) - - def to_tlbr(self): - """Convert bounding box to format `(min x, min y, max x, max y)`, i.e., - `(top left, bottom right)`. - """ - ret = self.tlwh.copy() - ret[2:] += ret[:2] - return ret - - def to_xyah(self): - """Convert bounding box to format `(center x, center y, aspect ratio, - height)`, where the aspect ratio is `width / height`. 
- """ - ret = self.tlwh.copy() - ret[:2] += ret[2:] / 2 - ret[2] /= ret[3] - return ret diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/atss_head.py b/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/atss_head.py deleted file mode 100644 index 2e702547f3a40f97af067d2493a41a63665c0866..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/atss_head.py +++ /dev/null @@ -1,524 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import List, Optional, Sequence, Tuple - -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, Scale -from mmengine.structures import InstanceData -from torch import Tensor - -from mmdet.registry import MODELS -from mmdet.utils import (ConfigType, InstanceList, MultiConfig, OptConfigType, - OptInstanceList, reduce_mean) -from ..task_modules.prior_generators import anchor_inside_flags -from ..utils import images_to_levels, multi_apply, unmap -from .anchor_head import AnchorHead - - -@MODELS.register_module() -class ATSSHead(AnchorHead): - """Detection Head of `ATSS `_. - - ATSS head structure is similar with FCOS, however ATSS use anchor boxes - and assign label by Adaptive Training Sample Selection instead max-iou. - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - pred_kernel_size (int): Kernel size of ``nn.Conv2d`` - stacked_convs (int): Number of stacking convs of the head. - conv_cfg (:obj:`ConfigDict` or dict, optional): Config dict for - convolution layer. Defaults to None. - norm_cfg (:obj:`ConfigDict` or dict): Config dict for normalization - layer. Defaults to ``dict(type='GN', num_groups=32, - requires_grad=True)``. - reg_decoded_bbox (bool): If true, the regression loss would be - applied directly on decoded bounding boxes, converting both - the predicted boxes and regression targets to absolute - coordinates format. Defaults to False. It should be `True` when - using `IoULoss`, `GIoULoss`, or `DIoULoss` in the bbox head. - loss_centerness (:obj:`ConfigDict` or dict): Config of centerness loss. - Defaults to ``dict(type='CrossEntropyLoss', use_sigmoid=True, - loss_weight=1.0)``. - init_cfg (:obj:`ConfigDict` or dict or list[dict] or - list[:obj:`ConfigDict`]): Initialization config dict. 
- """ - - def __init__(self, - num_classes: int, - in_channels: int, - pred_kernel_size: int = 3, - stacked_convs: int = 4, - conv_cfg: OptConfigType = None, - norm_cfg: ConfigType = dict( - type='GN', num_groups=32, requires_grad=True), - reg_decoded_bbox: bool = True, - loss_centerness: ConfigType = dict( - type='mmdet.CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0), - init_cfg: MultiConfig = dict( - type='Normal', - layer='Conv2d', - std=0.01, - override=dict( - type='Normal', - name='atss_cls', - std=0.01, - bias_prob=0.01)), - **kwargs) -> None: - self.pred_kernel_size = pred_kernel_size - self.stacked_convs = stacked_convs - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - super().__init__( - num_classes=num_classes, - in_channels=in_channels, - reg_decoded_bbox=reg_decoded_bbox, - init_cfg=init_cfg, - **kwargs) - - self.sampling = False - self.loss_centerness = MODELS.build(loss_centerness) - - def _init_layers(self) -> None: - """Initialize layers of the head.""" - self.relu = nn.ReLU(inplace=True) - self.cls_convs = nn.ModuleList() - self.reg_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.reg_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - pred_pad_size = self.pred_kernel_size // 2 - self.atss_cls = nn.Conv2d( - self.feat_channels, - self.num_anchors * self.cls_out_channels, - self.pred_kernel_size, - padding=pred_pad_size) - self.atss_reg = nn.Conv2d( - self.feat_channels, - self.num_base_priors * 4, - self.pred_kernel_size, - padding=pred_pad_size) - self.atss_centerness = nn.Conv2d( - self.feat_channels, - self.num_base_priors * 1, - self.pred_kernel_size, - padding=pred_pad_size) - self.scales = nn.ModuleList( - [Scale(1.0) for _ in self.prior_generator.strides]) - - def forward(self, x: Tuple[Tensor]) -> Tuple[List[Tensor]]: - """Forward features from the upstream network. - - Args: - x (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: Usually a tuple of classification scores and bbox prediction - cls_scores (list[Tensor]): Classification scores for all scale - levels, each is a 4D-tensor, the channels number is - num_anchors * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for all scale - levels, each is a 4D-tensor, the channels number is - num_anchors * 4. - """ - return multi_apply(self.forward_single, x, self.scales) - - def forward_single(self, x: Tensor, scale: Scale) -> Sequence[Tensor]: - """Forward feature of a single scale level. - - Args: - x (Tensor): Features of a single scale level. - scale (:obj: `mmcv.cnn.Scale`): Learnable scale module to resize - the bbox prediction. - - Returns: - tuple: - cls_score (Tensor): Cls scores for a single scale level - the channels number is num_anchors * num_classes. - bbox_pred (Tensor): Box energies / deltas for a single scale - level, the channels number is num_anchors * 4. - centerness (Tensor): Centerness for a single scale level, the - channel number is (N, num_anchors * 1, H, W). 
- """ - cls_feat = x - reg_feat = x - for cls_conv in self.cls_convs: - cls_feat = cls_conv(cls_feat) - for reg_conv in self.reg_convs: - reg_feat = reg_conv(reg_feat) - cls_score = self.atss_cls(cls_feat) - # we just follow atss, not apply exp in bbox_pred - bbox_pred = scale(self.atss_reg(reg_feat)).float() - centerness = self.atss_centerness(reg_feat) - return cls_score, bbox_pred, centerness - - def loss_by_feat_single(self, anchors: Tensor, cls_score: Tensor, - bbox_pred: Tensor, centerness: Tensor, - labels: Tensor, label_weights: Tensor, - bbox_targets: Tensor, avg_factor: float) -> dict: - """Calculate the loss of a single scale level based on the features - extracted by the detection head. - - Args: - cls_score (Tensor): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W). - bbox_pred (Tensor): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W). - anchors (Tensor): Box reference for each scale level with shape - (N, num_total_anchors, 4). - labels (Tensor): Labels of each anchors with shape - (N, num_total_anchors). - label_weights (Tensor): Label weights of each anchor with shape - (N, num_total_anchors) - bbox_targets (Tensor): BBox regression targets of each anchor - weight shape (N, num_total_anchors, 4). - avg_factor (float): Average factor that is used to average - the loss. When using sampling method, avg_factor is usually - the sum of positive and negative priors. When using - `PseudoSampler`, `avg_factor` is usually equal to the number - of positive priors. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - - anchors = anchors.reshape(-1, 4) - cls_score = cls_score.permute(0, 2, 3, 1).reshape( - -1, self.cls_out_channels).contiguous() - bbox_pred = bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4) - centerness = centerness.permute(0, 2, 3, 1).reshape(-1) - bbox_targets = bbox_targets.reshape(-1, 4) - labels = labels.reshape(-1) - label_weights = label_weights.reshape(-1) - - # classification loss - loss_cls = self.loss_cls( - cls_score, labels, label_weights, avg_factor=avg_factor) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - bg_class_ind = self.num_classes - pos_inds = ((labels >= 0) - & (labels < bg_class_ind)).nonzero().squeeze(1) - - if len(pos_inds) > 0: - pos_bbox_targets = bbox_targets[pos_inds] - pos_bbox_pred = bbox_pred[pos_inds] - pos_anchors = anchors[pos_inds] - pos_centerness = centerness[pos_inds] - - centerness_targets = self.centerness_target( - pos_anchors, pos_bbox_targets) - pos_decode_bbox_pred = self.bbox_coder.decode( - pos_anchors, pos_bbox_pred) - - # regression loss - loss_bbox = self.loss_bbox( - pos_decode_bbox_pred, - pos_bbox_targets, - weight=centerness_targets, - avg_factor=1.0) - - # centerness loss - loss_centerness = self.loss_centerness( - pos_centerness, centerness_targets, avg_factor=avg_factor) - - else: - loss_bbox = bbox_pred.sum() * 0 - loss_centerness = centerness.sum() * 0 - centerness_targets = bbox_targets.new_tensor(0.) - - return loss_cls, loss_bbox, loss_centerness, centerness_targets.sum() - - def loss_by_feat( - self, - cls_scores: List[Tensor], - bbox_preds: List[Tensor], - centernesses: List[Tensor], - batch_gt_instances: InstanceList, - batch_img_metas: List[dict], - batch_gt_instances_ignore: OptInstanceList = None) -> dict: - """Calculate the loss based on the features extracted by the detection - head. 
- - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - centernesses (list[Tensor]): Centerness for each scale - level with shape (N, num_anchors * 1, H, W) - batch_gt_instances (list[:obj:`InstanceData`]): Batch of - gt_instance. It usually includes ``bboxes`` and ``labels`` - attributes. - batch_img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - batch_gt_instances_ignore (list[:obj:`InstanceData`], Optional): - Batch of gt_instances_ignore. It includes ``bboxes`` attribute - data that is ignored during training and testing. - Defaults to None. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.prior_generator.num_levels - - device = cls_scores[0].device - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, batch_img_metas, device=device) - - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - batch_gt_instances, - batch_img_metas, - batch_gt_instances_ignore=batch_gt_instances_ignore) - - (anchor_list, labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, avg_factor) = cls_reg_targets - avg_factor = reduce_mean( - torch.tensor(avg_factor, dtype=torch.float, device=device)).item() - - losses_cls, losses_bbox, loss_centerness, \ - bbox_avg_factor = multi_apply( - self.loss_by_feat_single, - anchor_list, - cls_scores, - bbox_preds, - centernesses, - labels_list, - label_weights_list, - bbox_targets_list, - avg_factor=avg_factor) - - bbox_avg_factor = sum(bbox_avg_factor) - bbox_avg_factor = reduce_mean(bbox_avg_factor).clamp_(min=1).item() - losses_bbox = list(map(lambda x: x / bbox_avg_factor, losses_bbox)) - return dict( - loss_cls=losses_cls, - loss_bbox=losses_bbox, - loss_centerness=loss_centerness) - - def centerness_target(self, anchors: Tensor, gts: Tensor) -> Tensor: - """Calculate the centerness between anchors and gts. - - Only calculate pos centerness targets, otherwise there may be nan. - - Args: - anchors (Tensor): Anchors with shape (N, 4), "xyxy" format. - gts (Tensor): Ground truth bboxes with shape (N, 4), "xyxy" format. - - Returns: - Tensor: Centerness between anchors and gts. - """ - anchors_cx = (anchors[:, 2] + anchors[:, 0]) / 2 - anchors_cy = (anchors[:, 3] + anchors[:, 1]) / 2 - l_ = anchors_cx - gts[:, 0] - t_ = anchors_cy - gts[:, 1] - r_ = gts[:, 2] - anchors_cx - b_ = gts[:, 3] - anchors_cy - - left_right = torch.stack([l_, r_], dim=1) - top_bottom = torch.stack([t_, b_], dim=1) - centerness = torch.sqrt( - (left_right.min(dim=-1)[0] / left_right.max(dim=-1)[0]) * - (top_bottom.min(dim=-1)[0] / top_bottom.max(dim=-1)[0])) - assert not torch.isnan(centerness).any() - return centerness - - def get_targets(self, - anchor_list: List[List[Tensor]], - valid_flag_list: List[List[Tensor]], - batch_gt_instances: InstanceList, - batch_img_metas: List[dict], - batch_gt_instances_ignore: OptInstanceList = None, - unmap_outputs: bool = True) -> tuple: - """Get targets for ATSS head. - - This method is almost the same as `AnchorHead.get_targets()`. Besides - returning the targets as the parent method does, it also returns the - anchors as the first element of the returned tuple. 
- """ - num_imgs = len(batch_img_metas) - assert len(anchor_list) == len(valid_flag_list) == num_imgs - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - num_level_anchors_list = [num_level_anchors] * num_imgs - - # concat all level anchors and flags to a single tensor - for i in range(num_imgs): - assert len(anchor_list[i]) == len(valid_flag_list[i]) - anchor_list[i] = torch.cat(anchor_list[i]) - valid_flag_list[i] = torch.cat(valid_flag_list[i]) - - # compute targets for each image - if batch_gt_instances_ignore is None: - batch_gt_instances_ignore = [None] * num_imgs - (all_anchors, all_labels, all_label_weights, all_bbox_targets, - all_bbox_weights, pos_inds_list, neg_inds_list, - sampling_results_list) = multi_apply( - self._get_targets_single, - anchor_list, - valid_flag_list, - num_level_anchors_list, - batch_gt_instances, - batch_img_metas, - batch_gt_instances_ignore, - unmap_outputs=unmap_outputs) - # Get `avg_factor` of all images, which calculate in `SamplingResult`. - # When using sampling method, avg_factor is usually the sum of - # positive and negative priors. When using `PseudoSampler`, - # `avg_factor` is usually equal to the number of positive priors. - avg_factor = sum( - [results.avg_factor for results in sampling_results_list]) - # split targets to a list w.r.t. multiple levels - anchors_list = images_to_levels(all_anchors, num_level_anchors) - labels_list = images_to_levels(all_labels, num_level_anchors) - label_weights_list = images_to_levels(all_label_weights, - num_level_anchors) - bbox_targets_list = images_to_levels(all_bbox_targets, - num_level_anchors) - bbox_weights_list = images_to_levels(all_bbox_weights, - num_level_anchors) - return (anchors_list, labels_list, label_weights_list, - bbox_targets_list, bbox_weights_list, avg_factor) - - def _get_targets_single(self, - flat_anchors: Tensor, - valid_flags: Tensor, - num_level_anchors: List[int], - gt_instances: InstanceData, - img_meta: dict, - gt_instances_ignore: Optional[InstanceData] = None, - unmap_outputs: bool = True) -> tuple: - """Compute regression, classification targets for anchors in a single - image. - - Args: - flat_anchors (Tensor): Multi-level anchors of the image, which are - concatenated into a single tensor of shape (num_anchors ,4) - valid_flags (Tensor): Multi level valid flags of the image, - which are concatenated into a single tensor of - shape (num_anchors,). - num_level_anchors (List[int]): Number of anchors of each scale - level. - gt_instances (:obj:`InstanceData`): Ground truth of instance - annotations. It usually includes ``bboxes`` and ``labels`` - attributes. - img_meta (dict): Meta information for current image. - gt_instances_ignore (:obj:`InstanceData`, optional): Instances - to be ignored during training. It includes ``bboxes`` attribute - data that is ignored during training and testing. - Defaults to None. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: N is the number of total anchors in the image. - labels (Tensor): Labels of all anchors in the image with shape - (N,). - label_weights (Tensor): Label weights of all anchor in the - image with shape (N,). - bbox_targets (Tensor): BBox targets of all anchors in the - image with shape (N, 4). - bbox_weights (Tensor): BBox weights of all anchors in the - image with shape (N, 4) - pos_inds (Tensor): Indices of positive anchor with shape - (num_pos,). 
- neg_inds (Tensor): Indices of negative anchor with shape - (num_neg,). - sampling_result (:obj:`SamplingResult`): Sampling results. - """ - inside_flags = anchor_inside_flags(flat_anchors, valid_flags, - img_meta['img_shape'][:2], - self.train_cfg['allowed_border']) - if not inside_flags.any(): - raise ValueError( - 'There is no valid anchor inside the image boundary. Please ' - 'check the image size and anchor sizes, or set ' - '``allowed_border`` to -1 to skip the condition.') - # assign gt and sample anchors - anchors = flat_anchors[inside_flags, :] - - num_level_anchors_inside = self.get_num_level_anchors_inside( - num_level_anchors, inside_flags) - pred_instances = InstanceData(priors=anchors) - assign_result = self.assigner.assign(pred_instances, - num_level_anchors_inside, - gt_instances, gt_instances_ignore) - - sampling_result = self.sampler.sample(assign_result, pred_instances, - gt_instances) - - num_valid_anchors = anchors.shape[0] - bbox_targets = torch.zeros_like(anchors) - bbox_weights = torch.zeros_like(anchors) - labels = anchors.new_full((num_valid_anchors, ), - self.num_classes, - dtype=torch.long) - label_weights = anchors.new_zeros(num_valid_anchors, dtype=torch.float) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - if self.reg_decoded_bbox: - pos_bbox_targets = sampling_result.pos_gt_bboxes - else: - pos_bbox_targets = self.bbox_coder.encode( - sampling_result.pos_priors, sampling_result.pos_gt_bboxes) - - bbox_targets[pos_inds, :] = pos_bbox_targets - bbox_weights[pos_inds, :] = 1.0 - - labels[pos_inds] = sampling_result.pos_gt_labels - if self.train_cfg['pos_weight'] <= 0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = self.train_cfg['pos_weight'] - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - - # map up to original set of anchors - if unmap_outputs: - num_total_anchors = flat_anchors.size(0) - anchors = unmap(anchors, num_total_anchors, inside_flags) - labels = unmap( - labels, num_total_anchors, inside_flags, fill=self.num_classes) - label_weights = unmap(label_weights, num_total_anchors, - inside_flags) - bbox_targets = unmap(bbox_targets, num_total_anchors, inside_flags) - bbox_weights = unmap(bbox_weights, num_total_anchors, inside_flags) - - return (anchors, labels, label_weights, bbox_targets, bbox_weights, - pos_inds, neg_inds, sampling_result) - - def get_num_level_anchors_inside(self, num_level_anchors, inside_flags): - """Get the number of valid anchors in every level.""" - - split_inside_flags = torch.split(inside_flags, num_level_anchors) - num_level_anchors_inside = [ - int(flags.sum()) for flags in split_inside_flags - ] - return num_level_anchors_inside diff --git a/spaces/KyanChen/RSPrompter/mmdet/registry.py b/spaces/KyanChen/RSPrompter/mmdet/registry.py deleted file mode 100644 index 3a5b2b28a4f80a488994b48a99043a20c604e55e..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/registry.py +++ /dev/null @@ -1,121 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -"""MMDetection provides 17 registry nodes to support using modules across -projects. Each node is a child of the root registry in MMEngine. - -More details can be found at -https://mmengine.readthedocs.io/en/latest/tutorials/registry.html. 
-""" - -from mmengine.registry import DATA_SAMPLERS as MMENGINE_DATA_SAMPLERS -from mmengine.registry import DATASETS as MMENGINE_DATASETS -from mmengine.registry import EVALUATOR as MMENGINE_EVALUATOR -from mmengine.registry import HOOKS as MMENGINE_HOOKS -from mmengine.registry import LOG_PROCESSORS as MMENGINE_LOG_PROCESSORS -from mmengine.registry import LOOPS as MMENGINE_LOOPS -from mmengine.registry import METRICS as MMENGINE_METRICS -from mmengine.registry import MODEL_WRAPPERS as MMENGINE_MODEL_WRAPPERS -from mmengine.registry import MODELS as MMENGINE_MODELS -from mmengine.registry import \ - OPTIM_WRAPPER_CONSTRUCTORS as MMENGINE_OPTIM_WRAPPER_CONSTRUCTORS -from mmengine.registry import OPTIM_WRAPPERS as MMENGINE_OPTIM_WRAPPERS -from mmengine.registry import OPTIMIZERS as MMENGINE_OPTIMIZERS -from mmengine.registry import PARAM_SCHEDULERS as MMENGINE_PARAM_SCHEDULERS -from mmengine.registry import \ - RUNNER_CONSTRUCTORS as MMENGINE_RUNNER_CONSTRUCTORS -from mmengine.registry import RUNNERS as MMENGINE_RUNNERS -from mmengine.registry import TASK_UTILS as MMENGINE_TASK_UTILS -from mmengine.registry import TRANSFORMS as MMENGINE_TRANSFORMS -from mmengine.registry import VISBACKENDS as MMENGINE_VISBACKENDS -from mmengine.registry import VISUALIZERS as MMENGINE_VISUALIZERS -from mmengine.registry import \ - WEIGHT_INITIALIZERS as MMENGINE_WEIGHT_INITIALIZERS -from mmengine.registry import Registry - -# manage all kinds of runners like `EpochBasedRunner` and `IterBasedRunner` -RUNNERS = Registry( - 'runner', parent=MMENGINE_RUNNERS, locations=['mmdet.engine.runner']) -# manage runner constructors that define how to initialize runners -RUNNER_CONSTRUCTORS = Registry( - 'runner constructor', - parent=MMENGINE_RUNNER_CONSTRUCTORS, - locations=['mmdet.engine.runner']) -# manage all kinds of loops like `EpochBasedTrainLoop` -LOOPS = Registry( - 'loop', parent=MMENGINE_LOOPS, locations=['mmdet.engine.runner']) -# manage all kinds of hooks like `CheckpointHook` -HOOKS = Registry( - 'hook', parent=MMENGINE_HOOKS, locations=['mmdet.engine.hooks']) - -# manage data-related modules -DATASETS = Registry( - 'dataset', parent=MMENGINE_DATASETS, locations=['mmdet.datasets']) -DATA_SAMPLERS = Registry( - 'data sampler', - parent=MMENGINE_DATA_SAMPLERS, - locations=['mmdet.datasets.samplers']) -TRANSFORMS = Registry( - 'transform', - parent=MMENGINE_TRANSFORMS, - locations=['mmdet.datasets.transforms']) - -# manage all kinds of modules inheriting `nn.Module` -MODELS = Registry('model', parent=MMENGINE_MODELS, locations=['mmdet.models']) -# manage all kinds of model wrappers like 'MMDistributedDataParallel' -MODEL_WRAPPERS = Registry( - 'model_wrapper', - parent=MMENGINE_MODEL_WRAPPERS, - locations=['mmdet.models']) -# manage all kinds of weight initialization modules like `Uniform` -WEIGHT_INITIALIZERS = Registry( - 'weight initializer', - parent=MMENGINE_WEIGHT_INITIALIZERS, - locations=['mmdet.models']) - -# manage all kinds of optimizers like `SGD` and `Adam` -OPTIMIZERS = Registry( - 'optimizer', - parent=MMENGINE_OPTIMIZERS, - locations=['mmdet.engine.optimizers']) -# manage optimizer wrapper -OPTIM_WRAPPERS = Registry( - 'optim_wrapper', - parent=MMENGINE_OPTIM_WRAPPERS, - locations=['mmdet.engine.optimizers']) -# manage constructors that customize the optimization hyperparameters. 
-OPTIM_WRAPPER_CONSTRUCTORS = Registry( - 'optimizer constructor', - parent=MMENGINE_OPTIM_WRAPPER_CONSTRUCTORS, - locations=['mmdet.engine.optimizers']) -# manage all kinds of parameter schedulers like `MultiStepLR` -PARAM_SCHEDULERS = Registry( - 'parameter scheduler', - parent=MMENGINE_PARAM_SCHEDULERS, - locations=['mmdet.engine.schedulers']) -# manage all kinds of metrics -METRICS = Registry( - 'metric', parent=MMENGINE_METRICS, locations=['mmdet.evaluation']) -# manage evaluator -EVALUATOR = Registry( - 'evaluator', parent=MMENGINE_EVALUATOR, locations=['mmdet.evaluation']) - -# manage task-specific modules like anchor generators and box coders -TASK_UTILS = Registry( - 'task util', parent=MMENGINE_TASK_UTILS, locations=['mmdet.models']) - -# manage visualizer -VISUALIZERS = Registry( - 'visualizer', - parent=MMENGINE_VISUALIZERS, - locations=['mmdet.visualization']) -# manage visualizer backend -VISBACKENDS = Registry( - 'vis_backend', - parent=MMENGINE_VISBACKENDS, - locations=['mmdet.visualization']) - -# manage logprocessor -LOG_PROCESSORS = Registry( - 'log_processor', - parent=MMENGINE_LOG_PROCESSORS, - # TODO: update the location when mmdet has its own log processor - locations=['mmdet.engine']) diff --git a/spaces/Laihiujin/OneFormer/oneformer/evaluation/cityscapes_evaluation.py b/spaces/Laihiujin/OneFormer/oneformer/evaluation/cityscapes_evaluation.py deleted file mode 100644 index 4e06ab8cbe6b43a355c0a9cfb3f2d688438d2c64..0000000000000000000000000000000000000000 --- a/spaces/Laihiujin/OneFormer/oneformer/evaluation/cityscapes_evaluation.py +++ /dev/null @@ -1,201 +0,0 @@ -# ------------------------------------------------------------------------------ -# Reference: https://github.com/facebookresearch/detectron2/blob/main/detectron2/evaluation/cityscapes_evaluation.py -# Modified by Jitesh Jain (https://github.com/praeclarumjj3) -# ------------------------------------------------------------------------------ - -import glob -import logging -import numpy as np -import os -import tempfile -from collections import OrderedDict -import torch -from PIL import Image - -from detectron2.data import MetadataCatalog -from detectron2.utils import comm -from detectron2.utils.file_io import PathManager - -from .evaluator import DatasetEvaluator - - -class CityscapesEvaluator(DatasetEvaluator): - """ - Base class for evaluation using cityscapes API. - """ - - def __init__(self, dataset_name): - """ - Args: - dataset_name (str): the name of the dataset. - It must have the following metadata associated with it: - "thing_classes", "gt_dir". - """ - self._metadata = MetadataCatalog.get(dataset_name) - self._cpu_device = torch.device("cpu") - self._logger = logging.getLogger(__name__) - - def reset(self): - self._working_dir = tempfile.TemporaryDirectory(prefix="cityscapes_eval_") - self._temp_dir = self._working_dir.name - # All workers will write to the same results directory - # TODO this does not work in distributed training - assert ( - comm.get_local_size() == comm.get_world_size() - ), "CityscapesEvaluator currently do not work with multiple machines." - self._temp_dir = comm.all_gather(self._temp_dir)[0] - if self._temp_dir != self._working_dir.name: - self._working_dir.cleanup() - self._logger.info( - "Writing cityscapes results to temporary directory {} ...".format(self._temp_dir) - ) - - -class CityscapesInstanceEvaluator(CityscapesEvaluator): - """ - Evaluate instance segmentation results on cityscapes dataset using cityscapes API. 
- - Note: - * It does not work in multi-machine distributed training. - * It contains a synchronization, therefore has to be used on all ranks. - * Only the main process runs evaluation. - """ - - def process(self, inputs, outputs): - from cityscapesscripts.helpers.labels import name2label - - for input, output in zip(inputs, outputs): - file_name = input["file_name"] - basename = os.path.splitext(os.path.basename(file_name))[0] - pred_txt = os.path.join(self._temp_dir, basename + "_pred.txt") - - if "instances" in output: - output = output["instances"].to(self._cpu_device) - num_instances = len(output) - with open(pred_txt, "w") as fout: - for i in range(num_instances): - pred_class = output.pred_classes[i] - classes = self._metadata.stuff_classes[pred_class] - class_id = name2label[classes].id - score = output.scores[i] - mask = output.pred_masks[i].numpy().astype("uint8") - png_filename = os.path.join( - self._temp_dir, basename + "_{}_{}.png".format(i, classes) - ) - - Image.fromarray(mask * 255).save(png_filename) - fout.write( - "{} {} {}\n".format(os.path.basename(png_filename), class_id, score) - ) - else: - # Cityscapes requires a prediction file for every ground truth image. - with open(pred_txt, "w") as fout: - pass - - def evaluate(self): - """ - Returns: - dict: has a key "segm", whose value is a dict of "AP" and "AP50". - """ - comm.synchronize() - if comm.get_rank() > 0: - return - import cityscapesscripts.evaluation.evalInstanceLevelSemanticLabeling as cityscapes_eval - - self._logger.info("Evaluating results under {} ...".format(self._temp_dir)) - - # set some global states in cityscapes evaluation API, before evaluating - cityscapes_eval.args.predictionPath = os.path.abspath(self._temp_dir) - cityscapes_eval.args.predictionWalk = None - cityscapes_eval.args.JSONOutput = False - cityscapes_eval.args.colorized = False - cityscapes_eval.args.gtInstancesFile = os.path.join(self._temp_dir, "gtInstances.json") - - # These lines are adopted from - # https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/evaluation/evalInstanceLevelSemanticLabeling.py # noqa - gt_dir = PathManager.get_local_path(self._metadata.gt_dir) - groundTruthImgList = glob.glob(os.path.join(gt_dir, "*", "*_gtFine_instanceIds.png")) - assert len( - groundTruthImgList - ), "Cannot find any ground truth images to use for evaluation. Searched for: {}".format( - cityscapes_eval.args.groundTruthSearch - ) - predictionImgList = [] - for gt in groundTruthImgList: - predictionImgList.append(cityscapes_eval.getPrediction(gt, cityscapes_eval.args)) - results = cityscapes_eval.evaluateImgLists( - predictionImgList, groundTruthImgList, cityscapes_eval.args - )["averages"] - - ret = OrderedDict() - ret["segm"] = {"AP": results["allAp"] * 100, "AP50": results["allAp50%"] * 100} - self._working_dir.cleanup() - return ret - - -class CityscapesSemSegEvaluator(CityscapesEvaluator): - """ - Evaluate semantic segmentation results on cityscapes dataset using cityscapes API. - - Note: - * It does not work in multi-machine distributed training. - * It contains a synchronization, therefore has to be used on all ranks. - * Only the main process runs evaluation. 
- """ - - def process(self, inputs, outputs): - from cityscapesscripts.helpers.labels import trainId2label - - for input, output in zip(inputs, outputs): - file_name = input["file_name"] - basename = os.path.splitext(os.path.basename(file_name))[0] - pred_filename = os.path.join(self._temp_dir, basename + "_pred.png") - - output = output["sem_seg"].argmax(dim=0).to(self._cpu_device).numpy() - pred = 255 * np.ones(output.shape, dtype=np.uint8) - for train_id, label in trainId2label.items(): - if label.ignoreInEval: - continue - pred[output == train_id] = label.id - Image.fromarray(pred).save(pred_filename) - - def evaluate(self): - comm.synchronize() - if comm.get_rank() > 0: - return - # Load the Cityscapes eval script *after* setting the required env var, - # since the script reads CITYSCAPES_DATASET into global variables at load time. - import cityscapesscripts.evaluation.evalPixelLevelSemanticLabeling as cityscapes_eval - - self._logger.info("Evaluating results under {} ...".format(self._temp_dir)) - - # set some global states in cityscapes evaluation API, before evaluating - cityscapes_eval.args.predictionPath = os.path.abspath(self._temp_dir) - cityscapes_eval.args.predictionWalk = None - cityscapes_eval.args.JSONOutput = False - cityscapes_eval.args.colorized = False - - # These lines are adopted from - # https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/evaluation/evalPixelLevelSemanticLabeling.py # noqa - gt_dir = PathManager.get_local_path(self._metadata.gt_dir) - groundTruthImgList = glob.glob(os.path.join(gt_dir, "*", "*_gtFine_labelIds.png")) - assert len( - groundTruthImgList - ), "Cannot find any ground truth images to use for evaluation. Searched for: {}".format( - cityscapes_eval.args.groundTruthSearch - ) - predictionImgList = [] - for gt in groundTruthImgList: - predictionImgList.append(cityscapes_eval.getPrediction(cityscapes_eval.args, gt)) - results = cityscapes_eval.evaluateImgLists( - predictionImgList, groundTruthImgList, cityscapes_eval.args - ) - ret = OrderedDict() - ret["sem_seg"] = { - "IoU": 100.0 * results["averageScoreClasses"], - "iIoU": 100.0 * results["averageScoreInstClasses"], - "IoU_sup": 100.0 * results["averageScoreCategories"], - "iIoU_sup": 100.0 * results["averageScoreInstCategories"], - } - self._working_dir.cleanup() - return ret diff --git a/spaces/Laihiujin/OneFormer/oneformer/modeling/pixel_decoder/__init__.py b/spaces/Laihiujin/OneFormer/oneformer/modeling/pixel_decoder/__init__.py deleted file mode 100644 index 9020c2df23e2af280b7bb168b996ae9eaf312eb8..0000000000000000000000000000000000000000 --- a/spaces/Laihiujin/OneFormer/oneformer/modeling/pixel_decoder/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
diff --git a/spaces/Lavanya30/hiddenhunger/pages/plant.py b/spaces/Lavanya30/hiddenhunger/pages/plant.py deleted file mode 100644 index 815705242aca65f471c6134b2a8dc288fec98046..0000000000000000000000000000000000000000 --- a/spaces/Lavanya30/hiddenhunger/pages/plant.py +++ /dev/null @@ -1,65 +0,0 @@ -import cv2 -import numpy as np -import streamlit as st -import tensorflow as tf -import requests -from streamlit_lottie import st_lottie -from tensorflow.keras.preprocessing import image -from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2,preprocess_input as mobilenet_v2_preprocess_input -st.header("Hidden Hunger in plants") -st.write("To find micronutrient deficiency of banana leaf") - -st.markdown(""" - -""", unsafe_allow_html=True) -uploaded_file = st.file_uploader("Choose a image file", type="jpg") -model = tf.keras.models.load_model(r"models/resnet152v2.h5") -map_dict = {0: 'Boron deficiency', - 1: 'Healthy', - 2: 'Iron deficiency', - 3: 'Manganese deficiency', - 4: 'Zinc deficiency'} - - - -if uploaded_file is not None: - # Convert the file to an opencv image. - file_bytes = np.asarray(bytearray(uploaded_file.read()), dtype=np.uint8) - opencv_image = cv2.imdecode(file_bytes, 1) - opencv_image = cv2.cvtColor(opencv_image, cv2.COLOR_BGR2RGB) - resized = cv2.resize(opencv_image,(224,224)) - # Now do something with the image! For example, let's display it: - st.image(opencv_image, channels="RGB") - - resized = mobilenet_v2_preprocess_input(resized) - img_reshape = resized[np.newaxis,...] - - Genrate_pred = st.button("Generate Prediction") - if Genrate_pred: - prediction = model.predict(img_reshape).argmax() - st.title("Predicted Label for the image is {}".format(map_dict [prediction])) - diff --git a/spaces/Lerdweg/Energie-NRW/app.py b/spaces/Lerdweg/Energie-NRW/app.py deleted file mode 100644 index 765f2b36bc251bf3d747bb3aed7201256aa1588d..0000000000000000000000000000000000000000 --- a/spaces/Lerdweg/Energie-NRW/app.py +++ /dev/null @@ -1,914 +0,0 @@ -import os -from supabase import Client, create_client -from langchain import OpenAI, LLMCheckerChain -from dotenv import find_dotenv, load_dotenv -from langchain.chat_models import ChatOpenAI -from langchain.agents.agent_types import AgentType -from langchain.document_loaders.csv_loader import CSVLoader -from langchain.agents import AgentExecutor -from langchain.chat_models import ChatOpenAI -import pandas as pd -import time -import numpy as np -import matplotlib.pyplot as plt -import plotly.express as px -import geopandas as gpd -import folium -import streamlit as st -from streamlit_folium import st_folium -import time -import geopandas as gpd -from sqlalchemy import create_engine - -import openai -from langchain.vectorstores import SupabaseVectorStore -from langchain.embeddings.openai import OpenAIEmbeddings -from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate, MessagesPlaceholder -from langchain.schema import SystemMessage -from langchain.chains.conversation.memory import ConversationBufferMemory -from langchain import OpenAI, LLMChain - -load_dotenv(find_dotenv()) -embeddings = OpenAIEmbeddings() - -#openai.api_key = os.getenv('OPENAI_API_KEY') -os.environ["VERBOSE"] = "True" -openai.api_key = os.getenv('OPENAI_API_KEY') -SUPABASE_URL = os.getenv('SUPABASE_URL') -SUPABASE_KEY = os.getenv('SUPABASE_KEY') -supabase = create_client(SUPABASE_URL, SUPABASE_KEY) - -# Set Streamlit page config -st.set_page_config( - page_title="Energieversorgung der Gemeinden - Status Quo und erneuerbares Potential", - 
page_icon="🌍", - layout="wide" -) - -# Select tabel from supabase - -ee_data = supabase.table('energie-nrw').select("*").execute() - -# Create dataframe based on table data -df_ee_data = pd.DataFrame(ee_data.data) - -# Get the list of unique values from the 'gemeinde' column -gemeinde_list = df_ee_data['gemeinde'].unique() -@st.cache_data -def table_status_quo(gemeinde): - # Filter the dataframe for the selected gemeinde - filtered_df = df_ee_data.loc[df_ee_data['gemeinde'] == gemeinde].copy() - - # Calculate the sum of 'waermebedarf-[GWh/a]' and 'stromverbrauch-[GWh/a]' - gesamt = filtered_df['waermebedarf-[GWh/a]'] + filtered_df['stromverbrauch-[GWh/a]'] - - # Create a new dataframe with the required columns and first row - table_status_quo = pd.DataFrame({ - "Energieverbrauch": ["Aktueller Verbrauch"], - "Strom [GWh/a]": [filtered_df['stromverbrauch-[GWh/a]']], - "Wärme [GWh/a]": [filtered_df['waermebedarf-[GWh/a]']], - "Gesamt [GWh/a]": [gesamt] - }) - - # Calculate the sum of all renewable energy generated for each "Strom" and "Wärme" - sum_renewable_strom = sum([ - filtered_df["strombereitstellung-biomasse-[GWh/a]"], - filtered_df["strombereitstellung-deponiegas-[GWh/a]"], - filtered_df["strombereitstellung-grubengas-[GWh/a]"], - filtered_df["strombereitstellung-klaergas-[GWh/a]"], - filtered_df["strombereitstellung-pv-freiflaechen-[GWh/a]"], - filtered_df["strombereitstellung-pv-dachflaechen-[GWh/a]"], - filtered_df["strombereitstellung-wasserkraft-[GWh/a]"], - filtered_df["strombereitstellung-wind-[GWh/a]"], - filtered_df["strombereitstellung-mva-[GWh/a]"] - ]) - - sum_renewable_wärme = sum([ - filtered_df["waermebereitstellung-biomasse-[GWh/a]"], - filtered_df["waermebereitstellung-deponiegas-[GWh/a]"], - filtered_df["waermebereitstellung-grubengas-[GWh/a]"], - filtered_df["waermebereitstellung-klaergas-[GWh/a]"], - filtered_df["waermebereitstellung-solarthermie-[GWh/a]"], - filtered_df["waermebereitstellung-geothermie-[GWh/a]"], - filtered_df["waermebereitstellung-grubenwasser-[GWh/a]"], - filtered_df["waermebereitstellung-abwaerme-industrie-[GWh/a]"] - ]) - - # Calculate the sum of fossil based energy generated for "Strom" and "Wärme" - sum_fossil_strom = filtered_df['stromverbrauch-[GWh/a]'] - sum_renewable_strom - sum_fossil_wärme = filtered_df['waermebedarf-[GWh/a]'] - sum_renewable_wärme - - # Calculate the percentage of each of them from the total value - renewable_strom_percentage = (sum_renewable_strom / filtered_df['stromverbrauch-[GWh/a]']) * 100 - fossil_strom_percentage = (sum_fossil_strom / filtered_df['stromverbrauch-[GWh/a]']) * 100 - # Check if fossil_strom_percentage is less than 0, if so, set it to zero - if fossil_strom_percentage.values[0] < 0: - fossil_strom_percentage.values[0] = 0 - renewable_wärme_percentage = (sum_renewable_wärme / filtered_df['waermebedarf-[GWh/a]']) * 100 - fossil_wärme_percentage = (sum_fossil_wärme / filtered_df['waermebedarf-[GWh/a]']) * 100 - # If fossil_wärme_percentage is negative, set it to 0 - if fossil_wärme_percentage.values[0] < 0: - fossil_wärme_percentage.values[0] = 0 - # Calculate the percentage for "Gesamt" Strom and Wärme - gesamt_renewable_percentage = ((sum_renewable_strom + sum_renewable_wärme) / gesamt) * 100 - gesamt_fossil_percentage = ((sum_fossil_strom + sum_fossil_wärme) / gesamt) * 100 - - # Create a list of calculated percentages - percentages = [ - renewable_strom_percentage.values[0], - fossil_strom_percentage.values[0], - renewable_wärme_percentage.values[0], - fossil_wärme_percentage.values[0], - 
gesamt_renewable_percentage.values[0], - gesamt_fossil_percentage.values[0] - ] - # Create a new DataFrame and append the rows - df_status_quo = pd.DataFrame({ - "Energieverbrauch": ["Aktueller Verbrauch - [GWh/a]", - "THG-Emissionen - [tCO2e]", - "Anteil erneuerbarer Energien - [%]", - "Anteil konventioneller Energieträger - [%]"], - "Strom": [round(filtered_df['stromverbrauch-[GWh/a]'].values[0], 1), - round(filtered_df['gesamt-strom-bestand-thg-[tCO2e]'].values[0], 1), - round(renewable_strom_percentage.values[0], 1), - round(fossil_strom_percentage.values[0], 1)], - "Wärme": [round(filtered_df['waermebedarf-[GWh/a]'].values[0], 1), - round(filtered_df['gesamt-waerme-bestand-thg-[tCO2e]'].values[0], 1), - round(renewable_wärme_percentage.values[0], 1), - round(fossil_wärme_percentage.values[0], 1)], - "Gesamt": [round(gesamt.values[0], 1), - round(filtered_df['gesamt-thg-[tCO2e]'].values[0], 1), - round(gesamt_renewable_percentage.values[0], 1), - round(gesamt_fossil_percentage.values[0], 1)] - }) - - return df_status_quo, percentages - -@st.cache_resource -def pie_status_quo(percentages): - # Format the percentage values - renewable_strom_percentage = percentages[0] - fossil_strom_percentage = percentages[1] - renewable_wärme_percentage = percentages[2] - fossil_wärme_percentage = percentages[3] - gesamt_renewable_percentage = percentages[4] - gesamt_fossil_percentage = percentages[5] - -# Create a DataFrame for pie chart - pie_data = { - "Energieverbrauch": ["erneuerbar", "konventionell"], - "Percentage": [renewable_strom_percentage, fossil_strom_percentage] - } - df_pie_strom = pd.DataFrame(pie_data) - - pie_data = { - "Energieverbrauch": ["erneuerbar", "konventionell"], - "Percentage": [renewable_wärme_percentage, fossil_wärme_percentage] - } - df_pie_wärme = pd.DataFrame(pie_data) - - pie_data = { - "Energieverbrauch": ["erneuerbar", "konventionell"], - "Percentage": [gesamt_renewable_percentage, gesamt_fossil_percentage] - } - df_pie_gesamt = pd.DataFrame(pie_data) - - # Define custom color palette - custom_colors = ['#E57373', '#81C784'] # Red for erneuerbar, Green for konventionell - - # Create pie chart for Strom - pie_chart_strom_status_quo = px.pie( - df_pie_strom, - values="Percentage", - names="Energieverbrauch", - labels={"Percentage"}, - title="Strom", - height=330, - #color="Energieverbrauch", # Use the Energieverbrauch column for color coding - #color_discrete_map=custom_colors # Map colors to specific values - color_discrete_sequence=custom_colors # Apply custom color palette - ) - - # Create pie chart for Wärme - pie_chart_waerme_status_quo = px.pie( - df_pie_wärme, - values="Percentage", - names="Energieverbrauch", - labels={"Percentage"}, - title="Wärme", - height=330, - color_discrete_sequence=custom_colors # Apply custom color palette - ) - - # Create pie chart for Gesamt - pie_chart_gesamt_status_quo = px.pie( - df_pie_gesamt, - values="Percentage", - names="Energieverbrauch", - labels={"Percentage"}, - title="Gesamt", - height=330, - color_discrete_sequence=custom_colors # Apply custom color palette - ) - - return pie_chart_strom_status_quo, pie_chart_waerme_status_quo, pie_chart_gesamt_status_quo - -@st.cache_data -def energie_bestand(gemeinde): - filtered_df_bestand = df_ee_data.loc[df_ee_data['gemeinde'] == gemeinde].copy() - # Round the numeric columns to 1 decimal place - numeric_columns = filtered_df_bestand.select_dtypes(include='number').columns - filtered_df_bestand[numeric_columns] = filtered_df_bestand[numeric_columns].round(1) - - data = [] - - # 
Iterate through the rows of the filtered DataFrame - for index, row in filtered_df_bestand.iterrows(): - # Add row for "Wind" with Strom and Wärme values - data.append({"Energieträger": "Biomasse", "Strom [GWh/a]": row["strombereitstellung-biomasse-[GWh/a]"]}) - data.append({"Energieträger": "Deponiegas", "Strom [GWh/a]": row["strombereitstellung-deponiegas-[GWh/a]"]}) - data.append({"Energieträger": "Grubengas", "Strom [GWh/a]": row["strombereitstellung-grubengas-[GWh/a]"]}) - data.append({"Energieträger": "Klärgas", "Strom [GWh/a]": row["strombereitstellung-klaergas-[GWh/a]"]}) - data.append({"Energieträger": "Solar-Freiflächen", "Strom [GWh/a]": row["strombereitstellung-pv-freiflaechen-[GWh/a]"]}) - data.append({"Energieträger": "Solar-Dachflächen", "Strom [GWh/a]": row["strombereitstellung-pv-dachflaechen-[GWh/a]"]}) - data.append({"Energieträger": "Wasserkraft", "Strom [GWh/a]": row["strombereitstellung-wasserkraft-[GWh/a]"]}) - data.append({"Energieträger": "Wind", "Strom [GWh/a]": row["strombereitstellung-wind-[GWh/a]"]}) - data.append({"Energieträger": "Müllverbrennung", "Strom [GWh/a]": row["strombereitstellung-mva-[GWh/a]"]}) - data.append({"Energieträger": "Steinkohle", "Strom [GWh/a]": row["strombereitstellung-steinkohle-[GWh/a]"]}) - data.append({"Energieträger": "Braunkohle", "Strom [GWh/a]": row["strombereitstellung-braunkohle-[GWh/a]"]}) - data.append({"Energieträger": "Erdgas", "Strom [GWh/a]": row["strombereitstellung-erdgas-[GWh/a]"]}) - data.append({"Energieträger": "Mineralöle", "Strom [GWh/a]": row["strombereitstellung-oel-[GWh/a]"]}) - data.append({"Energieträger": "Sonstige", "Strom [GWh/a]": row["strombereitstellung-sonstige-[GWh/a]"]}) - data.append({"Energieträger": "EE-Gesamt", "Strom [GWh/a]": row["strombereitstellung-erneuerbar-gesamt-[GWh/a]"]}) - data.append({"Energieträger": "Fossil-Gesamt", "Strom [GWh/a]": row["schaetzung-strombereitstellung-konv-gesamt-[GWh/a]"]}) - - # Create a new DataFrame using the data list - energiemix_bestand = pd.DataFrame(data) - - return energiemix_bestand - -@st.cache_resource -def pie_charts_bestand(gemeinde): - energiemix_bestand = energie_bestand(gemeinde) - - # Exclude "EE-Gesamt" and "Fossil-Gesamt" and values that are zero - filtered_df = energiemix_bestand[~energiemix_bestand['Energieträger'].isin(['EE-Gesamt', 'Fossil-Gesamt'])] - filtered_df = filtered_df[filtered_df['Strom [GWh/a]'] != 0] - - # Calculate total for percentage calculation - total = filtered_df['Strom [GWh/a]'].sum() - filtered_df['Percentage'] = (filtered_df['Strom [GWh/a]'] / total) * 100 - - # Create first pie chart with specific color spectrum for each category - pie_bestand_all = px.pie(filtered_df, values='Percentage', names='Energieträger', title='Gesamt', color='Energieträger', color_discrete_sequence=px.colors.sequential.RdBu) - pie_bestand_all.update_traces(texttemplate='%{value:.1f}%', textposition='inside') - - # Filter for second pie chart - second_chart_energies = ['Biomasse', 'Deponiegas', 'Grubengas', 'Klärgas', 'Solar-Freiflächen', 'Solar-Dachflächen', 'Wasserkraft', 'Wind'] - filtered_df_second = filtered_df[filtered_df['Energieträger'].isin(second_chart_energies)] - total_second = filtered_df_second['Strom [GWh/a]'].sum() - filtered_df_second['Percentage'] = (filtered_df_second['Strom [GWh/a]'] / total_second) * 100 - - # Create second pie chart with darker green color spectrum - pie_bestand_ee = px.pie(filtered_df_second, values='Percentage', names='Energieträger', title='Erneuerbar', 
color_discrete_sequence=px.colors.sequential.Greens_r) - pie_bestand_ee.update_traces(texttemplate='%{value:.1f}%', textposition='inside') - - # Filter for third pie chart - third_chart_energies = ['Müllverbrennung', 'Steinkohle', 'Braunkohle', 'Erdgas', 'Mineralöle', 'Sonstige'] - filtered_df_third = filtered_df[filtered_df['Energieträger'].isin(third_chart_energies)] - total_third = filtered_df_third['Strom [GWh/a]'].sum() - filtered_df_third['Percentage'] = (filtered_df_third['Strom [GWh/a]'] / total_third) * 100 - - # Create third pie chart with darker red color spectrum - pie_bestand_ko = px.pie(filtered_df_third, values='Percentage', names='Energieträger', title='Konventionell', color_discrete_sequence=px.colors.sequential.Reds_r) - pie_bestand_ko.update_traces(texttemplate='%{value:.1f}%', textposition='inside') - - return pie_bestand_all, pie_bestand_ee, pie_bestand_ko - -#gemeinde_list = [{"label": gemeinde, "value": gemeinde} for gemeinde in df_ee_data['gemeinde'].unique()] -@st.cache_data -def ee_potenziale(gemeinde): - - #selected_gemeinde = input("Enter the gemeinde to filter by: ") - filtered_df_ee_data = df_ee_data.loc[df_ee_data['gemeinde'] == gemeinde].copy() - - # Round the numeric columns to 1 decimal place - numeric_columns = filtered_df_ee_data.select_dtypes(include='number').columns - filtered_df_ee_data[numeric_columns] = filtered_df_ee_data[numeric_columns].round(1) - - # Initialize an empty list to hold the data for the new DataFrame - data = [] - - # Iterate through the rows of the filtered DataFrame - for index, row in filtered_df_ee_data.iterrows(): - # Add row for "Wind" with Strom and Wärme values - data.append({"Energieträger": "Wind", "Strom [GWh/a]": row["wind-flaechenpotenzial-[GWh/a]"], "Wärme [GWh/a]": 0}) - data.append({"Energieträger": "Solar-Dachflächen", "Strom [GWh/a]": row["pv-dach-stromertragspotenzial-[GWh/a]"], "Wärme [GWh/a]": row["solarthermie-dach-warmwasser-[GWh/a]"]}) - data.append({"Energieträger": "Solar-Freiflächen", "Strom [GWh/a]": row["pv-freiflaeche-stromertragspotenzial-[GWh/a]"], "Wärme [GWh/a]": 0}) - data.append({"Energieträger": "Wasser", "Strom [GWh/a]": row["wasserkraftpotenzial-[GWh/a]"], "Wärme [GWh/a]": 0}) - data.append({"Energieträger": "Biomasse", "Strom [GWh/a]": row["biomasse-potenzial-strom-[GWh/a]"], "Wärme [GWh/a]": row["biomasse-potenzial-waerme-[GWh/a]"]}) - data.append({"Energieträger": "Fernwärme", "Strom [GWh/a]": 0, "Wärme [GWh/a]": row["ferwaerme-potenzial-2030-[GWh/a]"]}) - data.append({"Energieträger": "Industrieabwärme", "Strom [GWh/a]": 0, "Wärme [GWh/a]": row["industrieabwaerme-einspeisung-[GWh/a]"]}) - data.append({"Energieträger": "Grubenwasser", "Strom [GWh/a]": 0, "Wärme [GWh/a]": row["warmes-grubenwasser-[GWh/a]"]}) - data.append({"Energieträger": "Geothermie", "Strom [GWh/a]": 0, "Wärme [GWh/a]": row["geothermie-potenzial-[GWh/a]"]}) - - # Create a new DataFrame using the data list - new_df = pd.DataFrame(data) - - # Calculate the sum per column "Strom" and "Wärme" - sum_strom = new_df["Strom [GWh/a]"].sum() - sum_wärme = new_df["Wärme [GWh/a]"].sum() - - # Round the sum values to 1 digit after the decimal point - sum_strom = round(sum_strom, 1) - sum_wärme = round(sum_wärme, 1) - - # Add the row with sum values to the DataFrame - new_df.loc[len(new_df)] = ["Gesamt in GWh/a", sum_strom, sum_wärme] - - # Add the row with sum values to the DataFrame - data.append({"Energieträger": "Gesamtpotential EE in GWh/a", "Strom [GWh/a]": sum_strom, "Wärme [GWh/a]": sum_wärme}) - # Get the sum of renewable 
energy for "Strom" and "Wärme" from table_status_quo function - df_status_quo, _ = table_status_quo(gemeinde) - # Recalculate the sum of renewable energy for "Strom" and "Wärme" using the provided code snippets - sum_renewable_strom = sum([ - filtered_df_ee_data["strombereitstellung-biomasse-[GWh/a]"], - filtered_df_ee_data["strombereitstellung-deponiegas-[GWh/a]"], - filtered_df_ee_data["strombereitstellung-grubengas-[GWh/a]"], - filtered_df_ee_data["strombereitstellung-klaergas-[GWh/a]"], - filtered_df_ee_data["strombereitstellung-pv-freiflaechen-[GWh/a]"], - filtered_df_ee_data["strombereitstellung-pv-dachflaechen-[GWh/a]"], - filtered_df_ee_data["strombereitstellung-wasserkraft-[GWh/a]"], - filtered_df_ee_data["strombereitstellung-wind-[GWh/a]"], - filtered_df_ee_data["strombereitstellung-mva-[GWh/a]"] - ]) - sum_renewable_strom = float(sum_renewable_strom.values[0]) - - sum_renewable_wärme = sum([ - filtered_df_ee_data["waermebereitstellung-biomasse-[GWh/a]"], - filtered_df_ee_data["waermebereitstellung-deponiegas-[GWh/a]"], - filtered_df_ee_data["waermebereitstellung-grubengas-[GWh/a]"], - filtered_df_ee_data["waermebereitstellung-klaergas-[GWh/a]"], - filtered_df_ee_data["waermebereitstellung-solarthermie-[GWh/a]"], - filtered_df_ee_data["waermebereitstellung-geothermie-[GWh/a]"], - filtered_df_ee_data["waermebereitstellung-grubenwasser-[GWh/a]"], - filtered_df_ee_data["waermebereitstellung-abwaerme-industrie-[GWh/a]"] - ]) - sum_renewable_wärme = float(sum_renewable_wärme.values[0]) - # Add row with the values sum_renewable_strom for strom and sum_renewable_wärme for wärme to the dataframe data - data.append({"Energieträger": "Bestand EE in GWh/a", "Strom [GWh/a]": sum_renewable_strom, "Wärme [GWh/a]": sum_renewable_wärme}) - # Add row for "EE Bestand+Potential" - data.append({"Energieträger": "EE Bestand+Potential", "Strom [GWh/a]": sum_renewable_strom+sum_strom, "Wärme [GWh/a]": sum_renewable_wärme+sum_wärme}) - # Add row for "Verbrauch/Bedarf" - data.append({"Energieträger": "Verbrauch/Bedarf in GWh/a", "Strom [GWh/a]": row["stromverbrauch-[GWh/a]"], "Wärme [GWh/a]": row["waermebedarf-[GWh/a]"]}) - # Calculate the percentage values for "Deckungsanteil" - deckungsanteil_strom = round(((sum_renewable_strom + sum_strom) / row["stromverbrauch-[GWh/a]"]) * 100, 1) - deckungsanteil_wärme = round(((sum_renewable_wärme + sum_wärme) / row["waermebedarf-[GWh/a]"]) * 100, 1) - - # Add row for "Deckungsanteil" with percentage values - data.append({"Energieträger": "Deckungsanteil EE in %", "Strom [GWh/a]": deckungsanteil_strom, "Wärme [GWh/a]": deckungsanteil_wärme}) - - # Create a new DataFrame using the data list - new_df = pd.DataFrame(data) - - # Print the new DataFrame - # print(new_df) - - return new_df -@st.cache_data -def ee_potential_strom_waerme(gemeinde): - filtered_df_ee_data = df_ee_data.loc[df_ee_data['gemeinde'] == gemeinde].copy() - # Round the numeric columns to 1 decimal place - numeric_columns = filtered_df_ee_data.select_dtypes(include='number').columns - filtered_df_ee_data[numeric_columns] = filtered_df_ee_data[numeric_columns].round(1) - # Initialize an empty list to hold the data for the new DataFrame - - data_strom = [] - # Iterate through the rows of the filtered DataFrame - for index, row in filtered_df_ee_data.iterrows(): - # Add row for "Wind" with Strom and Wärme values - data_strom.append({"Energieträger": "Wind", "Bestand EE [GWh/a]": row["strombereitstellung-wind-[GWh/a]"], "Potential EE [GWh/a]": row["wind-flaechenpotenzial-[GWh/a]"], "Gesamt EE [GWh/a]": 
row["strombereitstellung-wind-[GWh/a]"] +row["wind-flaechenpotenzial-[GWh/a]"], "Steigerungspotential %": round(((row["strombereitstellung-wind-[GWh/a]"] +row["wind-flaechenpotenzial-[GWh/a]"])/row["strombereitstellung-wind-[GWh/a]"]-1)*100,1) if row["strombereitstellung-wind-[GWh/a]"] != 0 else 0}) - data_strom.append({"Energieträger": "Solar-Dachflächen", "Bestand EE [GWh/a]": row["strombereitstellung-pv-dachflaechen-[GWh/a]"], "Potential EE [GWh/a]": row["pv-dach-stromertragspotenzial-[GWh/a]"], "Gesamt EE [GWh/a]": row["strombereitstellung-pv-dachflaechen-[GWh/a]"] +row["pv-dach-stromertragspotenzial-[GWh/a]"], "Steigerungspotential %": round(((row["strombereitstellung-pv-dachflaechen-[GWh/a]"] +row["pv-dach-stromertragspotenzial-[GWh/a]"])/row["strombereitstellung-pv-dachflaechen-[GWh/a]"]-1)*100,1) if row["strombereitstellung-pv-dachflaechen-[GWh/a]"] != 0 else 0}) - data_strom.append({"Energieträger": "Solar-Freiflächen", "Bestand EE [GWh/a]": row["strombereitstellung-pv-freiflaechen-[GWh/a]"], "Potential EE [GWh/a]": row["pv-freiflaeche-stromertragspotenzial-[GWh/a]"], "Gesamt EE [GWh/a]": row["strombereitstellung-pv-freiflaechen-[GWh/a]"] +row["pv-freiflaeche-stromertragspotenzial-[GWh/a]"], "Steigerungspotential %": round(((row["strombereitstellung-pv-freiflaechen-[GWh/a]"] +row["pv-freiflaeche-stromertragspotenzial-[GWh/a]"])/row["strombereitstellung-pv-freiflaechen-[GWh/a]"]-1)*100,1) if row["strombereitstellung-pv-freiflaechen-[GWh/a]"] != 0 else 0}) - data_strom.append({"Energieträger": "Wasser", "Bestand EE [GWh/a]": row["strombereitstellung-wasserkraft-[GWh/a]"], "Potential EE [GWh/a]": row["wasserkraftpotenzial-[GWh/a]"], "Gesamt EE [GWh/a]": row["strombereitstellung-wasserkraft-[GWh/a]"] +row["wasserkraftpotenzial-[GWh/a]"], "Steigerungspotential %": round(((row["strombereitstellung-wasserkraft-[GWh/a]"] +row["wasserkraftpotenzial-[GWh/a]"])/row["strombereitstellung-wasserkraft-[GWh/a]"]-1)*100,1) if row["strombereitstellung-wasserkraft-[GWh/a]"] != 0 else 0}) - data_strom.append({"Energieträger": "Biomasse", "Bestand EE [GWh/a]": row["waermebereitstellung-biomasse-[GWh/a]"], "Potential EE [GWh/a]": row["biomasse-potenzial-strom-[GWh/a]"], "Gesamt EE [GWh/a]": row["strombereitstellung-biomasse-[GWh/a]"] +row["biomasse-potenzial-strom-[GWh/a]"], "Steigerungspotential %": round(((row["strombereitstellung-biomasse-[GWh/a]"] +row["biomasse-potenzial-strom-[GWh/a]"])/row["strombereitstellung-biomasse-[GWh/a]"]-1)*100,1) if row["strombereitstellung-biomasse-[GWh/a]"] != 0 else 0}) - data_strom.append({"Energieträger": "Deponiegas", "Bestand EE [GWh/a]": row["strombereitstellung-deponiegas-[GWh/a]"], "Potential EE [GWh/a]": 0, "Gesamt EE [GWh/a]": row["strombereitstellung-deponiegas-[GWh/a]"] +row["biomasse-potenzial-strom-[GWh/a]"], "Steigerungspotential %": 0}) - data_strom.append({"Energieträger": "Grubengas", "Bestand EE [GWh/a]": row["strombereitstellung-grubengas-[GWh/a]"], "Potential EE [GWh/a]": 0, "Gesamt EE [GWh/a]": row["strombereitstellung-grubengas-[GWh/a]"] +row["biomasse-potenzial-strom-[GWh/a]"], "Steigerungspotential %": 0}) - data_strom.append({"Energieträger": "Klärgas", "Bestand EE [GWh/a]": row["strombereitstellung-klaergas-[GWh/a]"], "Potential EE [GWh/a]": 0, "Gesamt EE [GWh/a]": row["strombereitstellung-klaergas-[GWh/a]"] +row["biomasse-potenzial-strom-[GWh/a]"], "Steigerungspotential %": 0}) - total_bestand = sum([row["Bestand EE [GWh/a]"] for row in data_strom]) - total_potential = sum([row["Potential EE [GWh/a]"] for row in data_strom]) - total_gesamt 
= sum([row["Gesamt EE [GWh/a]"] for row in data_strom]) - data_strom.append({"Energieträger": "Gesamt", "Bestand EE [GWh/a]": total_bestand, "Potential EE [GWh/a]": total_potential, "Gesamt EE [GWh/a]": total_gesamt, "Steigerungspotential %": round(((total_potential+total_bestand)/total_bestand-1)*100,1) if total_bestand != 0 else 0}) - data_strom.append({"Energieträger": "Bedarf", "Bestand EE [GWh/a]": row["stromverbrauch-[GWh/a]"], "Potential EE [GWh/a]": row["stromverbrauch-[GWh/a]"], "Gesamt EE [GWh/a]": row["stromverbrauch-[GWh/a]"], "Steigerungspotential %": 0}) - data_strom.append({"Energieträger": "Deckungsrate %", "Bestand EE [GWh/a]": round((total_bestand/row["stromverbrauch-[GWh/a]"])*100,1), "Potential EE [GWh/a]": round((total_potential/row["stromverbrauch-[GWh/a]"])*100,1), "Gesamt EE [GWh/a]": round((total_gesamt/row["stromverbrauch-[GWh/a]"])*100,1), "Steigerungspotential %": 0}) - - df_energy_strom = pd.DataFrame(data_strom) - - data_waerme = [] - for index, row in filtered_df_ee_data.iterrows(): - # Add row for "Wind" with Strom and Wärme values - data_waerme.append({"Energieträger": "Solarthermie", "Bestand EE [GWh/a]": row["waermebereitstellung-solarthermie-[GWh/a]"], "Potential EE [GWh/a]": row["solarthermie-dach-warmwasser-[GWh/a]"], "Gesamt EE [GWh/a]": row["waermebereitstellung-solarthermie-[GWh/a]"] +row["solarthermie-dach-warmwasser-[GWh/a]"], "Steigerungspotential %": round(((row["waermebereitstellung-solarthermie-[GWh/a]"] +row["solarthermie-dach-warmwasser-[GWh/a]"])/row["waermebereitstellung-solarthermie-[GWh/a]"]-1)*100,1) if row["waermebereitstellung-solarthermie-[GWh/a]"] != 0 else 0}) - data_waerme.append({"Energieträger": "Geothermie", "Bestand EE [GWh/a]": row["waermebereitstellung-geothermie-[GWh/a]"], "Potential EE [GWh/a]": row["geothermie-potenzial-[GWh/a]"], "Gesamt EE [GWh/a]": row["waermebereitstellung-geothermie-[GWh/a]"] + row["geothermie-potenzial-[GWh/a]"], "Steigerungspotential %": round(((row["waermebereitstellung-geothermie-[GWh/a]"] + row["geothermie-potenzial-[GWh/a]"])/row["waermebereitstellung-geothermie-[GWh/a]"]-1)*100,1) if row["waermebereitstellung-geothermie-[GWh/a]"] != 0 else 0}) - data_waerme.append({"Energieträger": "Biomasse", "Bestand EE [GWh/a]": row["waermebereitstellung-biomasse-[GWh/a]"], "Potential EE [GWh/a]": row["biomasse-potenzial-waerme-[GWh/a]"], "Gesamt EE [GWh/a]": row["waermebereitstellung-biomasse-[GWh/a]"] + row["biomasse-potenzial-waerme-[GWh/a]"], "Steigerungspotential %": round(((row["waermebereitstellung-biomasse-[GWh/a]"] + row["biomasse-potenzial-waerme-[GWh/a]"])/row["waermebereitstellung-biomasse-[GWh/a]"]-1)*100,1) if row["waermebereitstellung-biomasse-[GWh/a]"] != 0 else 0}) - data_waerme.append({"Energieträger": "Industrieabwärme", "Bestand EE [GWh/a]": row["waermebereitstellung-abwaerme-industrie-[GWh/a]"], "Potential EE [GWh/a]": row["industrieabwaerme-einspeisung-[GWh/a]"], "Gesamt EE [GWh/a]": row["waermebereitstellung-abwaerme-industrie-[GWh/a]"] + row["industrieabwaerme-einspeisung-[GWh/a]"], "Steigerungspotential %": round(((row["waermebereitstellung-abwaerme-industrie-[GWh/a]"] + row["industrieabwaerme-einspeisung-[GWh/a]"])/row["waermebereitstellung-abwaerme-industrie-[GWh/a]"]-1)*100,1) if row["waermebereitstellung-abwaerme-industrie-[GWh/a]"] != 0 else 0}) - data_waerme.append({"Energieträger": "Grubenwasser", "Bestand EE [GWh/a]": row["waermebereitstellung-grubenwasser-[GWh/a]"], "Potential EE [GWh/a]": row["warmes-grubenwasser-[GWh/a]"], "Gesamt EE [GWh/a]": 
row["waermebereitstellung-grubenwasser-[GWh/a]"] + row["warmes-grubenwasser-[GWh/a]"], "Steigerungspotential %": round(((row["waermebereitstellung-grubenwasser-[GWh/a]"] + row["warmes-grubenwasser-[GWh/a]"])/row["waermebereitstellung-grubenwasser-[GWh/a]"]-1)*100,1) if row["waermebereitstellung-grubenwasser-[GWh/a]"] != 0 else 0}) - data_waerme.append({"Energieträger": "Fernwärme", "Bestand EE [GWh/a]": 0, "Potential EE [GWh/a]": row["ferwaerme-potenzial-2030-[GWh/a]"], "Gesamt EE [GWh/a]": 0+row["ferwaerme-potenzial-2030-[GWh/a]"], "Steigerungspotential %": float('inf') if row["ferwaerme-potenzial-2030-[GWh/a]"] != 0 else 1}) - data_waerme.append({"Energieträger": "Deponiegas", "Bestand EE [GWh/a]": row["waermebereitstellung-grubengas-[GWh/a]"], "Potential EE [GWh/a]": 0, "Gesamt EE [GWh/a]": row["waermebereitstellung-grubengas-[GWh/a]"] +0, "Steigerungspotential %": 0}) - data_waerme.append({"Energieträger": "Grubengas", "Bestand EE [GWh/a]": row["waermebereitstellung-grubengas-[GWh/a]"], "Potential EE [GWh/a]": 0, "Gesamt EE [GWh/a]": row["waermebereitstellung-grubengas-[GWh/a]"] +0, "Steigerungspotential %": 0}) - data_waerme.append({"Energieträger": "Klärgas", "Bestand EE [GWh/a]": row["waermebereitstellung-klaergas-[GWh/a]"], "Potential EE [GWh/a]": 0, "Gesamt EE [GWh/a]": row["waermebereitstellung-klaergas-[GWh/a]"]+0, "Steigerungspotential %": 0}) - total_bestand = sum([row["Bestand EE [GWh/a]"] for row in data_waerme]) - total_potential = sum([row["Potential EE [GWh/a]"] for row in data_waerme]) - total_gesamt = sum([row["Gesamt EE [GWh/a]"] for row in data_waerme]) - data_waerme.append({"Energieträger": "Gesamt", "Bestand EE [GWh/a]": total_bestand, "Potential EE [GWh/a]": total_potential, "Gesamt EE [GWh/a]": total_gesamt, "Steigerungspotential %": round(((total_potential+total_bestand)/total_bestand-1)*100,1) if total_bestand != 0 else 0}) - data_waerme.append({"Energieträger": "Bedarf", "Bestand EE [GWh/a]": row["waermebedarf-[GWh/a]"], "Potential EE [GWh/a]": row["waermebedarf-[GWh/a]"], "Gesamt EE [GWh/a]": row["waermebedarf-[GWh/a]"], "Steigerungspotential %": 0}) - data_waerme.append({"Energieträger": "Deckungsrate %", "Bestand EE [GWh/a]": round((total_bestand/row["waermebedarf-[GWh/a]"])*100,1), "Potential EE [GWh/a]": round((total_potential/row["waermebedarf-[GWh/a]"])*100,1), "Gesamt EE [GWh/a]": round((total_gesamt/row["waermebedarf-[GWh/a]"])*100,1), "Steigerungspotential %": 0}) - - df_energy_wärme = pd.DataFrame(data_waerme) - - return df_energy_strom, df_energy_wärme -@st.cache_resource -def plot_stacked_strom_wärme(df_energy_strom, df_energy_wärme): - - # Exclude 'Gesamt', 'Bedarf', 'Deckungsrate %' from the dataframes - df_energy_strom = df_energy_strom[~df_energy_strom['Energieträger'].isin(['Gesamt', 'Bedarf', 'Deckungsrate %'])] - df_energy_wärme = df_energy_wärme[~df_energy_wärme['Energieträger'].isin(['Gesamt', 'Bedarf', 'Deckungsrate %'])] - - # Exclude categories when both Bestand EE [GWh/a] and Potential EE [GWh/a] values are zero - df_energy_strom = df_energy_strom[~((df_energy_strom['Bestand EE [GWh/a]'] == 0) & (df_energy_strom['Potential EE [GWh/a]'] == 0))] - df_energy_wärme = df_energy_wärme[~((df_energy_wärme['Bestand EE [GWh/a]'] == 0) & (df_energy_wärme['Potential EE [GWh/a]'] == 0))] - - # Sort the dataframes from high to low - df_energy_strom = df_energy_strom.sort_values(by=['Bestand EE [GWh/a]', 'Potential EE [GWh/a]'], ascending=False) - df_energy_wärme = df_energy_wärme.sort_values(by=['Bestand EE [GWh/a]', 'Potential EE [GWh/a]'], 
ascending=False) - - # Plotting stacked bar chart for df_energy_strom using plotly express - stacked_strom_ee = px.bar(df_energy_strom, x='Energieträger', y=['Bestand EE [GWh/a]', 'Potential EE [GWh/a]'], color_discrete_sequence=['darkgreen', 'lightgreen'], labels={'value':'EE [GWh/a]'}, title='Strom') - stacked_strom_ee.update_layout(xaxis={'categoryorder':'total descending'}) # Order bars from left (highest value) to right (lowest value) - stacked_strom_ee.update_traces(hovertemplate='%{y}') # Update hover label to show only the value - - # Plotting stacked bar chart for df_energy_wärme using plotly express - stacked_wärme_ee = px.bar(df_energy_wärme, x='Energieträger', y=['Bestand EE [GWh/a]', 'Potential EE [GWh/a]'], color_discrete_sequence=['darkgreen', 'lightgreen'], labels={'value':'EE [GWh/a]'}, title='Wärme') - stacked_wärme_ee.update_layout(xaxis={'categoryorder':'total descending'}) # Order bars from left (highest value) to right (lowest value) - stacked_wärme_ee.update_traces(hovertemplate='%{y}') # Update hover label to show only the value - - return stacked_strom_ee, stacked_wärme_ee - -@st.cache_resource -def thg_energie(gemeinde): - # Mapping of energy source names to desired format - energy_source_mapping = { - "strombereitstellung-biomasse-[tCO2e]": "Biomasse", - "strombereitstellung-deponiegas-[tCO2e]": "Deponiegas", - "strombereitstellung-grubengas-[tCO2e]": "Grubengas", - "strombereitstellung-klaergas-[tCO2e]": "Klärgas", - "strombereitstellung-pv-freiflaechen-[tCO2e]": "PV-Anlagen (Freiflächen)", - "strombereitstellung-pv-dachflaechen-[tCO2e]": "PV-Anlagen (Dachflächen)", - "strombereitstellung-wasserkraft-[tCO2e]": "Wasserkraft", - "strombereitstellung-wind-[tCO2e]": "Windenergie", - "strombereitstellung-mva-[tCO2e]": "MVA", - "strombereitstellung-steinkohle-[tCO2e]": "Steinkohle", - "strombereitstellung-braunkohle-[tCO2e]": "Braunkohle", - "strombereitstellung-erdgas-[tCO2e]": "Erdgas", - "strombereitstellung-oel-[tCO2e]": "Heizöl", - "strombereitstellung-sonstige-[tCO2e]": "Sonstige erneuerbare Energieträger", - - "waermebereitstellung-biomasse-[tCO2e]": "Biomasse", - "waermebereitstellung-deponiegas-[tCO2e]": "Deponiegas", - "waermebereitstellung-grubengas-[tCO2e]": "Grubengas", - "waermebereitstellung-klaergas-[tCO2e]": "Klärgas", - "waermebereitstellung-solarthermie-[tCO2e]": "Solarthermie", - "waermebereitstellung-geothermie-[tCO2e]": "Tiefengeothermie", - "waermebereitstellung-grubenwasser-[tCO2e]": "Grubenwasser", - "waermebereitstellung-abwaerme-industrie-[tCO2e]": "Abwärme (Industrie)", - "waermebereitstellung-mva-[tCO2e]": "MVA", - "waermebereitstellung-braunkohle-[tCO2e]": "Braunkohle", - "waermebereitstellung-steinkohle-[tCO2e]": "Steinkohle", - "waermebereitstellung-erdgas-[tCO2e]": "Erdgas", - "waermebereitstellung-oel-[tCO2e]": "Heizöl", - "waermebereitstellung-sonstiges-[tCO2e]": "Sonstige erneuerbare Energieträger", - "waermebereitstellung-kwk-[tCO2e]": "KWK" - } - # Strom table and plot - filtered_df = df_ee_data.loc[df_ee_data['gemeinde'] == gemeinde].copy() - strom_columns = ["strombereitstellung-biomasse-[tCO2e]", "strombereitstellung-deponiegas-[tCO2e]", "strombereitstellung-grubengas-[tCO2e]", "strombereitstellung-klaergas-[tCO2e]", "strombereitstellung-pv-freiflaechen-[tCO2e]", "strombereitstellung-pv-dachflaechen-[tCO2e]", "strombereitstellung-wasserkraft-[tCO2e]", "strombereitstellung-wind-[tCO2e]", "strombereitstellung-mva-[tCO2e]", "strombereitstellung-steinkohle-[tCO2e]", "strombereitstellung-braunkohle-[tCO2e]", 
"strombereitstellung-erdgas-[tCO2e]", "strombereitstellung-oel-[tCO2e]", "strombereitstellung-sonstige-[tCO2e]"] - strom_data = [] - for column in strom_columns: - strom_data.append({"Energieträger": energy_source_mapping[column], "[t CO2e]": filtered_df[column].values[0]}) - strom_data.append({"Energieträger": "Gesamt", "[t CO2e]": sum([row["[t CO2e]"] for row in strom_data])}) - df_strom = pd.DataFrame(strom_data) - df_strom_thg = df_strom[df_strom["[t CO2e]"] != 0].sort_values(by="[t CO2e]", ascending=False) - strom_plot_thg = px.bar(df_strom_thg[df_strom_thg["Energieträger"] != "Gesamt"], x="Energieträger", y="[t CO2e]", title="Strom", labels={"[t CO2e]": "t CO2e"}) - strom_plot_thg.update_layout(xaxis={'categoryorder':'total descending'}) # Order bars from left (highest value) to right (lowest value) - - # Wärme table and plot - waerme_columns = ["waermebereitstellung-biomasse-[tCO2e]", "waermebereitstellung-deponiegas-[tCO2e]", "waermebereitstellung-grubengas-[tCO2e]", "waermebereitstellung-klaergas-[tCO2e]", "waermebereitstellung-solarthermie-[tCO2e]", "waermebereitstellung-geothermie-[tCO2e]", "waermebereitstellung-grubenwasser-[tCO2e]", "waermebereitstellung-abwaerme-industrie-[tCO2e]", "waermebereitstellung-mva-[tCO2e]", "waermebereitstellung-braunkohle-[tCO2e]", "waermebereitstellung-steinkohle-[tCO2e]", "waermebereitstellung-erdgas-[tCO2e]", "waermebereitstellung-oel-[tCO2e]", "waermebereitstellung-sonstiges-[tCO2e]", "waermebereitstellung-kwk-[tCO2e]"] - waerme_data = [] - for column in waerme_columns: - waerme_data.append({"Energieträger": energy_source_mapping[column], "[t CO2e]": filtered_df[column].values[0]}) - waerme_data.append({"Energieträger": "Gesamt", "[t CO2e]": sum([row["[t CO2e]"] for row in waerme_data])}) - df_waerme = pd.DataFrame(waerme_data) - df_waerme_thg = df_waerme[df_waerme["[t CO2e]"] != 0].sort_values(by="[t CO2e]", ascending=False) - waerme_plot_thg = px.bar(df_waerme_thg[df_waerme_thg["Energieträger"] != "Gesamt"], x="Energieträger", y="[t CO2e]", title="Wärme", labels={"[t CO2e]": "t CO2e"}) - waerme_plot_thg.update_layout(xaxis={'categoryorder':'total descending'}) # Order bars from left (highest value) to right (lowest value) - - return df_strom_thg, strom_plot_thg, df_waerme_thg, waerme_plot_thg - - -def plot_stacked_bar(df, column_name): - fig = px.bar( - df, - x="Energieträger", - y=column_name, - title=f"{column_name}verbrauch in GWh/a", - labels={"value": f"{column_name}verbrauch (EE)"}, - barmode="stack", - height=400, - ) - # Update layout labels and title - fig.update_layout( - xaxis_title="EE-Potentiale & Bedarf", # Update x-axis label - yaxis_title=f"{column_name}", # Update y-axis label - title_text=f"Potentiale erneuerbarer Energien vs. Bedarf", # Update title - ) - # Update hover label template - fig.update_traces( - hovertemplate="%{x}
    %{y}", # Display category and value only in hover label - ) - - return fig - -# Separate functions for generating Strom and Wärme plots -def strom_plot(gemeinde): - filtered_df_strom, _, _ = ee_potenziale(gemeinde) - filtered_df_strom = filtered_df_strom[ - (filtered_df_strom["Energieträger"] != "Verbrauch/Bedarf") - & (filtered_df_strom["Strom [GWh/a]"] != 0) - & (filtered_df_strom["Energieträger"] != "Deckungsanteil EE in %") # Exclude this value - ] - stacked_bar_plot_strom = plot_stacked_bar(filtered_df_strom, "Strom [GWh/a]") - return stacked_bar_plot_strom - -def wärme_plot(gemeinde): - filtered_df_wärme, _, _ = ee_potenziale(gemeinde) - filtered_df_wärme = filtered_df_wärme[ - (filtered_df_wärme["Energieträger"] != "Verbrauch/Bedarf") - & (filtered_df_wärme["Wärme [GWh/a]"] != 0) - & (filtered_df_wärme["Energieträger"] != "Deckungsanteil EE in %") # Exclude this value - ] - stacked_bar_plot_wärme = plot_stacked_bar(filtered_df_wärme, "Wärme [GWh/a]") - return stacked_bar_plot_wärme - -def energiebestand_plot(gemeinde): - - filtered_df_bestand = energie_bestand(gemeinde) - filtered_df_bestand = filtered_df_bestand[ - (filtered_df_bestand["Strom [GWh/a]"] != 0) - ] - stacked_bar_plot_bestand = plot_stacked_bar(filtered_df_bestand, "Strom [GWh/a]") - # Create the plot with custom settings - fig = px.bar( - filtered_df_bestand, - x="Energieträger", - y="Strom [GWh/a]", - title="Aktueller Energiemix der Stromerzeugung", - labels={"value": "Energiebestandverbrauch (EE)"}, - barmode="stack", - height=400, - ) - # Update layout labels and title - fig.update_layout( - xaxis_title="Energieträger", # Update x-axis label - yaxis_title="Strom [GWh/a]", # Update y-axis label - title_text="Aktueller Energiemix der Stromerzeugung", # Update title - ) - # Update hover label template - fig.update_traces( - hovertemplate="%{x}
    %{y}", # Display category and value only in hover label - ) - # Dynamically set Energieträger labels - fig.update_xaxes(categoryorder='total ascending') # Order by total - energieträger_labels = filtered_df_bestand["Energieträger"].tolist() - fig.update_xaxes(ticktext=energieträger_labels, tickvals=filtered_df_bestand.index) - return fig - -def energiebestand_pie(gemeinde): - - filtered_df_bestand = energie_bestand(gemeinde) - filtered_df_bestand = filtered_df_bestand[ - (filtered_df_bestand["Strom [GWh/a]"] != 0) - ] - - # Create the pie chart - pie_chart_bestand = px.pie( - filtered_df_bestand, - values="Strom [GWh/a]", - names="Energieträger", - title="Aktueller Energiemix der Stromerzeugung", - labels={"value": "Energiebestandverbrauch (EE)"}, - height=400, - ) - return pie_chart_bestand - -# Function to plot the heat density map -def plot_heat_density_map(gemeinde): - # Define your database connection URL - user = "postgres" - password = os.getenv('password') - host = os.getenv('host') - port = "5432" - database = "postgres" - - database_url = f"postgresql://{user}:{password}@{host}:{port}/{database}" - - # Define a function to style the GeoJson based on the RW_WW_WBED_normalized attribute - def style_function(feature): - wbed = feature['properties'].get('RW_WW_WBED', 0) # 0 is the default value - if wbed <= 200000: - return {'color': '#ffe6e6'} # rose color - elif wbed <= 500000: - return {'color': '#ff8080'} - elif wbed <= 1000000: - return {'color': '#ff3333'} - elif wbed <= 1500000: - return {'color': '#cc0000'} - else: - return {'color': '#800000'} # dark red color - - # Connect to the database and retrieve the data for the selected city as a GeoDataFrame - engine = create_engine(database_url) - sql = f"SELECT * FROM heat_density_data WHERE gn = '{gemeinde}'" - gdf = gpd.GeoDataFrame.from_postgis(sql, con=engine, geom_col='geometry') - - # Convert the CRS to WGS84 (epsg:4326) - gdf = gdf.to_crs("EPSG:4326") - - # Calculate the center of the GeoDataFrame - center = [gdf.geometry.centroid.y.mean(), gdf.geometry.centroid.x.mean()] - - # Create a map centered around the center of your data - m = folium.Map(location=center, zoom_start=13) - - # Add the data to the map with the style function - geojson = folium.GeoJson( - gdf.to_json(), - style_function=style_function, - tooltip=folium.GeoJsonTooltip( - fields=['RW_WW_WBED'], - aliases=['Wärmedichte:'], - localize=True - ) - ) - geojson.add_to(m) - - # Define the legend html - legend_html = """ -
    -  <b>Wärmebedarf pro Straßenzug in kWh pro Jahr</b><br>
    -   > 0 - 200,000: <i style="background:#ffe6e6">&nbsp;&nbsp;&nbsp;&nbsp;</i><br>
    -   > 200,000 - 500,000: <i style="background:#ff8080">&nbsp;&nbsp;&nbsp;&nbsp;</i><br>
    -   > 500,000 - 1,000,000: <i style="background:#ff3333">&nbsp;&nbsp;&nbsp;&nbsp;</i><br>
    -   > 1,000,000 - 1,500,000: <i style="background:#cc0000">&nbsp;&nbsp;&nbsp;&nbsp;</i><br>
    -   > 1,500,000: <i style="background:#800000">&nbsp;&nbsp;&nbsp;&nbsp;</i>
    - """ - # Add the legend to the map - legend = folium.Marker( - [center[0], center[1]], - icon=folium.DivIcon( - icon_size=(150,36), - icon_anchor=(0,200), - html='
    %s
    ' % legend_html, - ) - ) - m.add_child(legend) - - # Render the Folium map using st_folium - #st_folium(m, width=800, returned_objects=[]) - - #return None # Since no data is being returned - return m # Return the folium.Map object - -#@st.cache_resource(experimental_allow_widgets=True) -def energy_data_gemeinde(gemeinde): - df_status_quo, percentages = table_status_quo(gemeinde) - pie_chart_strom_status_quo, pie_chart_waerme_status_quo, pie_chart_gesamt_status_quo = pie_status_quo(percentages) - df_energy_strom, df_energy_wärme = ee_potential_strom_waerme(gemeinde) - stacked_strom_ee, stacked_wärme_ee = plot_stacked_strom_wärme(df_energy_strom, df_energy_wärme) - #filtered_df_strom, _, _ = ee_potenziale(gemeinde) - #filtered_df_wärme, _, _ = ee_potenziale(gemeinde) - energiemix_bestand = energie_bestand(gemeinde) - #stacked_bar_plot_strom = strom_plot(gemeinde) - #stacked_bar_plot_wärme = wärme_plot(gemeinde) - #stacked_bar_plot_bestand = energiebestand_plot(gemeinde) - pie_chart_bestand = energiebestand_pie(gemeinde) - pie_bestand_all, pie_bestand_ee, pie_bestand_ko = pie_charts_bestand(gemeinde) - heat_density_map = plot_heat_density_map(gemeinde) - df_strom_thg, strom_plot_thg, df_waerme_thg, waerme_plot_thg = thg_energie(gemeinde) - - return df_energy_strom, df_energy_wärme, energiemix_bestand, pie_chart_bestand, df_status_quo, pie_chart_strom_status_quo, pie_chart_waerme_status_quo, pie_chart_gesamt_status_quo, stacked_strom_ee, stacked_wärme_ee, pie_bestand_all, pie_bestand_ee, pie_bestand_ko, heat_density_map, df_strom_thg, strom_plot_thg, df_waerme_thg, waerme_plot_thg - -#perform similarity search within supabase pgvector -def similarity_search_with_supabase(input_text, k=3): - - table_name = "documents" - - vector_store = SupabaseVectorStore(client=supabase, embedding=embeddings, table_name=table_name, query_name="match_documents") - - matched_docs = vector_store.similarity_search(query=input_text, k=k) - - #print(matched_docs[2].page_contet) - - text_content = [doc.page_content for doc in matched_docs] - - text_content = vector_store.similarity_search(input_text) - - return text_content -#create query against all embeddings docs and compose contextual answer -def get_response_from_query(text_content, input_text): - - llm = ChatOpenAI(model_name="gpt-3.5-turbo-16k", temperature=0, max_tokens= 3000) - - prompt = ChatPromptTemplate.from_messages([ - SystemMessage(content=f""" - Du bist Experte für die kommunale Wärmeplanung in Deutschland. Als spezialisierter Berater bist du sehr hilfsbereit und erläuterst Fragen - bis in das kleinste Detail zum Thema kommunale Wärmeplanung und Klimaschutz für Kommunen. - - Falls erforderlich, erstelle bitte auch detaillierte Schritt für Schritt Anleitungen, um ein gewünschtes Ziel zu erreichen. - Du erhältst relevante Inhalte zur Nutzerfrage um die Frage bestmöglich beantworten zu können. Inhalte relvant zur Nutzerfrage: {text_content} - - Bitte befolge auch immer folgende Regeln bei der Beantwortung von Nutzerfragen: - 1. Strukturiere deine Antworten sehr gut. Gehe zunächst auf die Nutzerfrage ein, beschreibe was sie genau umfasst und wie du sie verstehst. - 2. Deine Antwort auf die Frage soll möglichst detailreich und umfassend sein. - 3. Sofern sinnvoll, ergänze die Antwort auf die Frage um weitere relevante Informationen zum gegebenen Thema. - 4. Du sollest eine verständliche und professionelle Sprache verwenden. - 5. Gebe dem Nutzer umfangreiche Arbeitsanweisungen sofern sinnvoll. - 6. 
Deine Erläuterungen soll auch ein 12-Jähriger verstehen. - 7. Wenn möglich verweise nametlich auf relevante Dokumente und Internetseiten zum Thema der Frage. - 8. Bei unklaren Fragen, stelle Nachfragen an den Nutzer, um die gewünschte Information besser zu verstehen und bessere Antowrten geben zu können. - 9. Wenn du die Frage nicht verstehst und keine Antwort weißt, sage das es dir leid tut, du aber leider keine Antwort weist. - """), - MessagesPlaceholder(variable_name="chat_history"), - HumanMessagePromptTemplate.from_template(f"{input_text}"), - ]) - memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True) - - chain = LLMChain(llm=llm, prompt=prompt, verbose=True, memory=memory) - - response = chain.run(input_text=input_text) - #response = response.replace("\n", "") - - return response - -# Create a sidebar selectbox for selecting the Gemeinde -with st.sidebar: - gemeinde = st.sidebar.selectbox("Bitte Wählen Sie eine Stadt / Gemeinde", [""] + list(gemeinde_list)) - if gemeinde != "": - with st.spinner("Loading..."): - time.sleep(5) - success = st.success("Gemeinde ausgewählt") - success.empty() - with st.expander("Quellen und Annahmen"): - st.write(""" - Datenquellen: - - Klimaatlas NRW, Herausgeber: Landesamt für Natur, Umwelt und Verbraucherschutz NRW unter Verwendung von Daten von Raumwärmebedarfsmodell, Ausbaustand der wärmeerzeugenden Energien in NRW, Ausbaustand der stromerzeugenden Energien in NRW, Standorte der strom- und wärmeerzeugenden Anlagen in NRW, Ergebnissen der LANUV-Potenzialstudien. - - Umweltbundesamt: emissionsbilanz erneuerbarer Energieträger unter Verwendung von Daten der AGEE-Stat (2022). - - BISKO, Bilanzierungs-Systematik Kommunal. Empfehlungen zur Methodik der kommunalen Treibhausgasbilanzierung für den Energie- und Verkehrssektor in Deutschland (2019). - - Globales Emissions-Modell Integrierter Systeme: GEMIS Version 5.0 (2021). - - Annahmen: - - Strommix: Bei der Ermittlung des individuellen Stromixes der Gemeinden wird davon ausgegangen, dass erneuerbarer Strom, der vor Ort erzeugt wird, vollständig genutzt wird. Sofern der Anteil erneuerbarer Energieträger am Strombedarf vor Ort kleiner als 100 Prozenz ist, wird der restliche Strommix anhand des Stromixes für NRW abgeleitet. - - Windpotential: Je Hektar Windpotential wird 1 GWh/a Ertragspotential geschätzt. 
- """) - -st.title("Energieversorgung der Gemeinden - Status Quo und erneuerbares Potential 🌍") -st.write("") -if gemeinde != "": - df_energy_strom, df_energy_wärme, energiemix_bestand, pie_chart_bestand, df_status_quo, pie_chart_strom_status_quo, pie_chart_waerme_status_quo, pie_chart_gesamt_status_quo, stacked_strom_ee, stacked_wärme_ee, pie_bestand_all, pie_bestand_ee, pie_bestand_ko, heat_density_map, df_strom_thg, strom_plot_thg, df_waerme_thg, waerme_plot_thg = energy_data_gemeinde(gemeinde) - - # Create two columns for Streamlit interface - col1, col2 = st.columns([1,2], gap="medium") - - # First column shows the table df_status_quo - with col1: - st.subheader("Status Quo - Strom und Wärme") - st.write("") - st.dataframe(df_status_quo, hide_index=True, width=370, use_container_width=True) - - # Second column contains the three pie charts from pie_status_quo - with col2: - st.subheader("Verteilung erneuerbar und konventionell") - # Display the pie charts side by side - col2_1, col2_2, col2_3 = st.columns(3) # Three equal-width columns - with col2_1: - st.plotly_chart(pie_chart_strom_status_quo, use_container_width=True) - with col2_2: - st.plotly_chart(pie_chart_waerme_status_quo, use_container_width=True) - with col2_3: - st.plotly_chart(pie_chart_gesamt_status_quo, use_container_width=True) - - st.markdown("***") - #please add the heat map here: - st.subheader("Wärmebedarfsdichte") - #st_folium(heat_density_map, use_container_width=True) - st_folium(heat_density_map, width=800, returned_objects=[], use_container_width=True) - - st.markdown("***") - # Create two columns for Streamlit interface - col1, col2 = st.columns([0.8,1.2], gap="large") - with col1: - st.subheader("Übersicht erneuerbare Energien - Strom") - st.dataframe(df_energy_strom, hide_index=True, height=420, width=400, use_container_width=True) - #st.dataframe(filtered_df_strom, hide_index=True) - #st.markdown(filtered_df_strom.style.hide(axis="index").to_html(), unsafe_allow_html=True) - with col2: - st.subheader("Potential und Bestand erneuerbarer Strom") - st.plotly_chart(stacked_strom_ee, use_container_width=True) - - st.markdown("***") - - # Create two columns for Streamlit interface - col1, col2 = st.columns([0.8,1.2], gap="large") - with col1: - st.subheader("Übersicht erneuerbare Energien - Wärme") - st.dataframe(df_energy_wärme, hide_index=True, height=450, width=400, use_container_width=True) - #st.dataframe(filtered_df_strom, hide_index=True) - #st.markdown(filtered_df_strom.style.hide(axis="index").to_html(), unsafe_allow_html=True) - with col2: - st.subheader("Potential und Bestand erneuerbare Wärme") - st.plotly_chart(stacked_wärme_ee, use_container_width=True) - - st.markdown("***") - - # Create two columns for Streamlit interface - col1, col2 = st.columns([0.8,2.2], gap="large") - with col1: - st.subheader("Übersicht Energiemix - Strom - Bestand") - st.dataframe(energiemix_bestand, hide_index=True, height=550, width=100, use_container_width=True) - #st.dataframe(filtered_df_strom, hide_index=True) - #st.markdown(filtered_df_strom.style.hide(axis="index").to_html(), unsafe_allow_html=True) - with col2: - st.subheader("Zusammensetzung Energiemix - Strom") - col2_1, col2_2, col2_3 = st.columns(3) # Three equal-width columns - with col2_1: - st.plotly_chart(pie_bestand_all, use_container_width=True) - with col2_2: - st.plotly_chart(pie_bestand_ee, use_container_width=True) - with col2_3: - st.plotly_chart(pie_bestand_ko, use_container_width=True) - - st.markdown("***") - - # Create two columns for Streamlit 
interface - col1, col2 = st.columns([0.8,2.2], gap="large") - with col1: - st.subheader("Übersicht THG-Emissionen - Strom - Bestand") - st.dataframe(df_strom_thg, hide_index=True, height=400, width=100, use_container_width=True) - - with col2: - st.subheader("THG-Emissionen - Strom") - st.plotly_chart(strom_plot_thg, use_container_width=True) - - st.markdown("***") - - # Create two columns for Streamlit interface - col1, col2 = st.columns([0.8,2.2], gap="large") - with col1: - st.subheader("Übersicht THG-Emissionen - Wärme - Bestand") - st.dataframe(df_waerme_thg, hide_index=True, height=300, width=100, use_container_width=True) - - with col2: - st.subheader("THG-Emissionen - Wärme") - st.plotly_chart(waerme_plot_thg, use_container_width=True) - -else: - # Display the map of Germany when no city name is provided - germany_map = folium.Map(location=[51.1657, 10.4515], zoom_start=6) - st_folium(germany_map, use_container_width=True) - -st.markdown("***") - -st.title("Assistent zum Thema kommunale Wärmeplanung") - -if "openai_model" not in st.session_state: - st.session_state["openai_model"] = "gpt-3.5-turbo" - -if "messages" not in st.session_state: - st.session_state.messages = [] - -for message in st.session_state.messages: - with st.chat_message(message["role"]): - st.markdown(message["content"]) - -if prompt := st.chat_input("Stelle mir eine Frage"): - text_content = similarity_search_with_supabase(prompt) - with st.chat_message("user"): - st.markdown(prompt) - - with st.chat_message("assistant"): - message_placeholder = st.empty() - full_response = get_response_from_query(text_content, prompt) - message_placeholder.markdown(full_response) - st.session_state.messages.append({"role": "assistant", "content": full_response}) - diff --git a/spaces/Li6699/myChat/app.py b/spaces/Li6699/myChat/app.py deleted file mode 100644 index 192d2daf06c7128ae48c3ce7a47768de710b7b31..0000000000000000000000000000000000000000 --- a/spaces/Li6699/myChat/app.py +++ /dev/null @@ -1,353 +0,0 @@ -import gradio as gr -import openai -import datetime -#import tiktoken -#encoding = tiktoken.encoding_for_model("gpt-3.5-turbo") - - -#个人账号信息 -message_info = {} -#个人历史记录 -user_history = {} -#记录的总tokens数,避免超过最大限度 -total_tokens = 0 - -#用户是否开启群聊 -user_group = {} -#群组管理,用于记录群聊信息 -groups_manage = {"Dormitory Group":[{"role":"assistant","content":"欢迎你来到Dormitory Group!"},{"role":"assistant","content":"让我们开始聊天吧!"}], - "Family Group":[{"role":"assistant","content":"欢迎你来到Family Group!"},{"role":"assistant","content":"让我们开始聊天吧!"}], - "Class Group":[{"role":"assistant","content":"欢迎你来到Class Group!"},{"role":"assistant","content":"让我们开始聊天吧!"}]} - -def input_key(key,username): - global message_info - global user_history - message_history = [] - user_record = [] - #用户当前加入的群聊 - groups = "" - key = key - username = username - openai.api_key = key - if username in message_info.keys(): - return username + "用户登陆成功!" 
- else: - message_info.update({username:message_history}) - user_history.update({username:user_record}) - user_group.update({username:groups}) - return (username + "用户不存在,已为您创建该新用户,下次请用该用户名登录!") - - -def remove_exceed(messages): - #tokens超出删除 - global total_tokens - count_list = [] - for i in range(len(messages)-1,-1,-1): - total_tokens += len(messages[i]["content"]) * 2 - count_list.append(total_tokens) - - for c in count_list: - if(c > 1200): - messages.pop(0) - messages.pop(0) - total_tokens -= c - break - - - print("当前messages的总tokens数:",total_tokens) - - -def predict(input,username): - global message_info - global user_history - global groups_manage - #AI身份选择标识 - flag = 0 - - input = input - username = username - if input == None: - input = "hello!" - - elif input == "Translator": - flag = 0 - input = "我希望你扮演一个中英翻译的角色,自动检测输入的文本语言信息。如果文本是英文,\ - 则翻译成中文,如果文本是中文,则翻译成英文;我希望你只进行翻译,不做任何其他改进,不做解释。现在你准备接受句子开始翻译" - elif input == "Code Developer": - flag = 0 - input = "我希望你扮演一个代码架构师,当我提出有关的代码问题之后,你使用专业合理的方式回答我的问题。现在你收到后直接回答/'你好,我是一个架构师,请输入你的问题。/'" - - elif input == "English Tutor": - flag = 0 - input = "我希望你扮演一个专业的英语老师,当我提出有关学习英语的所有有关问题时,你提供专业的建议或者答案。现在你收到后直接回答/'你好,我是一个英语教师,请输入你的问题。/'" - - elif input == "Writing Assistant": - flag = 0 - input = "我希望你扮演一个专业的写作助手,当我提出有关写作的所有有关问题或者让你根据我的需求进行写作时时,你提供专业的建议或者生成文章。现在你收到后直接回答/'你好,我是一个写作助手,请输入你的问题。/'" - - elif input == "Businessman": - flag = 0 - input = "我希望你扮演一个专业的商业咨询师,当我提出有关经济、金融以及商业等的有关问题时,你提供专业的答案或者相关见解。现在你收到后直接回答/'你好,我是一个商业咨询师,请输入你的问题。/'" - - elif input == "Free Chat": - flag = 0 - input = "现在停止角色扮演,请你跟我正常对话。现在你收到后直接回答/'你好,请你说点什么/'" - - else: - flag = 1 - print(username + ": ",input) - print() - - #当前用户加入的组名 - groupname = user_group[username] - #临时指向数组 - messages = [] - reply_content = "" - try: - if groupname != "": - #用户在群聊时前加‘@’可以触发AI - if input[0] == "@": - #input = input.replace("@","") - groups_manage[groupname].append({"role":"user","content":input}) - messages = groups_manage[groupname] - completion = openai.ChatCompletion.create( - model = "gpt-3.5-turbo", - messages =messages, - temperature=0.7, - top_p = 1, - frequency_penalty = 0.0, - presence_penalty = 0.0 - ) - reply_content = completion.choices[0].message['content'] - - else: - #不加‘@’就是人与人之间正常聊天 - groups_manage[groupname].append({"role":"user","content":input}) - - messages = groups_manage[groupname] - print("在这里") - reply_content = "" - - - else: - #不群聊则需要AI及时回复,一问一答 - message_info[username].append({"role":"user","content":input}) - user_history[username].append({"role":"user","content":input}) - messages = message_info[username] - - completion = openai.ChatCompletion.create( - model = "gpt-3.5-turbo", - messages =messages, - temperature=0.7, - top_p = 1, - frequency_penalty = 0.0, - presence_penalty = 0.0 - ) - reply_content = completion.choices[0].message['content'] - messages = user_history[username] #呈现对话,仅可视,没有上下文 - - except Exception as e: - - print(e) - reply_content = "抱歉,您登录提交的数据有误或网络连接失败...请重新操作或提问" - - - - print("ChatGPT: ",reply_content) - print() - - #添加用户标识 - if flag == 0: - messages[-1].update({"content":"{}AI身份转换".format(username + ": ")}) - - # if groupname !="" and flag == 0: - - # messages[-1].update({"content":"{}AI身份转换".format(username + ": ")}) - - # elif flag == 0: - # user_history[username][-1].update({"content":"{}AI身份转换".format(username + ": ")}) - - else: - print(len(messages)) - origin = messages[-1]["content"] - messages[-1].update({"content":username + ": " + origin }) - - - - - 
#message_info[username].append({"role":"assistant","content":reply_content}) - - #用户之间聊天则会显示发送时间,其他情况则显示AI回复内容 - if reply_content != "": - - messages.append({"role":"assistant","content":reply_content}) - else: - messages.append({"role":"assistant","content":datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')}) - - response = [] - - if groupname == "": - #及时去除一定的记录,避免message过长出错 - remove_exceed(message_info[username]) - #user_history[username].append({"role":"assistant","content":reply_content}) - response = [((user_history[username][i]["content"]),(user_history[username][i+1]["content"])) for i in range(0,len(user_history[username])-1,2)] - - else: - remove_exceed(messages) - - response = [((messages[i]["content"]),(messages[i+1]["content"])) for i in range(0,len(messages)-1,2)] - - return response - -#根据设定的清理条数,清理用户个人历史聊天信息 -def clear_info(num,username): - - username = username - num = int(num) - try: - if len(user_history[username]) < num: - (user_history[username]).clear() - else: - del user_history[username][0:num] - except Exception as e: - print(e) - - return "Clear Successfully!" - -#加入群聊 -def join_group(groupname,username): - global user_group - username = username - groupname = groupname - user_group.update({username:groupname}) - return "Groups Now: " + user_group[username] - -#离开群聊 -def leave_group(groupname,username): - global user_group - global user_history - username = username - groupname = groupname - try: - user_group.update({username:""}) - messages = user_history[username] - except Exception as e: - print(e) - - - response = [((messages[i]["content"],(messages[i+1]["content"]))) for i in range(0,len(messages)-1,2)] - return response - -#刷新界面 -def reflesh_chat(groupname,username): - global groups_manage - try: - messages = groups_manage[groupname] - except Exception as e: - print(e) - - response = [((messages[i]["content"],(messages[i+1]["content"]))) for i in range(0,len(messages)-1,2)] - return response - - - - - -with gr.Blocks(css=".gradio-container {background-image: linear-gradient(to top, #dfe9f3 0%, white 100%);} #chatbot1{height:70vh;background-image: linear-gradient(to top, #a8edea 0%, #fed6e3 100%);} #chatbot2{height:70vh;background-image: linear-gradient(to top, #f3e7e9 0%, #e3eeff 99%, #e3eeff 100%);} .overflow-y-auto{height:70vh}") as demo: - #标题 - gr.Markdown("Have a quick Chat with ChatGPT (designed by Li)") - - with gr.Tabs(): - with gr.TabItem("Login"): - - - # 设置输入key - key = gr.Textbox(label="API_KEY",placeholder="Your API_KEY") - #输入账号 - username = gr.Textbox(label="Username",placeholder="Your Username") - # 设置按钮 - greet_btn = gr.Button("Submit") - #校验 - ouput = gr.Textbox(show_label=False,placeholder="Result") - - - list_info = [] - list_info.append(key) - list_info.append(username) - - - # 设置按钮点击事件 - greet_btn.click(fn=input_key, inputs=list_info,outputs=ouput) - with gr.TabItem("Chat"): - - - with gr.Row(): - - with gr.Column(): - chatbot = gr.Chatbot(elem_id="chatbot2",label="Chatbot") - # groupname = user_group[username] - # if groupname == "": - # chatbot.elem_id = "chatbot1" - # chatbot.label = "Chatbot" - - # else: - # chatbot.elem_id = "chatbot2" - # chatbot.label = groupname - - - txt = gr.Textbox(label="User", placeholder="Message:").style(container=False) - - - input_list1 = [] - input_list1.append(txt) - input_list1.append(username) - - txt.submit(predict, input_list1, chatbot) - #这里可以优化等问题回复才可以再次发问题 - txt.submit(None, None, txt, _js="() => {''}") - - with gr.Column(): - - #num = 
gr.Textbox(show_label=False,placeholder="清除会话记录数(从最早记录开始)").style(container=False) - num = gr.Slider(0, 20, value=2,step=1,label="Count",info="Choose betwen 0 and 20") #滑动条 - - del_btn = gr.Button("Clear") - ouput = gr.Textbox(label="Status",placeholder="Result").style(container=False) - clear_list = [] - clear_list.append(num) - clear_list.append(username) - - del_btn.click(fn=clear_info,inputs=clear_list,outputs=ouput) - - - role = gr.Radio(["Translator", "Code Developer", "English Tutor","Writing Assistant","Businessman","Free Chat"], - label="Play A Role")#单选 - button = gr.Button("Run") - input_list2 = [] - input_list2.append(role) - input_list2.append(username) - - button.click(predict,inputs=input_list2,outputs=chatbot) - with gr.Row(): - - groupname = gr.Dropdown(["Dormitory Group", "Family Group", "Class Group"],label="Chat Group") - result = gr.Textbox(label="Your Groups",placeholder="Groups").style(container=False) - with gr.Row(): - - input_list3 = [] - input_list3.append(groupname) - input_list3.append(username) - - join = gr.Button("Join") - join.click(join_group,inputs=input_list3,outputs=result) - - leave = gr.Button("Leave") - leave.click(leave_group,inputs=input_list3,outputs=chatbot) - - reflesh = gr.Button("Refresh") - reflesh.click(reflesh_chat,inputs=input_list3,outputs=chatbot) - - - - -demo.launch() diff --git a/spaces/LuxOAI/ChatGpt-Web/public/serviceWorkerRegister.js b/spaces/LuxOAI/ChatGpt-Web/public/serviceWorkerRegister.js deleted file mode 100644 index 8405f21aaab9ddec0cff867cfe1dfff67ea01ccd..0000000000000000000000000000000000000000 --- a/spaces/LuxOAI/ChatGpt-Web/public/serviceWorkerRegister.js +++ /dev/null @@ -1,9 +0,0 @@ -if ('serviceWorker' in navigator) { - window.addEventListener('load', function () { - navigator.serviceWorker.register('/serviceWorker.js').then(function (registration) { - console.log('ServiceWorker registration successful with scope: ', registration.scope); - }, function (err) { - console.error('ServiceWorker registration failed: ', err); - }); - }); -} \ No newline at end of file diff --git a/spaces/Mahiruoshi/lovelive-ShojoKageki-vits/commons.py b/spaces/Mahiruoshi/lovelive-ShojoKageki-vits/commons.py deleted file mode 100644 index 33ec83a7986a12b237d28d5e610222881d6b42ae..0000000000000000000000000000000000000000 --- a/spaces/Mahiruoshi/lovelive-ShojoKageki-vits/commons.py +++ /dev/null @@ -1,161 +0,0 @@ -import math - -import torch -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - pad_shape = [item for sublist in reversed(pad_shape) for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + - ((m_p - m_q)**2)) * torch.exp(-2. 
* logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * - ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, - channels, - min_timescale=1.0, - max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * - -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, - max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, - max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0] - ]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = 
float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item()**norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm**(1. / norm_type) - return total_norm diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/resource_manager.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/resource_manager.py deleted file mode 100644 index b0f28af2e35a3ea29958e5eee4e19b26f1fa010b..0000000000000000000000000000000000000000 --- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/resource_manager.py +++ /dev/null @@ -1,206 +0,0 @@ -import os -from os import path -import shutil -import collections - -import cv2 -from PIL import Image -if not hasattr(Image, 'Resampling'): # Pillow<9.0 - Image.Resampling = Image -import numpy as np - -from util.palette import davis_palette -import progressbar - - -# https://bugs.python.org/issue28178 -# ah python ah why -class LRU: - def __init__(self, func, maxsize=128): - self.cache = collections.OrderedDict() - self.func = func - self.maxsize = maxsize - - def __call__(self, *args): - cache = self.cache - if args in cache: - cache.move_to_end(args) - return cache[args] - result = self.func(*args) - cache[args] = result - if len(cache) > self.maxsize: - cache.popitem(last=False) - return result - - def invalidate(self, key): - self.cache.pop(key, None) - - -class ResourceManager: - def __init__(self, config): - # determine inputs - images = config['images'] - video = config['video'] - self.workspace = config['workspace'] - self.size = config['size'] - self.palette = davis_palette - - # create temporary workspace if not specified - if self.workspace is None: - if images is not None: - basename = path.basename(images) - elif video is not None: - basename = path.basename(video)[:-4] - else: - raise NotImplementedError( - 'Either images, video, or workspace has to be specified') - - self.workspace = path.join('./workspace', basename) - - print(f'Workspace is in: {self.workspace}') - - # determine the location of input images - need_decoding = False - need_resizing = False - if path.exists(path.join(self.workspace, 'images')): - pass - elif images is not None: - need_resizing = True - elif video is not None: - # will decode video into frames later - need_decoding = True - - # create workspace subdirectories - self.image_dir = path.join(self.workspace, 'images') - self.mask_dir = path.join(self.workspace, 'masks') - os.makedirs(self.image_dir, exist_ok=True) - os.makedirs(self.mask_dir, exist_ok=True) - - # convert read functions to be buffered - self.get_image = LRU(self._get_image_unbuffered, maxsize=config['buffer_size']) - self.get_mask = LRU(self._get_mask_unbuffered, maxsize=config['buffer_size']) - - # extract frames from video - if need_decoding: - self._extract_frames(video) - - # copy/resize existing images to the workspace - if need_resizing: - self._copy_resize_frames(images) - - # read all frame names - self.names = sorted(os.listdir(self.image_dir)) - self.names = [f[:-4] for f in self.names] # remove extensions - self.length = len(self.names) - - assert self.length > 0, f'No images found! Check {self.workspace}/images. Remove folder if necessary.' 
- - print(f'{self.length} images found.') - - self.height, self.width = self.get_image(0).shape[:2] - self.visualization_init = False - - def _extract_frames(self, video): - cap = cv2.VideoCapture(video) - frame_index = 0 - print(f'Extracting frames from {video} into {self.image_dir}...') - bar = progressbar.ProgressBar(max_value=progressbar.UnknownLength) - while(cap.isOpened()): - _, frame = cap.read() - if frame is None: - break - if self.size > 0: - h, w = frame.shape[:2] - new_w = (w*self.size//min(w, h)) - new_h = (h*self.size//min(w, h)) - if new_w != w or new_h != h: - frame = cv2.resize(frame,dsize=(new_w,new_h),interpolation=cv2.INTER_AREA) - cv2.imwrite(path.join(self.image_dir, f'{frame_index:07d}.jpg'), frame) - frame_index += 1 - bar.update(frame_index) - bar.finish() - print('Done!') - - def _copy_resize_frames(self, images): - image_list = os.listdir(images) - print(f'Copying/resizing frames into {self.image_dir}...') - for image_name in progressbar.progressbar(image_list): - if self.size < 0: - # just copy - shutil.copy2(path.join(images, image_name), self.image_dir) - else: - frame = cv2.imread(path.join(images, image_name)) - h, w = frame.shape[:2] - new_w = (w*self.size//min(w, h)) - new_h = (h*self.size//min(w, h)) - if new_w != w or new_h != h: - frame = cv2.resize(frame,dsize=(new_w,new_h),interpolation=cv2.INTER_AREA) - cv2.imwrite(path.join(self.image_dir, image_name), frame) - print('Done!') - - def save_mask(self, ti, mask): - # mask should be uint8 H*W without channels - assert 0 <= ti < self.length - assert isinstance(mask, np.ndarray) - - mask = Image.fromarray(mask) - mask.putpalette(self.palette) - mask.save(path.join(self.mask_dir, self.names[ti]+'.png')) - self.invalidate(ti) - - def save_visualization(self, ti, image): - # image should be uint8 3*H*W - assert 0 <= ti < self.length - assert isinstance(image, np.ndarray) - if not self.visualization_init: - self.visualization_dir = path.join(self.workspace, 'visualization') - os.makedirs(self.visualization_dir, exist_ok=True) - self.visualization_init = True - - image = Image.fromarray(image) - image.save(path.join(self.visualization_dir, self.names[ti]+'.jpg')) - - def _get_image_unbuffered(self, ti): - # returns H*W*3 uint8 array - assert 0 <= ti < self.length - - image = Image.open(path.join(self.image_dir, self.names[ti]+'.jpg')) - image = np.array(image) - return image - - def _get_mask_unbuffered(self, ti): - # returns H*W uint8 array - assert 0 <= ti < self.length - - mask_path = path.join(self.mask_dir, self.names[ti]+'.png') - if path.exists(mask_path): - mask = Image.open(mask_path) - mask = np.array(mask) - return mask - else: - return None - - def read_external_image(self, file_name, size=None): - image = Image.open(file_name) - is_mask = image.mode in ['L', 'P'] - if size is not None: - # PIL uses (width, height) - image = image.resize((size[1], size[0]), - resample=Image.Resampling.NEAREST if is_mask else Image.Resampling.BICUBIC) - image = np.array(image) - return image - - def invalidate(self, ti): - # the image buffer is never invalidated - self.get_mask.invalidate((ti,)) - - def __len__(self): - return self.length - - @property - def h(self): - return self.height - - @property - def w(self): - return self.width diff --git a/spaces/Makiing/coolb-in-gtest/tailwind.config.js b/spaces/Makiing/coolb-in-gtest/tailwind.config.js deleted file mode 100644 index 03da3c3c45be6983b9f5ffa6df5f1fd0870e9636..0000000000000000000000000000000000000000 --- a/spaces/Makiing/coolb-in-gtest/tailwind.config.js 
+++ /dev/null @@ -1,48 +0,0 @@ -/** @type {import('tailwindcss').Config} */ -module.exports = { - content: [ - './src/pages/**/*.{js,ts,jsx,tsx,mdx}', - './src/components/**/*.{js,ts,jsx,tsx,mdx}', - './src/app/**/*.{js,ts,jsx,tsx,mdx}', - './src/ui/**/*.{js,ts,jsx,tsx,mdx}', - ], - "darkMode": "class", - theme: { - extend: { - colors: { - 'primary-blue': 'rgb(var(--color-primary-blue) / )', - secondary: 'rgb(var(--color-secondary) / )', - 'primary-background': 'rgb(var(--primary-background) / )', - 'primary-text': 'rgb(var(--primary-text) / )', - 'secondary-text': 'rgb(var(--secondary-text) / )', - 'light-text': 'rgb(var(--light-text) / )', - 'primary-border': 'rgb(var(--primary-border) / )', - }, - keyframes: { - slideDownAndFade: { - from: { opacity: 0, transform: 'translateY(-2px)' }, - to: { opacity: 1, transform: 'translateY(0)' }, - }, - slideLeftAndFade: { - from: { opacity: 0, transform: 'translateX(2px)' }, - to: { opacity: 1, transform: 'translateX(0)' }, - }, - slideUpAndFade: { - from: { opacity: 0, transform: 'translateY(2px)' }, - to: { opacity: 1, transform: 'translateY(0)' }, - }, - slideRightAndFade: { - from: { opacity: 0, transform: 'translateX(2px)' }, - to: { opacity: 1, transform: 'translateX(0)' }, - }, - }, - animation: { - slideDownAndFade: 'slideDownAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - slideLeftAndFade: 'slideLeftAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - slideUpAndFade: 'slideUpAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - slideRightAndFade: 'slideRightAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - }, - }, - }, - plugins: [require('@headlessui/tailwindcss'), require('tailwind-scrollbar')], -} diff --git a/spaces/Malmika/Osana-WEB-GPT/README.md b/spaces/Malmika/Osana-WEB-GPT/README.md deleted file mode 100644 index 5aff4eb35d38640222142fd5ebba1419ebf9b9af..0000000000000000000000000000000000000000 --- a/spaces/Malmika/Osana-WEB-GPT/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Osana WEB GPT -emoji: 💻 -colorFrom: pink -colorTo: indigo -sdk: gradio -sdk_version: 3.34.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/MashiroSA/sovits-emu-voice-transform/modules/commons.py b/spaces/MashiroSA/sovits-emu-voice-transform/modules/commons.py deleted file mode 100644 index 074888006392e956ce204d8368362dbb2cd4e304..0000000000000000000000000000000000000000 --- a/spaces/MashiroSA/sovits-emu-voice-transform/modules/commons.py +++ /dev/null @@ -1,188 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -def slice_pitch_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - -def rand_slice_segments_with_pitch(x, pitch, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - ret_pitch = slice_pitch_segments(pitch, ids_str, segment_size) - return ret, ret_pitch, ids_str - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - 
dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def rand_spec_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is 
None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/Mattdoc99/ElonYTsearch/styles.css b/spaces/Mattdoc99/ElonYTsearch/styles.css deleted file mode 100644 index 970ab28ddbd5d54967af3739c504b1912390c5a4..0000000000000000000000000000000000000000 --- a/spaces/Mattdoc99/ElonYTsearch/styles.css +++ /dev/null @@ -1,160 +0,0 @@ -@import url("https://fonts.googleapis.com/css?family=Arimo:400,700"); - -section.main[tabindex="0"] { - overflow: scroll; -} - -body { - height: 100%; - width: 100%; - background: #e9e9e9; - font-family: 'Arimo', Arial, sans-serif; - font-weight: 400; - font-size: 14px; - color: #000000; -} - -* { - -webkit-transition: 300ms; - transition: 300ms; -} - -.intro { - text-align: center; -} - -ul { - list-style-type: none; -} - -h1, -h2, -h3, -h4, -h5, -p { - font-weight: 400; -} - -a { - text-decoration: none; - color: inherit; -} - -a:hover { - color: #6ABCEA; -} - -.container { - display: -webkit-box; - display: -ms-flexbox; - display: flex; - -ms-flex-wrap: wrap; - flex-wrap: wrap; - max-width: 100%; - margin-top: 10vh; - margin-left: auto; - margin-right: auto; - -webkit-box-pack: center; - -ms-flex-pack: center; - justify-content: center; -} - -.movie-card { - background: #ffffff; - box-shadow: 0px 6px 18px rgba(0, 0, 0, 0.1); - width: 100%; - max-width: 315px; - margin: 2em; - border-radius: 10px; - display: inline-block; -} - -.movie-header { - padding: 0; - margin: 0; - height: 367px; - width: 100%; - display: block; - border-top-left-radius: 10px; - border-top-right-radius: 10px; -} - -.header-icon-container { - position: relative; -} - -.header-icon { - width: 100%; - height: 367px; - line-height: 367px; - text-align: center; - vertical-align: middle; - margin: 0 auto; - color: #ffffff; - font-size: 54px; - text-shadow: 0px 0px 20px #6abcea, 0px 5px 20px #6ABCEA; - opacity: .85; -} - -.header-icon:hover { - background: rgba(0, 0, 0, 0.15); - font-size: 74px; - text-shadow: 0px 0px 20px #6abcea, 0px 5px 30px #6ABCEA; - border-top-left-radius: 10px; - border-top-right-radius: 10px; - opacity: 1; -} - -.movie-card:hover { - -webkit-transform: scale(1.03); - transform: scale(1.03); - box-shadow: 0px 10px 25px rgba(0, 0, 0, 0.08); -} - -.movie-content { - padding: 18px 18px 24px 18px; - margin: 0; -} - -.movie-content-header, -.movie-info { - display: table; - width: 100%; -} - -.movie-title { - 
font-size: 24px; - margin: 0; - display: table-cell; -} - -.movie-info { - margin-top: 1em; -} - -.info-section { - display: table-cell; - text-transform: uppercase; - text-align: center; -} - -.info-section:first-of-type { - text-align: left; -} - -.info-section:last-of-type { - text-align: right; -} - -.info-section label { - display: block; - color: rgba(0, 0, 0, 0.5); - margin-bottom: .5em; - font-size: 9px; -} - -.info-section span { - font-weight: 700; - font-size: 11px; -} \ No newline at end of file diff --git a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/tools/create_imagenetlvis_json.py b/spaces/MattyWhite/ChatGPT-ImageCaptioner2/tools/create_imagenetlvis_json.py deleted file mode 100644 index 4d5a0b3712b5a2fb94737b8dfe5d70202305926b..0000000000000000000000000000000000000000 --- a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/tools/create_imagenetlvis_json.py +++ /dev/null @@ -1,52 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import argparse -import json -import os -import cv2 -from nltk.corpus import wordnet - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--imagenet_path', default='datasets/imagenet/ImageNet-LVIS') - parser.add_argument('--lvis_meta_path', default='datasets/lvis/lvis_v1_val.json') - parser.add_argument('--out_path', default='datasets/imagenet/annotations/imagenet_lvis_image_info.json') - args = parser.parse_args() - - print('Loading LVIS meta') - data = json.load(open(args.lvis_meta_path, 'r')) - print('Done') - synset2cat = {x['synset']: x for x in data['categories']} - count = 0 - images = [] - image_counts = {} - folders = sorted(os.listdir(args.imagenet_path)) - for i, folder in enumerate(folders): - class_path = args.imagenet_path + folder - files = sorted(os.listdir(class_path)) - synset = wordnet.synset_from_pos_and_offset('n', int(folder[1:])).name() - cat = synset2cat[synset] - cat_id = cat['id'] - cat_name = cat['name'] - cat_images = [] - for file in files: - count = count + 1 - file_name = '{}/{}'.format(folder, file) - img = cv2.imread('{}/{}'.format(args.imagenet_path, file_name)) - h, w = img.shape[:2] - image = { - 'id': count, - 'file_name': file_name, - 'pos_category_ids': [cat_id], - 'width': w, - 'height': h - } - cat_images.append(image) - images.extend(cat_images) - image_counts[cat_id] = len(cat_images) - print(i, cat_name, len(cat_images)) - print('# Images', len(images)) - for x in data['categories']: - x['image_count'] = image_counts[x['id']] if x['id'] in image_counts else 0 - out = {'categories': data['categories'], 'images': images, 'annotations': []} - print('Writing to', args.out_path) - json.dump(out, open(args.out_path, 'w')) diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/utils/misc.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/utils/misc.py deleted file mode 100644 index 2c58d0d7fee9fe3d4519270ad8c1e998d0d8a18c..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/utils/misc.py +++ /dev/null @@ -1,377 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import collections.abc -import functools -import itertools -import subprocess -import warnings -from collections import abc -from importlib import import_module -from inspect import getfullargspec -from itertools import repeat - - -# From PyTorch internals -def _ntuple(n): - - def parse(x): - if isinstance(x, collections.abc.Iterable): - return x - return tuple(repeat(x, n)) - - return parse - - -to_1tuple = _ntuple(1) -to_2tuple = _ntuple(2) -to_3tuple = _ntuple(3) -to_4tuple = _ntuple(4) -to_ntuple = _ntuple - - -def is_str(x): - """Whether the input is an string instance. - - Note: This method is deprecated since python 2 is no longer supported. - """ - return isinstance(x, str) - - -def import_modules_from_strings(imports, allow_failed_imports=False): - """Import modules from the given list of strings. - - Args: - imports (list | str | None): The given module names to be imported. - allow_failed_imports (bool): If True, the failed imports will return - None. Otherwise, an ImportError is raise. Default: False. - - Returns: - list[module] | module | None: The imported modules. - - Examples: - >>> osp, sys = import_modules_from_strings( - ... ['os.path', 'sys']) - >>> import os.path as osp_ - >>> import sys as sys_ - >>> assert osp == osp_ - >>> assert sys == sys_ - """ - if not imports: - return - single_import = False - if isinstance(imports, str): - single_import = True - imports = [imports] - if not isinstance(imports, list): - raise TypeError( - f'custom_imports must be a list but got type {type(imports)}') - imported = [] - for imp in imports: - if not isinstance(imp, str): - raise TypeError( - f'{imp} is of type {type(imp)} and cannot be imported.') - try: - imported_tmp = import_module(imp) - except ImportError: - if allow_failed_imports: - warnings.warn(f'{imp} failed to import and is ignored.', - UserWarning) - imported_tmp = None - else: - raise ImportError - imported.append(imported_tmp) - if single_import: - imported = imported[0] - return imported - - -def iter_cast(inputs, dst_type, return_type=None): - """Cast elements of an iterable object into some type. - - Args: - inputs (Iterable): The input object. - dst_type (type): Destination type. - return_type (type, optional): If specified, the output object will be - converted to this type, otherwise an iterator. - - Returns: - iterator or specified type: The converted object. - """ - if not isinstance(inputs, abc.Iterable): - raise TypeError('inputs must be an iterable object') - if not isinstance(dst_type, type): - raise TypeError('"dst_type" must be a valid type') - - out_iterable = map(dst_type, inputs) - - if return_type is None: - return out_iterable - else: - return return_type(out_iterable) - - -def list_cast(inputs, dst_type): - """Cast elements of an iterable object into a list of some type. - - A partial method of :func:`iter_cast`. - """ - return iter_cast(inputs, dst_type, return_type=list) - - -def tuple_cast(inputs, dst_type): - """Cast elements of an iterable object into a tuple of some type. - - A partial method of :func:`iter_cast`. - """ - return iter_cast(inputs, dst_type, return_type=tuple) - - -def is_seq_of(seq, expected_type, seq_type=None): - """Check whether it is a sequence of some type. - - Args: - seq (Sequence): The sequence to be checked. - expected_type (type): Expected type of sequence items. - seq_type (type, optional): Expected sequence type. - - Returns: - bool: Whether the sequence is valid. 
- """ - if seq_type is None: - exp_seq_type = abc.Sequence - else: - assert isinstance(seq_type, type) - exp_seq_type = seq_type - if not isinstance(seq, exp_seq_type): - return False - for item in seq: - if not isinstance(item, expected_type): - return False - return True - - -def is_list_of(seq, expected_type): - """Check whether it is a list of some type. - - A partial method of :func:`is_seq_of`. - """ - return is_seq_of(seq, expected_type, seq_type=list) - - -def is_tuple_of(seq, expected_type): - """Check whether it is a tuple of some type. - - A partial method of :func:`is_seq_of`. - """ - return is_seq_of(seq, expected_type, seq_type=tuple) - - -def slice_list(in_list, lens): - """Slice a list into several sub lists by a list of given length. - - Args: - in_list (list): The list to be sliced. - lens(int or list): The expected length of each out list. - - Returns: - list: A list of sliced list. - """ - if isinstance(lens, int): - assert len(in_list) % lens == 0 - lens = [lens] * int(len(in_list) / lens) - if not isinstance(lens, list): - raise TypeError('"indices" must be an integer or a list of integers') - elif sum(lens) != len(in_list): - raise ValueError('sum of lens and list length does not ' - f'match: {sum(lens)} != {len(in_list)}') - out_list = [] - idx = 0 - for i in range(len(lens)): - out_list.append(in_list[idx:idx + lens[i]]) - idx += lens[i] - return out_list - - -def concat_list(in_list): - """Concatenate a list of list into a single list. - - Args: - in_list (list): The list of list to be merged. - - Returns: - list: The concatenated flat list. - """ - return list(itertools.chain(*in_list)) - - -def check_prerequisites( - prerequisites, - checker, - msg_tmpl='Prerequisites "{}" are required in method "{}" but not ' - 'found, please install them first.'): # yapf: disable - """A decorator factory to check if prerequisites are satisfied. - - Args: - prerequisites (str of list[str]): Prerequisites to be checked. - checker (callable): The checker method that returns True if a - prerequisite is meet, False otherwise. - msg_tmpl (str): The message template with two variables. - - Returns: - decorator: A specific decorator. - """ - - def wrap(func): - - @functools.wraps(func) - def wrapped_func(*args, **kwargs): - requirements = [prerequisites] if isinstance( - prerequisites, str) else prerequisites - missing = [] - for item in requirements: - if not checker(item): - missing.append(item) - if missing: - print(msg_tmpl.format(', '.join(missing), func.__name__)) - raise RuntimeError('Prerequisites not meet.') - else: - return func(*args, **kwargs) - - return wrapped_func - - return wrap - - -def _check_py_package(package): - try: - import_module(package) - except ImportError: - return False - else: - return True - - -def _check_executable(cmd): - if subprocess.call(f'which {cmd}', shell=True) != 0: - return False - else: - return True - - -def requires_package(prerequisites): - """A decorator to check if some python packages are installed. - - Example: - >>> @requires_package('numpy') - >>> func(arg1, args): - >>> return numpy.zeros(1) - array([0.]) - >>> @requires_package(['numpy', 'non_package']) - >>> func(arg1, args): - >>> return numpy.zeros(1) - ImportError - """ - return check_prerequisites(prerequisites, checker=_check_py_package) - - -def requires_executable(prerequisites): - """A decorator to check if some executable files are installed. 
- - Example: - >>> @requires_executable('ffmpeg') - >>> func(arg1, args): - >>> print(1) - 1 - """ - return check_prerequisites(prerequisites, checker=_check_executable) - - -def deprecated_api_warning(name_dict, cls_name=None): - """A decorator to check if some arguments are deprecate and try to replace - deprecate src_arg_name to dst_arg_name. - - Args: - name_dict(dict): - key (str): Deprecate argument names. - val (str): Expected argument names. - - Returns: - func: New function. - """ - - def api_warning_wrapper(old_func): - - @functools.wraps(old_func) - def new_func(*args, **kwargs): - # get the arg spec of the decorated method - args_info = getfullargspec(old_func) - # get name of the function - func_name = old_func.__name__ - if cls_name is not None: - func_name = f'{cls_name}.{func_name}' - if args: - arg_names = args_info.args[:len(args)] - for src_arg_name, dst_arg_name in name_dict.items(): - if src_arg_name in arg_names: - warnings.warn( - f'"{src_arg_name}" is deprecated in ' - f'`{func_name}`, please use "{dst_arg_name}" ' - 'instead') - arg_names[arg_names.index(src_arg_name)] = dst_arg_name - if kwargs: - for src_arg_name, dst_arg_name in name_dict.items(): - if src_arg_name in kwargs: - - assert dst_arg_name not in kwargs, ( - f'The expected behavior is to replace ' - f'the deprecated key `{src_arg_name}` to ' - f'new key `{dst_arg_name}`, but got them ' - f'in the arguments at the same time, which ' - f'is confusing. `{src_arg_name} will be ' - f'deprecated in the future, please ' - f'use `{dst_arg_name}` instead.') - - warnings.warn( - f'"{src_arg_name}" is deprecated in ' - f'`{func_name}`, please use "{dst_arg_name}" ' - 'instead') - kwargs[dst_arg_name] = kwargs.pop(src_arg_name) - - # apply converted arguments to the decorated method - output = old_func(*args, **kwargs) - return output - - return new_func - - return api_warning_wrapper - - -def is_method_overridden(method, base_class, derived_class): - """Check if a method of base class is overridden in derived class. - - Args: - method (str): the method name to check. - base_class (type): the class of the base class. - derived_class (type | Any): the class or instance of the derived class. - """ - assert isinstance(base_class, type), \ - "base_class doesn't accept instance, Please pass class instead." - - if not isinstance(derived_class, type): - derived_class = derived_class.__class__ - - base_method = getattr(base_class, method) - derived_method = getattr(derived_class, method) - return derived_method != base_method - - -def has_method(obj: object, method: str) -> bool: - """Check whether the object has a method. - - Args: - method (str): The method name to check. - obj (object): The object to check. - - Returns: - bool: True if the object has the method else False. 
- """ - return hasattr(obj, method) and callable(getattr(obj, method)) diff --git a/spaces/Menna2211/Text-Image/pages/Stable-Diffusion.py b/spaces/Menna2211/Text-Image/pages/Stable-Diffusion.py deleted file mode 100644 index ca6adf668fc94da440485cb1e4d0250fcb148428..0000000000000000000000000000000000000000 --- a/spaces/Menna2211/Text-Image/pages/Stable-Diffusion.py +++ /dev/null @@ -1,68 +0,0 @@ -import streamlit as st -import torch -import time -from diffusers import StableDiffusionPipeline - - - -# Model 1 -@st.cache_resource(show_spinner=False ,ttl=3600) -def get_model3(): - device = "cuda" if torch.cuda.is_available() else "cpu" - torch_dtype = torch.float16 if device == "cuda" else torch.float32 - model_id = "runwayml/stable-diffusion-v1-5" - pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch_dtype) - pipe = pipe.to(device) - return pipe - -pipe1 =get_model3() - - - -st.title("Stable Diffusion App") -# define the layout of your app - -# Define the Streamlit app layout -prompt = st.text_input("Write your sentence:") - -models = st.selectbox("Select a Model", ["Select a Model","Hugging-Face", "Github"]) -submit_buttons = st.button("Compute") - - -if models == "Select a Model" and not submit_buttons : - st.stop() - -elif models == "Select a Model" and submit_buttons : - st.warning('Warning.....!!,Plz..... Select a Model ', icon="⚠️") - -# Display the generated text - -if models == "Hugging-Face" and submit_buttons: - progress_text = "Operation in progress. Please wait." - bar = st.progress(0, text=progress_text) - for percent_complete in range(100): - generated_img=pipe1(prompt).images[0] - time.sleep(0.1) - bar.progress(percent_complete + 1, text=progress_text) - - # Display the uploaded image and its generated caption - st.write("Generated Image:") - st.image(generated_img) - time.sleep(3) - st.success('Congratulations task is done ', icon="✅") - st.balloons() - -elif models == "Github" and submit_buttons: - progress_text = "Operation in progress. Please wait." 
- bar = st.progress(0, text=progress_text) - for percent_complete in range(100): - generated_img2=pipe2(prompt).images[0] - time.sleep(0.1) - bar.progress(percent_complete + 1, text=progress_text) - - # Display the uploaded image and its generated caption - st.write("Generated Image:") - st.image(generated_img2) - time.sleep(3) - st.success('Congratulations task is done ', icon="✅") - st.balloons() diff --git a/spaces/MoonQiu/LongerCrafter/app.py b/spaces/MoonQiu/LongerCrafter/app.py deleted file mode 100644 index c0483b4452698aa5106073df689fedf248e794ba..0000000000000000000000000000000000000000 --- a/spaces/MoonQiu/LongerCrafter/app.py +++ /dev/null @@ -1,297 +0,0 @@ -import gradio as gr - -import os -import sys -import argparse -import random -from omegaconf import OmegaConf -import torch -import torchvision -from pytorch_lightning import seed_everything -from huggingface_hub import hf_hub_download - -sys.path.insert(0, "scripts/evaluation") -from funcs import ( - batch_ddim_sampling_freenoise, - load_model_checkpoint, -) -from utils.utils import instantiate_from_config - -ckpt_path_1024 = "checkpoints/base_1024_v1/model.ckpt" -ckpt_dir_1024 = "checkpoints/base_1024_v1" -os.makedirs(ckpt_dir_1024, exist_ok=True) -hf_hub_download(repo_id="VideoCrafter/Text2Video-1024", filename="model.ckpt", local_dir=ckpt_dir_1024) - -# ckpt_path_256 = "checkpoints/base_256_v1/model.pth" -# ckpt_dir_256 = "checkpoints/base_256_v1" -# os.makedirs(ckpt_dir_256, exist_ok=True) -# hf_hub_download(repo_id="MoonQiu/LongerCrafter", filename="model.pth", local_dir=ckpt_dir_256) - - -def infer(prompt, output_size, seed, num_frames, ddim_steps, unconditional_guidance_scale, save_fps): - window_size = 16 - window_stride = 4 - - if output_size == "576x1024": - width = 1024 - height = 576 - config_1024 = "configs/inference_t2v_1024_v1.0_freenoise.yaml" - config_1024 = OmegaConf.load(config_1024) - model_config_1024 = config_1024.pop("model", OmegaConf.create()) - model_1024 = instantiate_from_config(model_config_1024) - model_1024 = model_1024.cuda() - model_1024 = load_model_checkpoint(model_1024, ckpt_path_1024) - model_1024.eval() - model = model_1024 - fps = 28 - # elif output_size == "256x256": - # width = 256 - # height = 256 - # config_256 = "configs/inference_t2v_tconv256_v1.0_freenoise.yaml" - # config_256 = OmegaConf.load(config_256) - # model_config_256 = config_256.pop("model", OmegaConf.create()) - # model_256 = instantiate_from_config(model_config_256) - # model_256 = model_256.cuda() - # model_256 = load_model_checkpoint(model_256, ckpt_path_256) - # model_256.eval() - # model = model_256 - # fps = 8 - - if seed is None: - seed = int.from_bytes(os.urandom(2), "big") - print(f"Using seed: {seed}") - seed_everything(seed) - - args = argparse.Namespace( - mode="base", - savefps=save_fps, - n_samples=1, - ddim_steps=ddim_steps, - ddim_eta=0.0, - bs=1, - height=height, - width=width, - frames=num_frames, - fps=fps, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_guidance_scale_temporal=None, - cond_input=None, - window_size=window_size, - window_stride=window_stride, - ) - - ## latent noise shape - h, w = args.height // 8, args.width // 8 - frames = model.temporal_length if args.frames < 0 else args.frames - channels = model.channels - - x_T_total = torch.randn( - [args.n_samples, 1, channels, frames, h, w], device=model.device - ).repeat(1, args.bs, 1, 1, 1, 1) - for frame_index in range(args.window_size, args.frames, args.window_stride): - list_index = list( - range( - 
frame_index - args.window_size, - frame_index + args.window_stride - args.window_size, - ) - ) - random.shuffle(list_index) - x_T_total[ - :, :, :, frame_index : frame_index + args.window_stride - ] = x_T_total[:, :, :, list_index] - - batch_size = 1 - noise_shape = [batch_size, channels, frames, h, w] - fps = torch.tensor([args.fps] * batch_size).to(model.device).long() - prompts = [prompt] - text_emb = model.get_learned_conditioning(prompts) - - cond = {"c_crossattn": [text_emb], "fps": fps} - - ## inference - batch_samples = batch_ddim_sampling_freenoise( - model, - cond, - noise_shape, - args.n_samples, - args.ddim_steps, - args.ddim_eta, - args.unconditional_guidance_scale, - args=args, - x_T_total=x_T_total, - ) - - video_path = "output.mp4" - vid_tensor = batch_samples[0] - video = vid_tensor.detach().cpu() - video = torch.clamp(video.float(), -1.0, 1.0) - video = video.permute(2, 0, 1, 3, 4) # t,n,c,h,w - - frame_grids = [ - torchvision.utils.make_grid(framesheet, nrow=int(args.n_samples)) - for framesheet in video - ] # [3, 1*h, n*w] - grid = torch.stack(frame_grids, dim=0) # stack in temporal dim [t, 3, n*h, w] - grid = (grid + 1.0) / 2.0 - grid = (grid * 255).to(torch.uint8).permute(0, 2, 3, 1) - - torchvision.io.write_video( - video_path, - grid, - fps=args.savefps, - video_codec="h264", - options={"crf": "10"}, - ) - - print(video_path) - return video_path - -examples = [ - ["A chihuahua in astronaut suit floating in space, cinematic lighting, glow effect",], - ["A corgi is swimming quickly",], - ["A bigfoot walking in the snowstorm",], - ["Campfire at night in a snowy forest with starry sky in the background",], - ["A panda is surfing in the universe",], -] - -css = """ -#col-container {max-width: 640px; margin-left: auto; margin-right: auto;} -a {text-decoration-line: underline; font-weight: 600;} -.animate-spin { - animation: spin 1s linear infinite; -} -@keyframes spin { - from { - transform: rotate(0deg); - } - to { - transform: rotate(360deg); - } -} -#share-btn-container { - display: flex; - padding-left: 0.5rem !important; - padding-right: 0.5rem !important; - background-color: #000000; - justify-content: center; - align-items: center; - border-radius: 9999px !important; - max-width: 15rem; - height: 36px; -} -div#share-btn-container > div { - flex-direction: row; - background: black; - align-items: center; -} -#share-btn-container:hover { - background-color: #060606; -} -#share-btn { - all: initial; - color: #ffffff; - font-weight: 600; - cursor:pointer; - font-family: 'IBM Plex Sans', sans-serif; - margin-left: 0.5rem !important; - padding-top: 0.5rem !important; - padding-bottom: 0.5rem !important; - right:0; -} -#share-btn * { - all: unset; -} -#share-btn-container div:nth-child(-n+2){ - width: auto !important; - min-height: 0px !important; -} -#share-btn-container .wrap { - display: none !important; -} -#share-btn-container.hidden { - display: none!important; -} -img[src*='#center'] { - display: inline-block; - margin: unset; -} -.footer { - margin-bottom: 45px; - margin-top: 10px; - text-align: center; - border-bottom: 1px solid #e5e5e5; - } - .footer>p { - font-size: .8rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; - } - .dark .footer { - border-color: #303030; - } - .dark .footer>p { - background: #0b0f19; - } -""" - -with gr.Blocks(css=css) as demo: - with gr.Column(elem_id="col-container"): - gr.Markdown( - """ -

-        <div style="text-align: center;">
-          <h1>LongerCrafter (FreeNoise) Text-to-Video</h1>
-          <p>Tuning-Free Longer Video Diffusion via Noise Rescheduling</p>
-        </div>
    - - """ - ) - - prompt_in = gr.Textbox(label="Prompt", placeholder="A chihuahua in astronaut suit floating in space, cinematic lighting, glow effect") - - with gr.Row(): - with gr.Accordion('FreeNoise Parameters (feel free to adjust these parameters based on your prompt): ', open=False): - with gr.Row(): - output_size = gr.Dropdown(["576x1024"], value="576x1024", label="Output Size (around 900s for 576x1024)") - # output_size = gr.Dropdown(["576x1024", "256x256"], value="576x1024", label="Output Size", info="576x1024 is watermark-free") - with gr.Row(): - num_frames = gr.Slider(label='Frames (a multiple of 4)', - minimum=16, - maximum=36, - step=4, - value=32) - ddim_steps = gr.Slider(label='DDIM Steps', - minimum=5, - maximum=200, - step=1, - value=50) - with gr.Row(): - unconditional_guidance_scale = gr.Slider(label='Unconditional Guidance Scale', - minimum=1.0, - maximum=20.0, - step=0.1, - value=12.0) - save_fps = gr.Slider(label='Save FPS', - minimum=1, - maximum=30, - step=1, - value=10) - with gr.Row(): - seed = gr.Slider(label='Random Seed', - minimum=0, - maximum=10000, - step=1, - value=123) - - submit_btn = gr.Button("Generate") - video_result = gr.Video(label="Video Output") - - gr.Examples(examples=examples, inputs=[prompt_in, output_size, seed, num_frames, ddim_steps, unconditional_guidance_scale, save_fps]) - - submit_btn.click(fn=infer, - inputs=[prompt_in, output_size, seed, num_frames, ddim_steps, unconditional_guidance_scale, save_fps], - outputs=[video_result], - api_name="zrscp") - -demo.queue(max_size=12).launch(show_api=True) \ No newline at end of file diff --git a/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/_base_/datasets/svtp.py b/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/_base_/datasets/svtp.py deleted file mode 100644 index 38301d1bb8de9b056e4cd0bcaf16d86200cd4a7d..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/_base_/datasets/svtp.py +++ /dev/null @@ -1,14 +0,0 @@ -svtp_textrecog_data_root = '../data/common_benchmarks/SVTP' - -svtp_textrecog_train = dict( - type='OCRDataset', - data_root=svtp_textrecog_data_root, - ann_file='textrecog_train.json', - pipeline=None) - -svtp_textrecog_test = dict( - type='OCRDataset', - data_root=svtp_textrecog_data_root, - ann_file='annotation.json', - test_mode=True, - pipeline=None) diff --git a/spaces/NATSpeech/PortaSpeech/tasks/tts/tts_utils.py b/spaces/NATSpeech/PortaSpeech/tasks/tts/tts_utils.py deleted file mode 100644 index c4b82df98677e7ba132f77b4f147a0b9aa03c1f1..0000000000000000000000000000000000000000 --- a/spaces/NATSpeech/PortaSpeech/tasks/tts/tts_utils.py +++ /dev/null @@ -1,54 +0,0 @@ -import importlib - -from data_gen.tts.base_binarizer import BaseBinarizer -from data_gen.tts.base_preprocess import BasePreprocessor -from data_gen.tts.txt_processors.base_text_processor import get_txt_processor_cls -from utils.commons.hparams import hparams - - -def parse_dataset_configs(): - max_tokens = hparams['max_tokens'] - max_sentences = hparams['max_sentences'] - max_valid_tokens = hparams['max_valid_tokens'] - if max_valid_tokens == -1: - hparams['max_valid_tokens'] = max_valid_tokens = max_tokens - max_valid_sentences = hparams['max_valid_sentences'] - if max_valid_sentences == -1: - hparams['max_valid_sentences'] = max_valid_sentences = max_sentences - return max_tokens, max_sentences, max_valid_tokens, max_valid_sentences - - -def parse_mel_losses(): - mel_losses = hparams['mel_losses'].split("|") - loss_and_lambda = {} - for i, l in 
enumerate(mel_losses): - if l == '': - continue - if ':' in l: - l, lbd = l.split(":") - lbd = float(lbd) - else: - lbd = 1.0 - loss_and_lambda[l] = lbd - print("| Mel losses:", loss_and_lambda) - return loss_and_lambda - - -def load_data_preprocessor(): - preprocess_cls = hparams["preprocess_cls"] - pkg = ".".join(preprocess_cls.split(".")[:-1]) - cls_name = preprocess_cls.split(".")[-1] - preprocessor: BasePreprocessor = getattr(importlib.import_module(pkg), cls_name)() - preprocess_args = {} - preprocess_args.update(hparams['preprocess_args']) - return preprocessor, preprocess_args - - -def load_data_binarizer(): - binarizer_cls = hparams['binarizer_cls'] - pkg = ".".join(binarizer_cls.split(".")[:-1]) - cls_name = binarizer_cls.split(".")[-1] - binarizer: BaseBinarizer = getattr(importlib.import_module(pkg), cls_name)() - binarization_args = {} - binarization_args.update(hparams['binarization_args']) - return binarizer, binarization_args diff --git a/spaces/NCTCMumbai/NCTC/models/official/modeling/optimization/configs/optimization_config_test.py b/spaces/NCTCMumbai/NCTC/models/official/modeling/optimization/configs/optimization_config_test.py deleted file mode 100644 index 6dcd55e0e2071a23cae1494ae29c5efa282d052a..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/modeling/optimization/configs/optimization_config_test.py +++ /dev/null @@ -1,61 +0,0 @@ -# Lint as: python3 -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-# ============================================================================== -"""Tests for optimization_config.py.""" - -import tensorflow as tf - -from official.modeling.optimization.configs import learning_rate_config as lr_cfg -from official.modeling.optimization.configs import optimization_config -from official.modeling.optimization.configs import optimizer_config as opt_cfg - - -class OptimizerConfigTest(tf.test.TestCase): - - def test_no_optimizer(self): - optimizer = optimization_config.OptimizationConfig({}).optimizer.get() - self.assertEqual(optimizer, None) - - def test_no_lr_schedule(self): - lr = optimization_config.OptimizationConfig({}).learning_rate.get() - self.assertEqual(lr, None) - - def test_no_warmup_schedule(self): - warmup = optimization_config.OptimizationConfig({}).warmup.get() - self.assertEqual(warmup, None) - - def test_config(self): - opt_config = optimization_config.OptimizationConfig({ - 'optimizer': { - 'type': 'sgd', - 'sgd': {} # default config - }, - 'learning_rate': { - 'type': 'polynomial', - 'polynomial': {} - }, - 'warmup': { - 'type': 'linear' - } - }) - self.assertEqual(opt_config.optimizer.get(), - opt_cfg.SGDConfig()) - self.assertEqual(opt_config.learning_rate.get(), - lr_cfg.PolynomialLrConfig()) - self.assertEqual(opt_config.warmup.get(), - lr_cfg.LinearWarmupConfig()) - -if __name__ == '__main__': - tf.test.main() diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/layers/transformer.py b/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/layers/transformer.py deleted file mode 100644 index 92f509cf26b802dcd769b97e4c11987e713d8d16..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/layers/transformer.py +++ /dev/null @@ -1,437 +0,0 @@ -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""Keras-based transformer block layer.""" -# pylint: disable=g-classes-have-attributes -from __future__ import absolute_import -from __future__ import division -# from __future__ import google_type_annotations -from __future__ import print_function - -import gin -import tensorflow as tf - -from official.nlp.modeling.layers import attention -from official.nlp.modeling.layers import dense_einsum -from official.nlp.modeling.layers import multi_channel_attention -from official.nlp.modeling.layers.util import tf_function_if_eager - - -@tf.keras.utils.register_keras_serializable(package="Text") -class Transformer(tf.keras.layers.Layer): - """Transformer layer. - - This layer implements the Transformer from "Attention Is All You Need". - (https://arxiv.org/abs/1706.03762). - - Arguments: - num_attention_heads: Number of attention heads. - intermediate_size: Size of the intermediate layer. - intermediate_activation: Activation for the intermediate layer. - dropout_rate: Dropout probability for the post-attention and output dropout. 
- attention_dropout_rate: Dropout probability for within the attention layer. - output_range: the sequence output range, [0, output_range) by slicing the - target sequence. `None` means the target sequence is not sliced. - kernel_initializer: Initializer for dense layer kernels. - bias_initializer: Initializer for dense layer biases. - kernel_regularizer: Regularizer for dense layer kernels. - bias_regularizer: Regularizer for dense layer biases. - activity_regularizer: Regularizer for dense layer activity. - kernel_constraint: Constraint for dense layer kernels. - bias_constraint: Constraint for dense layer kernels. - """ - - def __init__(self, - num_attention_heads, - intermediate_size, - intermediate_activation, - dropout_rate=0.0, - attention_dropout_rate=0.0, - output_range=None, - kernel_initializer="glorot_uniform", - bias_initializer="zeros", - kernel_regularizer=None, - bias_regularizer=None, - activity_regularizer=None, - kernel_constraint=None, - bias_constraint=None, - **kwargs): - super(Transformer, self).__init__(**kwargs) - - self._num_heads = num_attention_heads - self._intermediate_size = intermediate_size - self._intermediate_activation = intermediate_activation - self._attention_dropout_rate = attention_dropout_rate - self._dropout_rate = dropout_rate - self._output_range = output_range - self._kernel_initializer = tf.keras.initializers.get(kernel_initializer) - self._bias_initializer = tf.keras.initializers.get(bias_initializer) - self._kernel_regularizer = tf.keras.regularizers.get(kernel_regularizer) - self._bias_regularizer = tf.keras.regularizers.get(bias_regularizer) - self._activity_regularizer = tf.keras.regularizers.get(activity_regularizer) - self._kernel_constraint = tf.keras.constraints.get(kernel_constraint) - self._bias_constraint = tf.keras.constraints.get(bias_constraint) - - def build(self, input_shape): - input_tensor = input_shape[0] if len(input_shape) == 2 else input_shape - input_tensor_shape = tf.TensorShape(input_tensor) - if len(input_tensor_shape) != 3: - raise ValueError("TransformerLayer expects a three-dimensional input of " - "shape [batch, sequence, width].") - batch_size, sequence_length, hidden_size = input_tensor_shape - - if len(input_shape) == 2: - mask_tensor_shape = tf.TensorShape(input_shape[1]) - expected_mask_tensor_shape = tf.TensorShape( - [batch_size, sequence_length, sequence_length]) - if not expected_mask_tensor_shape.is_compatible_with(mask_tensor_shape): - raise ValueError("When passing a mask tensor to TransformerLayer, the " - "mask tensor must be of shape [batch, " - "sequence_length, sequence_length] (here %s). Got a " - "mask tensor of shape %s." 
% - (expected_mask_tensor_shape, mask_tensor_shape)) - if hidden_size % self._num_heads != 0: - raise ValueError( - "The input size (%d) is not a multiple of the number of attention " - "heads (%d)" % (hidden_size, self._num_heads)) - self._attention_head_size = int(hidden_size // self._num_heads) - - self._attention_layer = attention.MultiHeadAttention( - num_heads=self._num_heads, - key_size=self._attention_head_size, - dropout=self._attention_dropout_rate, - kernel_initializer=self._kernel_initializer, - bias_initializer=self._bias_initializer, - kernel_regularizer=self._kernel_regularizer, - bias_regularizer=self._bias_regularizer, - activity_regularizer=self._activity_regularizer, - kernel_constraint=self._kernel_constraint, - bias_constraint=self._bias_constraint, - name="self_attention") - # pylint: disable=protected-access - self._attention_layer.build([input_tensor_shape] * 3) - self._attention_output_dense = self._attention_layer._output_dense - # pylint: enable=protected-access - self._attention_dropout = tf.keras.layers.Dropout(rate=self._dropout_rate) - # Use float32 in layernorm for numeric stability. - # It is probably safe in mixed_float16, but we haven't validated this yet. - self._attention_layer_norm = ( - tf.keras.layers.LayerNormalization( - name="self_attention_layer_norm", - axis=-1, - epsilon=1e-12, - dtype=tf.float32)) - self._intermediate_dense = dense_einsum.DenseEinsum( - output_shape=self._intermediate_size, - activation=None, - kernel_initializer=self._kernel_initializer, - bias_initializer=self._bias_initializer, - kernel_regularizer=self._kernel_regularizer, - bias_regularizer=self._bias_regularizer, - activity_regularizer=self._activity_regularizer, - kernel_constraint=self._kernel_constraint, - bias_constraint=self._bias_constraint, - name="intermediate") - policy = tf.keras.mixed_precision.experimental.global_policy() - if policy.name == "mixed_bfloat16": - # bfloat16 causes BERT with the LAMB optimizer to not converge - # as well, so we use float32. - # TODO(b/154538392): Investigate this. - policy = tf.float32 - self._intermediate_activation_layer = tf.keras.layers.Activation( - self._intermediate_activation, dtype=policy) - self._output_dense = dense_einsum.DenseEinsum( - output_shape=hidden_size, - kernel_initializer=self._kernel_initializer, - bias_initializer=self._bias_initializer, - kernel_regularizer=self._kernel_regularizer, - bias_regularizer=self._bias_regularizer, - activity_regularizer=self._activity_regularizer, - kernel_constraint=self._kernel_constraint, - bias_constraint=self._bias_constraint, - name="output") - self._output_dropout = tf.keras.layers.Dropout(rate=self._dropout_rate) - # Use float32 in layernorm for numeric stability. 
- self._output_layer_norm = tf.keras.layers.LayerNormalization( - name="output_layer_norm", axis=-1, epsilon=1e-12, dtype=tf.float32) - - super(Transformer, self).build(input_shape) - - def get_config(self): - config = { - "num_attention_heads": - self._num_heads, - "intermediate_size": - self._intermediate_size, - "intermediate_activation": - self._intermediate_activation, - "dropout_rate": - self._dropout_rate, - "attention_dropout_rate": - self._attention_dropout_rate, - "output_range": - self._output_range, - "kernel_initializer": - tf.keras.initializers.serialize(self._kernel_initializer), - "bias_initializer": - tf.keras.initializers.serialize(self._bias_initializer), - "kernel_regularizer": - tf.keras.regularizers.serialize(self._kernel_regularizer), - "bias_regularizer": - tf.keras.regularizers.serialize(self._bias_regularizer), - "activity_regularizer": - tf.keras.regularizers.serialize(self._activity_regularizer), - "kernel_constraint": - tf.keras.constraints.serialize(self._kernel_constraint), - "bias_constraint": - tf.keras.constraints.serialize(self._bias_constraint) - } - base_config = super(Transformer, self).get_config() - return dict(list(base_config.items()) + list(config.items())) - - def call(self, inputs): - if isinstance(inputs, (list, tuple)) and len(inputs) == 2: - input_tensor, attention_mask = inputs - else: - input_tensor, attention_mask = (inputs, None) - - if self._output_range: - target_tensor = input_tensor[:, 0:self._output_range, :] - attention_mask = attention_mask[:, 0:self._output_range, :] - else: - target_tensor = input_tensor - attention_inputs = [target_tensor, input_tensor] - - attention_output = self._attention_layer(attention_inputs, attention_mask) - attention_output = self._attention_dropout(attention_output) - attention_output = self._attention_layer_norm(target_tensor + - attention_output) - intermediate_output = self._intermediate_dense(attention_output) - intermediate_output = self._intermediate_activation_layer( - intermediate_output) - layer_output = self._output_dense(intermediate_output) - layer_output = self._output_dropout(layer_output) - # During mixed precision training, attention_output is from layer norm and - # is always fp32 for now. Cast layer_output to fp32 for the subsequent - # add. - layer_output = tf.cast(layer_output, tf.float32) - layer_output = self._output_layer_norm(layer_output + attention_output) - - return layer_output - - -@tf.keras.utils.register_keras_serializable(package="Text") -@gin.configurable -class CompiledTransformer(Transformer): - - @tf_function_if_eager(experimental_compile=True) - def call(self, inputs): - return super(CompiledTransformer, self).call(inputs) - - -@tf.keras.utils.register_keras_serializable(package="Text") -class TransformerDecoderLayer(tf.keras.layers.Layer): - """Single transformer layer for decoder. - - It has three sub-layers: - (1) a multi-head self-attention mechanism. - (2) a encoder-decoder attention. - (3) a positionwise fully connected feed-forward network. - - Arguments: - num_attention_heads: Number of attention heads. - intermediate_size: Size of the intermediate layer. - intermediate_activation: Activation for the intermediate layer. - dropout_rate: Dropout probability for the post-attention and output dropout. - attention_dropout_rate: Dropout probability for within the attention layer. - multi_channel_cross_attention: Whether to use `MultiChannelAttention` for - cross-attention between target sequences and source sequences. 
- kernel_initializer: Initializer for dense layer kernels. - bias_initializer: Initializer for dense layer biases. - kernel_regularizer: Regularizer for dense layer kernels. - bias_regularizer: Regularizer for dense layer biases. - activity_regularizer: Regularizer for dense layer activity. - kernel_constraint: Constraint for dense layer kernels. - bias_constraint: Constraint for dense layer kernels. - """ - - def __init__(self, - num_attention_heads, - intermediate_size, - intermediate_activation, - dropout_rate=0.0, - attention_dropout_rate=0.0, - multi_channel_cross_attention=False, - kernel_initializer="glorot_uniform", - bias_initializer="zeros", - kernel_regularizer=None, - bias_regularizer=None, - activity_regularizer=None, - kernel_constraint=None, - bias_constraint=None, - **kwargs): - super(TransformerDecoderLayer, self).__init__(**kwargs) - self.num_attention_heads = num_attention_heads - self.intermediate_size = intermediate_size - self.intermediate_activation = tf.keras.activations.get( - intermediate_activation) - self.dropout_rate = dropout_rate - self.attention_dropout_rate = attention_dropout_rate - self.multi_channel_cross_attention = multi_channel_cross_attention - self._kernel_initializer = tf.keras.initializers.get(kernel_initializer) - self._bias_initializer = tf.keras.initializers.get(bias_initializer) - self._kernel_regularizer = tf.keras.regularizers.get(kernel_regularizer) - self._bias_regularizer = tf.keras.regularizers.get(bias_regularizer) - self._activity_regularizer = tf.keras.regularizers.get(activity_regularizer) - self._kernel_constraint = tf.keras.constraints.get(kernel_constraint) - self._bias_constraint = tf.keras.constraints.get(bias_constraint) - if self.multi_channel_cross_attention: - self._cross_attention_cls = multi_channel_attention.MultiChannelAttention - else: - self._cross_attention_cls = attention.MultiHeadAttention - - def build(self, input_shape): - target_tensor_shape = tf.TensorShape(input_shape[0]) - if len(target_tensor_shape) != 3: - raise ValueError("TransformerLayer expects a three-dimensional input of " - "shape [batch, sequence, width].") - hidden_size = target_tensor_shape[2] - if hidden_size % self.num_attention_heads != 0: - raise ValueError( - "The hidden size (%d) is not a multiple of the number of attention " - "heads (%d)" % (hidden_size, self.num_attention_heads)) - self.attention_head_size = int(hidden_size / self.num_attention_heads) - # Self attention. 
- self.self_attention = attention.CachedAttention( - num_heads=self.num_attention_heads, - key_size=self.attention_head_size, - dropout=self.attention_dropout_rate, - kernel_initializer=self._kernel_initializer, - bias_initializer=self._bias_initializer, - kernel_regularizer=self._kernel_regularizer, - bias_regularizer=self._bias_regularizer, - activity_regularizer=self._activity_regularizer, - kernel_constraint=self._kernel_constraint, - bias_constraint=self._bias_constraint, - name="self_attention") - self.self_attention_output_dense = dense_einsum.DenseEinsum( - output_shape=hidden_size, - num_summed_dimensions=2, - kernel_initializer=self._kernel_initializer, - bias_initializer=self._bias_initializer, - kernel_regularizer=self._kernel_regularizer, - bias_regularizer=self._bias_regularizer, - activity_regularizer=self._activity_regularizer, - kernel_constraint=self._kernel_constraint, - bias_constraint=self._bias_constraint, - name="self_attention_output") - self.self_attention_dropout = tf.keras.layers.Dropout( - rate=self.dropout_rate) - self.self_attention_layer_norm = ( - tf.keras.layers.LayerNormalization( - name="self_attention_layer_norm", axis=-1, epsilon=1e-12)) - # Encoder-decoder attention. - self.encdec_attention = self._cross_attention_cls( - num_heads=self.num_attention_heads, - key_size=self.attention_head_size, - dropout=self.attention_dropout_rate, - output_shape=hidden_size, - kernel_initializer=self._kernel_initializer, - bias_initializer=self._bias_initializer, - kernel_regularizer=self._kernel_regularizer, - bias_regularizer=self._bias_regularizer, - activity_regularizer=self._activity_regularizer, - kernel_constraint=self._kernel_constraint, - bias_constraint=self._bias_constraint, - name="attention/encdec") - - self.encdec_attention_dropout = tf.keras.layers.Dropout( - rate=self.dropout_rate) - self.encdec_attention_layer_norm = ( - tf.keras.layers.LayerNormalization( - name="attention/encdec_output_layer_norm", axis=-1, epsilon=1e-12)) - - # Feed-forward projection. 
- self.intermediate_dense = dense_einsum.DenseEinsum( - output_shape=self.intermediate_size, - activation=None, - kernel_initializer=self._kernel_initializer, - bias_initializer=self._bias_initializer, - kernel_regularizer=self._kernel_regularizer, - bias_regularizer=self._bias_regularizer, - activity_regularizer=self._activity_regularizer, - kernel_constraint=self._kernel_constraint, - bias_constraint=self._bias_constraint, - name="intermediate") - self.intermediate_activation_layer = tf.keras.layers.Activation( - self.intermediate_activation) - self.output_dense = dense_einsum.DenseEinsum( - output_shape=hidden_size, - kernel_initializer=self._kernel_initializer, - bias_initializer=self._bias_initializer, - kernel_regularizer=self._kernel_regularizer, - bias_regularizer=self._bias_regularizer, - activity_regularizer=self._activity_regularizer, - kernel_constraint=self._kernel_constraint, - bias_constraint=self._bias_constraint, - name="output") - self.output_dropout = tf.keras.layers.Dropout(rate=self.dropout_rate) - self.output_layer_norm = tf.keras.layers.LayerNormalization( - name="output_layer_norm", axis=-1, epsilon=1e-12) - super(TransformerDecoderLayer, self).build(input_shape) - - def common_layers_with_encoder(self): - """Gets layer objects that can make a Transformer encoder block.""" - return [ - self.self_attention, self.self_attention_layer_norm, - self.intermediate_dense, self.output_dense, self.output_layer_norm - ] - - def call(self, inputs, cache=None, decode_loop_step=None): - if self.multi_channel_cross_attention: - if len(inputs) != 5: - raise ValueError( - "TransformerDecoderLayer must have 5 inputs, when it uses " - "multi_channel_cross_attention. But it got: %d" % len(inputs)) - elif len(inputs) != 4: - raise ValueError( - "TransformerDecoderLayer must have 4 inputs, but it got: %d" % - len(inputs)) - input_tensor, memory, attention_mask, self_attention_mask = inputs[:4] - self_attention_inputs = [input_tensor, input_tensor] - self_attention_output, cache = self.self_attention( - self_attention_inputs, - attention_mask=self_attention_mask, - cache=cache, - decode_loop_step=decode_loop_step) - self_attention_output = self.self_attention_dropout(self_attention_output) - self_attention_output = self.self_attention_layer_norm( - input_tensor + self_attention_output) - - cross_attn_inputs = [self_attention_output, memory] - if self.multi_channel_cross_attention: - # Accesses the 5-th input tensor for the doc-attention probabilities. 
- cross_attn_inputs.append(inputs[-1]) - attention_output = self.encdec_attention(cross_attn_inputs, attention_mask) - attention_output = self.encdec_attention_dropout(attention_output) - attention_output = self.encdec_attention_layer_norm(self_attention_output + - attention_output) - - intermediate_output = self.intermediate_dense(attention_output) - intermediate_output = self.intermediate_activation_layer( - intermediate_output) - layer_output = self.output_dense(intermediate_output) - layer_output = self.output_dropout(layer_output) - layer_output = self.output_layer_norm(layer_output + attention_output) - return layer_output, cache diff --git a/spaces/NCTCMumbai/NCTC/models/official/vision/detection/modeling/architecture/factory.py b/spaces/NCTCMumbai/NCTC/models/official/vision/detection/modeling/architecture/factory.py deleted file mode 100644 index ed5647d6fb83fbd7c404a4573ff247acb8999b8c..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/vision/detection/modeling/architecture/factory.py +++ /dev/null @@ -1,163 +0,0 @@ -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""Model architecture factory.""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -from official.vision.detection.modeling.architecture import fpn -from official.vision.detection.modeling.architecture import heads -from official.vision.detection.modeling.architecture import identity -from official.vision.detection.modeling.architecture import nn_ops -from official.vision.detection.modeling.architecture import resnet - - -def norm_activation_generator(params): - return nn_ops.norm_activation_builder( - momentum=params.batch_norm_momentum, - epsilon=params.batch_norm_epsilon, - trainable=params.batch_norm_trainable, - activation=params.activation) - - -def backbone_generator(params): - """Generator function for various backbone models.""" - if params.architecture.backbone == 'resnet': - resnet_params = params.resnet - backbone_fn = resnet.Resnet( - resnet_depth=resnet_params.resnet_depth, - activation=params.norm_activation.activation, - norm_activation=norm_activation_generator( - params.norm_activation)) - else: - raise ValueError('Backbone model `{}` is not supported.' 
- .format(params.architecture.backbone)) - - return backbone_fn - - -def multilevel_features_generator(params): - """Generator function for various FPN models.""" - if params.architecture.multilevel_features == 'fpn': - fpn_params = params.fpn - fpn_fn = fpn.Fpn( - min_level=params.architecture.min_level, - max_level=params.architecture.max_level, - fpn_feat_dims=fpn_params.fpn_feat_dims, - use_separable_conv=fpn_params.use_separable_conv, - activation=params.norm_activation.activation, - use_batch_norm=fpn_params.use_batch_norm, - norm_activation=norm_activation_generator( - params.norm_activation)) - elif params.architecture.multilevel_features == 'identity': - fpn_fn = identity.Identity() - else: - raise ValueError('The multi-level feature model `{}` is not supported.' - .format(params.architecture.multilevel_features)) - return fpn_fn - - -def retinanet_head_generator(params): - """Generator function for RetinaNet head architecture.""" - head_params = params.retinanet_head - return heads.RetinanetHead( - params.architecture.min_level, - params.architecture.max_level, - params.architecture.num_classes, - head_params.anchors_per_location, - head_params.num_convs, - head_params.num_filters, - head_params.use_separable_conv, - norm_activation=norm_activation_generator(params.norm_activation)) - - -def rpn_head_generator(params): - """Generator function for RPN head architecture.""" - head_params = params.rpn_head - return heads.RpnHead( - params.architecture.min_level, - params.architecture.max_level, - head_params.anchors_per_location, - head_params.num_convs, - head_params.num_filters, - head_params.use_separable_conv, - params.norm_activation.activation, - head_params.use_batch_norm, - norm_activation=norm_activation_generator(params.norm_activation)) - - -def fast_rcnn_head_generator(params): - """Generator function for Fast R-CNN head architecture.""" - head_params = params.frcnn_head - return heads.FastrcnnHead( - params.architecture.num_classes, - head_params.num_convs, - head_params.num_filters, - head_params.use_separable_conv, - head_params.num_fcs, - head_params.fc_dims, - params.norm_activation.activation, - head_params.use_batch_norm, - norm_activation=norm_activation_generator(params.norm_activation)) - - -def mask_rcnn_head_generator(params): - """Generator function for Mask R-CNN head architecture.""" - head_params = params.mrcnn_head - return heads.MaskrcnnHead( - params.architecture.num_classes, - params.architecture.mask_target_size, - head_params.num_convs, - head_params.num_filters, - head_params.use_separable_conv, - params.norm_activation.activation, - head_params.use_batch_norm, - norm_activation=norm_activation_generator(params.norm_activation)) - - -def shapeprior_head_generator(params): - """Generator function for shape prior head architecture.""" - head_params = params.shapemask_head - return heads.ShapemaskPriorHead( - params.architecture.num_classes, - head_params.num_downsample_channels, - head_params.mask_crop_size, - head_params.use_category_for_mask, - head_params.shape_prior_path) - - -def coarsemask_head_generator(params): - """Generator function for ShapeMask coarse mask head architecture.""" - head_params = params.shapemask_head - return heads.ShapemaskCoarsemaskHead( - params.architecture.num_classes, - head_params.num_downsample_channels, - head_params.mask_crop_size, - head_params.use_category_for_mask, - head_params.num_convs, - norm_activation=norm_activation_generator(params.norm_activation)) - - -def finemask_head_generator(params): - 
"""Generator function for Shapemask fine mask head architecture.""" - head_params = params.shapemask_head - return heads.ShapemaskFinemaskHead( - params.architecture.num_classes, - head_params.num_downsample_channels, - head_params.mask_crop_size, - head_params.use_category_for_mask, - head_params.num_convs, - head_params.upsample_factor) diff --git a/spaces/NicolasGaudemet/WritingAssistant/writing_assistant_app.py b/spaces/NicolasGaudemet/WritingAssistant/writing_assistant_app.py deleted file mode 100644 index a931be8396ea4397a1621a629f607f5ce39de7c4..0000000000000000000000000000000000000000 --- a/spaces/NicolasGaudemet/WritingAssistant/writing_assistant_app.py +++ /dev/null @@ -1,57 +0,0 @@ -import openai -import os -import gradio as gr - -# Configure votre clé API -openai.api_key = os.environ['OpenaiKey'] - -def writing_assistant(debut, suite, instructions): - # Construction de la requête - - with open('instructions.txt', 'r') as fichier: - # Lecture du contenu du fichier - instructions = fichier.read() + "\n" + instructions - - prompt = f"DEBUT = '{debut}'\n SUITE = '{suite}' \n INSTRUCTIONS = {instructions}" - - messages = [ - {"role": "system", "content": f"Tu es un assistant d'écriture. Tu aides un auteur contemporain à écrire, en t'inspirant de son style littéraire."}, - {"role": "user", "content": prompt} - ] - - # Call GPT-3.5-turbo API - response = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages=messages, - temperature=0.9 - ) - - # Get generated text - texte_reecrit = response.choices[0].message['content'].strip() - - return texte_reecrit - -# Définition d'inputs par défaut -with open('debut_par_defaut.txt', 'r') as fichier: - # Lecture du contenu du fichier - debut_par_defaut = fichier.read() - -with open('suite_par_defaut.txt', 'r') as fichier: - # Lecture du contenu du fichier - suite_par_defaut = fichier.read() - -# Création de l'interface Gradio -iface = gr.Interface( - fn=writing_assistant, - inputs=[ - gr.inputs.Textbox(lines=5, label="Début", default = debut_par_defaut), - gr.inputs.Textbox(lines=5, label="Suite", default = suite_par_defaut), - gr.inputs.Textbox(lines=2, label="Instructions additionnelles") - ], - outputs=gr.outputs.Textbox(label="Texte réécrit"), - title="Assistant d'écriture", - description="par Nicolas \nRéécrit un brouillon en respectant un début avec un style donné." 
-) - -# Lancer l'interface -iface.launch() \ No newline at end of file diff --git a/spaces/Nultx/VITS-TTS/text/ngu_dialect.py b/spaces/Nultx/VITS-TTS/text/ngu_dialect.py deleted file mode 100644 index ce3e12bbf0469426872eed5f681985d3e1be9b26..0000000000000000000000000000000000000000 --- a/spaces/Nultx/VITS-TTS/text/ngu_dialect.py +++ /dev/null @@ -1,30 +0,0 @@ -import re -import opencc - - -dialects = {'SZ': 'suzhou', 'WX': 'wuxi', 'CZ': 'changzhou', 'HZ': 'hangzhou', - 'SX': 'shaoxing', 'NB': 'ningbo', 'JJ': 'jingjiang', 'YX': 'yixing', - 'JD': 'jiading', 'ZR': 'zhenru', 'PH': 'pinghu', 'TX': 'tongxiang', - 'JS': 'jiashan', 'HN': 'xiashi', 'LP': 'linping', 'XS': 'xiaoshan', - 'FY': 'fuyang', 'RA': 'ruao', 'CX': 'cixi', 'SM': 'sanmen', - 'TT': 'tiantai', 'WZ': 'wenzhou', 'SC': 'suichang', 'YB': 'youbu'} - -converters = {} - -for dialect in dialects.values(): - try: - converters[dialect] = opencc.OpenCC(dialect) - except: - pass - - -def ngu_dialect_to_ipa(text, dialect): - dialect = dialects[dialect] - text = converters[dialect].convert(text).replace('-','').replace('$',' ') - text = re.sub(r'[、;:]', ',', text) - text = re.sub(r'\s*,\s*', ', ', text) - text = re.sub(r'\s*。\s*', '. ', text) - text = re.sub(r'\s*?\s*', '? ', text) - text = re.sub(r'\s*!\s*', '! ', text) - text = re.sub(r'\s*$', '', text) - return text diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_recognition/models/w2l_conv_glu_enc.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_recognition/models/w2l_conv_glu_enc.py deleted file mode 100644 index 655a9b0d19d11e35511392a016f9d6b7d7aa2925..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_recognition/models/w2l_conv_glu_enc.py +++ /dev/null @@ -1,177 +0,0 @@ -#!/usr/bin/env python3 - -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
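- # Reading aid, inferred from the code below rather than added behavior: this module defines a wav2letter-style speech encoder built from weight-normalized 1-D convolutions with GLU activations, followed by a GLU linear layer and a final projection to the vocabulary size.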
- -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq.models import ( - FairseqEncoder, - FairseqEncoderModel, - register_model, - register_model_architecture, -) -from fairseq.modules.fairseq_dropout import FairseqDropout - - -default_conv_enc_config = """[ - (400, 13, 170, 0.2), - (440, 14, 0, 0.214), - (484, 15, 0, 0.22898), - (532, 16, 0, 0.2450086), - (584, 17, 0, 0.262159202), - (642, 18, 0, 0.28051034614), - (706, 19, 0, 0.30014607037), - (776, 20, 0, 0.321156295296), - (852, 21, 0, 0.343637235966), - (936, 22, 0, 0.367691842484), - (1028, 23, 0, 0.393430271458), - (1130, 24, 0, 0.42097039046), - (1242, 25, 0, 0.450438317792), - (1366, 26, 0, 0.481969000038), - (1502, 27, 0, 0.51570683004), - (1652, 28, 0, 0.551806308143), - (1816, 29, 0, 0.590432749713), -]""" - - -@register_model("asr_w2l_conv_glu_encoder") -class W2lConvGluEncoderModel(FairseqEncoderModel): - def __init__(self, encoder): - super().__init__(encoder) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - parser.add_argument( - "--input-feat-per-channel", - type=int, - metavar="N", - help="encoder input dimension per input channel", - ) - parser.add_argument( - "--in-channels", - type=int, - metavar="N", - help="number of encoder input channels", - ) - parser.add_argument( - "--conv-enc-config", - type=str, - metavar="EXPR", - help=""" - an array of tuples each containing the configuration of one conv layer - [(out_channels, kernel_size, padding, dropout), ...] - """, - ) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - conv_enc_config = getattr(args, "conv_enc_config", default_conv_enc_config) - encoder = W2lConvGluEncoder( - vocab_size=len(task.target_dictionary), - input_feat_per_channel=args.input_feat_per_channel, - in_channels=args.in_channels, - conv_enc_config=eval(conv_enc_config), - ) - return cls(encoder) - - def get_normalized_probs(self, net_output, log_probs, sample=None): - lprobs = super().get_normalized_probs(net_output, log_probs, sample) - lprobs.batch_first = False - return lprobs - - -class W2lConvGluEncoder(FairseqEncoder): - def __init__( - self, vocab_size, input_feat_per_channel, in_channels, conv_enc_config - ): - super().__init__(None) - - self.input_dim = input_feat_per_channel - if in_channels != 1: - raise ValueError("only 1 input channel is currently supported") - - self.conv_layers = nn.ModuleList() - self.linear_layers = nn.ModuleList() - self.dropouts = [] - cur_channels = input_feat_per_channel - - for out_channels, kernel_size, padding, dropout in conv_enc_config: - layer = nn.Conv1d(cur_channels, out_channels, kernel_size, padding=padding) - layer.weight.data.mul_(math.sqrt(3)) # match wav2letter init - self.conv_layers.append(nn.utils.weight_norm(layer)) - self.dropouts.append( - FairseqDropout(dropout, module_name=self.__class__.__name__) - ) - if out_channels % 2 != 0: - raise ValueError("odd # of out_channels is incompatible with GLU") - cur_channels = out_channels // 2 # halved by GLU - - for out_channels in [2 * cur_channels, vocab_size]: - layer = nn.Linear(cur_channels, out_channels) - layer.weight.data.mul_(math.sqrt(3)) - self.linear_layers.append(nn.utils.weight_norm(layer)) - cur_channels = out_channels // 2 - - def forward(self, src_tokens, src_lengths, **kwargs): - - """ - src_tokens: padded tensor (B, T, C * feat) - src_lengths: tensor of original lengths of input utterances (B,) - """ - B, T, _ = src_tokens.size() - x = 
src_tokens.transpose(1, 2).contiguous() # (B, feat, T) assuming C == 1 - - for layer_idx in range(len(self.conv_layers)): - x = self.conv_layers[layer_idx](x) - x = F.glu(x, dim=1) - x = self.dropouts[layer_idx](x) - - x = x.transpose(1, 2).contiguous() # (B, T, 908) - x = self.linear_layers[0](x) - x = F.glu(x, dim=2) - x = self.dropouts[-1](x) - x = self.linear_layers[1](x) - - assert x.size(0) == B - assert x.size(1) == T - - encoder_out = x.transpose(0, 1) # (T, B, vocab_size) - - # need to debug this -- find a simpler/elegant way in pytorch APIs - encoder_padding_mask = ( - torch.arange(T).view(1, T).expand(B, -1).to(x.device) - >= src_lengths.view(B, 1).expand(-1, T) - ).t() # (B x T) -> (T x B) - - return { - "encoder_out": encoder_out, # (T, B, vocab_size) - "encoder_padding_mask": encoder_padding_mask, # (T, B) - } - - def reorder_encoder_out(self, encoder_out, new_order): - encoder_out["encoder_out"] = encoder_out["encoder_out"].index_select( - 1, new_order - ) - encoder_out["encoder_padding_mask"] = encoder_out[ - "encoder_padding_mask" - ].index_select(1, new_order) - return encoder_out - - def max_positions(self): - """Maximum input length supported by the encoder.""" - return (1e6, 1e6) # an arbitrary large number - - -@register_model_architecture("asr_w2l_conv_glu_encoder", "w2l_conv_glu_enc") -def w2l_conv_glu_enc(args): - args.input_feat_per_channel = getattr(args, "input_feat_per_channel", 80) - args.in_channels = getattr(args, "in_channels", 1) - args.conv_enc_config = getattr(args, "conv_enc_config", default_conv_enc_config) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/criterions/sentence_ranking.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/criterions/sentence_ranking.py deleted file mode 100644 index d4c76341d4d87e6d0da21ac89e833ce0bda13a0c..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/criterions/sentence_ranking.py +++ /dev/null @@ -1,120 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch -import torch.nn.functional as F -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion - - -@register_criterion("sentence_ranking") -class SentenceRankingCriterion(FairseqCriterion): - def __init__(self, task, ranking_head_name, save_predictions, num_classes): - super().__init__(task) - self.ranking_head_name = ranking_head_name - if save_predictions is not None: - self.prediction_h = open(save_predictions, "w") - else: - self.prediction_h = None - self.num_classes = num_classes - - def __del__(self): - if self.prediction_h is not None: - self.prediction_h.close() - - @staticmethod - def add_args(parser): - # fmt: off - parser.add_argument('--save-predictions', metavar='FILE', - help='file to save predictions to') - parser.add_argument('--ranking-head-name', - default='sentence_classification_head', - help='name of the ranking head to use') - # fmt: on - - def forward(self, model, sample, reduce=True): - """Compute ranking loss for the given sample. 
- - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - assert ( - hasattr(model, "classification_heads") - and self.ranking_head_name in model.classification_heads - ), "model must provide sentence ranking head for --criterion=sentence_ranking" - - scores = [] - for idx in range(self.num_classes): - score, _ = model( - **sample["net_input{idx}".format(idx=idx + 1)], - classification_head_name=self.ranking_head_name, - ) - scores.append(score) - - logits = torch.cat(scores, dim=1) - sample_size = logits.size(0) - - if "target" in sample: - targets = model.get_targets(sample, [logits]).view(-1) - lprobs = F.log_softmax(logits, dim=-1, dtype=torch.float32) - loss = F.nll_loss(lprobs, targets, reduction="sum") - else: - targets = None - loss = torch.tensor(0.0, requires_grad=True) - - if self.prediction_h is not None: - preds = logits.argmax(dim=1) - for i, (id, pred) in enumerate(zip(sample["id"].tolist(), preds.tolist())): - if targets is not None: - label = targets[i].item() - print("{}\t{}\t{}".format(id, pred, label), file=self.prediction_h) - else: - print("{}\t{}".format(id, pred), file=self.prediction_h) - - logging_output = { - "loss": loss.data, - "ntokens": sample["ntokens"], - "nsentences": sample_size, - "sample_size": sample_size, - } - if targets is not None: - logging_output["ncorrect"] = (logits.argmax(dim=1) == targets).sum() - - return loss, sample_size, logging_output - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - loss_sum = sum(log.get("loss", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - nsentences = sum(log.get("nsentences", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - - metrics.log_scalar( - "loss", loss_sum / sample_size / math.log(2), sample_size, round=3 - ) - if sample_size != ntokens: - metrics.log_scalar( - "nll_loss", loss_sum / ntokens / math.log(2), ntokens, round=3 - ) - - if len(logging_outputs) > 0 and "ncorrect" in logging_outputs[0]: - ncorrect = sum(log.get("ncorrect", 0) for log in logging_outputs) - metrics.log_scalar( - "accuracy", 100.0 * ncorrect / nsentences, nsentences, round=1 - ) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. - """ - return True diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/roll_dataset.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/roll_dataset.py deleted file mode 100644 index a2915eeb3e8fb4dfb4b2bb33e0464ad0783d854c..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/roll_dataset.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch - -from . 
import BaseWrapperDataset - - -class RollDataset(BaseWrapperDataset): - def __init__(self, dataset, shifts): - super().__init__(dataset) - self.shifts = shifts - - def __getitem__(self, index): - item = self.dataset[index] - return torch.roll(item, self.shifts) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/tests/test_backtranslation_dataset.py b/spaces/OFA-Sys/OFA-vqa/fairseq/tests/test_backtranslation_dataset.py deleted file mode 100644 index dffc3b49387dfdc046ea23d7db179377040b7cbc..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/tests/test_backtranslation_dataset.py +++ /dev/null @@ -1,123 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import unittest - -import tests.utils as test_utils -import torch -from fairseq.data import ( - BacktranslationDataset, - LanguagePairDataset, - TransformEosDataset, -) -from fairseq.sequence_generator import SequenceGenerator - - -class TestBacktranslationDataset(unittest.TestCase): - def setUp(self): - ( - self.tgt_dict, - self.w1, - self.w2, - self.src_tokens, - self.src_lengths, - self.model, - ) = test_utils.sequence_generator_setup() - - dummy_src_samples = self.src_tokens - - self.tgt_dataset = test_utils.TestDataset(data=dummy_src_samples) - self.cuda = torch.cuda.is_available() - - def _backtranslation_dataset_helper( - self, - remove_eos_from_input_src, - remove_eos_from_output_src, - ): - tgt_dataset = LanguagePairDataset( - src=self.tgt_dataset, - src_sizes=self.tgt_dataset.sizes, - src_dict=self.tgt_dict, - tgt=None, - tgt_sizes=None, - tgt_dict=None, - ) - - generator = SequenceGenerator( - [self.model], - tgt_dict=self.tgt_dict, - max_len_a=0, - max_len_b=200, - beam_size=2, - unk_penalty=0, - ) - - backtranslation_dataset = BacktranslationDataset( - tgt_dataset=TransformEosDataset( - dataset=tgt_dataset, - eos=self.tgt_dict.eos(), - # remove eos from the input src - remove_eos_from_src=remove_eos_from_input_src, - ), - src_dict=self.tgt_dict, - backtranslation_fn=( - lambda sample: generator.generate([self.model], sample) - ), - output_collater=TransformEosDataset( - dataset=tgt_dataset, - eos=self.tgt_dict.eos(), - # if we remove eos from the input src, then we need to add it - # back to the output tgt - append_eos_to_tgt=remove_eos_from_input_src, - remove_eos_from_src=remove_eos_from_output_src, - ).collater, - cuda=self.cuda, - ) - dataloader = torch.utils.data.DataLoader( - backtranslation_dataset, - batch_size=2, - collate_fn=backtranslation_dataset.collater, - ) - backtranslation_batch_result = next(iter(dataloader)) - - eos, pad, w1, w2 = self.tgt_dict.eos(), self.tgt_dict.pad(), self.w1, self.w2 - - # Note that we sort by src_lengths and add left padding, so actually - # ids will look like: [1, 0] - expected_src = torch.LongTensor([[w1, w2, w1, eos], [pad, pad, w1, eos]]) - if remove_eos_from_output_src: - expected_src = expected_src[:, :-1] - expected_tgt = torch.LongTensor([[w1, w2, eos], [w1, w2, eos]]) - generated_src = backtranslation_batch_result["net_input"]["src_tokens"] - tgt_tokens = backtranslation_batch_result["target"] - - self.assertTensorEqual(expected_src, generated_src) - self.assertTensorEqual(expected_tgt, tgt_tokens) - - def test_backtranslation_dataset_no_eos_in_output_src(self): - self._backtranslation_dataset_helper( - remove_eos_from_input_src=False, - remove_eos_from_output_src=True, - ) - - def 
test_backtranslation_dataset_with_eos_in_output_src(self): - self._backtranslation_dataset_helper( - remove_eos_from_input_src=False, - remove_eos_from_output_src=False, - ) - - def test_backtranslation_dataset_no_eos_in_input_src(self): - self._backtranslation_dataset_helper( - remove_eos_from_input_src=True, - remove_eos_from_output_src=False, - ) - - def assertTensorEqual(self, t1, t2): - self.assertEqual(t1.size(), t2.size(), "size mismatch") - self.assertEqual(t1.ne(t2).long().sum(), 0) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/Old-Fat-Boy/Youtube_Thumbnail_CTR_Analyzer/app.py b/spaces/Old-Fat-Boy/Youtube_Thumbnail_CTR_Analyzer/app.py deleted file mode 100644 index 970f9b45f8db4686fe6bfb9f9cfaba734a1f5a12..0000000000000000000000000000000000000000 --- a/spaces/Old-Fat-Boy/Youtube_Thumbnail_CTR_Analyzer/app.py +++ /dev/null @@ -1,25 +0,0 @@ -# AUTOGENERATED! DO NOT EDIT! File to edit: app.ipynb. - -# %% auto 0 -__all__ = ['learn', 'categories', 'image', 'label', 'examples', 'intf', 'classify_image'] - -# %% app.ipynb 2 -from fastai.vision.all import * -import gradio as gr - -# %% app.ipynb 5 -learn = load_learner('model.pkl') - -# %% app.ipynb 7 -categories = ('Above Average', 'Below Average') - -def classify_image(img): - pred,idx,probs = learn.predict(img) - return dict(zip(categories, map(float,probs))) - -# %% app.ipynb 9 -image = gr.Image(shape=(192,192), label="Thumbnail") -label = gr.Label() -examples = ['example.jpg', 'above_average.jpg', 'below_average.jpg'] -intf = gr.Interface(fn=classify_image, inputs=image, outputs=label, examples=examples) -intf.launch(inline=False) diff --git a/spaces/Olivier-Truong/faster-whisper-webui-v2/src/languages.py b/spaces/Olivier-Truong/faster-whisper-webui-v2/src/languages.py deleted file mode 100644 index fbad66e4d34119d27d12e3dfecbe99b6fdde4db7..0000000000000000000000000000000000000000 --- a/spaces/Olivier-Truong/faster-whisper-webui-v2/src/languages.py +++ /dev/null @@ -1,147 +0,0 @@ -class Language(): - def __init__(self, code, name): - self.code = code - self.name = name - - def __str__(self): - return "Language(code={}, name={})".format(self.code, self.name) - -LANGUAGES = [ - Language('en', 'English'), - Language('zh', 'Chinese'), - Language('de', 'German'), - Language('es', 'Spanish'), - Language('ru', 'Russian'), - Language('ko', 'Korean'), - Language('fr', 'French'), - Language('ja', 'Japanese'), - Language('pt', 'Portuguese'), - Language('tr', 'Turkish'), - Language('pl', 'Polish'), - Language('ca', 'Catalan'), - Language('nl', 'Dutch'), - Language('ar', 'Arabic'), - Language('sv', 'Swedish'), - Language('it', 'Italian'), - Language('id', 'Indonesian'), - Language('hi', 'Hindi'), - Language('fi', 'Finnish'), - Language('vi', 'Vietnamese'), - Language('he', 'Hebrew'), - Language('uk', 'Ukrainian'), - Language('el', 'Greek'), - Language('ms', 'Malay'), - Language('cs', 'Czech'), - Language('ro', 'Romanian'), - Language('da', 'Danish'), - Language('hu', 'Hungarian'), - Language('ta', 'Tamil'), - Language('no', 'Norwegian'), - Language('th', 'Thai'), - Language('ur', 'Urdu'), - Language('hr', 'Croatian'), - Language('bg', 'Bulgarian'), - Language('lt', 'Lithuanian'), - Language('la', 'Latin'), - Language('mi', 'Maori'), - Language('ml', 'Malayalam'), - Language('cy', 'Welsh'), - Language('sk', 'Slovak'), - Language('te', 'Telugu'), - Language('fa', 'Persian'), - Language('lv', 'Latvian'), - Language('bn', 'Bengali'), - Language('sr', 'Serbian'), - Language('az', 'Azerbaijani'), - Language('sl', 'Slovenian'), - 
Language('kn', 'Kannada'), - Language('et', 'Estonian'), - Language('mk', 'Macedonian'), - Language('br', 'Breton'), - Language('eu', 'Basque'), - Language('is', 'Icelandic'), - Language('hy', 'Armenian'), - Language('ne', 'Nepali'), - Language('mn', 'Mongolian'), - Language('bs', 'Bosnian'), - Language('kk', 'Kazakh'), - Language('sq', 'Albanian'), - Language('sw', 'Swahili'), - Language('gl', 'Galician'), - Language('mr', 'Marathi'), - Language('pa', 'Punjabi'), - Language('si', 'Sinhala'), - Language('km', 'Khmer'), - Language('sn', 'Shona'), - Language('yo', 'Yoruba'), - Language('so', 'Somali'), - Language('af', 'Afrikaans'), - Language('oc', 'Occitan'), - Language('ka', 'Georgian'), - Language('be', 'Belarusian'), - Language('tg', 'Tajik'), - Language('sd', 'Sindhi'), - Language('gu', 'Gujarati'), - Language('am', 'Amharic'), - Language('yi', 'Yiddish'), - Language('lo', 'Lao'), - Language('uz', 'Uzbek'), - Language('fo', 'Faroese'), - Language('ht', 'Haitian creole'), - Language('ps', 'Pashto'), - Language('tk', 'Turkmen'), - Language('nn', 'Nynorsk'), - Language('mt', 'Maltese'), - Language('sa', 'Sanskrit'), - Language('lb', 'Luxembourgish'), - Language('my', 'Myanmar'), - Language('bo', 'Tibetan'), - Language('tl', 'Tagalog'), - Language('mg', 'Malagasy'), - Language('as', 'Assamese'), - Language('tt', 'Tatar'), - Language('haw', 'Hawaiian'), - Language('ln', 'Lingala'), - Language('ha', 'Hausa'), - Language('ba', 'Bashkir'), - Language('jw', 'Javanese'), - Language('su', 'Sundanese') -] - -_TO_LANGUAGE_CODE = { - **{language.code: language for language in LANGUAGES}, - "burmese": "my", - "valencian": "ca", - "flemish": "nl", - "haitian": "ht", - "letzeburgesch": "lb", - "pushto": "ps", - "panjabi": "pa", - "moldavian": "ro", - "moldovan": "ro", - "sinhalese": "si", - "castilian": "es", -} - -_FROM_LANGUAGE_NAME = { - **{language.name.lower(): language for language in LANGUAGES} -} - -def get_language_from_code(language_code, default=None) -> Language: - """Return the language name from the language code.""" - return _TO_LANGUAGE_CODE.get(language_code, default) - -def get_language_from_name(language, default=None) -> Language: - """Return the language code from the language name.""" - return _FROM_LANGUAGE_NAME.get(language.lower() if language else None, default) - -def get_language_names(): - """Return a list of language names.""" - return [language.name for language in LANGUAGES] - -if __name__ == "__main__": - # Test lookup - print(get_language_from_code('en')) - print(get_language_from_name('English')) - - print(get_language_names()) \ No newline at end of file diff --git a/spaces/Omnibus/MusicGen/tests/models/test_musicgen.py b/spaces/Omnibus/MusicGen/tests/models/test_musicgen.py deleted file mode 100644 index d43cf73763f6c690ab0b277227ac225b286fa143..0000000000000000000000000000000000000000 --- a/spaces/Omnibus/MusicGen/tests/models/test_musicgen.py +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import pytest -import torch - -from audiocraft.models import MusicGen - - -class TestSEANetModel: - def get_musicgen(self): - mg = MusicGen.get_pretrained(name='debug', device='cpu') - mg.set_generation_params(duration=2.0, extend_stride=2.) 
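-        # name='debug' is assumed to be a small test checkpoint; with duration=2.0 the tests below expect a 25 Hz frame rate, 32 kHz audio, and 64000-sample outputs.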
- return mg - - def test_base(self): - mg = self.get_musicgen() - assert mg.frame_rate == 25 - assert mg.sample_rate == 32000 - assert mg.audio_channels == 1 - - def test_generate_unconditional(self): - mg = self.get_musicgen() - wav = mg.generate_unconditional(3) - assert list(wav.shape) == [3, 1, 64000] - - def test_generate_continuation(self): - mg = self.get_musicgen() - prompt = torch.randn(3, 1, 32000) - wav = mg.generate_continuation(prompt, 32000) - assert list(wav.shape) == [3, 1, 64000] - - prompt = torch.randn(2, 1, 32000) - wav = mg.generate_continuation( - prompt, 32000, ['youpi', 'lapin dort']) - assert list(wav.shape) == [2, 1, 64000] - - prompt = torch.randn(2, 1, 32000) - with pytest.raises(AssertionError): - wav = mg.generate_continuation( - prompt, 32000, ['youpi', 'lapin dort', 'one too many']) - - def test_generate(self): - mg = self.get_musicgen() - wav = mg.generate( - ['youpi', 'lapin dort']) - assert list(wav.shape) == [2, 1, 64000] - - def test_generate_long(self): - mg = self.get_musicgen() - mg.max_duration = 3. - mg.set_generation_params(duration=4., extend_stride=2.) - wav = mg.generate( - ['youpi', 'lapin dort']) - assert list(wav.shape) == [2, 1, 32000 * 4] diff --git a/spaces/Omnibus/Video-Diffusion-WebUI/video_diffusion/stable_diffusion_video/image_generation.py b/spaces/Omnibus/Video-Diffusion-WebUI/video_diffusion/stable_diffusion_video/image_generation.py deleted file mode 100644 index f04b4bcc76ff3c8dc59d1c61004073a3a6815c01..0000000000000000000000000000000000000000 --- a/spaces/Omnibus/Video-Diffusion-WebUI/video_diffusion/stable_diffusion_video/image_generation.py +++ /dev/null @@ -1,363 +0,0 @@ -import json -import math -import random -import time -from pathlib import Path -from uuid import uuid4 - -import torch -from diffusers import __version__ as diffusers_version -from huggingface_hub import CommitOperationAdd, create_commit, create_repo - -from .upsampling import RealESRGANModel -from .utils import pad_along_axis - - -def get_all_files(root: Path): - dirs = [root] - while len(dirs) > 0: - dir = dirs.pop() - for candidate in dir.iterdir(): - if candidate.is_file(): - yield candidate - if candidate.is_dir(): - dirs.append(candidate) - - -def get_groups_of_n(n: int, iterator): - assert n > 1 - buffer = [] - for elt in iterator: - if len(buffer) == n: - yield buffer - buffer = [] - buffer.append(elt) - if len(buffer) != 0: - yield buffer - - -def upload_folder_chunked( - repo_id: str, - upload_dir: Path, - n: int = 100, - private: bool = False, - create_pr: bool = False, -): - """Upload a folder to the Hugging Face Hub in chunks of n files at a time. - Args: - repo_id (str): The repo id to upload to. - upload_dir (Path): The directory to upload. - n (int, *optional*, defaults to 100): The number of files to upload at a time. - private (bool, *optional*): Whether to upload the repo as private. - create_pr (bool, *optional*): Whether to create a PR after uploading instead of commiting directly. 
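-        Example (illustrative only; the repo id is a placeholder):: - upload_folder_chunked("your-username/video-frames", Path("./images/run1"), n=50)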
- """ - - url = create_repo(repo_id, exist_ok=True, private=private, repo_type="dataset") - print(f"Uploading files to: {url}") - - root = Path(upload_dir) - if not root.exists(): - raise ValueError(f"Upload directory {root} does not exist.") - - for i, file_paths in enumerate(get_groups_of_n(n, get_all_files(root))): - print(f"Committing {file_paths}") - operations = [ - CommitOperationAdd( - path_in_repo=f"{file_path.parent.name}/{file_path.name}", - path_or_fileobj=str(file_path), - ) - for file_path in file_paths - ] - create_commit( - repo_id=repo_id, - operations=operations, - commit_message=f"Upload part {i}", - repo_type="dataset", - create_pr=create_pr, - ) - - -def generate_input_batches(pipeline, prompts, seeds, batch_size, height, width): - if len(prompts) != len(seeds): - raise ValueError("Number of prompts and seeds must be equal.") - - embeds_batch, noise_batch = None, None - batch_idx = 0 - for i, (prompt, seed) in enumerate(zip(prompts, seeds)): - embeds = pipeline.embed_text(prompt) - noise = torch.randn( - (1, pipeline.unet.in_channels, height // 8, width // 8), - device=pipeline.device, - generator=torch.Generator(device="cpu" if pipeline.device.type == "mps" else pipeline.device).manual_seed( - seed - ), - ) - embeds_batch = embeds if embeds_batch is None else torch.cat([embeds_batch, embeds]) - noise_batch = noise if noise_batch is None else torch.cat([noise_batch, noise]) - batch_is_ready = embeds_batch.shape[0] == batch_size or i + 1 == len(prompts) - if not batch_is_ready: - continue - yield batch_idx, embeds_batch.type(torch.cuda.HalfTensor), noise_batch.type(torch.cuda.HalfTensor) - batch_idx += 1 - del embeds_batch, noise_batch - torch.cuda.empty_cache() - embeds_batch, noise_batch = None, None - - -def generate_images( - pipeline, - prompt, - batch_size=1, - num_batches=1, - seeds=None, - num_inference_steps=50, - guidance_scale=7.5, - output_dir="./images", - image_file_ext=".jpg", - upsample=False, - height=512, - width=512, - eta=0.0, - push_to_hub=False, - repo_id=None, - private=False, - create_pr=False, - name=None, -): - """Generate images using the StableDiffusion pipeline. - Args: - pipeline (StableDiffusionWalkPipeline): The StableDiffusion pipeline instance. - prompt (str): The prompt to use for the image generation. - batch_size (int, *optional*, defaults to 1): The batch size to use for image generation. - num_batches (int, *optional*, defaults to 1): The number of batches to generate. - seeds (list[int], *optional*): The seeds to use for the image generation. - num_inference_steps (int, *optional*, defaults to 50): The number of inference steps to take. - guidance_scale (float, *optional*, defaults to 7.5): The guidance scale to use for image generation. - output_dir (str, *optional*, defaults to "./images"): The output directory to save the images to. - image_file_ext (str, *optional*, defaults to '.jpg'): The image file extension to use. - upsample (bool, *optional*, defaults to False): Whether to upsample the images. - height (int, *optional*, defaults to 512): The height of the images to generate. - width (int, *optional*, defaults to 512): The width of the images to generate. - eta (float, *optional*, defaults to 0.0): The eta parameter to use for image generation. - push_to_hub (bool, *optional*, defaults to False): Whether to push the generated images to the Hugging Face Hub. - repo_id (str, *optional*): The repo id to push the images to. - private (bool, *optional*): Whether to push the repo as private. 
- create_pr (bool, *optional*): Whether to create a PR after pushing instead of commiting directly. - name (str, *optional*, defaults to current timestamp str): The name of the sub-directory of - output_dir to save the images to. - """ - if push_to_hub: - if repo_id is None: - raise ValueError("Must provide repo_id if push_to_hub is True.") - - name = name or time.strftime("%Y%m%d-%H%M%S") - save_path = Path(output_dir) / name - save_path.mkdir(exist_ok=False, parents=True) - prompt_config_path = save_path / "prompt_config.json" - - num_images = batch_size * num_batches - seeds = seeds or [random.choice(list(range(0, 9999999))) for _ in range(num_images)] - if len(seeds) != num_images: - raise ValueError("Number of seeds must be equal to batch_size * num_batches.") - - if upsample: - if getattr(pipeline, "upsampler", None) is None: - pipeline.upsampler = RealESRGANModel.from_pretrained("nateraw/real-esrgan") - pipeline.upsampler.to(pipeline.device) - - cfg = dict( - prompt=prompt, - guidance_scale=guidance_scale, - eta=eta, - num_inference_steps=num_inference_steps, - upsample=upsample, - height=height, - width=width, - scheduler=dict(pipeline.scheduler.config), - tiled=pipeline.tiled, - diffusers_version=diffusers_version, - device_name=torch.cuda.get_device_name(0) if torch.cuda.is_available() else "unknown", - ) - prompt_config_path.write_text(json.dumps(cfg, indent=2, sort_keys=False)) - - frame_index = 0 - frame_filepaths = [] - for batch_idx, embeds, noise in generate_input_batches( - pipeline, [prompt] * num_images, seeds, batch_size, height, width - ): - print(f"Generating batch {batch_idx}") - - outputs = pipeline( - text_embeddings=embeds, - latents=noise, - num_inference_steps=num_inference_steps, - guidance_scale=guidance_scale, - eta=eta, - height=height, - width=width, - output_type="pil" if not upsample else "numpy", - )["images"] - if upsample: - images = [] - for output in outputs: - images.append(pipeline.upsampler(output)) - else: - images = outputs - - for image in images: - frame_filepath = save_path / f"{seeds[frame_index]}{image_file_ext}" - image.save(frame_filepath) - frame_filepaths.append(str(frame_filepath)) - frame_index += 1 - - return frame_filepaths - - if push_to_hub: - upload_folder_chunked(repo_id, save_path, private=private, create_pr=create_pr) - - -def generate_images_flax( - pipeline, - params, - prompt, - batch_size=1, - num_batches=1, - seeds=None, - num_inference_steps=50, - guidance_scale=7.5, - output_dir="./images", - image_file_ext=".jpg", - upsample=False, - height=512, - width=512, - push_to_hub=False, - repo_id=None, - private=False, - create_pr=False, - name=None, -): - import jax - from flax.training.common_utils import shard - - """Generate images using the StableDiffusion pipeline. - Args: - pipeline (StableDiffusionWalkPipeline): The StableDiffusion pipeline instance. - params (`Union[Dict, FrozenDict]`): The model parameters. - prompt (str): The prompt to use for the image generation. - batch_size (int, *optional*, defaults to 1): The batch size to use for image generation. - num_batches (int, *optional*, defaults to 1): The number of batches to generate. - seeds (int, *optional*): The seed to use for the image generation. - num_inference_steps (int, *optional*, defaults to 50): The number of inference steps to take. - guidance_scale (float, *optional*, defaults to 7.5): The guidance scale to use for image generation. - output_dir (str, *optional*, defaults to "./images"): The output directory to save the images to. 
- image_file_ext (str, *optional*, defaults to '.jpg'): The image file extension to use. - upsample (bool, *optional*, defaults to False): Whether to upsample the images. - height (int, *optional*, defaults to 512): The height of the images to generate. - width (int, *optional*, defaults to 512): The width of the images to generate. - push_to_hub (bool, *optional*, defaults to False): Whether to push the generated images to the Hugging Face Hub. - repo_id (str, *optional*): The repo id to push the images to. - private (bool, *optional*): Whether to push the repo as private. - create_pr (bool, *optional*): Whether to create a PR after pushing instead of commiting directly. - name (str, *optional*, defaults to current timestamp str): The name of the sub-directory of - output_dir to save the images to. - """ - if push_to_hub: - if repo_id is None: - raise ValueError("Must provide repo_id if push_to_hub is True.") - - name = name or time.strftime("%Y%m%d-%H%M%S") - save_path = Path(output_dir) / name - save_path.mkdir(exist_ok=False, parents=True) - prompt_config_path = save_path / "prompt_config.json" - - num_images = batch_size * num_batches - seeds = seeds or random.choice(list(range(0, 9999999))) - prng_seed = jax.random.PRNGKey(seeds) - - if upsample: - if getattr(pipeline, "upsampler", None) is None: - pipeline.upsampler = RealESRGANModel.from_pretrained("nateraw/real-esrgan") - if not torch.cuda.is_available(): - print("Upsampling is recommended to be done on a GPU, as it is very slow on CPU") - else: - pipeline.upsampler = pipeline.upsampler.cuda() - - cfg = dict( - prompt=prompt, - guidance_scale=guidance_scale, - num_inference_steps=num_inference_steps, - upsample=upsample, - height=height, - width=width, - scheduler=dict(pipeline.scheduler.config), - # tiled=pipeline.tiled, - diffusers_version=diffusers_version, - device_name=torch.cuda.get_device_name(0) if torch.cuda.is_available() else "unknown", - ) - prompt_config_path.write_text(json.dumps(cfg, indent=2, sort_keys=False)) - - NUM_TPU_CORES = jax.device_count() - jit = True # force jit, assume params are already sharded - batch_size_total = NUM_TPU_CORES * batch_size if jit else batch_size - - def generate_input_batches(prompts, batch_size): - prompt_batch = None - for batch_idx in range(math.ceil(len(prompts) / batch_size)): - prompt_batch = prompts[batch_idx * batch_size : (batch_idx + 1) * batch_size] - yield batch_idx, prompt_batch - - frame_index = 0 - frame_filepaths = [] - for batch_idx, prompt_batch in generate_input_batches([prompt] * num_images, batch_size_total): - # This batch size correspond to each TPU core, so we are generating batch_size * NUM_TPU_CORES images - print(f"Generating batches: {batch_idx*NUM_TPU_CORES} - {min((batch_idx+1)*NUM_TPU_CORES, num_batches)}") - prompt_ids_batch = pipeline.prepare_inputs(prompt_batch) - prng_seed_batch = prng_seed - - if jit: - padded = False - # Check if len of prompt_batch is multiple of NUM_TPU_CORES, if not pad its ids - if len(prompt_batch) % NUM_TPU_CORES != 0: - padded = True - pad_size = NUM_TPU_CORES - (len(prompt_batch) % NUM_TPU_CORES) - # Pad embeds_batch and noise_batch with zeros in batch dimension - prompt_ids_batch = pad_along_axis(prompt_ids_batch, pad_size, axis=0) - - prompt_ids_batch = shard(prompt_ids_batch) - prng_seed_batch = jax.random.split(prng_seed, jax.device_count()) - - outputs = pipeline( - params, - prng_seed=prng_seed_batch, - prompt_ids=prompt_ids_batch, - height=height, - width=width, - guidance_scale=guidance_scale, - 
num_inference_steps=num_inference_steps, - output_type="pil" if not upsample else "numpy", - jit=jit, - )["images"] - - if jit: - # check if we padded and remove that padding from outputs - if padded: - outputs = outputs[:-pad_size] - - if upsample: - images = [] - for output in outputs: - images.append(pipeline.upsampler(output)) - else: - images = outputs - - for image in images: - uuid = str(uuid4()) - frame_filepath = save_path / f"{uuid}{image_file_ext}" - image.save(frame_filepath) - frame_filepaths.append(str(frame_filepath)) - frame_index += 1 - - return frame_filepaths - - if push_to_hub: - upload_folder_chunked(repo_id, save_path, private=private, create_pr=create_pr) diff --git a/spaces/Omnibus/text-to-vid/README.md b/spaces/Omnibus/text-to-vid/README.md deleted file mode 100644 index 138ece8a04cada5fe7b02434b1e4261af4ae4443..0000000000000000000000000000000000000000 --- a/spaces/Omnibus/text-to-vid/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Text To Vid -emoji: 💻 -colorFrom: green -colorTo: pink -sdk: gradio -sdk_version: 3.42.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/OpenGVLab/DragGAN/stylegan2/op/conv2d_gradfix.py b/spaces/OpenGVLab/DragGAN/stylegan2/op/conv2d_gradfix.py deleted file mode 100644 index 8636ed397f6162975b46b2a32667211567b6800e..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/DragGAN/stylegan2/op/conv2d_gradfix.py +++ /dev/null @@ -1,229 +0,0 @@ -import contextlib -import warnings - -import torch -from torch import autograd -from torch.nn import functional as F - -enabled = True -weight_gradients_disabled = False - - -@contextlib.contextmanager -def no_weight_gradients(): - global weight_gradients_disabled - - old = weight_gradients_disabled - weight_gradients_disabled = True - yield - weight_gradients_disabled = old - - -def conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1): - if could_use_op(input): - return conv2d_gradfix( - transpose=False, - weight_shape=weight.shape, - stride=stride, - padding=padding, - output_padding=0, - dilation=dilation, - groups=groups, - ).apply(input, weight, bias) - - return F.conv2d( - input=input, - weight=weight, - bias=bias, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups, - ) - - -def conv_transpose2d( - input, - weight, - bias=None, - stride=1, - padding=0, - output_padding=0, - groups=1, - dilation=1, -): - if could_use_op(input): - return conv2d_gradfix( - transpose=True, - weight_shape=weight.shape, - stride=stride, - padding=padding, - output_padding=output_padding, - groups=groups, - dilation=dilation, - ).apply(input, weight, bias) - - return F.conv_transpose2d( - input=input, - weight=weight, - bias=bias, - stride=stride, - padding=padding, - output_padding=output_padding, - dilation=dilation, - groups=groups, - ) - - -def could_use_op(input): - return False - - if (not enabled) or (not torch.backends.cudnn.enabled): - return False - - if input.device.type != "cuda": - return False - - if any(torch.__version__.startswith(x) for x in ["1.7.", "1.8."]): - return True - - warnings.warn( - f"conv2d_gradfix not supported on PyTorch {torch.__version__}. Falling back to torch.nn.functional.conv2d()." 
- ) - - return False - - -def ensure_tuple(xs, ndim): - xs = tuple(xs) if isinstance(xs, (tuple, list)) else (xs,) * ndim - - return xs - - -conv2d_gradfix_cache = dict() - - -def conv2d_gradfix( - transpose, weight_shape, stride, padding, output_padding, dilation, groups -): - ndim = 2 - weight_shape = tuple(weight_shape) - stride = ensure_tuple(stride, ndim) - padding = ensure_tuple(padding, ndim) - output_padding = ensure_tuple(output_padding, ndim) - dilation = ensure_tuple(dilation, ndim) - - key = (transpose, weight_shape, stride, padding, output_padding, dilation, groups) - if key in conv2d_gradfix_cache: - return conv2d_gradfix_cache[key] - - common_kwargs = dict( - stride=stride, padding=padding, dilation=dilation, groups=groups - ) - - def calc_output_padding(input_shape, output_shape): - if transpose: - return [0, 0] - - return [ - input_shape[i + 2] - - (output_shape[i + 2] - 1) * stride[i] - - (1 - 2 * padding[i]) - - dilation[i] * (weight_shape[i + 2] - 1) - for i in range(ndim) - ] - - class Conv2d(autograd.Function): - @staticmethod - def forward(ctx, input, weight, bias): - if not transpose: - out = F.conv2d(input=input, weight=weight, bias=bias, **common_kwargs) - - else: - out = F.conv_transpose2d( - input=input, - weight=weight, - bias=bias, - output_padding=output_padding, - **common_kwargs, - ) - - ctx.save_for_backward(input, weight) - - return out - - @staticmethod - def backward(ctx, grad_output): - input, weight = ctx.saved_tensors - grad_input, grad_weight, grad_bias = None, None, None - - if ctx.needs_input_grad[0]: - p = calc_output_padding( - input_shape=input.shape, output_shape=grad_output.shape - ) - grad_input = conv2d_gradfix( - transpose=(not transpose), - weight_shape=weight_shape, - output_padding=p, - **common_kwargs, - ).apply(grad_output, weight, None) - - if ctx.needs_input_grad[1] and not weight_gradients_disabled: - grad_weight = Conv2dGradWeight.apply(grad_output, input) - - if ctx.needs_input_grad[2]: - grad_bias = grad_output.sum((0, 2, 3)) - - return grad_input, grad_weight, grad_bias - - class Conv2dGradWeight(autograd.Function): - @staticmethod - def forward(ctx, grad_output, input): - op = torch._C._jit_get_operation( - "aten::cudnn_convolution_backward_weight" - if not transpose - else "aten::cudnn_convolution_transpose_backward_weight" - ) - flags = [ - torch.backends.cudnn.benchmark, - torch.backends.cudnn.deterministic, - torch.backends.cudnn.allow_tf32, - ] - grad_weight = op( - weight_shape, - grad_output, - input, - padding, - stride, - dilation, - groups, - *flags, - ) - ctx.save_for_backward(grad_output, input) - - return grad_weight - - @staticmethod - def backward(ctx, grad_grad_weight): - grad_output, input = ctx.saved_tensors - grad_grad_output, grad_grad_input = None, None - - if ctx.needs_input_grad[0]: - grad_grad_output = Conv2d.apply(input, grad_grad_weight, None) - - if ctx.needs_input_grad[1]: - p = calc_output_padding( - input_shape=input.shape, output_shape=grad_output.shape - ) - grad_grad_input = conv2d_gradfix( - transpose=(not transpose), - weight_shape=weight_shape, - output_padding=p, - **common_kwargs, - ).apply(grad_output, grad_grad_weight, None) - - return grad_grad_output, grad_grad_input - - conv2d_gradfix_cache[key] = Conv2d - - return Conv2d diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/debug.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/debug.py deleted file 
mode 100644 index 0a4437fb5ae7522e46ca6c42ba5fd980df250446..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/debug.py +++ /dev/null @@ -1,283 +0,0 @@ -import cv2 -import numpy as np -import torch -import torch.nn.functional as F - -COLORS = ((np.random.rand(1300, 3) * 0.4 + 0.6) * 255).astype( - np.uint8).reshape(1300, 1, 1, 3) - -def _get_color_image(heatmap): - heatmap = heatmap.reshape( - heatmap.shape[0], heatmap.shape[1], heatmap.shape[2], 1) - if heatmap.shape[0] == 1: - color_map = (heatmap * np.ones((1, 1, 1, 3), np.uint8) * 255).max( - axis=0).astype(np.uint8) # H, W, 3 - else: - color_map = (heatmap * COLORS[:heatmap.shape[0]]).max(axis=0).astype(np.uint8) # H, W, 3 - - return color_map - -def _blend_image(image, color_map, a=0.7): - color_map = cv2.resize(color_map, (image.shape[1], image.shape[0])) - ret = np.clip(image * (1 - a) + color_map * a, 0, 255).astype(np.uint8) - return ret - -def _blend_image_heatmaps(image, color_maps, a=0.7): - merges = np.zeros((image.shape[0], image.shape[1], 3), np.float32) - for color_map in color_maps: - color_map = cv2.resize(color_map, (image.shape[1], image.shape[0])) - merges = np.maximum(merges, color_map) - ret = np.clip(image * (1 - a) + merges * a, 0, 255).astype(np.uint8) - return ret - -def _decompose_level(x, shapes_per_level, N): - ''' - x: LNHiWi x C - ''' - x = x.view(x.shape[0], -1) - ret = [] - st = 0 - for l in range(len(shapes_per_level)): - ret.append([]) - h = shapes_per_level[l][0].int().item() - w = shapes_per_level[l][1].int().item() - for i in range(N): - ret[l].append(x[st + h * w * i:st + h * w * (i + 1)].view( - h, w, -1).permute(2, 0, 1)) - st += h * w * N - return ret - -def _imagelist_to_tensor(images): - images = [x for x in images] - image_sizes = [x.shape[-2:] for x in images] - h = max([size[0] for size in image_sizes]) - w = max([size[1] for size in image_sizes]) - S = 32 - h, w = ((h - 1) // S + 1) * S, ((w - 1) // S + 1) * S - images = [F.pad(x, (0, w - x.shape[2], 0, h - x.shape[1], 0, 0)) \ - for x in images] - images = torch.stack(images) - return images - - -def _ind2il(ind, shapes_per_level, N): - r = ind - l = 0 - S = 0 - while r - S >= N * shapes_per_level[l][0] * shapes_per_level[l][1]: - S += N * shapes_per_level[l][0] * shapes_per_level[l][1] - l += 1 - i = (r - S) // (shapes_per_level[l][0] * shapes_per_level[l][1]) - return i, l - -def debug_train( - images, gt_instances, flattened_hms, reg_targets, labels, pos_inds, - shapes_per_level, locations, strides): - ''' - images: N x 3 x H x W - flattened_hms: LNHiWi x C - shapes_per_level: L x 2 [(H_i, W_i)] - locations: LNHiWi x 2 - ''' - reg_inds = torch.nonzero( - reg_targets.max(dim=1)[0] > 0).squeeze(1) - N = len(images) - images = _imagelist_to_tensor(images) - repeated_locations = [torch.cat([loc] * N, dim=0) \ - for loc in locations] - locations = torch.cat(repeated_locations, dim=0) - gt_hms = _decompose_level(flattened_hms, shapes_per_level, N) - masks = flattened_hms.new_zeros((flattened_hms.shape[0], 1)) - masks[pos_inds] = 1 - masks = _decompose_level(masks, shapes_per_level, N) - for i in range(len(images)): - image = images[i].detach().cpu().numpy().transpose(1, 2, 0) - color_maps = [] - for l in range(len(gt_hms)): - color_map = _get_color_image( - gt_hms[l][i].detach().cpu().numpy()) - color_maps.append(color_map) - cv2.imshow('gthm_{}'.format(l), color_map) - blend = _blend_image_heatmaps(image.copy(), color_maps) - if 
gt_instances is not None: - bboxes = gt_instances[i].gt_boxes.tensor - for j in range(len(bboxes)): - bbox = bboxes[j] - cv2.rectangle( - blend, - (int(bbox[0]), int(bbox[1])), - (int(bbox[2]), int(bbox[3])), - (0, 0, 255), 3, cv2.LINE_AA) - - for j in range(len(pos_inds)): - image_id, l = _ind2il(pos_inds[j], shapes_per_level, N) - if image_id != i: - continue - loc = locations[pos_inds[j]] - cv2.drawMarker( - blend, (int(loc[0]), int(loc[1])), (0, 255, 255), - markerSize=(l + 1) * 16) - - for j in range(len(reg_inds)): - image_id, l = _ind2il(reg_inds[j], shapes_per_level, N) - if image_id != i: - continue - ltrb = reg_targets[reg_inds[j]] - ltrb *= strides[l] - loc = locations[reg_inds[j]] - bbox = [(loc[0] - ltrb[0]), (loc[1] - ltrb[1]), - (loc[0] + ltrb[2]), (loc[1] + ltrb[3])] - cv2.rectangle( - blend, - (int(bbox[0]), int(bbox[1])), - (int(bbox[2]), int(bbox[3])), - (255, 0, 0), 1, cv2.LINE_AA) - cv2.circle(blend, (int(loc[0]), int(loc[1])), 2, (255, 0, 0), -1) - - cv2.imshow('blend', blend) - cv2.waitKey() - - -def debug_test( - images, logits_pred, reg_pred, agn_hm_pred=[], preds=[], - vis_thresh=0.3, debug_show_name=False, mult_agn=False): - ''' - images: N x 3 x H x W - class_target: LNHiWi x C - cat_agn_heatmap: LNHiWi - shapes_per_level: L x 2 [(H_i, W_i)] - ''' - N = len(images) - for i in range(len(images)): - image = images[i].detach().cpu().numpy().transpose(1, 2, 0) - result = image.copy().astype(np.uint8) - pred_image = image.copy().astype(np.uint8) - color_maps = [] - L = len(logits_pred) - for l in range(L): - if logits_pred[0] is not None: - stride = min(image.shape[0], image.shape[1]) / min( - logits_pred[l][i].shape[1], logits_pred[l][i].shape[2]) - else: - stride = min(image.shape[0], image.shape[1]) / min( - agn_hm_pred[l][i].shape[1], agn_hm_pred[l][i].shape[2]) - stride = stride if stride < 60 else 64 if stride < 100 else 128 - if logits_pred[0] is not None: - if mult_agn: - logits_pred[l][i] = logits_pred[l][i] * agn_hm_pred[l][i] - color_map = _get_color_image( - logits_pred[l][i].detach().cpu().numpy()) - color_maps.append(color_map) - cv2.imshow('predhm_{}'.format(l), color_map) - - if debug_show_name: - from detectron2.data.datasets.lvis_v1_categories import LVIS_CATEGORIES - cat2name = [x['name'] for x in LVIS_CATEGORIES] - for j in range(len(preds[i].scores) if preds is not None else 0): - if preds[i].scores[j] > vis_thresh: - bbox = preds[i].proposal_boxes[j] \ - if preds[i].has('proposal_boxes') else \ - preds[i].pred_boxes[j] - bbox = bbox.tensor[0].detach().cpu().numpy().astype(np.int32) - cat = int(preds[i].pred_classes[j]) \ - if preds[i].has('pred_classes') else 0 - cl = COLORS[cat, 0, 0] - cv2.rectangle( - pred_image, (int(bbox[0]), int(bbox[1])), - (int(bbox[2]), int(bbox[3])), - (int(cl[0]), int(cl[1]), int(cl[2])), 2, cv2.LINE_AA) - if debug_show_name: - txt = '{}{:.1f}'.format( - cat2name[cat] if cat > 0 else '', - preds[i].scores[j]) - font = cv2.FONT_HERSHEY_SIMPLEX - cat_size = cv2.getTextSize(txt, font, 0.5, 2)[0] - cv2.rectangle( - pred_image, - (int(bbox[0]), int(bbox[1] - cat_size[1] - 2)), - (int(bbox[0] + cat_size[0]), int(bbox[1] - 2)), - (int(cl[0]), int(cl[1]), int(cl[2])), -1) - cv2.putText( - pred_image, txt, (int(bbox[0]), int(bbox[1] - 2)), - font, 0.5, (0, 0, 0), thickness=1, lineType=cv2.LINE_AA) - - - if agn_hm_pred[l] is not None: - agn_hm_ = agn_hm_pred[l][i, 0, :, :, None].detach().cpu().numpy() - agn_hm_ = (agn_hm_ * np.array([255, 255, 255]).reshape( - 1, 1, 3)).astype(np.uint8) - cv2.imshow('agn_hm_{}'.format(l), 
agn_hm_) - blend = _blend_image_heatmaps(image.copy(), color_maps) - cv2.imshow('blend', blend) - cv2.imshow('preds', pred_image) - cv2.waitKey() - -global cnt -cnt = 0 - -def debug_second_stage(images, instances, proposals=None, vis_thresh=0.3, - save_debug=False, debug_show_name=False): - images = _imagelist_to_tensor(images) - if debug_show_name: - from detectron2.data.datasets.lvis_v1_categories import LVIS_CATEGORIES - cat2name = [x['name'] for x in LVIS_CATEGORIES] - for i in range(len(images)): - image = images[i].detach().cpu().numpy().transpose(1, 2, 0).astype(np.uint8).copy() - if instances[i].has('gt_boxes'): - bboxes = instances[i].gt_boxes.tensor.cpu().numpy() - scores = np.ones(bboxes.shape[0]) - cats = instances[i].gt_classes.cpu().numpy() - else: - bboxes = instances[i].pred_boxes.tensor.cpu().numpy() - scores = instances[i].scores.cpu().numpy() - cats = instances[i].pred_classes.cpu().numpy() - for j in range(len(bboxes)): - if scores[j] > vis_thresh: - bbox = bboxes[j] - cl = COLORS[cats[j], 0, 0] - cl = (int(cl[0]), int(cl[1]), int(cl[2])) - cv2.rectangle( - image, - (int(bbox[0]), int(bbox[1])), - (int(bbox[2]), int(bbox[3])), - cl, 2, cv2.LINE_AA) - if debug_show_name: - cat = cats[j] - txt = '{}{:.1f}'.format( - cat2name[cat] if cat > 0 else '', - scores[j]) - font = cv2.FONT_HERSHEY_SIMPLEX - cat_size = cv2.getTextSize(txt, font, 0.5, 2)[0] - cv2.rectangle( - image, - (int(bbox[0]), int(bbox[1] - cat_size[1] - 2)), - (int(bbox[0] + cat_size[0]), int(bbox[1] - 2)), - (int(cl[0]), int(cl[1]), int(cl[2])), -1) - cv2.putText( - image, txt, (int(bbox[0]), int(bbox[1] - 2)), - font, 0.5, (0, 0, 0), thickness=1, lineType=cv2.LINE_AA) - if proposals is not None: - proposal_image = images[i].detach().cpu().numpy().transpose(1, 2, 0).astype(np.uint8).copy() - bboxes = proposals[i].proposal_boxes.tensor.cpu().numpy() - if proposals[i].has('scores'): - scores = proposals[i].scores.cpu().numpy() - else: - scores = proposals[i].objectness_logits.sigmoid().cpu().numpy() - for j in range(len(bboxes)): - if scores[j] > vis_thresh: - bbox = bboxes[j] - cl = (209, 159, 83) - cv2.rectangle( - proposal_image, - (int(bbox[0]), int(bbox[1])), - (int(bbox[2]), int(bbox[3])), - cl, 2, cv2.LINE_AA) - - cv2.imshow('image', image) - if proposals is not None: - cv2.imshow('proposals', proposal_image) - if save_debug: - global cnt - cnt += 1 - cv2.imwrite('output/save_debug/{}.jpg'.format(cnt), proposal_image) - cv2.waitKey() \ No newline at end of file diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/models/ade20k/resnet.py b/spaces/OpenGVLab/InternGPT/third-party/lama/bin/models/ade20k/resnet.py deleted file mode 100644 index 3e1d521f171c984cf6a7ff3dcebd96f8c5faf908..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/models/ade20k/resnet.py +++ /dev/null @@ -1,181 +0,0 @@ -"""Modified from https://github.com/CSAILVision/semantic-segmentation-pytorch""" - -import math - -import torch.nn as nn -from torch.nn import BatchNorm2d - -from .utils import load_url - -__all__ = ['ResNet', 'resnet50'] - - -model_urls = { - 'resnet50': 'http://sceneparsing.csail.mit.edu/model/pretrained_resnet/resnet50-imagenet.pth', -} - - -def conv3x3(in_planes, out_planes, stride=1): - "3x3 convolution with padding" - return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, - padding=1, bias=False) - - -class BasicBlock(nn.Module): - expansion = 1 - - def __init__(self, inplanes, planes, stride=1, downsample=None): - super(BasicBlock, self).__init__() 
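-        # Two 3x3 conv + batch-norm stages; when stride or channel count changes, `downsample` (if provided) reshapes the identity branch so the residual addition in forward() lines up.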
- self.conv1 = conv3x3(inplanes, planes, stride) - self.bn1 = BatchNorm2d(planes) - self.relu = nn.ReLU(inplace=True) - self.conv2 = conv3x3(planes, planes) - self.bn2 = BatchNorm2d(planes) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -class Bottleneck(nn.Module): - expansion = 4 - - def __init__(self, inplanes, planes, stride=1, downsample=None): - super(Bottleneck, self).__init__() - self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False) - self.bn1 = BatchNorm2d(planes) - self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, - padding=1, bias=False) - self.bn2 = BatchNorm2d(planes) - self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False) - self.bn3 = BatchNorm2d(planes * 4) - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -class ResNet(nn.Module): - - def __init__(self, block, layers, num_classes=1000): - self.inplanes = 128 - super(ResNet, self).__init__() - self.conv1 = conv3x3(3, 64, stride=2) - self.bn1 = BatchNorm2d(64) - self.relu1 = nn.ReLU(inplace=True) - self.conv2 = conv3x3(64, 64) - self.bn2 = BatchNorm2d(64) - self.relu2 = nn.ReLU(inplace=True) - self.conv3 = conv3x3(64, 128) - self.bn3 = BatchNorm2d(128) - self.relu3 = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - - self.layer1 = self._make_layer(block, 64, layers[0]) - self.layer2 = self._make_layer(block, 128, layers[1], stride=2) - self.layer3 = self._make_layer(block, 256, layers[2], stride=2) - self.layer4 = self._make_layer(block, 512, layers[3], stride=2) - self.avgpool = nn.AvgPool2d(7, stride=1) - self.fc = nn.Linear(512 * block.expansion, num_classes) - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels - m.weight.data.normal_(0, math.sqrt(2. 
/ n)) - elif isinstance(m, BatchNorm2d): - m.weight.data.fill_(1) - m.bias.data.zero_() - - def _make_layer(self, block, planes, blocks, stride=1): - downsample = None - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - nn.Conv2d(self.inplanes, planes * block.expansion, - kernel_size=1, stride=stride, bias=False), - BatchNorm2d(planes * block.expansion), - ) - - layers = [] - layers.append(block(self.inplanes, planes, stride, downsample)) - self.inplanes = planes * block.expansion - for i in range(1, blocks): - layers.append(block(self.inplanes, planes)) - - return nn.Sequential(*layers) - - def forward(self, x): - x = self.relu1(self.bn1(self.conv1(x))) - x = self.relu2(self.bn2(self.conv2(x))) - x = self.relu3(self.bn3(self.conv3(x))) - x = self.maxpool(x) - - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.layer4(x) - - x = self.avgpool(x) - x = x.view(x.size(0), -1) - x = self.fc(x) - - return x - - -def resnet50(pretrained=False, **kwargs): - """Constructs a ResNet-50 model. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - """ - model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs) - if pretrained: - model.load_state_dict(load_url(model_urls['resnet50']), strict=False) - return model - - -def resnet18(pretrained=False, **kwargs): - """Constructs a ResNet-18 model. - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - """ - model = ResNet(BasicBlock, [2, 2, 2, 2], **kwargs) - if pretrained: - model.load_state_dict(load_url(model_urls['resnet18'])) - return model \ No newline at end of file diff --git a/spaces/OpenMotionLab/MotionGPT/mGPT/data/tools/geometry.py b/spaces/OpenMotionLab/MotionGPT/mGPT/data/tools/geometry.py deleted file mode 100644 index e6eafa2e1f2459a0f6f5ad1280c71e6a9625549e..0000000000000000000000000000000000000000 --- a/spaces/OpenMotionLab/MotionGPT/mGPT/data/tools/geometry.py +++ /dev/null @@ -1,566 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved. -# Check PYTORCH3D_LICENCE before use - -import functools -from typing import Optional - -import torch -import torch.nn.functional as F - - -""" -The transformation matrices returned from the functions in this file assume -the points on which the transformation will be applied are column vectors. -i.e. the R matrix is structured as - - R = [ - [Rxx, Rxy, Rxz], - [Ryx, Ryy, Ryz], - [Rzx, Rzy, Rzz], - ] # (3, 3) - -This matrix can be applied to column vectors by post multiplication -by the points e.g. - - points = [[0], [1], [2]] # (3 x 1) xyz coordinates of a point - transformed_points = R * points - -To apply the same matrix to points which are row vectors, the R matrix -can be transposed and pre multiplied by the points: - -e.g. - points = [[0, 1, 2]] # (1 x 3) xyz coordinates of a point - transformed_points = points * R.transpose(1, 0) -""" - - -# Added -def matrix_of_angles(cos, sin, inv=False, dim=2): - assert dim in [2, 3] - sin = -sin if inv else sin - if dim == 2: - row1 = torch.stack((cos, -sin), axis=-1) - row2 = torch.stack((sin, cos), axis=-1) - return torch.stack((row1, row2), axis=-2) - elif dim == 3: - row1 = torch.stack((cos, -sin, 0*cos), axis=-1) - row2 = torch.stack((sin, cos, 0*cos), axis=-1) - row3 = torch.stack((0*sin, 0*cos, 1+0*cos), axis=-1) - return torch.stack((row1, row2, row3),axis=-2) - - -def quaternion_to_matrix(quaternions): - """ - Convert rotations given as quaternions to rotation matrices. 
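As an illustrative aside (not part of the deleted file), the column-vector convention described in the file's header docstring can be checked with a plain 90-degree rotation about Z; the row-vector form uses the transposed matrix, as noted above.

import torch

R = torch.tensor([[0., -1., 0.],
                  [1., 0., 0.],
                  [0., 0., 1.]])          # 90 degrees about Z
p_col = torch.tensor([[1.], [0.], [0.]])  # (3, 1) column vector
print(R @ p_col)                          # -> (0, 1, 0)
p_row = torch.tensor([[1., 0., 0.]])      # (1, 3) row vector
print(p_row @ R.transpose(1, 0))          # same point in row-vector form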
- - Args: - quaternions: quaternions with real part first, - as tensor of shape (..., 4). - - Returns: - Rotation matrices as tensor of shape (..., 3, 3). - """ - r, i, j, k = torch.unbind(quaternions, -1) - two_s = 2.0 / (quaternions * quaternions).sum(-1) - - o = torch.stack( - ( - 1 - two_s * (j * j + k * k), - two_s * (i * j - k * r), - two_s * (i * k + j * r), - two_s * (i * j + k * r), - 1 - two_s * (i * i + k * k), - two_s * (j * k - i * r), - two_s * (i * k - j * r), - two_s * (j * k + i * r), - 1 - two_s * (i * i + j * j), - ), - -1, - ) - return o.reshape(quaternions.shape[:-1] + (3, 3)) - - -def _copysign(a, b): - """ - Return a tensor where each element has the absolute value taken from the, - corresponding element of a, with sign taken from the corresponding - element of b. This is like the standard copysign floating-point operation, - but is not careful about negative 0 and NaN. - - Args: - a: source tensor. - b: tensor whose signs will be used, of the same shape as a. - - Returns: - Tensor of the same shape as a with the signs of b. - """ - signs_differ = (a < 0) != (b < 0) - return torch.where(signs_differ, -a, a) - - -def _sqrt_positive_part(x): - """ - Returns torch.sqrt(torch.max(0, x)) - but with a zero subgradient where x is 0. - """ - ret = torch.zeros_like(x) - positive_mask = x > 0 - ret[positive_mask] = torch.sqrt(x[positive_mask]) - return ret - - -def matrix_to_quaternion(matrix): - """ - Convert rotations given as rotation matrices to quaternions. - - Args: - matrix: Rotation matrices as tensor of shape (..., 3, 3). - - Returns: - quaternions with real part first, as tensor of shape (..., 4). - """ - if matrix.size(-1) != 3 or matrix.size(-2) != 3: - raise ValueError(f"Invalid rotation matrix shape f{matrix.shape}.") - m00 = matrix[..., 0, 0] - m11 = matrix[..., 1, 1] - m22 = matrix[..., 2, 2] - o0 = 0.5 * _sqrt_positive_part(1 + m00 + m11 + m22) - x = 0.5 * _sqrt_positive_part(1 + m00 - m11 - m22) - y = 0.5 * _sqrt_positive_part(1 - m00 + m11 - m22) - z = 0.5 * _sqrt_positive_part(1 - m00 - m11 + m22) - o1 = _copysign(x, matrix[..., 2, 1] - matrix[..., 1, 2]) - o2 = _copysign(y, matrix[..., 0, 2] - matrix[..., 2, 0]) - o3 = _copysign(z, matrix[..., 1, 0] - matrix[..., 0, 1]) - return torch.stack((o0, o1, o2, o3), -1) - - -def _axis_angle_rotation(axis: str, angle): - """ - Return the rotation matrices for one of the rotations about an axis - of which Euler angles describe, for each value of the angle given. - - Args: - axis: Axis label "X" or "Y or "Z". - angle: any shape tensor of Euler angles in radians - - Returns: - Rotation matrices as tensor of shape (..., 3, 3). - """ - - cos = torch.cos(angle) - sin = torch.sin(angle) - one = torch.ones_like(angle) - zero = torch.zeros_like(angle) - - if axis == "X": - R_flat = (one, zero, zero, zero, cos, -sin, zero, sin, cos) - if axis == "Y": - R_flat = (cos, zero, sin, zero, one, zero, -sin, zero, cos) - if axis == "Z": - R_flat = (cos, -sin, zero, sin, cos, zero, zero, zero, one) - - return torch.stack(R_flat, -1).reshape(angle.shape + (3, 3)) - - -def euler_angles_to_matrix(euler_angles, convention: str): - """ - Convert rotations given as Euler angles in radians to rotation matrices. - - Args: - euler_angles: Euler angles in radians as tensor of shape (..., 3). - convention: Convention string of three uppercase letters from - {"X", "Y", and "Z"}. - - Returns: - Rotation matrices as tensor of shape (..., 3, 3). 
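For intuition only (this example is not from the original file), quaternion_to_matrix and matrix_to_quaternion above invert each other on unit quaternions whose real part is non-negative:

import torch

q = torch.tensor([0.5, 0.5, 0.5, 0.5])  # unit quaternion, real part first
R = quaternion_to_matrix(q)             # a 120-degree rotation about (1, 1, 1)
print(matrix_to_quaternion(R))          # -> approximately (0.5, 0.5, 0.5, 0.5)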
- """ - if euler_angles.dim() == 0 or euler_angles.shape[-1] != 3: - raise ValueError("Invalid input euler angles.") - if len(convention) != 3: - raise ValueError("Convention must have 3 letters.") - if convention[1] in (convention[0], convention[2]): - raise ValueError(f"Invalid convention {convention}.") - for letter in convention: - if letter not in ("X", "Y", "Z"): - raise ValueError(f"Invalid letter {letter} in convention string.") - matrices = map(_axis_angle_rotation, convention, torch.unbind(euler_angles, -1)) - return functools.reduce(torch.matmul, matrices) - - -def _angle_from_tan( - axis: str, other_axis: str, data, horizontal: bool, tait_bryan: bool -): - """ - Extract the first or third Euler angle from the two members of - the matrix which are positive constant times its sine and cosine. - - Args: - axis: Axis label "X" or "Y or "Z" for the angle we are finding. - other_axis: Axis label "X" or "Y or "Z" for the middle axis in the - convention. - data: Rotation matrices as tensor of shape (..., 3, 3). - horizontal: Whether we are looking for the angle for the third axis, - which means the relevant entries are in the same row of the - rotation matrix. If not, they are in the same column. - tait_bryan: Whether the first and third axes in the convention differ. - - Returns: - Euler Angles in radians for each matrix in data as a tensor - of shape (...). - """ - - i1, i2 = {"X": (2, 1), "Y": (0, 2), "Z": (1, 0)}[axis] - if horizontal: - i2, i1 = i1, i2 - even = (axis + other_axis) in ["XY", "YZ", "ZX"] - if horizontal == even: - return torch.atan2(data[..., i1], data[..., i2]) - if tait_bryan: - return torch.atan2(-data[..., i2], data[..., i1]) - return torch.atan2(data[..., i2], -data[..., i1]) - - -def _index_from_letter(letter: str): - if letter == "X": - return 0 - if letter == "Y": - return 1 - if letter == "Z": - return 2 - - -def matrix_to_euler_angles(matrix, convention: str): - """ - Convert rotations given as rotation matrices to Euler angles in radians. - - Args: - matrix: Rotation matrices as tensor of shape (..., 3, 3). - convention: Convention string of three uppercase letters. - - Returns: - Euler angles in radians as tensor of shape (..., 3). - """ - if len(convention) != 3: - raise ValueError("Convention must have 3 letters.") - if convention[1] in (convention[0], convention[2]): - raise ValueError(f"Invalid convention {convention}.") - for letter in convention: - if letter not in ("X", "Y", "Z"): - raise ValueError(f"Invalid letter {letter} in convention string.") - if matrix.size(-1) != 3 or matrix.size(-2) != 3: - raise ValueError(f"Invalid rotation matrix shape f{matrix.shape}.") - i0 = _index_from_letter(convention[0]) - i2 = _index_from_letter(convention[2]) - tait_bryan = i0 != i2 - if tait_bryan: - central_angle = torch.asin( - matrix[..., i0, i2] * (-1.0 if i0 - i2 in [-1, 2] else 1.0) - ) - else: - central_angle = torch.acos(matrix[..., i0, i0]) - - o = ( - _angle_from_tan( - convention[0], convention[1], matrix[..., i2], False, tait_bryan - ), - central_angle, - _angle_from_tan( - convention[2], convention[1], matrix[..., i0, :], True, tait_bryan - ), - ) - return torch.stack(o, -1) - - -def random_quaternions( - n: int, dtype: Optional[torch.dtype] = None, device=None, requires_grad=False -): - """ - Generate random quaternions representing rotations, - i.e. versors with nonnegative real part. - - Args: - n: Number of quaternions in a batch to return. - dtype: Type to return. - device: Desired device of returned tensor. 
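A quick illustrative check (not part of the deleted file): euler_angles_to_matrix and matrix_to_euler_angles above round-trip as long as the angles stay in the principal range of the chosen convention.

import torch

angles = torch.tensor([0.1, -0.4, 0.7])  # radians
R = euler_angles_to_matrix(angles, "XYZ")
print(matrix_to_euler_angles(R, "XYZ"))  # -> approximately (0.1, -0.4, 0.7)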
Default: - uses the current device for the default tensor type. - requires_grad: Whether the resulting tensor should have the gradient - flag set. - - Returns: - Quaternions as tensor of shape (N, 4). - """ - o = torch.randn((n, 4), dtype=dtype, device=device, requires_grad=requires_grad) - s = (o * o).sum(1) - o = o / _copysign(torch.sqrt(s), o[:, 0])[:, None] - return o - - -def random_rotations( - n: int, dtype: Optional[torch.dtype] = None, device=None, requires_grad=False -): - """ - Generate random rotations as 3x3 rotation matrices. - - Args: - n: Number of rotation matrices in a batch to return. - dtype: Type to return. - device: Device of returned tensor. Default: if None, - uses the current device for the default tensor type. - requires_grad: Whether the resulting tensor should have the gradient - flag set. - - Returns: - Rotation matrices as tensor of shape (n, 3, 3). - """ - quaternions = random_quaternions( - n, dtype=dtype, device=device, requires_grad=requires_grad - ) - return quaternion_to_matrix(quaternions) - - -def random_rotation( - dtype: Optional[torch.dtype] = None, device=None, requires_grad=False -): - """ - Generate a single random 3x3 rotation matrix. - - Args: - dtype: Type to return - device: Device of returned tensor. Default: if None, - uses the current device for the default tensor type - requires_grad: Whether the resulting tensor should have the gradient - flag set - - Returns: - Rotation matrix as tensor of shape (3, 3). - """ - return random_rotations(1, dtype, device, requires_grad)[0] - - -def standardize_quaternion(quaternions): - """ - Convert a unit quaternion to a standard form: one in which the real - part is non negative. - - Args: - quaternions: Quaternions with real part first, - as tensor of shape (..., 4). - - Returns: - Standardized quaternions as tensor of shape (..., 4). - """ - return torch.where(quaternions[..., 0:1] < 0, -quaternions, quaternions) - - -def quaternion_raw_multiply(a, b): - """ - Multiply two quaternions. - Usual torch rules for broadcasting apply. - - Args: - a: Quaternions as tensor of shape (..., 4), real part first. - b: Quaternions as tensor of shape (..., 4), real part first. - - Returns: - The product of a and b, a tensor of quaternions shape (..., 4). - """ - aw, ax, ay, az = torch.unbind(a, -1) - bw, bx, by, bz = torch.unbind(b, -1) - ow = aw * bw - ax * bx - ay * by - az * bz - ox = aw * bx + ax * bw + ay * bz - az * by - oy = aw * by - ax * bz + ay * bw + az * bx - oz = aw * bz + ax * by - ay * bx + az * bw - return torch.stack((ow, ox, oy, oz), -1) - - -def quaternion_multiply(a, b): - """ - Multiply two quaternions representing rotations, returning the quaternion - representing their composition, i.e. the versor with nonnegative real part. - Usual torch rules for broadcasting apply. - - Args: - a: Quaternions as tensor of shape (..., 4), real part first. - b: Quaternions as tensor of shape (..., 4), real part first. - - Returns: - The product of a and b, a tensor of quaternions of shape (..., 4). - """ - ab = quaternion_raw_multiply(a, b) - return standardize_quaternion(ab) - - -def quaternion_invert(quaternion): - """ - Given a quaternion representing rotation, get the quaternion representing - its inverse. - - Args: - quaternion: Quaternions as tensor of shape (..., 4), with real part - first, which must be versors (unit quaternions). - - Returns: - The inverse, a tensor of quaternions of shape (..., 4). 
- """ - - return quaternion * quaternion.new_tensor([1, -1, -1, -1]) - - -def quaternion_apply(quaternion, point): - """ - Apply the rotation given by a quaternion to a 3D point. - Usual torch rules for broadcasting apply. - - Args: - quaternion: Tensor of quaternions, real part first, of shape (..., 4). - point: Tensor of 3D points of shape (..., 3). - - Returns: - Tensor of rotated points of shape (..., 3). - """ - if point.size(-1) != 3: - raise ValueError(f"Points are not in 3D, f{point.shape}.") - real_parts = point.new_zeros(point.shape[:-1] + (1,)) - point_as_quaternion = torch.cat((real_parts, point), -1) - out = quaternion_raw_multiply( - quaternion_raw_multiply(quaternion, point_as_quaternion), - quaternion_invert(quaternion), - ) - return out[..., 1:] - - -def axis_angle_to_matrix(axis_angle): - """ - Convert rotations given as axis/angle to rotation matrices. - - Args: - axis_angle: Rotations given as a vector in axis angle form, - as a tensor of shape (..., 3), where the magnitude is - the angle turned anticlockwise in radians around the - vector's direction. - - Returns: - Rotation matrices as tensor of shape (..., 3, 3). - """ - return quaternion_to_matrix(axis_angle_to_quaternion(axis_angle)) - - -def matrix_to_axis_angle(matrix): - """ - Convert rotations given as rotation matrices to axis/angle. - - Args: - matrix: Rotation matrices as tensor of shape (..., 3, 3). - - Returns: - Rotations given as a vector in axis angle form, as a tensor - of shape (..., 3), where the magnitude is the angle - turned anticlockwise in radians around the vector's - direction. - """ - return quaternion_to_axis_angle(matrix_to_quaternion(matrix)) - - -def axis_angle_to_quaternion(axis_angle): - """ - Convert rotations given as axis/angle to quaternions. - - Args: - axis_angle: Rotations given as a vector in axis angle form, - as a tensor of shape (..., 3), where the magnitude is - the angle turned anticlockwise in radians around the - vector's direction. - - Returns: - quaternions with real part first, as tensor of shape (..., 4). - """ - angles = torch.norm(axis_angle, p=2, dim=-1, keepdim=True) - half_angles = 0.5 * angles - eps = 1e-6 - small_angles = angles.abs() < eps - sin_half_angles_over_angles = torch.empty_like(angles) - sin_half_angles_over_angles[~small_angles] = ( - torch.sin(half_angles[~small_angles]) / angles[~small_angles] - ) - # for x small, sin(x/2) is about x/2 - (x/2)^3/6 - # so sin(x/2)/x is about 1/2 - (x*x)/48 - sin_half_angles_over_angles[small_angles] = ( - 0.5 - (angles[small_angles] * angles[small_angles]) / 48 - ) - quaternions = torch.cat( - [torch.cos(half_angles), axis_angle * sin_half_angles_over_angles], dim=-1 - ) - return quaternions - - -def quaternion_to_axis_angle(quaternions): - """ - Convert rotations given as quaternions to axis/angle. - - Args: - quaternions: quaternions with real part first, - as tensor of shape (..., 4). - - Returns: - Rotations given as a vector in axis angle form, as a tensor - of shape (..., 3), where the magnitude is the angle - turned anticlockwise in radians around the vector's - direction. 
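As a small sanity sketch (illustrative, not from the original file), quaternion_apply above agrees with first converting the quaternion to a matrix:

import math
import torch

q = torch.tensor([math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4)])  # 90 degrees about Z
p = torch.tensor([1.0, 0.0, 0.0])
print(quaternion_apply(q, p))       # -> approximately (0, 1, 0)
print(quaternion_to_matrix(q) @ p)  # same rotation applied through the matrix form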
- """ - norms = torch.norm(quaternions[..., 1:], p=2, dim=-1, keepdim=True) - half_angles = torch.atan2(norms, quaternions[..., :1]) - angles = 2 * half_angles - eps = 1e-6 - small_angles = angles.abs() < eps - sin_half_angles_over_angles = torch.empty_like(angles) - sin_half_angles_over_angles[~small_angles] = ( - torch.sin(half_angles[~small_angles]) / angles[~small_angles] - ) - # for x small, sin(x/2) is about x/2 - (x/2)^3/6 - # so sin(x/2)/x is about 1/2 - (x*x)/48 - sin_half_angles_over_angles[small_angles] = ( - 0.5 - (angles[small_angles] * angles[small_angles]) / 48 - ) - return quaternions[..., 1:] / sin_half_angles_over_angles - - -def rotation_6d_to_matrix(d6: torch.Tensor) -> torch.Tensor: - """ - Converts 6D rotation representation by Zhou et al. [1] to rotation matrix - using Gram--Schmidt orthogonalisation per Section B of [1]. - Args: - d6: 6D rotation representation, of size (*, 6) - - Returns: - batch of rotation matrices of size (*, 3, 3) - - [1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H. - On the Continuity of Rotation Representations in Neural Networks. - IEEE Conference on Computer Vision and Pattern Recognition, 2019. - Retrieved from http://arxiv.org/abs/1812.07035 - """ - - a1, a2 = d6[..., :3], d6[..., 3:] - b1 = F.normalize(a1, dim=-1) - b2 = a2 - (b1 * a2).sum(-1, keepdim=True) * b1 - b2 = F.normalize(b2, dim=-1) - b3 = torch.cross(b1, b2, dim=-1) - return torch.stack((b1, b2, b3), dim=-2) - - -def matrix_to_rotation_6d(matrix: torch.Tensor) -> torch.Tensor: - """ - Converts rotation matrices to 6D rotation representation by Zhou et al. [1] - by dropping the last row. Note that 6D representation is not unique. - Args: - matrix: batch of rotation matrices of size (*, 3, 3) - - Returns: - 6D rotation representation, of size (*, 6) - - [1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H. - On the Continuity of Rotation Representations in Neural Networks. - IEEE Conference on Computer Vision and Pattern Recognition, 2019. 
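A minimal sketch (not in the original file) of why the 6D representation round-trips: the first two rows of a rotation matrix are enough to rebuild it through the Gram-Schmidt step in rotation_6d_to_matrix above.

import torch

R = torch.tensor([[0., -1., 0.],
                  [1., 0., 0.],
                  [0., 0., 1.]])
d6 = R[:2, :].reshape(6)          # the 6 numbers matrix_to_rotation_6d (below) keeps
print(rotation_6d_to_matrix(d6))  # -> R again, up to numerical precision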
- Retrieved from http://arxiv.org/abs/1812.07035 - """ - return matrix[..., :2, :].clone().reshape(*matrix.size()[:-2], 6) diff --git a/spaces/OptimalScale/Robin-33b/lmflow/pipeline/__init__.py b/spaces/OptimalScale/Robin-33b/lmflow/pipeline/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/chord-ignatzek-names.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/chord-ignatzek-names.go deleted file mode 100644 index 0c5f8e0091e98963e39cedca2d7d3377d03926ed..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/chord-ignatzek-names.go and /dev/null differ diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/backbones/mobilenet_v2.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/backbones/mobilenet_v2.py deleted file mode 100644 index ab6b3791692a0d1b5da3601875711710b7bd01ba..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/backbones/mobilenet_v2.py +++ /dev/null @@ -1,180 +0,0 @@ -import logging - -import torch.nn as nn -from annotator.uniformer.mmcv.cnn import ConvModule, constant_init, kaiming_init -from annotator.uniformer.mmcv.runner import load_checkpoint -from torch.nn.modules.batchnorm import _BatchNorm - -from ..builder import BACKBONES -from ..utils import InvertedResidual, make_divisible - - -@BACKBONES.register_module() -class MobileNetV2(nn.Module): - """MobileNetV2 backbone. - - Args: - widen_factor (float): Width multiplier, multiply number of - channels in each layer by this amount. Default: 1.0. - strides (Sequence[int], optional): Strides of the first block of each - layer. If not specified, default config in ``arch_setting`` will - be used. - dilations (Sequence[int]): Dilation of each layer. - out_indices (None or Sequence[int]): Output from which stages. - Default: (7, ). - frozen_stages (int): Stages to be frozen (all param fixed). - Default: -1, which means not freezing any parameters. - conv_cfg (dict): Config dict for convolution layer. - Default: None, which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict): Config dict for activation layer. - Default: dict(type='ReLU6'). - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Default: False. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - """ - - # Parameters to build layers. 3 parameters are needed to construct a - # layer, from left to right: expand_ratio, channel, num_blocks. 
- arch_settings = [[1, 16, 1], [6, 24, 2], [6, 32, 3], [6, 64, 4], - [6, 96, 3], [6, 160, 3], [6, 320, 1]] - - def __init__(self, - widen_factor=1., - strides=(1, 2, 2, 2, 1, 2, 1), - dilations=(1, 1, 1, 1, 1, 1, 1), - out_indices=(1, 2, 4, 6), - frozen_stages=-1, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU6'), - norm_eval=False, - with_cp=False): - super(MobileNetV2, self).__init__() - self.widen_factor = widen_factor - self.strides = strides - self.dilations = dilations - assert len(strides) == len(dilations) == len(self.arch_settings) - self.out_indices = out_indices - for index in out_indices: - if index not in range(0, 7): - raise ValueError('the item in out_indices must in ' - f'range(0, 8). But received {index}') - - if frozen_stages not in range(-1, 7): - raise ValueError('frozen_stages must be in range(-1, 7). ' - f'But received {frozen_stages}') - self.out_indices = out_indices - self.frozen_stages = frozen_stages - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - self.norm_eval = norm_eval - self.with_cp = with_cp - - self.in_channels = make_divisible(32 * widen_factor, 8) - - self.conv1 = ConvModule( - in_channels=3, - out_channels=self.in_channels, - kernel_size=3, - stride=2, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - self.layers = [] - - for i, layer_cfg in enumerate(self.arch_settings): - expand_ratio, channel, num_blocks = layer_cfg - stride = self.strides[i] - dilation = self.dilations[i] - out_channels = make_divisible(channel * widen_factor, 8) - inverted_res_layer = self.make_layer( - out_channels=out_channels, - num_blocks=num_blocks, - stride=stride, - dilation=dilation, - expand_ratio=expand_ratio) - layer_name = f'layer{i + 1}' - self.add_module(layer_name, inverted_res_layer) - self.layers.append(layer_name) - - def make_layer(self, out_channels, num_blocks, stride, dilation, - expand_ratio): - """Stack InvertedResidual blocks to build a layer for MobileNetV2. - - Args: - out_channels (int): out_channels of block. - num_blocks (int): Number of blocks. - stride (int): Stride of the first block. - dilation (int): Dilation of the first block. - expand_ratio (int): Expand the number of channels of the - hidden layer in InvertedResidual by this ratio. 
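A hypothetical usage sketch for the backbone defined in this file, assuming the mmcv/annotator dependencies imported at the top are installed; the shapes in the comment are what the default settings should produce and are given for illustration only.

import torch

model = MobileNetV2(widen_factor=1.0, out_indices=(1, 2, 4, 6))
model.init_weights()
model.eval()
feats = model(torch.randn(1, 3, 224, 224))
# Four feature maps at strides 4, 8, 16 and 32, i.e. roughly
# (1, 24, 56, 56), (1, 32, 28, 28), (1, 96, 14, 14), (1, 320, 7, 7)
print([f.shape for f in feats])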
- """ - layers = [] - for i in range(num_blocks): - layers.append( - InvertedResidual( - self.in_channels, - out_channels, - stride if i == 0 else 1, - expand_ratio=expand_ratio, - dilation=dilation if i == 0 else 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - with_cp=self.with_cp)) - self.in_channels = out_channels - - return nn.Sequential(*layers) - - def init_weights(self, pretrained=None): - if isinstance(pretrained, str): - logger = logging.getLogger() - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, (_BatchNorm, nn.GroupNorm)): - constant_init(m, 1) - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x): - x = self.conv1(x) - - outs = [] - for i, layer_name in enumerate(self.layers): - layer = getattr(self, layer_name) - x = layer(x) - if i in self.out_indices: - outs.append(x) - - if len(outs) == 1: - return outs[0] - else: - return tuple(outs) - - def _freeze_stages(self): - if self.frozen_stages >= 0: - for param in self.conv1.parameters(): - param.requires_grad = False - for i in range(1, self.frozen_stages + 1): - layer = getattr(self, f'layer{i}') - layer.eval() - for param in layer.parameters(): - param.requires_grad = False - - def train(self, mode=True): - super(MobileNetV2, self).train(mode) - self._freeze_stages() - if mode and self.norm_eval: - for m in self.modules(): - if isinstance(m, _BatchNorm): - m.eval() diff --git a/spaces/PlanetHades361/Change-Your-Style/README.md b/spaces/PlanetHades361/Change-Your-Style/README.md deleted file mode 100644 index 81ccf10de11a6105e515cccbbb45b1e13137c0b7..0000000000000000000000000000000000000000 --- a/spaces/PlanetHades361/Change-Your-Style/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Change Your Style -emoji: ⚡ -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: deedax/Change-Your-Style ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/RamAnanth1/roomGPT/model.py b/spaces/RamAnanth1/roomGPT/model.py deleted file mode 100644 index 6420a55da888514dab11cb4f157a36ab5cd4af46..0000000000000000000000000000000000000000 --- a/spaces/RamAnanth1/roomGPT/model.py +++ /dev/null @@ -1,183 +0,0 @@ -# This file is adapted from gradio_*.py in https://github.com/lllyasviel/ControlNet/tree/f4748e3630d8141d7765e2bd9b1e348f47847707 -# The original license file is LICENSE.ControlNet in this repo. 
-from __future__ import annotations - -import gc -import pathlib -import sys - -import cv2 -import numpy as np -import PIL.Image -import torch -from diffusers import (ControlNetModel, DiffusionPipeline, - StableDiffusionControlNetPipeline, - UniPCMultistepScheduler) - -repo_dir = pathlib.Path(__file__).parent -submodule_dir = repo_dir / 'ControlNet' -sys.path.append(submodule_dir.as_posix()) - -from annotator.mlsd import apply_mlsd -from annotator.uniformer import apply_uniformer -from annotator.util import HWC3, resize_image - -CONTROLNET_MODEL_IDS = { - - 'hough': 'lllyasviel/sd-controlnet-mlsd', - -} - - -def download_all_controlnet_weights() -> None: - for model_id in CONTROLNET_MODEL_IDS.values(): - ControlNetModel.from_pretrained(model_id) - - -class Model: - def __init__(self, - base_model_id: str = 'runwayml/stable-diffusion-v1-5', - task_name: str = 'hough'): - self.device = torch.device( - 'cuda:0' if torch.cuda.is_available() else 'cpu') - self.base_model_id = '' - self.task_name = '' - self.pipe = self.load_pipe(base_model_id, task_name) - - def load_pipe(self, base_model_id: str, task_name) -> DiffusionPipeline: - if base_model_id == self.base_model_id and task_name == self.task_name and hasattr( - self, 'pipe'): - return self.pipe - model_id = CONTROLNET_MODEL_IDS[task_name] - controlnet = ControlNetModel.from_pretrained(model_id, - torch_dtype=torch.float16) - pipe = StableDiffusionControlNetPipeline.from_pretrained( - base_model_id, - safety_checker=None, - controlnet=controlnet, - torch_dtype=torch.float16) - pipe.scheduler = UniPCMultistepScheduler.from_config( - pipe.scheduler.config) - pipe.enable_xformers_memory_efficient_attention() - pipe.to(self.device) - torch.cuda.empty_cache() - gc.collect() - self.base_model_id = base_model_id - self.task_name = task_name - return pipe - - def set_base_model(self, base_model_id: str) -> str: - if not base_model_id or base_model_id == self.base_model_id: - return self.base_model_id - del self.pipe - torch.cuda.empty_cache() - gc.collect() - try: - self.pipe = self.load_pipe(base_model_id, self.task_name) - except Exception: - self.pipe = self.load_pipe(self.base_model_id, self.task_name) - return self.base_model_id - - def load_controlnet_weight(self, task_name: str) -> None: - if task_name == self.task_name: - return - del self.pipe.controlnet - torch.cuda.empty_cache() - gc.collect() - model_id = CONTROLNET_MODEL_IDS[task_name] - controlnet = ControlNetModel.from_pretrained(model_id, - torch_dtype=torch.float16) - controlnet.to(self.device) - torch.cuda.empty_cache() - gc.collect() - self.pipe.controlnet = controlnet - self.task_name = task_name - - def get_prompt(self, prompt: str, additional_prompt: str) -> str: - if not prompt: - prompt = additional_prompt - else: - prompt = f'{prompt}, {additional_prompt}' - return prompt - - @torch.autocast('cuda') - def run_pipe( - self, - prompt: str, - negative_prompt: str, - control_image: PIL.Image.Image, - num_images: int, - num_steps: int, - guidance_scale: float, - seed: int, - ) -> list[PIL.Image.Image]: - if seed == -1: - seed = np.random.randint(0, np.iinfo(np.int64).max) - generator = torch.Generator().manual_seed(seed) - return self.pipe(prompt=prompt, - negative_prompt=negative_prompt, - guidance_scale=guidance_scale, - num_images_per_prompt=num_images, - num_inference_steps=num_steps, - generator=generator, - image=control_image).images - - @staticmethod - def preprocess_hough( - input_image: np.ndarray, - image_resolution: int, - detect_resolution: int, - value_threshold: 
float, - distance_threshold: float, - ) -> tuple[PIL.Image.Image, PIL.Image.Image]: - input_image = HWC3(input_image) - control_image = apply_mlsd( - resize_image(input_image, detect_resolution), value_threshold, - distance_threshold) - control_image = HWC3(control_image) - image = resize_image(input_image, image_resolution) - H, W = image.shape[:2] - control_image = cv2.resize(control_image, (W, H), - interpolation=cv2.INTER_NEAREST) - - vis_control_image = 255 - cv2.dilate( - control_image, np.ones(shape=(3, 3), dtype=np.uint8), iterations=1) - - return PIL.Image.fromarray(control_image), PIL.Image.fromarray( - vis_control_image) - - @torch.inference_mode() - def process_hough( - self, - input_image: np.ndarray, - prompt: str, - additional_prompt: str, - negative_prompt: str, - num_images: int, - image_resolution: int, - detect_resolution: int, - num_steps: int, - guidance_scale: float, - seed: int, - value_threshold: float, - distance_threshold: float, - ) -> list[PIL.Image.Image]: - control_image, vis_control_image = self.preprocess_hough( - input_image=input_image, - image_resolution=image_resolution, - detect_resolution=detect_resolution, - value_threshold=value_threshold, - distance_threshold=distance_threshold, - ) - self.load_controlnet_weight('hough') - results = self.run_pipe( - prompt=self.get_prompt(prompt, additional_prompt), - negative_prompt=negative_prompt, - control_image=control_image, - - num_images=num_images, - num_steps=num_steps, - guidance_scale=guidance_scale, - seed=seed, - ) - return results diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/config/__init__.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/config/__init__.py deleted file mode 100644 index 1a5153ad4fa51e14c83b8bb00345354d42ed3f0a..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/config/__init__.py +++ /dev/null @@ -1,35 +0,0 @@ -"""For backward compatibility, expose main functions from -``setuptools.config.setupcfg`` -""" -import warnings -from functools import wraps -from textwrap import dedent -from typing import Callable, TypeVar, cast - -from .._deprecation_warning import SetuptoolsDeprecationWarning -from . import setupcfg - -Fn = TypeVar("Fn", bound=Callable) - -__all__ = ('parse_configuration', 'read_configuration') - - -def _deprecation_notice(fn: Fn) -> Fn: - @wraps(fn) - def _wrapper(*args, **kwargs): - msg = f"""\ - As setuptools moves its configuration towards `pyproject.toml`, - `{__name__}.{fn.__name__}` became deprecated. - - For the time being, you can use the `{setupcfg.__name__}` module - to access a backward compatible API, but this module is provisional - and might be removed in the future. 
- """ - warnings.warn(dedent(msg), SetuptoolsDeprecationWarning, stacklevel=2) - return fn(*args, **kwargs) - - return cast(Fn, _wrapper) - - -read_configuration = _deprecation_notice(setupcfg.read_configuration) -parse_configuration = _deprecation_notice(setupcfg.parse_configuration) diff --git a/spaces/Rayzggz/illi-Bert-VITS2/text/__init__.py b/spaces/Rayzggz/illi-Bert-VITS2/text/__init__.py deleted file mode 100644 index a45b650424306b6e077d7013e93e2c9bd1e073c2..0000000000000000000000000000000000000000 --- a/spaces/Rayzggz/illi-Bert-VITS2/text/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -from text.symbols import * - -_symbol_to_id = {s: i for i, s in enumerate(symbols)} - - -def cleaned_text_to_sequence(cleaned_text, tones, language): - """Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - Returns: - List of integers corresponding to the symbols in the text - """ - phones = [_symbol_to_id[symbol] for symbol in cleaned_text] - tone_start = language_tone_start_map[language] - tones = [i + tone_start for i in tones] - lang_id = language_id_map[language] - lang_ids = [lang_id for i in phones] - return phones, tones, lang_ids - - -def get_bert(norm_text, word2ph, language, device): - from .chinese_bert import get_bert_feature as zh_bert - from .english_bert_mock import get_bert_feature as en_bert - from .japanese_bert import get_bert_feature as jp_bert - - lang_bert_func_map = {"ZH": zh_bert, "EN": en_bert, "JP": jp_bert} - bert = lang_bert_func_map[language](norm_text, word2ph, device) - return bert diff --git a/spaces/Rimi98/Relax-Teacher/app.py b/spaces/Rimi98/Relax-Teacher/app.py deleted file mode 100644 index 7851cd1728b7b02d1d9a53ac4319c62a00d3e714..0000000000000000000000000000000000000000 --- a/spaces/Rimi98/Relax-Teacher/app.py +++ /dev/null @@ -1,102 +0,0 @@ -import gradio as gr -import onnxruntime -from transformers import AutoTokenizer -import torch -import os -from transformers import pipeline -import subprocess -import moviepy.editor as mp -import base64 - -token = AutoTokenizer.from_pretrained('distilroberta-base') - -inf_session = onnxruntime.InferenceSession('classifier-quantized2.onnx') -input_name = inf_session.get_inputs()[0].name -output_name = inf_session.get_outputs()[0].name - -classes = ['Art', 'Astrology', 'Biology', 'Chemistry', 'Economics', 'History', 'Literature', 'Philosophy', 'Physics', 'Politics', 'Psychology', 'Sociology'] - -### --- Audio/Video to txt ---### -device = "cuda:0" if torch.cuda.is_available() else "cpu" -pipe = pipeline("automatic-speech-recognition", - model="openai/whisper-tiny.en", - chunk_length_s=30, device=device) - -### --- Text Summary --- ### -summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6", device=device) - - -def video_identity(video): - transcription = pipe(video)["text"] - return transcription - -def summary(text): - text = text.split('.') - max_chunk = 500 - current_chunk = 0 - chunks = [] - - - for t in text: - if len(chunks) == current_chunk + 1: - if len(chunks[current_chunk]) + len(t.split(' ')) <= max_chunk: - chunks[current_chunk].extend(t.split(' ')) - else: - current_chunk += 1 - chunks.append(t.split(' ')) - else: - chunks.append(t.split(' ')) - - for chunk in range(len(chunks)): - chunks[chunk] =' '.join(chunks[chunk]) - - summ = summarizer(chunks,max_length = 100) - - return summ - -def classify(video_file,encoded_video): - - if encoded_video != "": - - decoded_file_data = base64.b64decode(encoded_video) - - 
with open("temp_video.mp4", "wb") as f: - f.write(decoded_file_data) - - video_file = "temp_video.mp4" - - clip = mp.VideoFileClip(video_file) - clip.audio.write_audiofile(r"audio.wav") - - full_text = video_identity(r"audio.wav") - sum = summary(full_text)[0]['summary_text'] - - - input_ids = token(sum)['input_ids'][:512] - logits = inf_session.run([output_name],{input_name : [input_ids]})[0] - logits = torch.FloatTensor(logits) - probs = torch.sigmoid(logits)[0] - probs = list(probs) - label = classes[probs.index(max(probs))] - - final = { - 'text':full_text, - 'summary':sum, - 'label':label, -} - return final - -text1 = gr.Textbox(label="Text") -text2 = gr.Textbox(label="Summary") - - - - -iface = gr.Interface(fn=classify, - inputs=['video','text'], - outputs = ['json']) -iface.launch(inline=False) - - - - diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/losses/ae_loss.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/losses/ae_loss.py deleted file mode 100644 index cff472aa03080fb49dbb3adba6fec68647a575e6..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/losses/ae_loss.py +++ /dev/null @@ -1,102 +0,0 @@ -import mmcv -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import LOSSES - - -@mmcv.jit(derivate=True, coderize=True) -def ae_loss_per_image(tl_preds, br_preds, match): - """Associative Embedding Loss in one image. - - Associative Embedding Loss including two parts: pull loss and push loss. - Pull loss makes embedding vectors from same object closer to each other. - Push loss distinguish embedding vector from different objects, and makes - the gap between them is large enough. - - During computing, usually there are 3 cases: - - no object in image: both pull loss and push loss will be 0. - - one object in image: push loss will be 0 and pull loss is computed - by the two corner of the only object. - - more than one objects in image: pull loss is computed by corner pairs - from each object, push loss is computed by each object with all - other objects. We use confusion matrix with 0 in diagonal to - compute the push loss. - - Args: - tl_preds (tensor): Embedding feature map of left-top corner. - br_preds (tensor): Embedding feature map of bottim-right corner. - match (list): Downsampled coordinates pair of each ground truth box. - """ - - tl_list, br_list, me_list = [], [], [] - if len(match) == 0: # no object in image - pull_loss = tl_preds.sum() * 0. - push_loss = tl_preds.sum() * 0. 
- else: - for m in match: - [tl_y, tl_x], [br_y, br_x] = m - tl_e = tl_preds[:, tl_y, tl_x].view(-1, 1) - br_e = br_preds[:, br_y, br_x].view(-1, 1) - tl_list.append(tl_e) - br_list.append(br_e) - me_list.append((tl_e + br_e) / 2.0) - - tl_list = torch.cat(tl_list) - br_list = torch.cat(br_list) - me_list = torch.cat(me_list) - - assert tl_list.size() == br_list.size() - - # N is object number in image, M is dimension of embedding vector - N, M = tl_list.size() - - pull_loss = (tl_list - me_list).pow(2) + (br_list - me_list).pow(2) - pull_loss = pull_loss.sum() / N - - margin = 1 # exp setting of CornerNet, details in section 3.3 of paper - - # confusion matrix of push loss - conf_mat = me_list.expand((N, N, M)).permute(1, 0, 2) - me_list - conf_weight = 1 - torch.eye(N).type_as(me_list) - conf_mat = conf_weight * (margin - conf_mat.sum(-1).abs()) - - if N > 1: # more than one object in current image - push_loss = F.relu(conf_mat).sum() / (N * (N - 1)) - else: - push_loss = tl_preds.sum() * 0. - - return pull_loss, push_loss - - -@LOSSES.register_module() -class AssociativeEmbeddingLoss(nn.Module): - """Associative Embedding Loss. - - More details can be found in - `Associative Embedding `_ and - `CornerNet `_ . - Code is modified from `kp_utils.py `_ # noqa: E501 - - Args: - pull_weight (float): Loss weight for corners from same object. - push_weight (float): Loss weight for corners from different object. - """ - - def __init__(self, pull_weight=0.25, push_weight=0.25): - super(AssociativeEmbeddingLoss, self).__init__() - self.pull_weight = pull_weight - self.push_weight = push_weight - - def forward(self, pred, target, match): - """Forward function.""" - batch = pred.size(0) - pull_all, push_all = 0.0, 0.0 - for i in range(batch): - pull, push = ae_loss_per_image(pred[i], target[i], match[i]) - - pull_all += self.pull_weight * pull - push_all += self.push_weight * push - - return pull_all, push_all diff --git a/spaces/RoundtTble/dinov2-pca/README.md b/spaces/RoundtTble/dinov2-pca/README.md deleted file mode 100644 index ccca95642fdcae86ea83deb3fe9524bb5003f799..0000000000000000000000000000000000000000 --- a/spaces/RoundtTble/dinov2-pca/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Dinov2 PCA -emoji: 🚀 -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Sakukaze/VITS-Umamusume-voice-synthesizer/monotonic_align/__init__.py b/spaces/Sakukaze/VITS-Umamusume-voice-synthesizer/monotonic_align/__init__.py deleted file mode 100644 index 3d7009c40fea3a98168e3e3bc9ae061e91327422..0000000000000000000000000000000000000000 --- a/spaces/Sakukaze/VITS-Umamusume-voice-synthesizer/monotonic_align/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -import numpy as np -import torch -from .monotonic_align.core import maximum_path_c - - -def maximum_path(neg_cent, mask): - """ Cython optimized version. 
- neg_cent: [b, t_t, t_s] - mask: [b, t_t, t_s] - """ - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(np.float32) - path = np.zeros(neg_cent.shape, dtype=np.int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(np.int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(np.int32) - maximum_path_c(path, neg_cent, t_t_max, t_s_max) - return torch.from_numpy(path).to(device=device, dtype=dtype) diff --git a/spaces/Salesforce/EDICT/my_half_diffusers/commands/env.py b/spaces/Salesforce/EDICT/my_half_diffusers/commands/env.py deleted file mode 100644 index 81a878bff6688d3c510b53c60ac9d0e51e4aebcc..0000000000000000000000000000000000000000 --- a/spaces/Salesforce/EDICT/my_half_diffusers/commands/env.py +++ /dev/null @@ -1,70 +0,0 @@ -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import platform -from argparse import ArgumentParser - -import huggingface_hub - -from .. import __version__ as version -from ..utils import is_torch_available, is_transformers_available -from . import BaseDiffusersCLICommand - - -def info_command_factory(_): - return EnvironmentCommand() - - -class EnvironmentCommand(BaseDiffusersCLICommand): - @staticmethod - def register_subcommand(parser: ArgumentParser): - download_parser = parser.add_parser("env") - download_parser.set_defaults(func=info_command_factory) - - def run(self): - hub_version = huggingface_hub.__version__ - - pt_version = "not installed" - pt_cuda_available = "NA" - if is_torch_available(): - import torch - - pt_version = torch.__version__ - pt_cuda_available = torch.cuda.is_available() - - transformers_version = "not installed" - if is_transformers_available: - import transformers - - transformers_version = transformers.__version__ - - info = { - "`diffusers` version": version, - "Platform": platform.platform(), - "Python version": platform.python_version(), - "PyTorch version (GPU?)": f"{pt_version} ({pt_cuda_available})", - "Huggingface_hub version": hub_version, - "Transformers version": transformers_version, - "Using GPU in script?": "", - "Using distributed or parallel set-up in script?": "", - } - - print("\nCopy-and-paste the text below in your GitHub issue and FILL OUT the two last points.\n") - print(self.format_dict(info)) - - return info - - @staticmethod - def format_dict(d): - return "\n".join([f"- {prop}: {val}" for prop, val in d.items()]) + "\n" diff --git a/spaces/Sapphire-356/Video2MC/skeleton_num_visualization.py b/spaces/Sapphire-356/Video2MC/skeleton_num_visualization.py deleted file mode 100644 index 7c1914b53c7675fe18107bbe28281824607e331e..0000000000000000000000000000000000000000 --- a/spaces/Sapphire-356/Video2MC/skeleton_num_visualization.py +++ /dev/null @@ -1,46 +0,0 @@ -import numpy as np -import matplotlib.pyplot as plt - -# 导入数据 -import pickle -with open("output_3Dpose_npy/kun_1280x720_30fps_0-14_0-32.npy", 'rb') as file: - data = np.load(file) -with open("skeleton.npy", 'rb') as file: - skeleton 
= pickle.load(file) - -# 提取第0帧坐标 -xyz_0 = data[0] - -# 创建3D坐标系 -fig = plt.figure(figsize=(10, 8)) -ax = fig.add_subplot(111, projection='3d') -# 设置俯仰角和方位角 -ax.view_init(elev=0., azim=70) - -# 绘制3D点 -radius = 1.7 -ax.scatter(xyz_0[:, 0], xyz_0[:, 1], xyz_0[:, 2]) -# 添加文本标记 -for i in range(xyz_0.shape[0]): - ax.text(xyz_0[i, 0], xyz_0[i, 1], xyz_0[i, 2], str(i), fontsize=10) - -# 绘制两点间的线段 -for num1 in range(xyz_0.shape[0]): - parent = skeleton._parents - num2 = parent[num1] - if num2 != -1: - x1, y1, z1 = xyz_0[num1, :] - x2, y2, z2 = xyz_0[num2, :] - ax.plot([x1, x2], [y1, y2], [z1, z2]) - -ax.set_xlabel('X') -ax.set_ylabel('Y') -ax.set_zlabel('Z') -# ax.set_xlim3d([-radius/2, radius/2]) -# ax.set_ylim3d([-radius/2, radius/2]) -# ax.set_xticklabels([]) -# ax.set_yticklabels([]) -# ax.set_zticklabels([]) - -# 保存图像 -plt.savefig('plot.png') diff --git a/spaces/SeyedAli/Persian-Speech-Emotion-Detection/collator.py b/spaces/SeyedAli/Persian-Speech-Emotion-Detection/collator.py deleted file mode 100644 index cd5c74f2cf41037cd64022f2c9579fbb39210ce9..0000000000000000000000000000000000000000 --- a/spaces/SeyedAli/Persian-Speech-Emotion-Detection/collator.py +++ /dev/null @@ -1,58 +0,0 @@ -from dataclasses import dataclass -from typing import Dict, List, Optional, Union -import torch - -import transformers -from transformers import Wav2Vec2Processor, Wav2Vec2FeatureExtractor - - -@dataclass -class DataCollatorCTCWithPadding: - """ - Data collator that will dynamically pad the inputs received. - Args: - feature_extractor (:class:`~transformers.Wav2Vec2FeatureExtractor`) - The feature_extractor used for proccessing the data. - padding (:obj:`bool`, :obj:`str` or :class:`~transformers.tokenization_utils_base.PaddingStrategy`, `optional`, defaults to :obj:`True`): - Select a strategy to pad the returned sequences (according to the model's padding side and padding index) - among: - * :obj:`True` or :obj:`'longest'`: Pad to the longest sequence in the batch (or no padding if only a single - sequence if provided). - * :obj:`'max_length'`: Pad to a maximum length specified with the argument :obj:`max_length` or to the - maximum acceptable input length for the model if that argument is not provided. - * :obj:`False` or :obj:`'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of - different lengths). - max_length (:obj:`int`, `optional`): - Maximum length of the ``input_values`` of the returned list and optionally padding length (see above). - max_length_labels (:obj:`int`, `optional`): - Maximum length of the ``labels`` returned list and optionally padding length (see above). - pad_to_multiple_of (:obj:`int`, `optional`): - If set will pad the sequence to a multiple of the provided value. - This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= - 7.5 (Volta). 
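A hedged usage sketch for the collator defined in this file; the checkpoint name and the raw input values are placeholders chosen for illustration, not taken from the original repository.

from transformers import Wav2Vec2FeatureExtractor

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
collator = DataCollatorCTCWithPadding(feature_extractor=feature_extractor, padding=True)
features = [
    {"input_values": [0.1] * 16000, "labels": 2},  # ~1 second of dummy audio, class id 2
    {"input_values": [0.0] * 12000, "labels": 5},
]
batch = collator(features)
# batch["input_values"] is padded to a common length; batch["labels"] becomes a LongTensor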
- """ - - feature_extractor: Wav2Vec2FeatureExtractor - padding: Union[bool, str] = True - max_length: Optional[int] = None - max_length_labels: Optional[int] = None - pad_to_multiple_of: Optional[int] = None - pad_to_multiple_of_labels: Optional[int] = None - - def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]: - input_features = [{"input_values": feature["input_values"]} for feature in features] - label_features = [feature["labels"] for feature in features] - - d_type = torch.long if isinstance(label_features[0], int) else torch.float - - batch = self.feature_extractor.pad( - input_features, - padding=self.padding, - max_length=self.max_length, - pad_to_multiple_of=self.pad_to_multiple_of, - return_tensors="pt", - ) - - batch["labels"] = torch.tensor(label_features, dtype=d_type) - - return batch diff --git a/spaces/Shad0ws/Information_Extraction_with_ChatGPT/app.py b/spaces/Shad0ws/Information_Extraction_with_ChatGPT/app.py deleted file mode 100644 index 3aa9d792215b451f59b1f30ddfc71414552f27d5..0000000000000000000000000000000000000000 --- a/spaces/Shad0ws/Information_Extraction_with_ChatGPT/app.py +++ /dev/null @@ -1,68 +0,0 @@ -import gradio as gr -import os -import openai -import newspaper -import json -import re -from transformers import GPT2Tokenizer - - -# define the text summarizer function -def text_prompt(request, page_url, contraseña, temp): - try: - page = newspaper.Article(url=page_url) - page.download() - page.parse() - except Exception as e: - return "", f"--- An error occurred while processing the URL: {e} ---", "" - - tokenizer = GPT2Tokenizer.from_pretrained("gpt2") - - tokens = tokenizer.tokenize(page.text) - num_tokens = len(tokens) - - if num_tokens > 10 and num_tokens < 2000: - openai.api_key = contraseña - # get the response from openai API - try: - response = openai.Completion.create( - engine="text-davinci-003", - prompt=request + "\n\n" + page.text, - max_tokens=2048, - temperature=temp, - top_p=0.9, - ) - # get the response text - response_text = response.choices[0].text - # clean the response text - response_text = re.sub(r'\s+', ' ', response_text) - return page.text, response_text, num_tokens - except Exception as e: - return page.text, f"--- An error occurred while processing the request: {e} ---", num_tokens - return page.text, "--- Max number of tokens ---", num_tokens - -# define the gradio interface -iface = gr.Interface( - fn=text_prompt, - inputs=[gr.Textbox(lines=1, placeholder="Enter your prompt here...", label="Prompt:", type="text"), - gr.Textbox(lines=1, placeholder="Enter the URL here...", label="URL to parse:", type="text"), - gr.Textbox(lines=1, placeholder="Enter your API-key here...", label="API-Key:", type="password"), - gr.Slider(0.0,1.0, value=0.3, label="Temperature:") - ], - outputs=[gr.Textbox(label="Input:"), gr.Textbox(label="Output:"), gr.Textbox(label="Tokens:")], - examples=[["Summarize the following text as a list:","https://blog.google/outreach-initiatives/google-org/our-commitment-on-using-ai-to-accelerate-progress-on-global-development-goals/","",0.3], - ["Generate a summary of the following text. 
Give me an overview of main business impact from the text following this template:\n- Summary:\n- Business Impact:\n- Companies:", "https://ai.googleblog.com/2019/10/quantum-supremacy-using-programmable.html","",0.7] - ], - title="Information Extraction Interface:", - # description="This tool allows querying the text retrieved from the URL using OpenAI's [text-davinci-003] engine.\nThe URL text can be referenced in the prompt as \"following text\".\nA GPT2 tokenizer is included to ensure that the 2000 token limit for OpenAI queries is not exceeded. Provide a prompt with your request, the url for text retrieval, your api-key and temperature to process the text." -) - - - -error_message = "" - -try: - iface.launch() -except Exception as e: - error_message = "An error occurred: " + str(e) - iface.outputs[1].value = error_message \ No newline at end of file diff --git a/spaces/ShilongLiu/Grounding_DINO_demo/groundingdino/models/GroundingDINO/backbone/__init__.py b/spaces/ShilongLiu/Grounding_DINO_demo/groundingdino/models/GroundingDINO/backbone/__init__.py deleted file mode 100644 index 76e4b272b479a26c63d120c818c140870cd8c287..0000000000000000000000000000000000000000 --- a/spaces/ShilongLiu/Grounding_DINO_demo/groundingdino/models/GroundingDINO/backbone/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .backbone import build_backbone diff --git a/spaces/Silence1412/Text2img/app.py b/spaces/Silence1412/Text2img/app.py deleted file mode 100644 index 5287d2bf9660d75b8d22732812302531ef86e67a..0000000000000000000000000000000000000000 --- a/spaces/Silence1412/Text2img/app.py +++ /dev/null @@ -1,82 +0,0 @@ -import streamlit as st -import cv2 as cv -import time -import torch -from diffusers import StableDiffusionPipeline - - -def create_model(loc = "stabilityai/stable-diffusion-2-1-base", mch = 'cpu'): - pipe = StableDiffusionPipeline.from_pretrained(loc) - pipe = pipe.to(mch) - return pipe - -# t2i = st.title(""" -# Txt2Img -# ###### `CLICK "Create_Update_Model"` : -# - `FIRST RUN OF THE CODE` -# - `CHANGING MODEL`""") - -# the_type = st.selectbox("Model",("stabilityai/stable-diffusion-2-1-base", -# "CompVis/stable-diffusion-v1-4")) - -# create = st.button("Create The Model") - -# if create: -# st.session_state.t2m_mod = create_model(loc=the_type) - -the_type = "stabilityai/stable-diffusion-2-1-base" -st.session_state.t2m_mod = create_model(loc=the_type) - - -prom = st.text_input("Prompt",'') - -neg_prom = st.text_input("Negative Prompt",'') - -style = st.selectbox("TODO: Image Style",("Cyberpunk", - "Picasso", - "Real-world specific", - "Digital Art", - "Aesthetics")) - -c1,c2,c3,c6 = st.columns([1,1,1,1]) -c8 = st.columns([1,1,1,1]) -c4,c5 = st.columns(2) - -with c1: - bu_1 = st.text_input("Seed",'666') -with c2: - bu_2 = st.text_input("Steps",'12') -with c3: - bu_3 = st.text_input("Number of Images",'1') -with c6: - bu_6 = st.text_input("Guidance Scale",'7.5') -with c4: - sl_1 = st.slider("Width",128,1024,512,8) -with c5: - sl_2 = st.slider("hight",128,1024,512,8) - -st.session_state.generator = torch.Generator("cpu").manual_seed(int(bu_1)) - -create = st.button("Imagine") - - -if create: - model = st.session_state.t2m_mod - generator = st.session_state.generator - - if int(bu_3) == 1 : - IMG = model(prom, negative_prompt = neg_prom, width=int(sl_1), height=int(sl_2), - num_inference_steps=int(bu_2), - guidance_scale = float(bu_6), - generator=generator).images[0] - st.image(IMG) - - else : - PROMS = [prom]*int(bu_3) - - IMGS = model(PROMS, negative_prompt = neg_prom, width=int(sl_1), 
height=int(sl_2), - num_inference_steps=int(bu_2), - guidance_scale = float(bu_6), - generator=generator).images - - st.image(IMGS) \ No newline at end of file diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/__init__.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/tclass.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/tclass.py deleted file mode 100644 index 6bd9ffcd9a09e58f24b217e706fef1d7d2465cfe..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/tclass.py +++ /dev/null @@ -1,34 +0,0 @@ -"""Simple script to be run *twice*, to check reference counting bugs. - -See test_run for details.""" - - -import sys - -# We want to ensure that while objects remain available for immediate access, -# objects from *previous* runs of the same script get collected, to avoid -# accumulating massive amounts of old references. -class C(object): - def __init__(self,name): - self.name = name - self.p = print - self.flush_stdout = sys.stdout.flush - - def __del__(self): - self.p('tclass.py: deleting object:',self.name) - self.flush_stdout() - -try: - name = sys.argv[1] -except IndexError: - pass -else: - if name.startswith('C'): - c = C(name) - -#print >> sys.stderr, "ARGV:", sys.argv # dbg - -# This next print statement is NOT debugging, we're making the check on a -# completely separate process so we verify by capturing stdout: -print('ARGV 1-:', sys.argv[1:]) -sys.stdout.flush() diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_frame_eval/vendored/bytecode/tests/test_cfg.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_frame_eval/vendored/bytecode/tests/test_cfg.py deleted file mode 100644 index 9b5b07b1c7b3239eee2df30b3ff4443cdbec7d89..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_frame_eval/vendored/bytecode/tests/test_cfg.py +++ /dev/null @@ -1,836 +0,0 @@ - -import pytest -from tests_python.debugger_unittest import IS_PY36_OR_GREATER, IS_CPYTHON -from tests_python.debug_constants import TEST_CYTHON -pytestmark = pytest.mark.skipif(not IS_PY36_OR_GREATER or not IS_CPYTHON or not TEST_CYTHON, reason='Requires CPython >= 3.6') -#!/usr/bin/env python3 -import io -import sys -import unittest -import contextlib -from _pydevd_frame_eval.vendored.bytecode import ( - Label, - Compare, - SetLineno, - Instr, - Bytecode, - BasicBlock, - ControlFlowGraph, -) -from _pydevd_frame_eval.vendored.bytecode.concrete import OFFSET_AS_INSTRUCTION -from _pydevd_frame_eval.vendored.bytecode.tests import disassemble as _disassemble, TestCase - - -def disassemble( - source, *, filename="", function=False, remove_last_return_none=False -): - code = _disassemble(source, filename=filename, function=function) - blocks = ControlFlowGraph.from_bytecode(code) - if remove_last_return_none: - # drop LOAD_CONST+RETURN_VALUE to only keep 2 instructions, - # to make unit tests shorter - block = blocks[-1] - test = ( - block[-2].name == "LOAD_CONST" - and block[-2].arg is None - and block[-1].name == "RETURN_VALUE" - ) - if not test: - raise ValueError( - "unable to find 
implicit RETURN_VALUE : %s" % block[-2:] - ) - del block[-2:] - return blocks - - -class BlockTests(unittest.TestCase): - def test_iter_invalid_types(self): - # Labels are not allowed in basic blocks - block = BasicBlock() - block.append(Label()) - with self.assertRaises(ValueError): - list(block) - with self.assertRaises(ValueError): - block.legalize(1) - - # Only one jump allowed and only at the end - block = BasicBlock() - block2 = BasicBlock() - block.extend([Instr("JUMP_ABSOLUTE", block2), Instr("NOP")]) - with self.assertRaises(ValueError): - list(block) - with self.assertRaises(ValueError): - block.legalize(1) - - # jump target must be a BasicBlock - block = BasicBlock() - label = Label() - block.extend([Instr("JUMP_ABSOLUTE", label)]) - with self.assertRaises(ValueError): - list(block) - with self.assertRaises(ValueError): - block.legalize(1) - - def test_slice(self): - block = BasicBlock([Instr("NOP")]) - next_block = BasicBlock() - block.next_block = next_block - self.assertEqual(block, block[:]) - self.assertIs(next_block, block[:].next_block) - - def test_copy(self): - block = BasicBlock([Instr("NOP")]) - next_block = BasicBlock() - block.next_block = next_block - self.assertEqual(block, block.copy()) - self.assertIs(next_block, block.copy().next_block) - - -class BytecodeBlocksTests(TestCase): - maxDiff = 80 * 100 - - def test_constructor(self): - code = ControlFlowGraph() - self.assertEqual(code.name, "") - self.assertEqual(code.filename, "") - self.assertEqual(code.flags, 0) - self.assertBlocksEqual(code, []) - - def test_attr(self): - source = """ - first_line = 1 - - def func(arg1, arg2, *, arg3): - x = 1 - y = 2 - return arg1 - """ - code = disassemble(source, filename="hello.py", function=True) - self.assertEqual(code.argcount, 2) - self.assertEqual(code.filename, "hello.py") - self.assertEqual(code.first_lineno, 3) - if sys.version_info > (3, 8): - self.assertEqual(code.posonlyargcount, 0) - self.assertEqual(code.kwonlyargcount, 1) - self.assertEqual(code.name, "func") - self.assertEqual(code.cellvars, []) - - code.name = "name" - code.filename = "filename" - code.flags = 123 - self.assertEqual(code.name, "name") - self.assertEqual(code.filename, "filename") - self.assertEqual(code.flags, 123) - - # FIXME: test non-empty cellvars - - def test_add_del_block(self): - code = ControlFlowGraph() - code[0].append(Instr("LOAD_CONST", 0)) - - block = code.add_block() - self.assertEqual(len(code), 2) - self.assertIs(block, code[1]) - - code[1].append(Instr("LOAD_CONST", 2)) - self.assertBlocksEqual(code, [Instr("LOAD_CONST", 0)], [Instr("LOAD_CONST", 2)]) - - del code[0] - self.assertBlocksEqual(code, [Instr("LOAD_CONST", 2)]) - - del code[0] - self.assertEqual(len(code), 0) - - def test_setlineno(self): - # x = 7 - # y = 8 - # z = 9 - code = Bytecode() - code.first_lineno = 3 - code.extend( - [ - Instr("LOAD_CONST", 7), - Instr("STORE_NAME", "x"), - SetLineno(4), - Instr("LOAD_CONST", 8), - Instr("STORE_NAME", "y"), - SetLineno(5), - Instr("LOAD_CONST", 9), - Instr("STORE_NAME", "z"), - ] - ) - - blocks = ControlFlowGraph.from_bytecode(code) - self.assertBlocksEqual( - blocks, - [ - Instr("LOAD_CONST", 7), - Instr("STORE_NAME", "x"), - SetLineno(4), - Instr("LOAD_CONST", 8), - Instr("STORE_NAME", "y"), - SetLineno(5), - Instr("LOAD_CONST", 9), - Instr("STORE_NAME", "z"), - ], - ) - - def test_legalize(self): - code = Bytecode() - code.first_lineno = 3 - code.extend( - [ - Instr("LOAD_CONST", 7), - Instr("STORE_NAME", "x"), - Instr("LOAD_CONST", 8, lineno=4), - 
Instr("STORE_NAME", "y"), - SetLineno(5), - Instr("LOAD_CONST", 9, lineno=6), - Instr("STORE_NAME", "z"), - ] - ) - - blocks = ControlFlowGraph.from_bytecode(code) - blocks.legalize() - self.assertBlocksEqual( - blocks, - [ - Instr("LOAD_CONST", 7, lineno=3), - Instr("STORE_NAME", "x", lineno=3), - Instr("LOAD_CONST", 8, lineno=4), - Instr("STORE_NAME", "y", lineno=4), - Instr("LOAD_CONST", 9, lineno=5), - Instr("STORE_NAME", "z", lineno=5), - ], - ) - - def test_repr(self): - r = repr(ControlFlowGraph()) - self.assertIn("ControlFlowGraph", r) - self.assertIn("1", r) - - def test_to_bytecode(self): - # if test: - # x = 2 - # x = 5 - blocks = ControlFlowGraph() - blocks.add_block() - blocks.add_block() - blocks[0].extend( - [ - Instr("LOAD_NAME", "test", lineno=1), - Instr("POP_JUMP_IF_FALSE", blocks[2], lineno=1), - ] - ) - - blocks[1].extend( - [ - Instr("LOAD_CONST", 5, lineno=2), - Instr("STORE_NAME", "x", lineno=2), - Instr("JUMP_FORWARD", blocks[2], lineno=2), - ] - ) - - blocks[2].extend( - [ - Instr("LOAD_CONST", 7, lineno=3), - Instr("STORE_NAME", "x", lineno=3), - Instr("LOAD_CONST", None, lineno=3), - Instr("RETURN_VALUE", lineno=3), - ] - ) - - bytecode = blocks.to_bytecode() - label = Label() - self.assertEqual( - bytecode, - [ - Instr("LOAD_NAME", "test", lineno=1), - Instr("POP_JUMP_IF_FALSE", label, lineno=1), - Instr("LOAD_CONST", 5, lineno=2), - Instr("STORE_NAME", "x", lineno=2), - Instr("JUMP_FORWARD", label, lineno=2), - label, - Instr("LOAD_CONST", 7, lineno=3), - Instr("STORE_NAME", "x", lineno=3), - Instr("LOAD_CONST", None, lineno=3), - Instr("RETURN_VALUE", lineno=3), - ], - ) - # FIXME: test other attributes - - def test_label_at_the_end(self): - label = Label() - code = Bytecode( - [ - Instr("LOAD_NAME", "x"), - Instr("UNARY_NOT"), - Instr("POP_JUMP_IF_FALSE", label), - Instr("LOAD_CONST", 9), - Instr("STORE_NAME", "y"), - label, - ] - ) - - cfg = ControlFlowGraph.from_bytecode(code) - self.assertBlocksEqual( - cfg, - [ - Instr("LOAD_NAME", "x"), - Instr("UNARY_NOT"), - Instr("POP_JUMP_IF_FALSE", cfg[2]), - ], - [Instr("LOAD_CONST", 9), Instr("STORE_NAME", "y")], - [], - ) - - def test_from_bytecode(self): - bytecode = Bytecode() - label = Label() - bytecode.extend( - [ - Instr("LOAD_NAME", "test", lineno=1), - Instr("POP_JUMP_IF_FALSE", label, lineno=1), - Instr("LOAD_CONST", 5, lineno=2), - Instr("STORE_NAME", "x", lineno=2), - Instr("JUMP_FORWARD", label, lineno=2), - # dead code! 
- Instr("LOAD_CONST", 7, lineno=4), - Instr("STORE_NAME", "x", lineno=4), - Label(), # unused label - label, - Label(), # unused label - Instr("LOAD_CONST", None, lineno=4), - Instr("RETURN_VALUE", lineno=4), - ] - ) - - blocks = ControlFlowGraph.from_bytecode(bytecode) - label2 = blocks[3] - self.assertBlocksEqual( - blocks, - [ - Instr("LOAD_NAME", "test", lineno=1), - Instr("POP_JUMP_IF_FALSE", label2, lineno=1), - ], - [ - Instr("LOAD_CONST", 5, lineno=2), - Instr("STORE_NAME", "x", lineno=2), - Instr("JUMP_FORWARD", label2, lineno=2), - ], - [Instr("LOAD_CONST", 7, lineno=4), Instr("STORE_NAME", "x", lineno=4)], - [Instr("LOAD_CONST", None, lineno=4), Instr("RETURN_VALUE", lineno=4)], - ) - # FIXME: test other attributes - - def test_from_bytecode_loop(self): - # for x in (1, 2, 3): - # if x == 2: - # break - # continue - - if sys.version_info < (3, 8): - label_loop_start = Label() - label_loop_exit = Label() - label_loop_end = Label() - - code = Bytecode() - code.extend( - ( - Instr("SETUP_LOOP", label_loop_end, lineno=1), - Instr("LOAD_CONST", (1, 2, 3), lineno=1), - Instr("GET_ITER", lineno=1), - label_loop_start, - Instr("FOR_ITER", label_loop_exit, lineno=1), - Instr("STORE_NAME", "x", lineno=1), - Instr("LOAD_NAME", "x", lineno=2), - Instr("LOAD_CONST", 2, lineno=2), - Instr("COMPARE_OP", Compare.EQ, lineno=2), - Instr("POP_JUMP_IF_FALSE", label_loop_start, lineno=2), - Instr("BREAK_LOOP", lineno=3), - Instr("JUMP_ABSOLUTE", label_loop_start, lineno=4), - Instr("JUMP_ABSOLUTE", label_loop_start, lineno=4), - label_loop_exit, - Instr("POP_BLOCK", lineno=4), - label_loop_end, - Instr("LOAD_CONST", None, lineno=4), - Instr("RETURN_VALUE", lineno=4), - ) - ) - blocks = ControlFlowGraph.from_bytecode(code) - - expected = [ - [Instr("SETUP_LOOP", blocks[8], lineno=1)], - [Instr("LOAD_CONST", (1, 2, 3), lineno=1), Instr("GET_ITER", lineno=1)], - [Instr("FOR_ITER", blocks[7], lineno=1)], - [ - Instr("STORE_NAME", "x", lineno=1), - Instr("LOAD_NAME", "x", lineno=2), - Instr("LOAD_CONST", 2, lineno=2), - Instr("COMPARE_OP", Compare.EQ, lineno=2), - Instr("POP_JUMP_IF_FALSE", blocks[2], lineno=2), - ], - [Instr("BREAK_LOOP", lineno=3)], - [Instr("JUMP_ABSOLUTE", blocks[2], lineno=4)], - [Instr("JUMP_ABSOLUTE", blocks[2], lineno=4)], - [Instr("POP_BLOCK", lineno=4)], - [Instr("LOAD_CONST", None, lineno=4), Instr("RETURN_VALUE", lineno=4)], - ] - self.assertBlocksEqual(blocks, *expected) - else: - label_loop_start = Label() - label_loop_exit = Label() - - code = Bytecode() - code.extend( - ( - Instr("LOAD_CONST", (1, 2, 3), lineno=1), - Instr("GET_ITER", lineno=1), - label_loop_start, - Instr("FOR_ITER", label_loop_exit, lineno=1), - Instr("STORE_NAME", "x", lineno=1), - Instr("LOAD_NAME", "x", lineno=2), - Instr("LOAD_CONST", 2, lineno=2), - Instr("COMPARE_OP", Compare.EQ, lineno=2), - Instr("POP_JUMP_IF_FALSE", label_loop_start, lineno=2), - Instr("JUMP_ABSOLUTE", label_loop_exit, lineno=3), - Instr("JUMP_ABSOLUTE", label_loop_start, lineno=4), - Instr("JUMP_ABSOLUTE", label_loop_start, lineno=4), - label_loop_exit, - Instr("LOAD_CONST", None, lineno=4), - Instr("RETURN_VALUE", lineno=4), - ) - ) - blocks = ControlFlowGraph.from_bytecode(code) - - expected = [ - [Instr("LOAD_CONST", (1, 2, 3), lineno=1), Instr("GET_ITER", lineno=1)], - [Instr("FOR_ITER", blocks[6], lineno=1)], - [ - Instr("STORE_NAME", "x", lineno=1), - Instr("LOAD_NAME", "x", lineno=2), - Instr("LOAD_CONST", 2, lineno=2), - Instr("COMPARE_OP", Compare.EQ, lineno=2), - Instr("POP_JUMP_IF_FALSE", blocks[1], lineno=2), - ], 
- [Instr("JUMP_ABSOLUTE", blocks[6], lineno=3)], - [Instr("JUMP_ABSOLUTE", blocks[1], lineno=4)], - [Instr("JUMP_ABSOLUTE", blocks[1], lineno=4)], - [Instr("LOAD_CONST", None, lineno=4), Instr("RETURN_VALUE", lineno=4)], - ] - self.assertBlocksEqual(blocks, *expected) - - -class BytecodeBlocksFunctionalTests(TestCase): - def test_eq(self): - # compare codes with multiple blocks and labels, - # Code.__eq__() renumbers labels to get equal labels - source = "x = 1 if test else 2" - code1 = disassemble(source) - code2 = disassemble(source) - self.assertEqual(code1, code2) - - # Type mismatch - self.assertFalse(code1 == 1) - - # argnames mismatch - cfg = ControlFlowGraph() - cfg.argnames = 10 - self.assertFalse(code1 == cfg) - - # instr mismatch - cfg = ControlFlowGraph() - cfg.argnames = code1.argnames - self.assertFalse(code1 == cfg) - - def check_getitem(self, code): - # check internal Code block indexes (index by index, index by label) - for block_index, block in enumerate(code): - self.assertIs(code[block_index], block) - self.assertIs(code[block], block) - self.assertEqual(code.get_block_index(block), block_index) - - def test_delitem(self): - cfg = ControlFlowGraph() - b = cfg.add_block() - del cfg[b] - self.assertEqual(len(cfg.get_instructions()), 0) - - def sample_code(self): - code = disassemble("x = 1", remove_last_return_none=True) - self.assertBlocksEqual( - code, [Instr("LOAD_CONST", 1, lineno=1), Instr("STORE_NAME", "x", lineno=1)] - ) - return code - - def test_split_block(self): - code = self.sample_code() - code[0].append(Instr("NOP", lineno=1)) - - label = code.split_block(code[0], 2) - self.assertIs(label, code[1]) - self.assertBlocksEqual( - code, - [Instr("LOAD_CONST", 1, lineno=1), Instr("STORE_NAME", "x", lineno=1)], - [Instr("NOP", lineno=1)], - ) - self.check_getitem(code) - - label2 = code.split_block(code[0], 1) - self.assertIs(label2, code[1]) - self.assertBlocksEqual( - code, - [Instr("LOAD_CONST", 1, lineno=1)], - [Instr("STORE_NAME", "x", lineno=1)], - [Instr("NOP", lineno=1)], - ) - self.check_getitem(code) - - with self.assertRaises(TypeError): - code.split_block(1, 1) - - with self.assertRaises(ValueError) as e: - code.split_block(code[0], -2) - self.assertIn("positive", e.exception.args[0]) - - def test_split_block_end(self): - code = self.sample_code() - - # split at the end of the last block requires to add a new empty block - label = code.split_block(code[0], 2) - self.assertIs(label, code[1]) - self.assertBlocksEqual( - code, - [Instr("LOAD_CONST", 1, lineno=1), Instr("STORE_NAME", "x", lineno=1)], - [], - ) - self.check_getitem(code) - - # split at the end of a block which is not the end doesn't require to - # add a new block - label = code.split_block(code[0], 2) - self.assertIs(label, code[1]) - self.assertBlocksEqual( - code, - [Instr("LOAD_CONST", 1, lineno=1), Instr("STORE_NAME", "x", lineno=1)], - [], - ) - - def test_split_block_dont_split(self): - code = self.sample_code() - - # FIXME: is it really useful to support that? 
- block = code.split_block(code[0], 0) - self.assertIs(block, code[0]) - self.assertBlocksEqual( - code, [Instr("LOAD_CONST", 1, lineno=1), Instr("STORE_NAME", "x", lineno=1)] - ) - - def test_split_block_error(self): - code = self.sample_code() - - with self.assertRaises(ValueError): - # invalid index - code.split_block(code[0], 3) - - def test_to_code(self): - # test resolution of jump labels - bytecode = ControlFlowGraph() - bytecode.first_lineno = 3 - bytecode.argcount = 3 - if sys.version_info > (3, 8): - bytecode.posonlyargcount = 0 - bytecode.kwonlyargcount = 2 - bytecode.name = "func" - bytecode.filename = "hello.py" - bytecode.flags = 0x43 - bytecode.argnames = ("arg", "arg2", "arg3", "kwonly", "kwonly2") - bytecode.docstring = None - block0 = bytecode[0] - block1 = bytecode.add_block() - block2 = bytecode.add_block() - block0.extend( - [ - Instr("LOAD_FAST", "x", lineno=4), - Instr("POP_JUMP_IF_FALSE", block2, lineno=4), - ] - ) - block1.extend( - [Instr("LOAD_FAST", "arg", lineno=5), Instr("STORE_FAST", "x", lineno=5)] - ) - block2.extend( - [ - Instr("LOAD_CONST", 3, lineno=6), - Instr("STORE_FAST", "x", lineno=6), - Instr("LOAD_FAST", "x", lineno=7), - Instr("RETURN_VALUE", lineno=7), - ] - ) - - if OFFSET_AS_INSTRUCTION: - # The argument of the jump is divided by 2 - expected = ( - b"|\x05" b"r\x04" b"|\x00" b"}\x05" b"d\x01" b"}\x05" b"|\x05" b"S\x00" - ) - else: - expected = ( - b"|\x05" b"r\x08" b"|\x00" b"}\x05" b"d\x01" b"}\x05" b"|\x05" b"S\x00" - ) - - code = bytecode.to_code() - self.assertEqual(code.co_consts, (None, 3)) - self.assertEqual(code.co_argcount, 3) - if sys.version_info > (3, 8): - self.assertEqual(code.co_posonlyargcount, 0) - self.assertEqual(code.co_kwonlyargcount, 2) - self.assertEqual(code.co_nlocals, 6) - self.assertEqual(code.co_stacksize, 1) - # FIXME: don't use hardcoded constants - self.assertEqual(code.co_flags, 0x43) - self.assertEqual(code.co_code, expected) - self.assertEqual(code.co_names, ()) - self.assertEqual( - code.co_varnames, ("arg", "arg2", "arg3", "kwonly", "kwonly2", "x") - ) - self.assertEqual(code.co_filename, "hello.py") - self.assertEqual(code.co_name, "func") - self.assertEqual(code.co_firstlineno, 3) - - # verify stacksize argument is honored - explicit_stacksize = code.co_stacksize + 42 - code = bytecode.to_code(stacksize=explicit_stacksize) - self.assertEqual(code.co_stacksize, explicit_stacksize) - - def test_get_block_index(self): - blocks = ControlFlowGraph() - block0 = blocks[0] - block1 = blocks.add_block() - block2 = blocks.add_block() - self.assertEqual(blocks.get_block_index(block0), 0) - self.assertEqual(blocks.get_block_index(block1), 1) - self.assertEqual(blocks.get_block_index(block2), 2) - - other_block = BasicBlock() - self.assertRaises(ValueError, blocks.get_block_index, other_block) - - -class CFGStacksizeComputationTests(TestCase): - def check_stack_size(self, func): - code = func.__code__ - bytecode = Bytecode.from_code(code) - cfg = ControlFlowGraph.from_bytecode(bytecode) - self.assertEqual(code.co_stacksize, cfg.compute_stacksize()) - - def test_empty_code(self): - cfg = ControlFlowGraph() - del cfg[0] - self.assertEqual(cfg.compute_stacksize(), 0) - - def test_handling_of_set_lineno(self): - code = Bytecode() - code.first_lineno = 3 - code.extend( - [ - Instr("LOAD_CONST", 7), - Instr("STORE_NAME", "x"), - SetLineno(4), - Instr("LOAD_CONST", 8), - Instr("STORE_NAME", "y"), - SetLineno(5), - Instr("LOAD_CONST", 9), - Instr("STORE_NAME", "z"), - ] - ) - self.assertEqual(code.compute_stacksize(), 1) - - 
def test_invalid_stacksize(self): - code = Bytecode() - code.extend([Instr("STORE_NAME", "x")]) - with self.assertRaises(RuntimeError): - code.compute_stacksize() - - def test_stack_size_computation_and(self): - def test(arg1, *args, **kwargs): # pragma: no cover - return arg1 and args # Test JUMP_IF_FALSE_OR_POP - - self.check_stack_size(test) - - def test_stack_size_computation_or(self): - def test(arg1, *args, **kwargs): # pragma: no cover - return arg1 or args # Test JUMP_IF_TRUE_OR_POP - - self.check_stack_size(test) - - def test_stack_size_computation_if_else(self): - def test(arg1, *args, **kwargs): # pragma: no cover - if args: - return 0 - elif kwargs: - return 1 - else: - return 2 - - self.check_stack_size(test) - - def test_stack_size_computation_for_loop_continue(self): - def test(arg1, *args, **kwargs): # pragma: no cover - for k in kwargs: - if k in args: - continue - else: - return 1 - - self.check_stack_size(test) - - def test_stack_size_computation_while_loop_break(self): - def test(arg1, *args, **kwargs): # pragma: no cover - while True: - if arg1: - break - - self.check_stack_size(test) - - def test_stack_size_computation_with(self): - def test(arg1, *args, **kwargs): # pragma: no cover - with open(arg1) as f: - return f.read() - - self.check_stack_size(test) - - def test_stack_size_computation_try_except(self): - def test(arg1, *args, **kwargs): # pragma: no cover - try: - return args[0] - except Exception: - return 2 - - self.check_stack_size(test) - - def test_stack_size_computation_try_finally(self): - def test(arg1, *args, **kwargs): # pragma: no cover - try: - return args[0] - finally: - return 2 - - self.check_stack_size(test) - - def test_stack_size_computation_try_except_finally(self): - def test(arg1, *args, **kwargs): # pragma: no cover - try: - return args[0] - except Exception: - return 2 - finally: - print("Interrupt") - - self.check_stack_size(test) - - def test_stack_size_computation_try_except_else_finally(self): - def test(arg1, *args, **kwargs): # pragma: no cover - try: - return args[0] - except Exception: - return 2 - else: - return arg1 - finally: - print("Interrupt") - - self.check_stack_size(test) - - def test_stack_size_computation_nested_try_except_finally(self): - def test(arg1, *args, **kwargs): # pragma: no cover - k = 1 - try: - getattr(arg1, k) - except AttributeError: - pass - except Exception: - try: - assert False - except Exception: - return 2 - finally: - print("unexpected") - finally: - print("attempted to get {}".format(k)) - - self.check_stack_size(test) - - def test_stack_size_computation_nested_try_except_else_finally(self): - def test(*args, **kwargs): - try: - v = args[1] - except IndexError: - try: - w = kwargs["value"] - except KeyError: - return -1 - else: - return w - finally: - print("second finally") - else: - return v - finally: - print("first finally") - - # A direct comparison of the stack depth fails because CPython - # generate dead code that is used in stack computation. 
- cpython_stacksize = test.__code__.co_stacksize - test.__code__ = Bytecode.from_code(test.__code__).to_code() - self.assertLessEqual(test.__code__.co_stacksize, cpython_stacksize) - with contextlib.redirect_stdout(io.StringIO()) as stdout: - self.assertEqual(test(1, 4), 4) - self.assertEqual(stdout.getvalue(), "first finally\n") - - with contextlib.redirect_stdout(io.StringIO()) as stdout: - self.assertEqual(test([], value=3), 3) - self.assertEqual(stdout.getvalue(), "second finally\nfirst finally\n") - - with contextlib.redirect_stdout(io.StringIO()) as stdout: - self.assertEqual(test([], name=None), -1) - self.assertEqual(stdout.getvalue(), "second finally\nfirst finally\n") - - def test_stack_size_with_dead_code(self): - # Simply demonstrate more directly the previously mentioned issue. - def test(*args): # pragma: no cover - return 0 - try: - a = args[0] - except IndexError: - return -1 - else: - return a - - test.__code__ = Bytecode.from_code(test.__code__).to_code() - self.assertEqual(test.__code__.co_stacksize, 1) - self.assertEqual(test(1), 0) - - def test_huge_code_with_numerous_blocks(self): - def base_func(x): - pass - - def mk_if_then_else(depth): - instructions = [] - for i in range(depth): - label_else = Label() - instructions.extend( - [ - Instr("LOAD_FAST", "x"), - Instr("POP_JUMP_IF_FALSE", label_else), - Instr("LOAD_GLOBAL", "f{}".format(i)), - Instr("RETURN_VALUE"), - label_else, - ] - ) - instructions.extend([Instr("LOAD_CONST", None), Instr("RETURN_VALUE")]) - return instructions - - bytecode = Bytecode(mk_if_then_else(5000)) - bytecode.compute_stacksize() - - -if __name__ == "__main__": - unittest.main() # pragma: no cover diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/core/evaluation/metrics.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/core/evaluation/metrics.py deleted file mode 100644 index 16c7dd47cadd53cf1caaa194e28a343f2aacc599..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/core/evaluation/metrics.py +++ /dev/null @@ -1,326 +0,0 @@ -from collections import OrderedDict - -import annotator.uniformer.mmcv as mmcv -import numpy as np -import torch - - -def f_score(precision, recall, beta=1): - """calcuate the f-score value. - - Args: - precision (float | torch.Tensor): The precision value. - recall (float | torch.Tensor): The recall value. - beta (int): Determines the weight of recall in the combined score. - Default: False. - - Returns: - [torch.tensor]: The f-score value. - """ - score = (1 + beta**2) * (precision * recall) / ( - (beta**2 * precision) + recall) - return score - - -def intersect_and_union(pred_label, - label, - num_classes, - ignore_index, - label_map=dict(), - reduce_zero_label=False): - """Calculate intersection and Union. - - Args: - pred_label (ndarray | str): Prediction segmentation map - or predict result filename. - label (ndarray | str): Ground truth segmentation map - or label filename. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - label_map (dict): Mapping old labels to new labels. The parameter will - work only when label is str. Default: dict(). - reduce_zero_label (bool): Wether ignore zero label. The parameter will - work only when label is str. Default: False. - - Returns: - torch.Tensor: The intersection of prediction and ground truth - histogram on all classes. - torch.Tensor: The union of prediction and ground truth histogram on - all classes. 
- torch.Tensor: The prediction histogram on all classes. - torch.Tensor: The ground truth histogram on all classes. - """ - - if isinstance(pred_label, str): - pred_label = torch.from_numpy(np.load(pred_label)) - else: - pred_label = torch.from_numpy((pred_label)) - - if isinstance(label, str): - label = torch.from_numpy( - mmcv.imread(label, flag='unchanged', backend='pillow')) - else: - label = torch.from_numpy(label) - - if label_map is not None: - for old_id, new_id in label_map.items(): - label[label == old_id] = new_id - if reduce_zero_label: - label[label == 0] = 255 - label = label - 1 - label[label == 254] = 255 - - mask = (label != ignore_index) - pred_label = pred_label[mask] - label = label[mask] - - intersect = pred_label[pred_label == label] - area_intersect = torch.histc( - intersect.float(), bins=(num_classes), min=0, max=num_classes - 1) - area_pred_label = torch.histc( - pred_label.float(), bins=(num_classes), min=0, max=num_classes - 1) - area_label = torch.histc( - label.float(), bins=(num_classes), min=0, max=num_classes - 1) - area_union = area_pred_label + area_label - area_intersect - return area_intersect, area_union, area_pred_label, area_label - - -def total_intersect_and_union(results, - gt_seg_maps, - num_classes, - ignore_index, - label_map=dict(), - reduce_zero_label=False): - """Calculate Total Intersection and Union. - - Args: - results (list[ndarray] | list[str]): List of prediction segmentation - maps or list of prediction result filenames. - gt_seg_maps (list[ndarray] | list[str]): list of ground truth - segmentation maps or list of label filenames. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - label_map (dict): Mapping old labels to new labels. Default: dict(). - reduce_zero_label (bool): Wether ignore zero label. Default: False. - - Returns: - ndarray: The intersection of prediction and ground truth histogram - on all classes. - ndarray: The union of prediction and ground truth histogram on all - classes. - ndarray: The prediction histogram on all classes. - ndarray: The ground truth histogram on all classes. - """ - num_imgs = len(results) - assert len(gt_seg_maps) == num_imgs - total_area_intersect = torch.zeros((num_classes, ), dtype=torch.float64) - total_area_union = torch.zeros((num_classes, ), dtype=torch.float64) - total_area_pred_label = torch.zeros((num_classes, ), dtype=torch.float64) - total_area_label = torch.zeros((num_classes, ), dtype=torch.float64) - for i in range(num_imgs): - area_intersect, area_union, area_pred_label, area_label = \ - intersect_and_union( - results[i], gt_seg_maps[i], num_classes, ignore_index, - label_map, reduce_zero_label) - total_area_intersect += area_intersect - total_area_union += area_union - total_area_pred_label += area_pred_label - total_area_label += area_label - return total_area_intersect, total_area_union, total_area_pred_label, \ - total_area_label - - -def mean_iou(results, - gt_seg_maps, - num_classes, - ignore_index, - nan_to_num=None, - label_map=dict(), - reduce_zero_label=False): - """Calculate Mean Intersection and Union (mIoU) - - Args: - results (list[ndarray] | list[str]): List of prediction segmentation - maps or list of prediction result filenames. - gt_seg_maps (list[ndarray] | list[str]): list of ground truth - segmentation maps or list of label filenames. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. 
- nan_to_num (int, optional): If specified, NaN values will be replaced - by the numbers defined by the user. Default: None. - label_map (dict): Mapping old labels to new labels. Default: dict(). - reduce_zero_label (bool): Wether ignore zero label. Default: False. - - Returns: - dict[str, float | ndarray]: - float: Overall accuracy on all images. - ndarray: Per category accuracy, shape (num_classes, ). - ndarray: Per category IoU, shape (num_classes, ). - """ - iou_result = eval_metrics( - results=results, - gt_seg_maps=gt_seg_maps, - num_classes=num_classes, - ignore_index=ignore_index, - metrics=['mIoU'], - nan_to_num=nan_to_num, - label_map=label_map, - reduce_zero_label=reduce_zero_label) - return iou_result - - -def mean_dice(results, - gt_seg_maps, - num_classes, - ignore_index, - nan_to_num=None, - label_map=dict(), - reduce_zero_label=False): - """Calculate Mean Dice (mDice) - - Args: - results (list[ndarray] | list[str]): List of prediction segmentation - maps or list of prediction result filenames. - gt_seg_maps (list[ndarray] | list[str]): list of ground truth - segmentation maps or list of label filenames. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - nan_to_num (int, optional): If specified, NaN values will be replaced - by the numbers defined by the user. Default: None. - label_map (dict): Mapping old labels to new labels. Default: dict(). - reduce_zero_label (bool): Wether ignore zero label. Default: False. - - Returns: - dict[str, float | ndarray]: Default metrics. - float: Overall accuracy on all images. - ndarray: Per category accuracy, shape (num_classes, ). - ndarray: Per category dice, shape (num_classes, ). - """ - - dice_result = eval_metrics( - results=results, - gt_seg_maps=gt_seg_maps, - num_classes=num_classes, - ignore_index=ignore_index, - metrics=['mDice'], - nan_to_num=nan_to_num, - label_map=label_map, - reduce_zero_label=reduce_zero_label) - return dice_result - - -def mean_fscore(results, - gt_seg_maps, - num_classes, - ignore_index, - nan_to_num=None, - label_map=dict(), - reduce_zero_label=False, - beta=1): - """Calculate Mean Intersection and Union (mIoU) - - Args: - results (list[ndarray] | list[str]): List of prediction segmentation - maps or list of prediction result filenames. - gt_seg_maps (list[ndarray] | list[str]): list of ground truth - segmentation maps or list of label filenames. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - nan_to_num (int, optional): If specified, NaN values will be replaced - by the numbers defined by the user. Default: None. - label_map (dict): Mapping old labels to new labels. Default: dict(). - reduce_zero_label (bool): Wether ignore zero label. Default: False. - beta (int): Determines the weight of recall in the combined score. - Default: False. - - - Returns: - dict[str, float | ndarray]: Default metrics. - float: Overall accuracy on all images. - ndarray: Per category recall, shape (num_classes, ). - ndarray: Per category precision, shape (num_classes, ). - ndarray: Per category f-score, shape (num_classes, ). 
- """ - fscore_result = eval_metrics( - results=results, - gt_seg_maps=gt_seg_maps, - num_classes=num_classes, - ignore_index=ignore_index, - metrics=['mFscore'], - nan_to_num=nan_to_num, - label_map=label_map, - reduce_zero_label=reduce_zero_label, - beta=beta) - return fscore_result - - -def eval_metrics(results, - gt_seg_maps, - num_classes, - ignore_index, - metrics=['mIoU'], - nan_to_num=None, - label_map=dict(), - reduce_zero_label=False, - beta=1): - """Calculate evaluation metrics - Args: - results (list[ndarray] | list[str]): List of prediction segmentation - maps or list of prediction result filenames. - gt_seg_maps (list[ndarray] | list[str]): list of ground truth - segmentation maps or list of label filenames. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - metrics (list[str] | str): Metrics to be evaluated, 'mIoU' and 'mDice'. - nan_to_num (int, optional): If specified, NaN values will be replaced - by the numbers defined by the user. Default: None. - label_map (dict): Mapping old labels to new labels. Default: dict(). - reduce_zero_label (bool): Wether ignore zero label. Default: False. - Returns: - float: Overall accuracy on all images. - ndarray: Per category accuracy, shape (num_classes, ). - ndarray: Per category evaluation metrics, shape (num_classes, ). - """ - if isinstance(metrics, str): - metrics = [metrics] - allowed_metrics = ['mIoU', 'mDice', 'mFscore'] - if not set(metrics).issubset(set(allowed_metrics)): - raise KeyError('metrics {} is not supported'.format(metrics)) - - total_area_intersect, total_area_union, total_area_pred_label, \ - total_area_label = total_intersect_and_union( - results, gt_seg_maps, num_classes, ignore_index, label_map, - reduce_zero_label) - all_acc = total_area_intersect.sum() / total_area_label.sum() - ret_metrics = OrderedDict({'aAcc': all_acc}) - for metric in metrics: - if metric == 'mIoU': - iou = total_area_intersect / total_area_union - acc = total_area_intersect / total_area_label - ret_metrics['IoU'] = iou - ret_metrics['Acc'] = acc - elif metric == 'mDice': - dice = 2 * total_area_intersect / ( - total_area_pred_label + total_area_label) - acc = total_area_intersect / total_area_label - ret_metrics['Dice'] = dice - ret_metrics['Acc'] = acc - elif metric == 'mFscore': - precision = total_area_intersect / total_area_pred_label - recall = total_area_intersect / total_area_label - f_value = torch.tensor( - [f_score(x[0], x[1], beta) for x in zip(precision, recall)]) - ret_metrics['Fscore'] = f_value - ret_metrics['Precision'] = precision - ret_metrics['Recall'] = recall - - ret_metrics = { - metric: value.numpy() - for metric, value in ret_metrics.items() - } - if nan_to_num is not None: - ret_metrics = OrderedDict({ - metric: np.nan_to_num(metric_value, nan=nan_to_num) - for metric, metric_value in ret_metrics.items() - }) - return ret_metrics diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/langrussianmodel.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/langrussianmodel.py deleted file mode 100644 index 39a5388948ef12b69b65fbfa89a84c6ef4a4bfd6..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/langrussianmodel.py +++ /dev/null @@ -1,5725 +0,0 @@ -from pip._vendor.chardet.sbcharsetprober import SingleByteCharSetModel - -# 3: Positive -# 2: Likely -# 1: Unlikely -# 0: Negative - 
-RUSSIAN_LANG_MODEL = { - 37: { # 'А' - 37: 0, # 'А' - 44: 1, # 'Б' - 33: 1, # 'В' - 46: 1, # 'Г' - 41: 1, # 'Д' - 48: 1, # 'Е' - 56: 1, # 'Ж' - 51: 1, # 'З' - 42: 1, # 'И' - 60: 1, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 2, # 'Н' - 34: 1, # 'О' - 35: 1, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 1, # 'У' - 53: 1, # 'Ф' - 55: 1, # 'Х' - 58: 1, # 'Ц' - 50: 1, # 'Ч' - 57: 1, # 'Ш' - 63: 1, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 1, # 'Ю' - 43: 1, # 'Я' - 3: 1, # 'а' - 21: 2, # 'б' - 10: 2, # 'в' - 19: 2, # 'г' - 13: 2, # 'д' - 2: 0, # 'е' - 24: 1, # 'ж' - 20: 1, # 'з' - 4: 0, # 'и' - 23: 1, # 'й' - 11: 2, # 'к' - 8: 3, # 'л' - 12: 2, # 'м' - 5: 2, # 'н' - 1: 0, # 'о' - 15: 2, # 'п' - 9: 2, # 'р' - 7: 2, # 'с' - 6: 2, # 'т' - 14: 2, # 'у' - 39: 2, # 'ф' - 26: 2, # 'х' - 28: 0, # 'ц' - 22: 1, # 'ч' - 25: 2, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 1, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 44: { # 'Б' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 1, # 'В' - 46: 1, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 0, # 'П' - 45: 1, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 1, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 1, # 'Я' - 3: 2, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 0, # 'г' - 13: 1, # 'д' - 2: 3, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 2, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 2, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 2, # 'ы' - 17: 1, # 'ь' - 30: 2, # 'э' - 27: 1, # 'ю' - 16: 1, # 'я' - }, - 33: { # 'В' - 37: 2, # 'А' - 44: 0, # 'Б' - 33: 1, # 'В' - 46: 0, # 'Г' - 41: 1, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 1, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 1, # 'Ш' - 63: 0, # 'Щ' - 62: 1, # 'Ы' - 61: 1, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 1, # 'Я' - 3: 2, # 'а' - 21: 1, # 'б' - 10: 1, # 'в' - 19: 1, # 'г' - 13: 2, # 'д' - 2: 3, # 'е' - 24: 0, # 'ж' - 20: 2, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 1, # 'к' - 8: 2, # 'л' - 12: 2, # 'м' - 5: 2, # 'н' - 1: 3, # 'о' - 15: 2, # 'п' - 9: 2, # 'р' - 7: 3, # 'с' - 6: 2, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 1, # 'х' - 28: 1, # 'ц' - 22: 2, # 'ч' - 25: 1, # 'ш' - 29: 0, # 'щ' - 54: 1, # 'ъ' - 18: 3, # 'ы' - 17: 1, # 'ь' - 30: 2, # 'э' - 27: 0, # 'ю' - 16: 1, # 'я' - }, - 46: { # 'Г' - 37: 1, # 'А' - 44: 1, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 1, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 1, # 'П' - 45: 1, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 2, # 'а' - 21: 0, # 'б' - 10: 1, # 'в' - 19: 0, # 'г' - 13: 2, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 2, # 'л' - 12: 1, # 
'м' - 5: 1, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 2, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 1, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 0, # 'я' - }, - 41: { # 'Д' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 1, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 2, # 'Е' - 56: 1, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 0, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 0, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 1, # 'Ц' - 50: 1, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 1, # 'Ы' - 61: 1, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 1, # 'Я' - 3: 3, # 'а' - 21: 0, # 'б' - 10: 2, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 3, # 'ж' - 20: 1, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 2, # 'л' - 12: 1, # 'м' - 5: 1, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 2, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 1, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 1, # 'ы' - 17: 1, # 'ь' - 30: 2, # 'э' - 27: 1, # 'ю' - 16: 1, # 'я' - }, - 48: { # 'Е' - 37: 1, # 'А' - 44: 1, # 'Б' - 33: 1, # 'В' - 46: 1, # 'Г' - 41: 1, # 'Д' - 48: 1, # 'Е' - 56: 1, # 'Ж' - 51: 1, # 'З' - 42: 1, # 'И' - 60: 1, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 2, # 'Н' - 34: 1, # 'О' - 35: 1, # 'П' - 45: 2, # 'Р' - 32: 2, # 'С' - 40: 1, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 1, # 'Х' - 58: 1, # 'Ц' - 50: 1, # 'Ч' - 57: 1, # 'Ш' - 63: 1, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 1, # 'Я' - 3: 0, # 'а' - 21: 0, # 'б' - 10: 2, # 'в' - 19: 2, # 'г' - 13: 2, # 'д' - 2: 2, # 'е' - 24: 1, # 'ж' - 20: 1, # 'з' - 4: 0, # 'и' - 23: 2, # 'й' - 11: 1, # 'к' - 8: 2, # 'л' - 12: 2, # 'м' - 5: 1, # 'н' - 1: 0, # 'о' - 15: 1, # 'п' - 9: 1, # 'р' - 7: 3, # 'с' - 6: 0, # 'т' - 14: 0, # 'у' - 39: 1, # 'ф' - 26: 1, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 1, # 'ш' - 29: 2, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 1, # 'ю' - 16: 0, # 'я' - }, - 56: { # 'Ж' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 1, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 1, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 2, # 'а' - 21: 1, # 'б' - 10: 0, # 'в' - 19: 1, # 'г' - 13: 1, # 'д' - 2: 2, # 'е' - 24: 1, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 0, # 'л' - 12: 1, # 'м' - 5: 0, # 'н' - 1: 2, # 'о' - 15: 0, # 'п' - 9: 1, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 2, # 'ю' - 16: 0, # 'я' - }, - 51: { # 'З' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 1, # 'В' - 46: 1, # 'Г' - 41: 1, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 0, # 'П' - 45: 1, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 
'Щ' - 62: 1, # 'Ы' - 61: 1, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 1, # 'б' - 10: 2, # 'в' - 19: 0, # 'г' - 13: 2, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 1, # 'л' - 12: 1, # 'м' - 5: 2, # 'н' - 1: 2, # 'о' - 15: 0, # 'п' - 9: 1, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 1, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 1, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 1, # 'я' - }, - 42: { # 'И' - 37: 1, # 'А' - 44: 1, # 'Б' - 33: 1, # 'В' - 46: 1, # 'Г' - 41: 1, # 'Д' - 48: 2, # 'Е' - 56: 1, # 'Ж' - 51: 1, # 'З' - 42: 1, # 'И' - 60: 1, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 1, # 'П' - 45: 1, # 'Р' - 32: 2, # 'С' - 40: 1, # 'Т' - 52: 0, # 'У' - 53: 1, # 'Ф' - 55: 1, # 'Х' - 58: 1, # 'Ц' - 50: 1, # 'Ч' - 57: 0, # 'Ш' - 63: 1, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 1, # 'Ю' - 43: 1, # 'Я' - 3: 1, # 'а' - 21: 2, # 'б' - 10: 2, # 'в' - 19: 2, # 'г' - 13: 2, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 2, # 'з' - 4: 1, # 'и' - 23: 0, # 'й' - 11: 1, # 'к' - 8: 2, # 'л' - 12: 2, # 'м' - 5: 2, # 'н' - 1: 1, # 'о' - 15: 1, # 'п' - 9: 2, # 'р' - 7: 2, # 'с' - 6: 2, # 'т' - 14: 1, # 'у' - 39: 1, # 'ф' - 26: 2, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 1, # 'ш' - 29: 1, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 1, # 'ю' - 16: 0, # 'я' - }, - 60: { # 'Й' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 1, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 1, # 'Х' - 58: 1, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 0, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 1, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 0, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 0, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 2, # 'о' - 15: 0, # 'п' - 9: 0, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 0, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 36: { # 'К' - 37: 2, # 'А' - 44: 0, # 'Б' - 33: 1, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 1, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 1, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 2, # 'О' - 35: 1, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 1, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 0, # 'б' - 10: 1, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 2, # 'л' - 12: 0, # 'м' - 5: 1, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 2, # 'р' - 7: 2, # 'с' - 6: 2, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 1, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 1, # 'ы' - 17: 1, # 'ь' - 30: 2, # 'э' - 27: 1, # 'ю' - 16: 0, # 'я' - }, - 49: { # 'Л' - 37: 2, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 1, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 1, # 'Ж' - 51: 0, # 'З' 
- 42: 1, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 0, # 'Н' - 34: 1, # 'О' - 35: 1, # 'П' - 45: 0, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 1, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 1, # 'Ы' - 61: 1, # 'Ь' - 47: 0, # 'Э' - 59: 1, # 'Ю' - 43: 1, # 'Я' - 3: 2, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 1, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 1, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 1, # 'л' - 12: 0, # 'м' - 5: 1, # 'н' - 1: 2, # 'о' - 15: 0, # 'п' - 9: 0, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 1, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 1, # 'ы' - 17: 1, # 'ь' - 30: 2, # 'э' - 27: 2, # 'ю' - 16: 1, # 'я' - }, - 38: { # 'М' - 37: 1, # 'А' - 44: 1, # 'Б' - 33: 1, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 1, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 1, # 'У' - 53: 1, # 'Ф' - 55: 1, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 1, # 'Ы' - 61: 0, # 'Ь' - 47: 1, # 'Э' - 59: 0, # 'Ю' - 43: 1, # 'Я' - 3: 3, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 1, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 1, # 'л' - 12: 1, # 'м' - 5: 2, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 1, # 'р' - 7: 1, # 'с' - 6: 0, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 3, # 'ы' - 17: 1, # 'ь' - 30: 2, # 'э' - 27: 1, # 'ю' - 16: 1, # 'я' - }, - 31: { # 'Н' - 37: 2, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 1, # 'Г' - 41: 1, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 1, # 'З' - 42: 2, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 0, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 1, # 'У' - 53: 1, # 'Ф' - 55: 1, # 'Х' - 58: 1, # 'Ц' - 50: 1, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 1, # 'Ы' - 61: 1, # 'Ь' - 47: 1, # 'Э' - 59: 0, # 'Ю' - 43: 1, # 'Я' - 3: 3, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 3, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 0, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 1, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 3, # 'у' - 39: 0, # 'ф' - 26: 1, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 1, # 'ы' - 17: 2, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 1, # 'я' - }, - 34: { # 'О' - 37: 0, # 'А' - 44: 1, # 'Б' - 33: 1, # 'В' - 46: 1, # 'Г' - 41: 2, # 'Д' - 48: 1, # 'Е' - 56: 1, # 'Ж' - 51: 1, # 'З' - 42: 1, # 'И' - 60: 1, # 'Й' - 36: 1, # 'К' - 49: 2, # 'Л' - 38: 1, # 'М' - 31: 2, # 'Н' - 34: 1, # 'О' - 35: 1, # 'П' - 45: 2, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 1, # 'У' - 53: 1, # 'Ф' - 55: 1, # 'Х' - 58: 0, # 'Ц' - 50: 1, # 'Ч' - 57: 1, # 'Ш' - 63: 1, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 1, # 'Я' - 3: 1, # 'а' - 21: 2, # 'б' - 10: 1, # 'в' - 19: 2, # 'г' - 13: 2, # 'д' - 2: 0, # 'е' - 24: 1, # 'ж' - 20: 1, # 'з' - 4: 0, # 'и' - 23: 1, # 'й' - 11: 2, # 'к' - 8: 2, # 'л' - 12: 1, # 'м' - 5: 3, # 'н' - 1: 0, # 'о' - 15: 2, # 'п' - 9: 2, # 'р' - 7: 2, # 'с' - 6: 2, # 'т' - 14: 1, # 'у' - 39: 1, # 'ф' - 26: 2, # 'х' - 28: 1, # 'ц' - 22: 2, 
# 'ч' - 25: 2, # 'ш' - 29: 1, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 35: { # 'П' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 1, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 1, # 'П' - 45: 2, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 1, # 'Ы' - 61: 1, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 1, # 'Я' - 3: 2, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 2, # 'л' - 12: 0, # 'м' - 5: 1, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 3, # 'р' - 7: 1, # 'с' - 6: 1, # 'т' - 14: 2, # 'у' - 39: 1, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 1, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 1, # 'ы' - 17: 2, # 'ь' - 30: 1, # 'э' - 27: 0, # 'ю' - 16: 2, # 'я' - }, - 45: { # 'Р' - 37: 2, # 'А' - 44: 1, # 'Б' - 33: 1, # 'В' - 46: 1, # 'Г' - 41: 1, # 'Д' - 48: 2, # 'Е' - 56: 1, # 'Ж' - 51: 0, # 'З' - 42: 2, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 2, # 'О' - 35: 0, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 1, # 'Х' - 58: 1, # 'Ц' - 50: 1, # 'Ч' - 57: 1, # 'Ш' - 63: 0, # 'Щ' - 62: 1, # 'Ы' - 61: 1, # 'Ь' - 47: 1, # 'Э' - 59: 1, # 'Ю' - 43: 1, # 'Я' - 3: 3, # 'а' - 21: 0, # 'б' - 10: 1, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 1, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 0, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 1, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 2, # 'ы' - 17: 0, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 2, # 'я' - }, - 32: { # 'С' - 37: 1, # 'А' - 44: 1, # 'Б' - 33: 1, # 'В' - 46: 1, # 'Г' - 41: 1, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 1, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 2, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 1, # 'Х' - 58: 1, # 'Ц' - 50: 1, # 'Ч' - 57: 1, # 'Ш' - 63: 0, # 'Щ' - 62: 1, # 'Ы' - 61: 1, # 'Ь' - 47: 1, # 'Э' - 59: 1, # 'Ю' - 43: 1, # 'Я' - 3: 2, # 'а' - 21: 1, # 'б' - 10: 2, # 'в' - 19: 1, # 'г' - 13: 2, # 'д' - 2: 3, # 'е' - 24: 1, # 'ж' - 20: 1, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 2, # 'к' - 8: 2, # 'л' - 12: 2, # 'м' - 5: 2, # 'н' - 1: 2, # 'о' - 15: 2, # 'п' - 9: 2, # 'р' - 7: 1, # 'с' - 6: 3, # 'т' - 14: 2, # 'у' - 39: 1, # 'ф' - 26: 1, # 'х' - 28: 1, # 'ц' - 22: 1, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 1, # 'ъ' - 18: 1, # 'ы' - 17: 1, # 'ь' - 30: 2, # 'э' - 27: 1, # 'ю' - 16: 1, # 'я' - }, - 40: { # 'Т' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 1, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 2, # 'О' - 35: 0, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 1, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 1, # 'Ы' - 61: 1, # 'Ь' - 47: 1, # 'Э' - 59: 1, # 'Ю' - 43: 1, # 'Я' - 3: 3, # 'а' - 21: 1, # 'б' - 10: 2, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 
3, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 1, # 'к' - 8: 1, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 2, # 'р' - 7: 1, # 'с' - 6: 0, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 1, # 'щ' - 54: 0, # 'ъ' - 18: 3, # 'ы' - 17: 1, # 'ь' - 30: 2, # 'э' - 27: 1, # 'ю' - 16: 1, # 'я' - }, - 52: { # 'У' - 37: 1, # 'А' - 44: 1, # 'Б' - 33: 1, # 'В' - 46: 1, # 'Г' - 41: 1, # 'Д' - 48: 1, # 'Е' - 56: 1, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 1, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 1, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 1, # 'Х' - 58: 0, # 'Ц' - 50: 1, # 'Ч' - 57: 1, # 'Ш' - 63: 1, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 1, # 'Ю' - 43: 0, # 'Я' - 3: 1, # 'а' - 21: 2, # 'б' - 10: 2, # 'в' - 19: 1, # 'г' - 13: 2, # 'д' - 2: 1, # 'е' - 24: 2, # 'ж' - 20: 2, # 'з' - 4: 2, # 'и' - 23: 1, # 'й' - 11: 1, # 'к' - 8: 2, # 'л' - 12: 2, # 'м' - 5: 1, # 'н' - 1: 2, # 'о' - 15: 1, # 'п' - 9: 2, # 'р' - 7: 2, # 'с' - 6: 2, # 'т' - 14: 0, # 'у' - 39: 1, # 'ф' - 26: 1, # 'х' - 28: 1, # 'ц' - 22: 2, # 'ч' - 25: 1, # 'ш' - 29: 1, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 2, # 'э' - 27: 1, # 'ю' - 16: 0, # 'я' - }, - 53: { # 'Ф' - 37: 1, # 'А' - 44: 1, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 1, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 1, # 'О' - 35: 0, # 'П' - 45: 1, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 2, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 2, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 2, # 'о' - 15: 0, # 'п' - 9: 2, # 'р' - 7: 0, # 'с' - 6: 1, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 1, # 'ь' - 30: 2, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 55: { # 'Х' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 1, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 2, # 'а' - 21: 0, # 'б' - 10: 2, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 2, # 'л' - 12: 1, # 'м' - 5: 0, # 'н' - 1: 2, # 'о' - 15: 0, # 'п' - 9: 2, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 1, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 1, # 'ь' - 30: 1, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 58: { # 'Ц' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 1, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, 
# 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 1, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 1, # 'а' - 21: 0, # 'б' - 10: 1, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 0, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 0, # 'о' - 15: 0, # 'п' - 9: 0, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 1, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 1, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 1, # 'ю' - 16: 0, # 'я' - }, - 50: { # 'Ч' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 0, # 'О' - 35: 1, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 1, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 1, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 2, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 1, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 1, # 'о' - 15: 0, # 'п' - 9: 1, # 'р' - 7: 0, # 'с' - 6: 3, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 1, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 57: { # 'Ш' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 1, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 2, # 'а' - 21: 0, # 'б' - 10: 1, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 1, # 'и' - 23: 0, # 'й' - 11: 1, # 'к' - 8: 2, # 'л' - 12: 1, # 'м' - 5: 1, # 'н' - 1: 2, # 'о' - 15: 2, # 'п' - 9: 1, # 'р' - 7: 0, # 'с' - 6: 2, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 1, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 1, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 1, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 63: { # 'Щ' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 1, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 1, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 1, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 1, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 0, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 1, # 'о' - 15: 0, # 'п' - 9: 0, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 1, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 62: { # 'Ы' - 37: 0, # 
'А' - 44: 0, # 'Б' - 33: 1, # 'В' - 46: 1, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 1, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 0, # 'О' - 35: 1, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 1, # 'Х' - 58: 1, # 'Ц' - 50: 0, # 'Ч' - 57: 1, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 0, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 0, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 0, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 0, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 0, # 'о' - 15: 0, # 'п' - 9: 0, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 0, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 61: { # 'Ь' - 37: 0, # 'А' - 44: 1, # 'Б' - 33: 1, # 'В' - 46: 0, # 'Г' - 41: 1, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 0, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 1, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 1, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 1, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 1, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 1, # 'Ю' - 43: 1, # 'Я' - 3: 0, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 0, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 0, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 0, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 0, # 'о' - 15: 0, # 'п' - 9: 0, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 0, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 47: { # 'Э' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 1, # 'В' - 46: 0, # 'Г' - 41: 1, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 1, # 'Й' - 36: 1, # 'К' - 49: 1, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 0, # 'О' - 35: 1, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 1, # 'а' - 21: 1, # 'б' - 10: 2, # 'в' - 19: 1, # 'г' - 13: 2, # 'д' - 2: 0, # 'е' - 24: 1, # 'ж' - 20: 0, # 'з' - 4: 0, # 'и' - 23: 2, # 'й' - 11: 2, # 'к' - 8: 2, # 'л' - 12: 2, # 'м' - 5: 2, # 'н' - 1: 0, # 'о' - 15: 1, # 'п' - 9: 2, # 'р' - 7: 1, # 'с' - 6: 3, # 'т' - 14: 1, # 'у' - 39: 1, # 'ф' - 26: 1, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 1, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 59: { # 'Ю' - 37: 1, # 'А' - 44: 1, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 1, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 1, # 'Р' - 32: 0, # 'С' - 40: 1, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 1, # 'Ч' - 57: 0, # 'Ш' - 63: 1, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 0, # 'а' - 21: 1, # 'б' - 10: 0, # 'в' - 19: 1, # 'г' - 13: 1, # 'д' - 2: 0, # 'е' - 24: 1, # 'ж' - 20: 0, # 'з' - 4: 0, # 'и' - 23: 0, # 'й' - 11: 1, # 'к' - 8: 2, # 'л' - 12: 1, # 'м' - 5: 2, # 'н' - 1: 0, # 'о' - 15: 1, # 'п' - 
9: 1, # 'р' - 7: 1, # 'с' - 6: 0, # 'т' - 14: 0, # 'у' - 39: 0, # 'ф' - 26: 1, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 43: { # 'Я' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 1, # 'В' - 46: 1, # 'Г' - 41: 0, # 'Д' - 48: 1, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 1, # 'С' - 40: 1, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 1, # 'Х' - 58: 0, # 'Ц' - 50: 1, # 'Ч' - 57: 0, # 'Ш' - 63: 1, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 1, # 'Ю' - 43: 1, # 'Я' - 3: 0, # 'а' - 21: 1, # 'б' - 10: 1, # 'в' - 19: 1, # 'г' - 13: 1, # 'д' - 2: 0, # 'е' - 24: 0, # 'ж' - 20: 1, # 'з' - 4: 0, # 'и' - 23: 1, # 'й' - 11: 1, # 'к' - 8: 1, # 'л' - 12: 1, # 'м' - 5: 2, # 'н' - 1: 0, # 'о' - 15: 1, # 'п' - 9: 1, # 'р' - 7: 1, # 'с' - 6: 0, # 'т' - 14: 0, # 'у' - 39: 0, # 'ф' - 26: 1, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 1, # 'ш' - 29: 1, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 3: { # 'а' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 1, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 2, # 'а' - 21: 3, # 'б' - 10: 3, # 'в' - 19: 3, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 3, # 'ж' - 20: 3, # 'з' - 4: 3, # 'и' - 23: 3, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 3, # 'м' - 5: 3, # 'н' - 1: 2, # 'о' - 15: 3, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 3, # 'у' - 39: 2, # 'ф' - 26: 3, # 'х' - 28: 3, # 'ц' - 22: 3, # 'ч' - 25: 3, # 'ш' - 29: 3, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 2, # 'э' - 27: 3, # 'ю' - 16: 3, # 'я' - }, - 21: { # 'б' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 1, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 2, # 'б' - 10: 2, # 'в' - 19: 1, # 'г' - 13: 2, # 'д' - 2: 3, # 'е' - 24: 2, # 'ж' - 20: 1, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 2, # 'к' - 8: 3, # 'л' - 12: 2, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 1, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 2, # 'т' - 14: 3, # 'у' - 39: 0, # 'ф' - 26: 2, # 'х' - 28: 1, # 'ц' - 22: 1, # 'ч' - 25: 2, # 'ш' - 29: 3, # 'щ' - 54: 2, # 'ъ' - 18: 3, # 'ы' - 17: 2, # 'ь' - 30: 1, # 'э' - 27: 2, # 'ю' - 16: 3, # 'я' - }, - 10: { # 'в' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 
59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 2, # 'б' - 10: 2, # 'в' - 19: 2, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 1, # 'ж' - 20: 3, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 2, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 3, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 3, # 'у' - 39: 1, # 'ф' - 26: 2, # 'х' - 28: 2, # 'ц' - 22: 2, # 'ч' - 25: 3, # 'ш' - 29: 2, # 'щ' - 54: 2, # 'ъ' - 18: 3, # 'ы' - 17: 3, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 3, # 'я' - }, - 19: { # 'г' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 1, # 'б' - 10: 2, # 'в' - 19: 1, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 0, # 'ж' - 20: 1, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 2, # 'к' - 8: 3, # 'л' - 12: 2, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 3, # 'р' - 7: 2, # 'с' - 6: 2, # 'т' - 14: 3, # 'у' - 39: 1, # 'ф' - 26: 1, # 'х' - 28: 1, # 'ц' - 22: 2, # 'ч' - 25: 1, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 1, # 'ы' - 17: 1, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 0, # 'я' - }, - 13: { # 'д' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 2, # 'б' - 10: 3, # 'в' - 19: 2, # 'г' - 13: 2, # 'д' - 2: 3, # 'е' - 24: 2, # 'ж' - 20: 2, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 2, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 2, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 3, # 'у' - 39: 1, # 'ф' - 26: 2, # 'х' - 28: 3, # 'ц' - 22: 2, # 'ч' - 25: 2, # 'ш' - 29: 1, # 'щ' - 54: 2, # 'ъ' - 18: 3, # 'ы' - 17: 3, # 'ь' - 30: 1, # 'э' - 27: 2, # 'ю' - 16: 3, # 'я' - }, - 2: { # 'е' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 2, # 'а' - 21: 3, # 'б' - 10: 3, # 'в' - 19: 3, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 3, # 'ж' - 20: 3, # 'з' - 4: 2, # 'и' - 23: 3, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 3, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 3, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 2, # 'у' - 39: 2, # 'ф' - 26: 3, # 'х' - 28: 3, # 'ц' - 22: 3, # 'ч' - 25: 3, # 'ш' - 29: 3, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 1, # 'э' - 27: 2, # 'ю' - 16: 3, # 'я' - }, - 24: { # 'ж' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 
0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 2, # 'б' - 10: 1, # 'в' - 19: 2, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 2, # 'ж' - 20: 1, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 2, # 'к' - 8: 2, # 'л' - 12: 1, # 'м' - 5: 3, # 'н' - 1: 2, # 'о' - 15: 1, # 'п' - 9: 2, # 'р' - 7: 2, # 'с' - 6: 1, # 'т' - 14: 3, # 'у' - 39: 1, # 'ф' - 26: 0, # 'х' - 28: 1, # 'ц' - 22: 2, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 1, # 'ы' - 17: 2, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 1, # 'я' - }, - 20: { # 'з' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 3, # 'б' - 10: 3, # 'в' - 19: 3, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 2, # 'ж' - 20: 2, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 3, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 3, # 'р' - 7: 2, # 'с' - 6: 2, # 'т' - 14: 3, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 1, # 'ц' - 22: 2, # 'ч' - 25: 1, # 'ш' - 29: 0, # 'щ' - 54: 2, # 'ъ' - 18: 3, # 'ы' - 17: 2, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 3, # 'я' - }, - 4: { # 'и' - 37: 1, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 1, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 3, # 'б' - 10: 3, # 'в' - 19: 3, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 3, # 'ж' - 20: 3, # 'з' - 4: 3, # 'и' - 23: 3, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 3, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 3, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 2, # 'у' - 39: 2, # 'ф' - 26: 3, # 'х' - 28: 3, # 'ц' - 22: 3, # 'ч' - 25: 3, # 'ш' - 29: 3, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 2, # 'э' - 27: 3, # 'ю' - 16: 3, # 'я' - }, - 23: { # 'й' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 1, # 'а' - 21: 1, # 'б' - 10: 1, # 'в' - 19: 2, # 'г' - 13: 3, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 2, # 'з' - 4: 1, # 'и' - 23: 0, # 'й' - 11: 2, # 'к' - 8: 2, # 'л' - 12: 2, # 'м' - 5: 3, # 'н' - 1: 2, # 'о' - 15: 1, # 'п' - 9: 2, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 1, # 'у' - 39: 2, # 'ф' - 26: 1, # 'х' - 28: 2, # 'ц' - 22: 3, # 'ч' - 25: 2, # 'ш' - 29: 1, # 'щ' - 54: 0, # 'ъ' 
- 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 2, # 'я' - }, - 11: { # 'к' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 1, # 'б' - 10: 3, # 'в' - 19: 1, # 'г' - 13: 1, # 'д' - 2: 3, # 'е' - 24: 2, # 'ж' - 20: 2, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 2, # 'к' - 8: 3, # 'л' - 12: 1, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 3, # 'у' - 39: 1, # 'ф' - 26: 2, # 'х' - 28: 2, # 'ц' - 22: 1, # 'ч' - 25: 2, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 1, # 'ы' - 17: 1, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 1, # 'я' - }, - 8: { # 'л' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 2, # 'б' - 10: 2, # 'в' - 19: 3, # 'г' - 13: 2, # 'д' - 2: 3, # 'е' - 24: 3, # 'ж' - 20: 2, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 2, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 2, # 'п' - 9: 1, # 'р' - 7: 3, # 'с' - 6: 2, # 'т' - 14: 3, # 'у' - 39: 2, # 'ф' - 26: 2, # 'х' - 28: 1, # 'ц' - 22: 3, # 'ч' - 25: 2, # 'ш' - 29: 1, # 'щ' - 54: 0, # 'ъ' - 18: 3, # 'ы' - 17: 3, # 'ь' - 30: 1, # 'э' - 27: 3, # 'ю' - 16: 3, # 'я' - }, - 12: { # 'м' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 2, # 'б' - 10: 2, # 'в' - 19: 2, # 'г' - 13: 1, # 'д' - 2: 3, # 'е' - 24: 1, # 'ж' - 20: 1, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 2, # 'к' - 8: 3, # 'л' - 12: 2, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 2, # 'п' - 9: 2, # 'р' - 7: 3, # 'с' - 6: 2, # 'т' - 14: 3, # 'у' - 39: 2, # 'ф' - 26: 2, # 'х' - 28: 2, # 'ц' - 22: 2, # 'ч' - 25: 1, # 'ш' - 29: 1, # 'щ' - 54: 0, # 'ъ' - 18: 3, # 'ы' - 17: 2, # 'ь' - 30: 2, # 'э' - 27: 1, # 'ю' - 16: 3, # 'я' - }, - 5: { # 'н' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 2, # 'б' - 10: 2, # 'в' - 19: 3, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 2, # 'ж' - 20: 2, # 'з' - 4: 3, # 'и' 
- 23: 0, # 'й' - 11: 3, # 'к' - 8: 2, # 'л' - 12: 1, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 1, # 'п' - 9: 2, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 3, # 'у' - 39: 2, # 'ф' - 26: 2, # 'х' - 28: 3, # 'ц' - 22: 3, # 'ч' - 25: 2, # 'ш' - 29: 2, # 'щ' - 54: 1, # 'ъ' - 18: 3, # 'ы' - 17: 3, # 'ь' - 30: 1, # 'э' - 27: 3, # 'ю' - 16: 3, # 'я' - }, - 1: { # 'о' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 2, # 'а' - 21: 3, # 'б' - 10: 3, # 'в' - 19: 3, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 3, # 'ж' - 20: 3, # 'з' - 4: 3, # 'и' - 23: 3, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 3, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 3, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 2, # 'у' - 39: 2, # 'ф' - 26: 3, # 'х' - 28: 2, # 'ц' - 22: 3, # 'ч' - 25: 3, # 'ш' - 29: 3, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 2, # 'э' - 27: 3, # 'ю' - 16: 3, # 'я' - }, - 15: { # 'п' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 1, # 'б' - 10: 0, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 3, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 2, # 'к' - 8: 3, # 'л' - 12: 1, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 2, # 'п' - 9: 3, # 'р' - 7: 2, # 'с' - 6: 2, # 'т' - 14: 3, # 'у' - 39: 1, # 'ф' - 26: 0, # 'х' - 28: 2, # 'ц' - 22: 2, # 'ч' - 25: 1, # 'ш' - 29: 1, # 'щ' - 54: 0, # 'ъ' - 18: 3, # 'ы' - 17: 2, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 3, # 'я' - }, - 9: { # 'р' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 2, # 'б' - 10: 3, # 'в' - 19: 3, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 3, # 'ж' - 20: 2, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 3, # 'к' - 8: 2, # 'л' - 12: 3, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 2, # 'п' - 9: 2, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 3, # 'у' - 39: 2, # 'ф' - 26: 3, # 'х' - 28: 2, # 'ц' - 22: 2, # 'ч' - 25: 3, # 'ш' - 29: 2, # 'щ' - 54: 0, # 'ъ' - 18: 3, # 'ы' - 17: 3, # 'ь' - 30: 2, # 'э' - 27: 2, # 'ю' - 16: 3, # 'я' - }, - 7: { # 'с' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 1, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 
58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 2, # 'б' - 10: 3, # 'в' - 19: 2, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 2, # 'ж' - 20: 2, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 3, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 3, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 3, # 'у' - 39: 2, # 'ф' - 26: 3, # 'х' - 28: 2, # 'ц' - 22: 3, # 'ч' - 25: 2, # 'ш' - 29: 1, # 'щ' - 54: 2, # 'ъ' - 18: 3, # 'ы' - 17: 3, # 'ь' - 30: 2, # 'э' - 27: 3, # 'ю' - 16: 3, # 'я' - }, - 6: { # 'т' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 2, # 'б' - 10: 3, # 'в' - 19: 2, # 'г' - 13: 2, # 'д' - 2: 3, # 'е' - 24: 1, # 'ж' - 20: 1, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 2, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 2, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 2, # 'т' - 14: 3, # 'у' - 39: 2, # 'ф' - 26: 2, # 'х' - 28: 2, # 'ц' - 22: 2, # 'ч' - 25: 2, # 'ш' - 29: 2, # 'щ' - 54: 2, # 'ъ' - 18: 3, # 'ы' - 17: 3, # 'ь' - 30: 2, # 'э' - 27: 2, # 'ю' - 16: 3, # 'я' - }, - 14: { # 'у' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 2, # 'а' - 21: 3, # 'б' - 10: 3, # 'в' - 19: 3, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 3, # 'ж' - 20: 3, # 'з' - 4: 2, # 'и' - 23: 2, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 3, # 'м' - 5: 3, # 'н' - 1: 2, # 'о' - 15: 3, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 1, # 'у' - 39: 2, # 'ф' - 26: 3, # 'х' - 28: 2, # 'ц' - 22: 3, # 'ч' - 25: 3, # 'ш' - 29: 3, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 2, # 'э' - 27: 3, # 'ю' - 16: 2, # 'я' - }, - 39: { # 'ф' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 1, # 'б' - 10: 0, # 'в' - 19: 1, # 'г' - 13: 0, # 'д' - 2: 3, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 1, # 'к' - 8: 2, # 'л' - 12: 1, # 'м' - 5: 1, # 'н' - 1: 3, # 'о' - 15: 1, # 'п' - 9: 2, # 'р' - 7: 2, # 'с' - 6: 2, # 'т' - 14: 2, # 'у' - 39: 2, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 1, # 'ч' - 25: 1, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 2, # 'ы' - 17: 1, # 'ь' - 30: 2, # 'э' - 27: 1, # 'ю' - 16: 1, # 'я' - }, - 26: { # 'х' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 
0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 0, # 'б' - 10: 3, # 'в' - 19: 1, # 'г' - 13: 1, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 1, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 1, # 'к' - 8: 2, # 'л' - 12: 2, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 1, # 'п' - 9: 3, # 'р' - 7: 2, # 'с' - 6: 2, # 'т' - 14: 2, # 'у' - 39: 1, # 'ф' - 26: 1, # 'х' - 28: 1, # 'ц' - 22: 1, # 'ч' - 25: 2, # 'ш' - 29: 0, # 'щ' - 54: 1, # 'ъ' - 18: 0, # 'ы' - 17: 1, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 0, # 'я' - }, - 28: { # 'ц' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 1, # 'б' - 10: 2, # 'в' - 19: 1, # 'г' - 13: 1, # 'д' - 2: 3, # 'е' - 24: 0, # 'ж' - 20: 1, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 2, # 'к' - 8: 1, # 'л' - 12: 1, # 'м' - 5: 1, # 'н' - 1: 3, # 'о' - 15: 0, # 'п' - 9: 1, # 'р' - 7: 0, # 'с' - 6: 1, # 'т' - 14: 3, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 1, # 'ц' - 22: 0, # 'ч' - 25: 1, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 3, # 'ы' - 17: 1, # 'ь' - 30: 0, # 'э' - 27: 1, # 'ю' - 16: 0, # 'я' - }, - 22: { # 'ч' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 1, # 'б' - 10: 1, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 3, # 'е' - 24: 1, # 'ж' - 20: 0, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 3, # 'к' - 8: 2, # 'л' - 12: 1, # 'м' - 5: 3, # 'н' - 1: 2, # 'о' - 15: 0, # 'п' - 9: 2, # 'р' - 7: 1, # 'с' - 6: 3, # 'т' - 14: 3, # 'у' - 39: 1, # 'ф' - 26: 1, # 'х' - 28: 0, # 'ц' - 22: 1, # 'ч' - 25: 2, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 3, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 25: { # 'ш' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 1, # 'б' - 10: 2, # 'в' - 19: 1, # 'г' - 13: 0, # 'д' - 2: 3, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 2, # 'м' - 5: 3, # 'н' - 1: 3, # 'о' - 15: 2, # 'п' - 9: 2, # 'р' - 7: 1, # 'с' - 6: 2, # 'т' - 14: 3, # 'у' 
- 39: 2, # 'ф' - 26: 1, # 'х' - 28: 1, # 'ц' - 22: 1, # 'ч' - 25: 1, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 3, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 0, # 'я' - }, - 29: { # 'щ' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 3, # 'а' - 21: 0, # 'б' - 10: 1, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 3, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 3, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 0, # 'л' - 12: 1, # 'м' - 5: 2, # 'н' - 1: 1, # 'о' - 15: 0, # 'п' - 9: 2, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 2, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 2, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 0, # 'я' - }, - 54: { # 'ъ' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 0, # 'а' - 21: 0, # 'б' - 10: 0, # 'в' - 19: 0, # 'г' - 13: 0, # 'д' - 2: 2, # 'е' - 24: 0, # 'ж' - 20: 0, # 'з' - 4: 0, # 'и' - 23: 0, # 'й' - 11: 0, # 'к' - 8: 0, # 'л' - 12: 0, # 'м' - 5: 0, # 'н' - 1: 0, # 'о' - 15: 0, # 'п' - 9: 0, # 'р' - 7: 0, # 'с' - 6: 0, # 'т' - 14: 0, # 'у' - 39: 0, # 'ф' - 26: 0, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 0, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 1, # 'ю' - 16: 2, # 'я' - }, - 18: { # 'ы' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 0, # 'а' - 21: 3, # 'б' - 10: 3, # 'в' - 19: 2, # 'г' - 13: 2, # 'д' - 2: 3, # 'е' - 24: 2, # 'ж' - 20: 2, # 'з' - 4: 2, # 'и' - 23: 3, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 3, # 'м' - 5: 3, # 'н' - 1: 1, # 'о' - 15: 3, # 'п' - 9: 3, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 1, # 'у' - 39: 0, # 'ф' - 26: 3, # 'х' - 28: 2, # 'ц' - 22: 3, # 'ч' - 25: 3, # 'ш' - 29: 2, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 0, # 'ю' - 16: 2, # 'я' - }, - 17: { # 'ь' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 0, # 'а' - 21: 2, # 
'б' - 10: 2, # 'в' - 19: 2, # 'г' - 13: 2, # 'д' - 2: 3, # 'е' - 24: 1, # 'ж' - 20: 3, # 'з' - 4: 2, # 'и' - 23: 0, # 'й' - 11: 3, # 'к' - 8: 0, # 'л' - 12: 3, # 'м' - 5: 3, # 'н' - 1: 2, # 'о' - 15: 2, # 'п' - 9: 1, # 'р' - 7: 3, # 'с' - 6: 2, # 'т' - 14: 0, # 'у' - 39: 2, # 'ф' - 26: 1, # 'х' - 28: 2, # 'ц' - 22: 2, # 'ч' - 25: 3, # 'ш' - 29: 2, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 1, # 'э' - 27: 3, # 'ю' - 16: 3, # 'я' - }, - 30: { # 'э' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 1, # 'М' - 31: 1, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 1, # 'Р' - 32: 1, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 1, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 0, # 'а' - 21: 1, # 'б' - 10: 1, # 'в' - 19: 1, # 'г' - 13: 2, # 'д' - 2: 1, # 'е' - 24: 0, # 'ж' - 20: 1, # 'з' - 4: 0, # 'и' - 23: 2, # 'й' - 11: 2, # 'к' - 8: 2, # 'л' - 12: 2, # 'м' - 5: 2, # 'н' - 1: 0, # 'о' - 15: 2, # 'п' - 9: 2, # 'р' - 7: 2, # 'с' - 6: 3, # 'т' - 14: 1, # 'у' - 39: 2, # 'ф' - 26: 1, # 'х' - 28: 0, # 'ц' - 22: 0, # 'ч' - 25: 1, # 'ш' - 29: 0, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 1, # 'э' - 27: 1, # 'ю' - 16: 1, # 'я' - }, - 27: { # 'ю' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 2, # 'а' - 21: 3, # 'б' - 10: 1, # 'в' - 19: 2, # 'г' - 13: 3, # 'д' - 2: 1, # 'е' - 24: 2, # 'ж' - 20: 2, # 'з' - 4: 1, # 'и' - 23: 1, # 'й' - 11: 2, # 'к' - 8: 2, # 'л' - 12: 2, # 'м' - 5: 2, # 'н' - 1: 1, # 'о' - 15: 2, # 'п' - 9: 2, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 0, # 'у' - 39: 1, # 'ф' - 26: 2, # 'х' - 28: 2, # 'ц' - 22: 2, # 'ч' - 25: 2, # 'ш' - 29: 3, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 1, # 'э' - 27: 2, # 'ю' - 16: 1, # 'я' - }, - 16: { # 'я' - 37: 0, # 'А' - 44: 0, # 'Б' - 33: 0, # 'В' - 46: 0, # 'Г' - 41: 0, # 'Д' - 48: 0, # 'Е' - 56: 0, # 'Ж' - 51: 0, # 'З' - 42: 0, # 'И' - 60: 0, # 'Й' - 36: 0, # 'К' - 49: 0, # 'Л' - 38: 0, # 'М' - 31: 0, # 'Н' - 34: 0, # 'О' - 35: 0, # 'П' - 45: 0, # 'Р' - 32: 0, # 'С' - 40: 0, # 'Т' - 52: 0, # 'У' - 53: 0, # 'Ф' - 55: 0, # 'Х' - 58: 0, # 'Ц' - 50: 0, # 'Ч' - 57: 0, # 'Ш' - 63: 0, # 'Щ' - 62: 0, # 'Ы' - 61: 0, # 'Ь' - 47: 0, # 'Э' - 59: 0, # 'Ю' - 43: 0, # 'Я' - 3: 0, # 'а' - 21: 2, # 'б' - 10: 3, # 'в' - 19: 2, # 'г' - 13: 3, # 'д' - 2: 3, # 'е' - 24: 3, # 'ж' - 20: 3, # 'з' - 4: 2, # 'и' - 23: 2, # 'й' - 11: 3, # 'к' - 8: 3, # 'л' - 12: 3, # 'м' - 5: 3, # 'н' - 1: 0, # 'о' - 15: 2, # 'п' - 9: 2, # 'р' - 7: 3, # 'с' - 6: 3, # 'т' - 14: 1, # 'у' - 39: 1, # 'ф' - 26: 3, # 'х' - 28: 2, # 'ц' - 22: 2, # 'ч' - 25: 2, # 'ш' - 29: 3, # 'щ' - 54: 0, # 'ъ' - 18: 0, # 'ы' - 17: 0, # 'ь' - 30: 0, # 'э' - 27: 2, # 'ю' - 16: 2, # 'я' - }, -} - -# 255: Undefined characters that did not exist in training text -# 254: Carriage/Return -# 253: symbol (punctuation) that does not belong to word -# 252: 0 - 9 -# 251: Control characters - -# Character Mapping Table(s): 
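The comment block above documents the special order values (255/254/253/252/251) that mark non-letter bytes, and the tables that follow map each raw byte of a given encoding to a frequency "order" that indexes into RUSSIAN_LANG_MODEL. As a minimal illustration of how those two structures could be combined, the sketch below scores a byte sample by counting likely adjacent-letter pairs. The function name score_sample, the POSITIVE threshold, and the returned ratio are assumptions made for this example only; they are not the prober logic shipped with this file, which handles rare letters and state tracking in more detail.

# Illustrative sketch (assumed helper, not part of this module): score a byte
# sample against a char-to-order map and the bigram language model above.
def score_sample(data: bytes, char_to_order: dict, lang_model: dict) -> float:
    NON_LETTER = {251, 252, 253, 254, 255}  # control/digit/symbol markers per the comments above
    POSITIVE = 2                            # assumed threshold for a "likely" bigram value

    prev_order = None
    total_pairs = 0
    positive_pairs = 0
    for byte in data:
        order = char_to_order.get(byte, 255)
        if order in NON_LETTER:
            # Punctuation, digits, and control bytes break the letter sequence.
            prev_order = None
            continue
        if prev_order is not None:
            total_pairs += 1
            # Orders not covered by the model (rare letters) simply score 0 here.
            if lang_model.get(prev_order, {}).get(order, 0) >= POSITIVE:
                positive_pairs += 1
        prev_order = order
    return positive_pairs / total_pairs if total_pairs else 0.0

# Usage idea (after the mapping tables below are defined):
# sample = "привет".encode("windows-1251")
# print(score_sample(sample, WINDOWS_1251_RUSSIAN_CHAR_TO_ORDER, RUSSIAN_LANG_MODEL))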
-IBM866_RUSSIAN_CHAR_TO_ORDER = { - 0: 255, # '\x00' - 1: 255, # '\x01' - 2: 255, # '\x02' - 3: 255, # '\x03' - 4: 255, # '\x04' - 5: 255, # '\x05' - 6: 255, # '\x06' - 7: 255, # '\x07' - 8: 255, # '\x08' - 9: 255, # '\t' - 10: 254, # '\n' - 11: 255, # '\x0b' - 12: 255, # '\x0c' - 13: 254, # '\r' - 14: 255, # '\x0e' - 15: 255, # '\x0f' - 16: 255, # '\x10' - 17: 255, # '\x11' - 18: 255, # '\x12' - 19: 255, # '\x13' - 20: 255, # '\x14' - 21: 255, # '\x15' - 22: 255, # '\x16' - 23: 255, # '\x17' - 24: 255, # '\x18' - 25: 255, # '\x19' - 26: 255, # '\x1a' - 27: 255, # '\x1b' - 28: 255, # '\x1c' - 29: 255, # '\x1d' - 30: 255, # '\x1e' - 31: 255, # '\x1f' - 32: 253, # ' ' - 33: 253, # '!' - 34: 253, # '"' - 35: 253, # '#' - 36: 253, # '$' - 37: 253, # '%' - 38: 253, # '&' - 39: 253, # "'" - 40: 253, # '(' - 41: 253, # ')' - 42: 253, # '*' - 43: 253, # '+' - 44: 253, # ',' - 45: 253, # '-' - 46: 253, # '.' - 47: 253, # '/' - 48: 252, # '0' - 49: 252, # '1' - 50: 252, # '2' - 51: 252, # '3' - 52: 252, # '4' - 53: 252, # '5' - 54: 252, # '6' - 55: 252, # '7' - 56: 252, # '8' - 57: 252, # '9' - 58: 253, # ':' - 59: 253, # ';' - 60: 253, # '<' - 61: 253, # '=' - 62: 253, # '>' - 63: 253, # '?' - 64: 253, # '@' - 65: 142, # 'A' - 66: 143, # 'B' - 67: 144, # 'C' - 68: 145, # 'D' - 69: 146, # 'E' - 70: 147, # 'F' - 71: 148, # 'G' - 72: 149, # 'H' - 73: 150, # 'I' - 74: 151, # 'J' - 75: 152, # 'K' - 76: 74, # 'L' - 77: 153, # 'M' - 78: 75, # 'N' - 79: 154, # 'O' - 80: 155, # 'P' - 81: 156, # 'Q' - 82: 157, # 'R' - 83: 158, # 'S' - 84: 159, # 'T' - 85: 160, # 'U' - 86: 161, # 'V' - 87: 162, # 'W' - 88: 163, # 'X' - 89: 164, # 'Y' - 90: 165, # 'Z' - 91: 253, # '[' - 92: 253, # '\\' - 93: 253, # ']' - 94: 253, # '^' - 95: 253, # '_' - 96: 253, # '`' - 97: 71, # 'a' - 98: 172, # 'b' - 99: 66, # 'c' - 100: 173, # 'd' - 101: 65, # 'e' - 102: 174, # 'f' - 103: 76, # 'g' - 104: 175, # 'h' - 105: 64, # 'i' - 106: 176, # 'j' - 107: 177, # 'k' - 108: 77, # 'l' - 109: 72, # 'm' - 110: 178, # 'n' - 111: 69, # 'o' - 112: 67, # 'p' - 113: 179, # 'q' - 114: 78, # 'r' - 115: 73, # 's' - 116: 180, # 't' - 117: 181, # 'u' - 118: 79, # 'v' - 119: 182, # 'w' - 120: 183, # 'x' - 121: 184, # 'y' - 122: 185, # 'z' - 123: 253, # '{' - 124: 253, # '|' - 125: 253, # '}' - 126: 253, # '~' - 127: 253, # '\x7f' - 128: 37, # 'А' - 129: 44, # 'Б' - 130: 33, # 'В' - 131: 46, # 'Г' - 132: 41, # 'Д' - 133: 48, # 'Е' - 134: 56, # 'Ж' - 135: 51, # 'З' - 136: 42, # 'И' - 137: 60, # 'Й' - 138: 36, # 'К' - 139: 49, # 'Л' - 140: 38, # 'М' - 141: 31, # 'Н' - 142: 34, # 'О' - 143: 35, # 'П' - 144: 45, # 'Р' - 145: 32, # 'С' - 146: 40, # 'Т' - 147: 52, # 'У' - 148: 53, # 'Ф' - 149: 55, # 'Х' - 150: 58, # 'Ц' - 151: 50, # 'Ч' - 152: 57, # 'Ш' - 153: 63, # 'Щ' - 154: 70, # 'Ъ' - 155: 62, # 'Ы' - 156: 61, # 'Ь' - 157: 47, # 'Э' - 158: 59, # 'Ю' - 159: 43, # 'Я' - 160: 3, # 'а' - 161: 21, # 'б' - 162: 10, # 'в' - 163: 19, # 'г' - 164: 13, # 'д' - 165: 2, # 'е' - 166: 24, # 'ж' - 167: 20, # 'з' - 168: 4, # 'и' - 169: 23, # 'й' - 170: 11, # 'к' - 171: 8, # 'л' - 172: 12, # 'м' - 173: 5, # 'н' - 174: 1, # 'о' - 175: 15, # 'п' - 176: 191, # '░' - 177: 192, # '▒' - 178: 193, # '▓' - 179: 194, # '│' - 180: 195, # '┤' - 181: 196, # '╡' - 182: 197, # '╢' - 183: 198, # '╖' - 184: 199, # '╕' - 185: 200, # '╣' - 186: 201, # '║' - 187: 202, # '╗' - 188: 203, # '╝' - 189: 204, # '╜' - 190: 205, # '╛' - 191: 206, # '┐' - 192: 207, # '└' - 193: 208, # '┴' - 194: 209, # '┬' - 195: 210, # '├' - 196: 211, # '─' - 197: 212, # '┼' - 198: 213, # '╞' - 199: 214, # '╟' - 
200: 215, # '╚' - 201: 216, # '╔' - 202: 217, # '╩' - 203: 218, # '╦' - 204: 219, # '╠' - 205: 220, # '═' - 206: 221, # '╬' - 207: 222, # '╧' - 208: 223, # '╨' - 209: 224, # '╤' - 210: 225, # '╥' - 211: 226, # '╙' - 212: 227, # '╘' - 213: 228, # '╒' - 214: 229, # '╓' - 215: 230, # '╫' - 216: 231, # '╪' - 217: 232, # '┘' - 218: 233, # '┌' - 219: 234, # '█' - 220: 235, # '▄' - 221: 236, # '▌' - 222: 237, # '▐' - 223: 238, # '▀' - 224: 9, # 'р' - 225: 7, # 'с' - 226: 6, # 'т' - 227: 14, # 'у' - 228: 39, # 'ф' - 229: 26, # 'х' - 230: 28, # 'ц' - 231: 22, # 'ч' - 232: 25, # 'ш' - 233: 29, # 'щ' - 234: 54, # 'ъ' - 235: 18, # 'ы' - 236: 17, # 'ь' - 237: 30, # 'э' - 238: 27, # 'ю' - 239: 16, # 'я' - 240: 239, # 'Ё' - 241: 68, # 'ё' - 242: 240, # 'Є' - 243: 241, # 'є' - 244: 242, # 'Ї' - 245: 243, # 'ї' - 246: 244, # 'Ў' - 247: 245, # 'ў' - 248: 246, # '°' - 249: 247, # '∙' - 250: 248, # '·' - 251: 249, # '√' - 252: 250, # '№' - 253: 251, # '¤' - 254: 252, # '■' - 255: 255, # '\xa0' -} - -IBM866_RUSSIAN_MODEL = SingleByteCharSetModel( - charset_name="IBM866", - language="Russian", - char_to_order_map=IBM866_RUSSIAN_CHAR_TO_ORDER, - language_model=RUSSIAN_LANG_MODEL, - typical_positive_ratio=0.976601, - keep_ascii_letters=False, - alphabet="ЁАБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяё", -) - -WINDOWS_1251_RUSSIAN_CHAR_TO_ORDER = { - 0: 255, # '\x00' - 1: 255, # '\x01' - 2: 255, # '\x02' - 3: 255, # '\x03' - 4: 255, # '\x04' - 5: 255, # '\x05' - 6: 255, # '\x06' - 7: 255, # '\x07' - 8: 255, # '\x08' - 9: 255, # '\t' - 10: 254, # '\n' - 11: 255, # '\x0b' - 12: 255, # '\x0c' - 13: 254, # '\r' - 14: 255, # '\x0e' - 15: 255, # '\x0f' - 16: 255, # '\x10' - 17: 255, # '\x11' - 18: 255, # '\x12' - 19: 255, # '\x13' - 20: 255, # '\x14' - 21: 255, # '\x15' - 22: 255, # '\x16' - 23: 255, # '\x17' - 24: 255, # '\x18' - 25: 255, # '\x19' - 26: 255, # '\x1a' - 27: 255, # '\x1b' - 28: 255, # '\x1c' - 29: 255, # '\x1d' - 30: 255, # '\x1e' - 31: 255, # '\x1f' - 32: 253, # ' ' - 33: 253, # '!' - 34: 253, # '"' - 35: 253, # '#' - 36: 253, # '$' - 37: 253, # '%' - 38: 253, # '&' - 39: 253, # "'" - 40: 253, # '(' - 41: 253, # ')' - 42: 253, # '*' - 43: 253, # '+' - 44: 253, # ',' - 45: 253, # '-' - 46: 253, # '.' - 47: 253, # '/' - 48: 252, # '0' - 49: 252, # '1' - 50: 252, # '2' - 51: 252, # '3' - 52: 252, # '4' - 53: 252, # '5' - 54: 252, # '6' - 55: 252, # '7' - 56: 252, # '8' - 57: 252, # '9' - 58: 253, # ':' - 59: 253, # ';' - 60: 253, # '<' - 61: 253, # '=' - 62: 253, # '>' - 63: 253, # '?' 
- 64: 253, # '@' - 65: 142, # 'A' - 66: 143, # 'B' - 67: 144, # 'C' - 68: 145, # 'D' - 69: 146, # 'E' - 70: 147, # 'F' - 71: 148, # 'G' - 72: 149, # 'H' - 73: 150, # 'I' - 74: 151, # 'J' - 75: 152, # 'K' - 76: 74, # 'L' - 77: 153, # 'M' - 78: 75, # 'N' - 79: 154, # 'O' - 80: 155, # 'P' - 81: 156, # 'Q' - 82: 157, # 'R' - 83: 158, # 'S' - 84: 159, # 'T' - 85: 160, # 'U' - 86: 161, # 'V' - 87: 162, # 'W' - 88: 163, # 'X' - 89: 164, # 'Y' - 90: 165, # 'Z' - 91: 253, # '[' - 92: 253, # '\\' - 93: 253, # ']' - 94: 253, # '^' - 95: 253, # '_' - 96: 253, # '`' - 97: 71, # 'a' - 98: 172, # 'b' - 99: 66, # 'c' - 100: 173, # 'd' - 101: 65, # 'e' - 102: 174, # 'f' - 103: 76, # 'g' - 104: 175, # 'h' - 105: 64, # 'i' - 106: 176, # 'j' - 107: 177, # 'k' - 108: 77, # 'l' - 109: 72, # 'm' - 110: 178, # 'n' - 111: 69, # 'o' - 112: 67, # 'p' - 113: 179, # 'q' - 114: 78, # 'r' - 115: 73, # 's' - 116: 180, # 't' - 117: 181, # 'u' - 118: 79, # 'v' - 119: 182, # 'w' - 120: 183, # 'x' - 121: 184, # 'y' - 122: 185, # 'z' - 123: 253, # '{' - 124: 253, # '|' - 125: 253, # '}' - 126: 253, # '~' - 127: 253, # '\x7f' - 128: 191, # 'Ђ' - 129: 192, # 'Ѓ' - 130: 193, # '‚' - 131: 194, # 'ѓ' - 132: 195, # '„' - 133: 196, # '…' - 134: 197, # '†' - 135: 198, # '‡' - 136: 199, # '€' - 137: 200, # '‰' - 138: 201, # 'Љ' - 139: 202, # '‹' - 140: 203, # 'Њ' - 141: 204, # 'Ќ' - 142: 205, # 'Ћ' - 143: 206, # 'Џ' - 144: 207, # 'ђ' - 145: 208, # '‘' - 146: 209, # '’' - 147: 210, # '“' - 148: 211, # '”' - 149: 212, # '•' - 150: 213, # '–' - 151: 214, # '—' - 152: 215, # None - 153: 216, # '™' - 154: 217, # 'љ' - 155: 218, # '›' - 156: 219, # 'њ' - 157: 220, # 'ќ' - 158: 221, # 'ћ' - 159: 222, # 'џ' - 160: 223, # '\xa0' - 161: 224, # 'Ў' - 162: 225, # 'ў' - 163: 226, # 'Ј' - 164: 227, # '¤' - 165: 228, # 'Ґ' - 166: 229, # '¦' - 167: 230, # '§' - 168: 231, # 'Ё' - 169: 232, # '©' - 170: 233, # 'Є' - 171: 234, # '«' - 172: 235, # '¬' - 173: 236, # '\xad' - 174: 237, # '®' - 175: 238, # 'Ї' - 176: 239, # '°' - 177: 240, # '±' - 178: 241, # 'І' - 179: 242, # 'і' - 180: 243, # 'ґ' - 181: 244, # 'µ' - 182: 245, # '¶' - 183: 246, # '·' - 184: 68, # 'ё' - 185: 247, # '№' - 186: 248, # 'є' - 187: 249, # '»' - 188: 250, # 'ј' - 189: 251, # 'Ѕ' - 190: 252, # 'ѕ' - 191: 253, # 'ї' - 192: 37, # 'А' - 193: 44, # 'Б' - 194: 33, # 'В' - 195: 46, # 'Г' - 196: 41, # 'Д' - 197: 48, # 'Е' - 198: 56, # 'Ж' - 199: 51, # 'З' - 200: 42, # 'И' - 201: 60, # 'Й' - 202: 36, # 'К' - 203: 49, # 'Л' - 204: 38, # 'М' - 205: 31, # 'Н' - 206: 34, # 'О' - 207: 35, # 'П' - 208: 45, # 'Р' - 209: 32, # 'С' - 210: 40, # 'Т' - 211: 52, # 'У' - 212: 53, # 'Ф' - 213: 55, # 'Х' - 214: 58, # 'Ц' - 215: 50, # 'Ч' - 216: 57, # 'Ш' - 217: 63, # 'Щ' - 218: 70, # 'Ъ' - 219: 62, # 'Ы' - 220: 61, # 'Ь' - 221: 47, # 'Э' - 222: 59, # 'Ю' - 223: 43, # 'Я' - 224: 3, # 'а' - 225: 21, # 'б' - 226: 10, # 'в' - 227: 19, # 'г' - 228: 13, # 'д' - 229: 2, # 'е' - 230: 24, # 'ж' - 231: 20, # 'з' - 232: 4, # 'и' - 233: 23, # 'й' - 234: 11, # 'к' - 235: 8, # 'л' - 236: 12, # 'м' - 237: 5, # 'н' - 238: 1, # 'о' - 239: 15, # 'п' - 240: 9, # 'р' - 241: 7, # 'с' - 242: 6, # 'т' - 243: 14, # 'у' - 244: 39, # 'ф' - 245: 26, # 'х' - 246: 28, # 'ц' - 247: 22, # 'ч' - 248: 25, # 'ш' - 249: 29, # 'щ' - 250: 54, # 'ъ' - 251: 18, # 'ы' - 252: 17, # 'ь' - 253: 30, # 'э' - 254: 27, # 'ю' - 255: 16, # 'я' -} - -WINDOWS_1251_RUSSIAN_MODEL = SingleByteCharSetModel( - charset_name="windows-1251", - language="Russian", - char_to_order_map=WINDOWS_1251_RUSSIAN_CHAR_TO_ORDER, - language_model=RUSSIAN_LANG_MODEL, - 
typical_positive_ratio=0.976601, - keep_ascii_letters=False, - alphabet="ЁАБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяё", -) - -IBM855_RUSSIAN_CHAR_TO_ORDER = { - 0: 255, # '\x00' - 1: 255, # '\x01' - 2: 255, # '\x02' - 3: 255, # '\x03' - 4: 255, # '\x04' - 5: 255, # '\x05' - 6: 255, # '\x06' - 7: 255, # '\x07' - 8: 255, # '\x08' - 9: 255, # '\t' - 10: 254, # '\n' - 11: 255, # '\x0b' - 12: 255, # '\x0c' - 13: 254, # '\r' - 14: 255, # '\x0e' - 15: 255, # '\x0f' - 16: 255, # '\x10' - 17: 255, # '\x11' - 18: 255, # '\x12' - 19: 255, # '\x13' - 20: 255, # '\x14' - 21: 255, # '\x15' - 22: 255, # '\x16' - 23: 255, # '\x17' - 24: 255, # '\x18' - 25: 255, # '\x19' - 26: 255, # '\x1a' - 27: 255, # '\x1b' - 28: 255, # '\x1c' - 29: 255, # '\x1d' - 30: 255, # '\x1e' - 31: 255, # '\x1f' - 32: 253, # ' ' - 33: 253, # '!' - 34: 253, # '"' - 35: 253, # '#' - 36: 253, # '$' - 37: 253, # '%' - 38: 253, # '&' - 39: 253, # "'" - 40: 253, # '(' - 41: 253, # ')' - 42: 253, # '*' - 43: 253, # '+' - 44: 253, # ',' - 45: 253, # '-' - 46: 253, # '.' - 47: 253, # '/' - 48: 252, # '0' - 49: 252, # '1' - 50: 252, # '2' - 51: 252, # '3' - 52: 252, # '4' - 53: 252, # '5' - 54: 252, # '6' - 55: 252, # '7' - 56: 252, # '8' - 57: 252, # '9' - 58: 253, # ':' - 59: 253, # ';' - 60: 253, # '<' - 61: 253, # '=' - 62: 253, # '>' - 63: 253, # '?' - 64: 253, # '@' - 65: 142, # 'A' - 66: 143, # 'B' - 67: 144, # 'C' - 68: 145, # 'D' - 69: 146, # 'E' - 70: 147, # 'F' - 71: 148, # 'G' - 72: 149, # 'H' - 73: 150, # 'I' - 74: 151, # 'J' - 75: 152, # 'K' - 76: 74, # 'L' - 77: 153, # 'M' - 78: 75, # 'N' - 79: 154, # 'O' - 80: 155, # 'P' - 81: 156, # 'Q' - 82: 157, # 'R' - 83: 158, # 'S' - 84: 159, # 'T' - 85: 160, # 'U' - 86: 161, # 'V' - 87: 162, # 'W' - 88: 163, # 'X' - 89: 164, # 'Y' - 90: 165, # 'Z' - 91: 253, # '[' - 92: 253, # '\\' - 93: 253, # ']' - 94: 253, # '^' - 95: 253, # '_' - 96: 253, # '`' - 97: 71, # 'a' - 98: 172, # 'b' - 99: 66, # 'c' - 100: 173, # 'd' - 101: 65, # 'e' - 102: 174, # 'f' - 103: 76, # 'g' - 104: 175, # 'h' - 105: 64, # 'i' - 106: 176, # 'j' - 107: 177, # 'k' - 108: 77, # 'l' - 109: 72, # 'm' - 110: 178, # 'n' - 111: 69, # 'o' - 112: 67, # 'p' - 113: 179, # 'q' - 114: 78, # 'r' - 115: 73, # 's' - 116: 180, # 't' - 117: 181, # 'u' - 118: 79, # 'v' - 119: 182, # 'w' - 120: 183, # 'x' - 121: 184, # 'y' - 122: 185, # 'z' - 123: 253, # '{' - 124: 253, # '|' - 125: 253, # '}' - 126: 253, # '~' - 127: 253, # '\x7f' - 128: 191, # 'ђ' - 129: 192, # 'Ђ' - 130: 193, # 'ѓ' - 131: 194, # 'Ѓ' - 132: 68, # 'ё' - 133: 195, # 'Ё' - 134: 196, # 'є' - 135: 197, # 'Є' - 136: 198, # 'ѕ' - 137: 199, # 'Ѕ' - 138: 200, # 'і' - 139: 201, # 'І' - 140: 202, # 'ї' - 141: 203, # 'Ї' - 142: 204, # 'ј' - 143: 205, # 'Ј' - 144: 206, # 'љ' - 145: 207, # 'Љ' - 146: 208, # 'њ' - 147: 209, # 'Њ' - 148: 210, # 'ћ' - 149: 211, # 'Ћ' - 150: 212, # 'ќ' - 151: 213, # 'Ќ' - 152: 214, # 'ў' - 153: 215, # 'Ў' - 154: 216, # 'џ' - 155: 217, # 'Џ' - 156: 27, # 'ю' - 157: 59, # 'Ю' - 158: 54, # 'ъ' - 159: 70, # 'Ъ' - 160: 3, # 'а' - 161: 37, # 'А' - 162: 21, # 'б' - 163: 44, # 'Б' - 164: 28, # 'ц' - 165: 58, # 'Ц' - 166: 13, # 'д' - 167: 41, # 'Д' - 168: 2, # 'е' - 169: 48, # 'Е' - 170: 39, # 'ф' - 171: 53, # 'Ф' - 172: 19, # 'г' - 173: 46, # 'Г' - 174: 218, # '«' - 175: 219, # '»' - 176: 220, # '░' - 177: 221, # '▒' - 178: 222, # '▓' - 179: 223, # '│' - 180: 224, # '┤' - 181: 26, # 'х' - 182: 55, # 'Х' - 183: 4, # 'и' - 184: 42, # 'И' - 185: 225, # '╣' - 186: 226, # '║' - 187: 227, # '╗' - 188: 228, # '╝' - 189: 23, # 'й' - 190: 60, 
# 'Й' - 191: 229, # '┐' - 192: 230, # '└' - 193: 231, # '┴' - 194: 232, # '┬' - 195: 233, # '├' - 196: 234, # '─' - 197: 235, # '┼' - 198: 11, # 'к' - 199: 36, # 'К' - 200: 236, # '╚' - 201: 237, # '╔' - 202: 238, # '╩' - 203: 239, # '╦' - 204: 240, # '╠' - 205: 241, # '═' - 206: 242, # '╬' - 207: 243, # '¤' - 208: 8, # 'л' - 209: 49, # 'Л' - 210: 12, # 'м' - 211: 38, # 'М' - 212: 5, # 'н' - 213: 31, # 'Н' - 214: 1, # 'о' - 215: 34, # 'О' - 216: 15, # 'п' - 217: 244, # '┘' - 218: 245, # '┌' - 219: 246, # '█' - 220: 247, # '▄' - 221: 35, # 'П' - 222: 16, # 'я' - 223: 248, # '▀' - 224: 43, # 'Я' - 225: 9, # 'р' - 226: 45, # 'Р' - 227: 7, # 'с' - 228: 32, # 'С' - 229: 6, # 'т' - 230: 40, # 'Т' - 231: 14, # 'у' - 232: 52, # 'У' - 233: 24, # 'ж' - 234: 56, # 'Ж' - 235: 10, # 'в' - 236: 33, # 'В' - 237: 17, # 'ь' - 238: 61, # 'Ь' - 239: 249, # '№' - 240: 250, # '\xad' - 241: 18, # 'ы' - 242: 62, # 'Ы' - 243: 20, # 'з' - 244: 51, # 'З' - 245: 25, # 'ш' - 246: 57, # 'Ш' - 247: 30, # 'э' - 248: 47, # 'Э' - 249: 29, # 'щ' - 250: 63, # 'Щ' - 251: 22, # 'ч' - 252: 50, # 'Ч' - 253: 251, # '§' - 254: 252, # '■' - 255: 255, # '\xa0' -} - -IBM855_RUSSIAN_MODEL = SingleByteCharSetModel( - charset_name="IBM855", - language="Russian", - char_to_order_map=IBM855_RUSSIAN_CHAR_TO_ORDER, - language_model=RUSSIAN_LANG_MODEL, - typical_positive_ratio=0.976601, - keep_ascii_letters=False, - alphabet="ЁАБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяё", -) - -KOI8_R_RUSSIAN_CHAR_TO_ORDER = { - 0: 255, # '\x00' - 1: 255, # '\x01' - 2: 255, # '\x02' - 3: 255, # '\x03' - 4: 255, # '\x04' - 5: 255, # '\x05' - 6: 255, # '\x06' - 7: 255, # '\x07' - 8: 255, # '\x08' - 9: 255, # '\t' - 10: 254, # '\n' - 11: 255, # '\x0b' - 12: 255, # '\x0c' - 13: 254, # '\r' - 14: 255, # '\x0e' - 15: 255, # '\x0f' - 16: 255, # '\x10' - 17: 255, # '\x11' - 18: 255, # '\x12' - 19: 255, # '\x13' - 20: 255, # '\x14' - 21: 255, # '\x15' - 22: 255, # '\x16' - 23: 255, # '\x17' - 24: 255, # '\x18' - 25: 255, # '\x19' - 26: 255, # '\x1a' - 27: 255, # '\x1b' - 28: 255, # '\x1c' - 29: 255, # '\x1d' - 30: 255, # '\x1e' - 31: 255, # '\x1f' - 32: 253, # ' ' - 33: 253, # '!' - 34: 253, # '"' - 35: 253, # '#' - 36: 253, # '$' - 37: 253, # '%' - 38: 253, # '&' - 39: 253, # "'" - 40: 253, # '(' - 41: 253, # ')' - 42: 253, # '*' - 43: 253, # '+' - 44: 253, # ',' - 45: 253, # '-' - 46: 253, # '.' - 47: 253, # '/' - 48: 252, # '0' - 49: 252, # '1' - 50: 252, # '2' - 51: 252, # '3' - 52: 252, # '4' - 53: 252, # '5' - 54: 252, # '6' - 55: 252, # '7' - 56: 252, # '8' - 57: 252, # '9' - 58: 253, # ':' - 59: 253, # ';' - 60: 253, # '<' - 61: 253, # '=' - 62: 253, # '>' - 63: 253, # '?' 
- 64: 253, # '@' - 65: 142, # 'A' - 66: 143, # 'B' - 67: 144, # 'C' - 68: 145, # 'D' - 69: 146, # 'E' - 70: 147, # 'F' - 71: 148, # 'G' - 72: 149, # 'H' - 73: 150, # 'I' - 74: 151, # 'J' - 75: 152, # 'K' - 76: 74, # 'L' - 77: 153, # 'M' - 78: 75, # 'N' - 79: 154, # 'O' - 80: 155, # 'P' - 81: 156, # 'Q' - 82: 157, # 'R' - 83: 158, # 'S' - 84: 159, # 'T' - 85: 160, # 'U' - 86: 161, # 'V' - 87: 162, # 'W' - 88: 163, # 'X' - 89: 164, # 'Y' - 90: 165, # 'Z' - 91: 253, # '[' - 92: 253, # '\\' - 93: 253, # ']' - 94: 253, # '^' - 95: 253, # '_' - 96: 253, # '`' - 97: 71, # 'a' - 98: 172, # 'b' - 99: 66, # 'c' - 100: 173, # 'd' - 101: 65, # 'e' - 102: 174, # 'f' - 103: 76, # 'g' - 104: 175, # 'h' - 105: 64, # 'i' - 106: 176, # 'j' - 107: 177, # 'k' - 108: 77, # 'l' - 109: 72, # 'm' - 110: 178, # 'n' - 111: 69, # 'o' - 112: 67, # 'p' - 113: 179, # 'q' - 114: 78, # 'r' - 115: 73, # 's' - 116: 180, # 't' - 117: 181, # 'u' - 118: 79, # 'v' - 119: 182, # 'w' - 120: 183, # 'x' - 121: 184, # 'y' - 122: 185, # 'z' - 123: 253, # '{' - 124: 253, # '|' - 125: 253, # '}' - 126: 253, # '~' - 127: 253, # '\x7f' - 128: 191, # '─' - 129: 192, # '│' - 130: 193, # '┌' - 131: 194, # '┐' - 132: 195, # '└' - 133: 196, # '┘' - 134: 197, # '├' - 135: 198, # '┤' - 136: 199, # '┬' - 137: 200, # '┴' - 138: 201, # '┼' - 139: 202, # '▀' - 140: 203, # '▄' - 141: 204, # '█' - 142: 205, # '▌' - 143: 206, # '▐' - 144: 207, # '░' - 145: 208, # '▒' - 146: 209, # '▓' - 147: 210, # '⌠' - 148: 211, # '■' - 149: 212, # '∙' - 150: 213, # '√' - 151: 214, # '≈' - 152: 215, # '≤' - 153: 216, # '≥' - 154: 217, # '\xa0' - 155: 218, # '⌡' - 156: 219, # '°' - 157: 220, # '²' - 158: 221, # '·' - 159: 222, # '÷' - 160: 223, # '═' - 161: 224, # '║' - 162: 225, # '╒' - 163: 68, # 'ё' - 164: 226, # '╓' - 165: 227, # '╔' - 166: 228, # '╕' - 167: 229, # '╖' - 168: 230, # '╗' - 169: 231, # '╘' - 170: 232, # '╙' - 171: 233, # '╚' - 172: 234, # '╛' - 173: 235, # '╜' - 174: 236, # '╝' - 175: 237, # '╞' - 176: 238, # '╟' - 177: 239, # '╠' - 178: 240, # '╡' - 179: 241, # 'Ё' - 180: 242, # '╢' - 181: 243, # '╣' - 182: 244, # '╤' - 183: 245, # '╥' - 184: 246, # '╦' - 185: 247, # '╧' - 186: 248, # '╨' - 187: 249, # '╩' - 188: 250, # '╪' - 189: 251, # '╫' - 190: 252, # '╬' - 191: 253, # '©' - 192: 27, # 'ю' - 193: 3, # 'а' - 194: 21, # 'б' - 195: 28, # 'ц' - 196: 13, # 'д' - 197: 2, # 'е' - 198: 39, # 'ф' - 199: 19, # 'г' - 200: 26, # 'х' - 201: 4, # 'и' - 202: 23, # 'й' - 203: 11, # 'к' - 204: 8, # 'л' - 205: 12, # 'м' - 206: 5, # 'н' - 207: 1, # 'о' - 208: 15, # 'п' - 209: 16, # 'я' - 210: 9, # 'р' - 211: 7, # 'с' - 212: 6, # 'т' - 213: 14, # 'у' - 214: 24, # 'ж' - 215: 10, # 'в' - 216: 17, # 'ь' - 217: 18, # 'ы' - 218: 20, # 'з' - 219: 25, # 'ш' - 220: 30, # 'э' - 221: 29, # 'щ' - 222: 22, # 'ч' - 223: 54, # 'ъ' - 224: 59, # 'Ю' - 225: 37, # 'А' - 226: 44, # 'Б' - 227: 58, # 'Ц' - 228: 41, # 'Д' - 229: 48, # 'Е' - 230: 53, # 'Ф' - 231: 46, # 'Г' - 232: 55, # 'Х' - 233: 42, # 'И' - 234: 60, # 'Й' - 235: 36, # 'К' - 236: 49, # 'Л' - 237: 38, # 'М' - 238: 31, # 'Н' - 239: 34, # 'О' - 240: 35, # 'П' - 241: 43, # 'Я' - 242: 45, # 'Р' - 243: 32, # 'С' - 244: 40, # 'Т' - 245: 52, # 'У' - 246: 56, # 'Ж' - 247: 33, # 'В' - 248: 61, # 'Ь' - 249: 62, # 'Ы' - 250: 51, # 'З' - 251: 57, # 'Ш' - 252: 47, # 'Э' - 253: 63, # 'Щ' - 254: 50, # 'Ч' - 255: 70, # 'Ъ' -} - -KOI8_R_RUSSIAN_MODEL = SingleByteCharSetModel( - charset_name="KOI8-R", - language="Russian", - char_to_order_map=KOI8_R_RUSSIAN_CHAR_TO_ORDER, - language_model=RUSSIAN_LANG_MODEL, - 
typical_positive_ratio=0.976601, - keep_ascii_letters=False, - alphabet="ЁАБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяё", -) - -MACCYRILLIC_RUSSIAN_CHAR_TO_ORDER = { - 0: 255, # '\x00' - 1: 255, # '\x01' - 2: 255, # '\x02' - 3: 255, # '\x03' - 4: 255, # '\x04' - 5: 255, # '\x05' - 6: 255, # '\x06' - 7: 255, # '\x07' - 8: 255, # '\x08' - 9: 255, # '\t' - 10: 254, # '\n' - 11: 255, # '\x0b' - 12: 255, # '\x0c' - 13: 254, # '\r' - 14: 255, # '\x0e' - 15: 255, # '\x0f' - 16: 255, # '\x10' - 17: 255, # '\x11' - 18: 255, # '\x12' - 19: 255, # '\x13' - 20: 255, # '\x14' - 21: 255, # '\x15' - 22: 255, # '\x16' - 23: 255, # '\x17' - 24: 255, # '\x18' - 25: 255, # '\x19' - 26: 255, # '\x1a' - 27: 255, # '\x1b' - 28: 255, # '\x1c' - 29: 255, # '\x1d' - 30: 255, # '\x1e' - 31: 255, # '\x1f' - 32: 253, # ' ' - 33: 253, # '!' - 34: 253, # '"' - 35: 253, # '#' - 36: 253, # '$' - 37: 253, # '%' - 38: 253, # '&' - 39: 253, # "'" - 40: 253, # '(' - 41: 253, # ')' - 42: 253, # '*' - 43: 253, # '+' - 44: 253, # ',' - 45: 253, # '-' - 46: 253, # '.' - 47: 253, # '/' - 48: 252, # '0' - 49: 252, # '1' - 50: 252, # '2' - 51: 252, # '3' - 52: 252, # '4' - 53: 252, # '5' - 54: 252, # '6' - 55: 252, # '7' - 56: 252, # '8' - 57: 252, # '9' - 58: 253, # ':' - 59: 253, # ';' - 60: 253, # '<' - 61: 253, # '=' - 62: 253, # '>' - 63: 253, # '?' - 64: 253, # '@' - 65: 142, # 'A' - 66: 143, # 'B' - 67: 144, # 'C' - 68: 145, # 'D' - 69: 146, # 'E' - 70: 147, # 'F' - 71: 148, # 'G' - 72: 149, # 'H' - 73: 150, # 'I' - 74: 151, # 'J' - 75: 152, # 'K' - 76: 74, # 'L' - 77: 153, # 'M' - 78: 75, # 'N' - 79: 154, # 'O' - 80: 155, # 'P' - 81: 156, # 'Q' - 82: 157, # 'R' - 83: 158, # 'S' - 84: 159, # 'T' - 85: 160, # 'U' - 86: 161, # 'V' - 87: 162, # 'W' - 88: 163, # 'X' - 89: 164, # 'Y' - 90: 165, # 'Z' - 91: 253, # '[' - 92: 253, # '\\' - 93: 253, # ']' - 94: 253, # '^' - 95: 253, # '_' - 96: 253, # '`' - 97: 71, # 'a' - 98: 172, # 'b' - 99: 66, # 'c' - 100: 173, # 'd' - 101: 65, # 'e' - 102: 174, # 'f' - 103: 76, # 'g' - 104: 175, # 'h' - 105: 64, # 'i' - 106: 176, # 'j' - 107: 177, # 'k' - 108: 77, # 'l' - 109: 72, # 'm' - 110: 178, # 'n' - 111: 69, # 'o' - 112: 67, # 'p' - 113: 179, # 'q' - 114: 78, # 'r' - 115: 73, # 's' - 116: 180, # 't' - 117: 181, # 'u' - 118: 79, # 'v' - 119: 182, # 'w' - 120: 183, # 'x' - 121: 184, # 'y' - 122: 185, # 'z' - 123: 253, # '{' - 124: 253, # '|' - 125: 253, # '}' - 126: 253, # '~' - 127: 253, # '\x7f' - 128: 37, # 'А' - 129: 44, # 'Б' - 130: 33, # 'В' - 131: 46, # 'Г' - 132: 41, # 'Д' - 133: 48, # 'Е' - 134: 56, # 'Ж' - 135: 51, # 'З' - 136: 42, # 'И' - 137: 60, # 'Й' - 138: 36, # 'К' - 139: 49, # 'Л' - 140: 38, # 'М' - 141: 31, # 'Н' - 142: 34, # 'О' - 143: 35, # 'П' - 144: 45, # 'Р' - 145: 32, # 'С' - 146: 40, # 'Т' - 147: 52, # 'У' - 148: 53, # 'Ф' - 149: 55, # 'Х' - 150: 58, # 'Ц' - 151: 50, # 'Ч' - 152: 57, # 'Ш' - 153: 63, # 'Щ' - 154: 70, # 'Ъ' - 155: 62, # 'Ы' - 156: 61, # 'Ь' - 157: 47, # 'Э' - 158: 59, # 'Ю' - 159: 43, # 'Я' - 160: 191, # '†' - 161: 192, # '°' - 162: 193, # 'Ґ' - 163: 194, # '£' - 164: 195, # '§' - 165: 196, # '•' - 166: 197, # '¶' - 167: 198, # 'І' - 168: 199, # '®' - 169: 200, # '©' - 170: 201, # '™' - 171: 202, # 'Ђ' - 172: 203, # 'ђ' - 173: 204, # '≠' - 174: 205, # 'Ѓ' - 175: 206, # 'ѓ' - 176: 207, # '∞' - 177: 208, # '±' - 178: 209, # '≤' - 179: 210, # '≥' - 180: 211, # 'і' - 181: 212, # 'µ' - 182: 213, # 'ґ' - 183: 214, # 'Ј' - 184: 215, # 'Є' - 185: 216, # 'є' - 186: 217, # 'Ї' - 187: 218, # 'ї' - 188: 219, # 'Љ' - 189: 220, # 'љ' - 190: 
221, # 'Њ' - 191: 222, # 'њ' - 192: 223, # 'ј' - 193: 224, # 'Ѕ' - 194: 225, # '¬' - 195: 226, # '√' - 196: 227, # 'ƒ' - 197: 228, # '≈' - 198: 229, # '∆' - 199: 230, # '«' - 200: 231, # '»' - 201: 232, # '…' - 202: 233, # '\xa0' - 203: 234, # 'Ћ' - 204: 235, # 'ћ' - 205: 236, # 'Ќ' - 206: 237, # 'ќ' - 207: 238, # 'ѕ' - 208: 239, # '–' - 209: 240, # '—' - 210: 241, # '“' - 211: 242, # '”' - 212: 243, # '‘' - 213: 244, # '’' - 214: 245, # '÷' - 215: 246, # '„' - 216: 247, # 'Ў' - 217: 248, # 'ў' - 218: 249, # 'Џ' - 219: 250, # 'џ' - 220: 251, # '№' - 221: 252, # 'Ё' - 222: 68, # 'ё' - 223: 16, # 'я' - 224: 3, # 'а' - 225: 21, # 'б' - 226: 10, # 'в' - 227: 19, # 'г' - 228: 13, # 'д' - 229: 2, # 'е' - 230: 24, # 'ж' - 231: 20, # 'з' - 232: 4, # 'и' - 233: 23, # 'й' - 234: 11, # 'к' - 235: 8, # 'л' - 236: 12, # 'м' - 237: 5, # 'н' - 238: 1, # 'о' - 239: 15, # 'п' - 240: 9, # 'р' - 241: 7, # 'с' - 242: 6, # 'т' - 243: 14, # 'у' - 244: 39, # 'ф' - 245: 26, # 'х' - 246: 28, # 'ц' - 247: 22, # 'ч' - 248: 25, # 'ш' - 249: 29, # 'щ' - 250: 54, # 'ъ' - 251: 18, # 'ы' - 252: 17, # 'ь' - 253: 30, # 'э' - 254: 27, # 'ю' - 255: 255, # '€' -} - -MACCYRILLIC_RUSSIAN_MODEL = SingleByteCharSetModel( - charset_name="MacCyrillic", - language="Russian", - char_to_order_map=MACCYRILLIC_RUSSIAN_CHAR_TO_ORDER, - language_model=RUSSIAN_LANG_MODEL, - typical_positive_ratio=0.976601, - keep_ascii_letters=False, - alphabet="ЁАБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяё", -) - -ISO_8859_5_RUSSIAN_CHAR_TO_ORDER = { - 0: 255, # '\x00' - 1: 255, # '\x01' - 2: 255, # '\x02' - 3: 255, # '\x03' - 4: 255, # '\x04' - 5: 255, # '\x05' - 6: 255, # '\x06' - 7: 255, # '\x07' - 8: 255, # '\x08' - 9: 255, # '\t' - 10: 254, # '\n' - 11: 255, # '\x0b' - 12: 255, # '\x0c' - 13: 254, # '\r' - 14: 255, # '\x0e' - 15: 255, # '\x0f' - 16: 255, # '\x10' - 17: 255, # '\x11' - 18: 255, # '\x12' - 19: 255, # '\x13' - 20: 255, # '\x14' - 21: 255, # '\x15' - 22: 255, # '\x16' - 23: 255, # '\x17' - 24: 255, # '\x18' - 25: 255, # '\x19' - 26: 255, # '\x1a' - 27: 255, # '\x1b' - 28: 255, # '\x1c' - 29: 255, # '\x1d' - 30: 255, # '\x1e' - 31: 255, # '\x1f' - 32: 253, # ' ' - 33: 253, # '!' - 34: 253, # '"' - 35: 253, # '#' - 36: 253, # '$' - 37: 253, # '%' - 38: 253, # '&' - 39: 253, # "'" - 40: 253, # '(' - 41: 253, # ')' - 42: 253, # '*' - 43: 253, # '+' - 44: 253, # ',' - 45: 253, # '-' - 46: 253, # '.' - 47: 253, # '/' - 48: 252, # '0' - 49: 252, # '1' - 50: 252, # '2' - 51: 252, # '3' - 52: 252, # '4' - 53: 252, # '5' - 54: 252, # '6' - 55: 252, # '7' - 56: 252, # '8' - 57: 252, # '9' - 58: 253, # ':' - 59: 253, # ';' - 60: 253, # '<' - 61: 253, # '=' - 62: 253, # '>' - 63: 253, # '?' 
- 64: 253, # '@' - 65: 142, # 'A' - 66: 143, # 'B' - 67: 144, # 'C' - 68: 145, # 'D' - 69: 146, # 'E' - 70: 147, # 'F' - 71: 148, # 'G' - 72: 149, # 'H' - 73: 150, # 'I' - 74: 151, # 'J' - 75: 152, # 'K' - 76: 74, # 'L' - 77: 153, # 'M' - 78: 75, # 'N' - 79: 154, # 'O' - 80: 155, # 'P' - 81: 156, # 'Q' - 82: 157, # 'R' - 83: 158, # 'S' - 84: 159, # 'T' - 85: 160, # 'U' - 86: 161, # 'V' - 87: 162, # 'W' - 88: 163, # 'X' - 89: 164, # 'Y' - 90: 165, # 'Z' - 91: 253, # '[' - 92: 253, # '\\' - 93: 253, # ']' - 94: 253, # '^' - 95: 253, # '_' - 96: 253, # '`' - 97: 71, # 'a' - 98: 172, # 'b' - 99: 66, # 'c' - 100: 173, # 'd' - 101: 65, # 'e' - 102: 174, # 'f' - 103: 76, # 'g' - 104: 175, # 'h' - 105: 64, # 'i' - 106: 176, # 'j' - 107: 177, # 'k' - 108: 77, # 'l' - 109: 72, # 'm' - 110: 178, # 'n' - 111: 69, # 'o' - 112: 67, # 'p' - 113: 179, # 'q' - 114: 78, # 'r' - 115: 73, # 's' - 116: 180, # 't' - 117: 181, # 'u' - 118: 79, # 'v' - 119: 182, # 'w' - 120: 183, # 'x' - 121: 184, # 'y' - 122: 185, # 'z' - 123: 253, # '{' - 124: 253, # '|' - 125: 253, # '}' - 126: 253, # '~' - 127: 253, # '\x7f' - 128: 191, # '\x80' - 129: 192, # '\x81' - 130: 193, # '\x82' - 131: 194, # '\x83' - 132: 195, # '\x84' - 133: 196, # '\x85' - 134: 197, # '\x86' - 135: 198, # '\x87' - 136: 199, # '\x88' - 137: 200, # '\x89' - 138: 201, # '\x8a' - 139: 202, # '\x8b' - 140: 203, # '\x8c' - 141: 204, # '\x8d' - 142: 205, # '\x8e' - 143: 206, # '\x8f' - 144: 207, # '\x90' - 145: 208, # '\x91' - 146: 209, # '\x92' - 147: 210, # '\x93' - 148: 211, # '\x94' - 149: 212, # '\x95' - 150: 213, # '\x96' - 151: 214, # '\x97' - 152: 215, # '\x98' - 153: 216, # '\x99' - 154: 217, # '\x9a' - 155: 218, # '\x9b' - 156: 219, # '\x9c' - 157: 220, # '\x9d' - 158: 221, # '\x9e' - 159: 222, # '\x9f' - 160: 223, # '\xa0' - 161: 224, # 'Ё' - 162: 225, # 'Ђ' - 163: 226, # 'Ѓ' - 164: 227, # 'Є' - 165: 228, # 'Ѕ' - 166: 229, # 'І' - 167: 230, # 'Ї' - 168: 231, # 'Ј' - 169: 232, # 'Љ' - 170: 233, # 'Њ' - 171: 234, # 'Ћ' - 172: 235, # 'Ќ' - 173: 236, # '\xad' - 174: 237, # 'Ў' - 175: 238, # 'Џ' - 176: 37, # 'А' - 177: 44, # 'Б' - 178: 33, # 'В' - 179: 46, # 'Г' - 180: 41, # 'Д' - 181: 48, # 'Е' - 182: 56, # 'Ж' - 183: 51, # 'З' - 184: 42, # 'И' - 185: 60, # 'Й' - 186: 36, # 'К' - 187: 49, # 'Л' - 188: 38, # 'М' - 189: 31, # 'Н' - 190: 34, # 'О' - 191: 35, # 'П' - 192: 45, # 'Р' - 193: 32, # 'С' - 194: 40, # 'Т' - 195: 52, # 'У' - 196: 53, # 'Ф' - 197: 55, # 'Х' - 198: 58, # 'Ц' - 199: 50, # 'Ч' - 200: 57, # 'Ш' - 201: 63, # 'Щ' - 202: 70, # 'Ъ' - 203: 62, # 'Ы' - 204: 61, # 'Ь' - 205: 47, # 'Э' - 206: 59, # 'Ю' - 207: 43, # 'Я' - 208: 3, # 'а' - 209: 21, # 'б' - 210: 10, # 'в' - 211: 19, # 'г' - 212: 13, # 'д' - 213: 2, # 'е' - 214: 24, # 'ж' - 215: 20, # 'з' - 216: 4, # 'и' - 217: 23, # 'й' - 218: 11, # 'к' - 219: 8, # 'л' - 220: 12, # 'м' - 221: 5, # 'н' - 222: 1, # 'о' - 223: 15, # 'п' - 224: 9, # 'р' - 225: 7, # 'с' - 226: 6, # 'т' - 227: 14, # 'у' - 228: 39, # 'ф' - 229: 26, # 'х' - 230: 28, # 'ц' - 231: 22, # 'ч' - 232: 25, # 'ш' - 233: 29, # 'щ' - 234: 54, # 'ъ' - 235: 18, # 'ы' - 236: 17, # 'ь' - 237: 30, # 'э' - 238: 27, # 'ю' - 239: 16, # 'я' - 240: 239, # '№' - 241: 68, # 'ё' - 242: 240, # 'ђ' - 243: 241, # 'ѓ' - 244: 242, # 'є' - 245: 243, # 'ѕ' - 246: 244, # 'і' - 247: 245, # 'ї' - 248: 246, # 'ј' - 249: 247, # 'љ' - 250: 248, # 'њ' - 251: 249, # 'ћ' - 252: 250, # 'ќ' - 253: 251, # '§' - 254: 252, # 'ў' - 255: 255, # 'џ' -} - -ISO_8859_5_RUSSIAN_MODEL = SingleByteCharSetModel( - charset_name="ISO-8859-5", - language="Russian", - 
char_to_order_map=ISO_8859_5_RUSSIAN_CHAR_TO_ORDER, - language_model=RUSSIAN_LANG_MODEL, - typical_positive_ratio=0.976601, - keep_ascii_letters=False, - alphabet="ЁАБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюяё", -) diff --git a/spaces/Truepic/ai-content-credentials/static/script.js b/spaces/Truepic/ai-content-credentials/static/script.js deleted file mode 100644 index 6d713b4773be0442a52de23751bf52d22c61e5c9..0000000000000000000000000000000000000000 --- a/spaces/Truepic/ai-content-credentials/static/script.js +++ /dev/null @@ -1,258 +0,0 @@ -const textGenForm = document.querySelector(".text-gen-form"); -const textGenInput = document.getElementById("text-gen-input"); -const model = document.getElementById("model"); -const textGenSubmit = document.getElementById("text-gen-submit"); -const spinner = document.getElementById("spinner"); -const placeholder = document.getElementById("placeholder"); -const downloadLink = document.getElementById("download-link"); -const verificationNav = document.getElementById("verification-nav"); -const certificateNav = document.getElementById("certificate-nav"); -const verification = document.querySelector(".verification"); -const verificationOutput = document.getElementById("verification-output"); -const certificateOutput = document.getElementById("certificate-output"); -const verificationDetails = document.querySelector(".verification-details"); -const parameters = document.querySelector(".parameters"); -const modelParam = document.querySelector(".parameters .model"); -const promptParam = document.querySelector(".parameters .prompt"); -const certificateList = document.getElementById("certificate-list"); -var certificates = []; - -[textGenInput, model].forEach((item) => { - item.addEventListener("change", async (event) => { - setButtonStatus(); - }); -}); - -const setButtonStatus = () => { - if (textGenInput.value && model.value) textGenSubmit.classList.add("active"); - else textGenSubmit.classList.remove("active"); -}; - -const generateImage = async (text, model) => { - const inferResponse = await fetch(`generate?prompt=${text}&model=${model}`); - const inferJson = await inferResponse.json(); - - return inferJson.response; -}; - -textGenForm.addEventListener("submit", async (event) => { - event.preventDefault(); - - if (!textGenInput.value || !model.value) return; - - verificationDetails.style.display = "none"; - parameters.style.display = "none"; - downloadLink.style.display = "none"; - - try { - if (placeholder) placeholder.remove(); - if (document.getElementById("result")) - document.getElementById("result").remove(); - spinner.style.display = "block"; - - const resp = await generateImage(textGenInput.value, model.value); - const path = "/" + resp; - - var resultsContainer = document.getElementById("image-container"); - - var truepicDisplay = document.createElement("truepic-display"); - truepicDisplay.addEventListener( - "validate", - setVerificationOutputFromValidation - ); - truepicDisplay.addEventListener( - "validate", - setCertificateOutputFromValidation - ); - - truepicDisplay.setAttribute("id", "result"); - truepicDisplay.setAttribute("active", ""); - var truepic = document.createElement("img"); - truepic.src = path; - - truepicDisplay.appendChild(truepic); - - spinner.style.display = "none"; - resultsContainer.appendChild(truepicDisplay); - - downloadLink.style.display = "block"; - downloadLink.href = path; - downloadLink.download = resp; - - modelParam.innerHTML = model.value; - promptParam.innerHTML = textGenInput.value; - 
parameters.style.display = "block"; - } catch (err) { - console.error(err); - } -}); - -function setVerificationOutputFromValidation(event) { - verificationDetails.style.display = "block"; - return setVerificationOutput(event.detail.manifestStore.toJSON()); -} - -function setCertificateOutputFromValidation(event) { - return setCertificateOutput(event.detail.manifestStore); -} - -function setVerificationOutput(output = null) { - verificationOutput.innerHTML = ""; - - if (!output) { - return; - } - - const viewer = new JSONViewer(); - - verificationOutput.appendChild(viewer.getContainer()); - - viewer.showJSON(output); -} - -function setCertificateOutput(manifestStore = null) { - const certificate = manifestStore?.activeManifest?.certificate; - - if (!certificate) { - return; - } - - certificates = [ - { - der: certificate.der, - name: certificate.subjectName, - decoded: new x509.X509Certificate(certificate.der), - }, - ...certificate.chain.map((certificate) => ({ - der: certificate.der, - decoded: new x509.X509Certificate(certificate.der), - })), - ]; - - certificates.forEach((certificate) => { - certificate.transformed = transformCert(certificate.decoded); - }); - - certificateList.innerHTML = ""; - - certificates.forEach((certificate, index) => { - var li = document.createElement("li"); - if (index == 0) li.classList.add("active"); - li.appendChild( - document.createTextNode(certificate.transformed.subjectCommonName) - ); - li.addEventListener("click", function (e) { - setCertificate(index); - const lis = document.querySelectorAll("#certificate-list li"); - - lis.forEach((element) => { - element.classList.remove("active"); - }); - - this.classList.add("active"); - }); - - certificateList.appendChild(li); - }); - - setCertificate(0); -} - -function transformCert(certificate) { - const { - issuer, - subject, - notAfter: expired, - notBefore: issued, - serialNumber, - publicKey: { - algorithm: { - name: algorithm, - modulusLength: modulus, - namedCurve: namedCurve, - }, - }, - } = certificate; - - const parsedSubject = parseCertificateValues(subject); - const parsedIssuer = parseCertificateValues(issuer); - - return { - issuerCommonName: parsedIssuer["CN"], - issuerOrganizationUnit: parsedIssuer["OU"], - issuerOrganization: parsedIssuer["O"], - issuerCountry: parsedIssuer["C"], - subjectCommonName: parsedSubject["CN"], - subjectOrganizationUnit: parsedSubject["OU"], - subjectOrganization: parsedSubject["O"], - subjectCountry: parsedSubject["C"], - issued, - expired, - serialNumber, - algorithm, - modulus, - namedCurve, - }; -} - -verificationNav.addEventListener("click", (event) => { - event.target.classList.add("active"); - certificateNav.classList.remove("active"); - - verification.style.display = "block"; - certificateOutput.style.display = "none"; -}); - -certificateNav.addEventListener("click", (event) => { - event.target.classList.add("active"); - verificationNav.classList.remove("active"); - - certificateOutput.style.display = "block"; - verification.style.display = "none"; -}); - -function setCertificate(ind) { - const certificate = certificates[ind].transformed; - - document.querySelector(".details .issuerCommonName").innerHTML = - certificate.issuerCommonName; - document.querySelector(".details .issuerOrganizationUnit").innerHTML = - certificate.issuerOrganizationUnit; - document.querySelector(".details .issuerOrganization").innerHTML = - certificate.issuerOrganization; - document.querySelector(".details .issuerCountry").innerHTML = - certificate.issuerCountry; - 
document.querySelector(".details .subjectCommonName").innerHTML = - certificate.subjectCommonName; - document.querySelector(".details .subjectOrganizationUnit").innerHTML = - certificate.subjectOrganizationUnit; - document.querySelector(".details .subjectOrganization").innerHTML = - certificate.subjectOrganization; - document.querySelector(".details .subjectCountry").innerHTML = - certificate.subjectCountry; - document.querySelector(".details .issued").innerHTML = certificate.issued; - document.querySelector(".details .expired").innerHTML = certificate.expired; - document.querySelector(".details .serialNumber").innerHTML = - certificate.serialNumber; - document.querySelector(".details .algorithm").innerHTML = - certificate.algorithm; - - if (certificate.namedCurve !== undefined) { - document.querySelector(".details .namedCurve").innerHTML = - certificate.namedCurve; - document.querySelector("#curveContainer").style.display = "block"; - } else { - document.querySelector("#curveContainer").style.display = "none"; - } -} - -function parseCertificateValues(input) { - const params = new URLSearchParams(input.replaceAll(",", "&")); - const responses = {}; - - for (const entry of params.entries()) { - responses[entry[0].trim()] = entry[1]; - } - - return responses; -} diff --git a/spaces/Truym/rvc-pendu/app.py b/spaces/Truym/rvc-pendu/app.py deleted file mode 100644 index d1d4fb32cf4b9622530b9fdba4af2ffea3a48c79..0000000000000000000000000000000000000000 --- a/spaces/Truym/rvc-pendu/app.py +++ /dev/null @@ -1,188 +0,0 @@ -import os -import json -import argparse -import traceback -import logging -import gradio as gr -import numpy as np -import librosa -import torch -import asyncio -import edge_tts -from datetime import datetime -from fairseq import checkpoint_utils -from infer_pack.models import SynthesizerTrnMs256NSFsid, SynthesizerTrnMs256NSFsid_nono -from vc_infer_pipeline import VC -from config import ( - is_half, - device -) -logging.getLogger("numba").setLevel(logging.WARNING) -limitation = os.getenv("SYSTEM") == "spaces" # limit audio length in huggingface spaces - -def create_vc_fn(tgt_sr, net_g, vc, if_f0, file_index, file_big_npy): - def vc_fn( - input_audio, - f0_up_key, - f0_method, - index_rate, - tts_mode, - tts_text, - tts_voice - ): - try: - if tts_mode: - if len(tts_text) > 100 and limitation: - return "Text is too long", None - if tts_text is None or tts_voice is None: - return "You need to enter text and select a voice", None - asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3")) - audio, sr = librosa.load("tts.mp3", sr=16000, mono=True) - else: - if args.files: - audio, sr = librosa.load(input_audio, sr=16000, mono=True) - else: - if input_audio is None: - return "You need to upload an audio", None - sampling_rate, audio = input_audio - duration = audio.shape[0] / sampling_rate - if duration > 20 and limitation: - return "Please upload an audio file that is less than 20 seconds. 
If you need to generate a longer audio file, please use Colab.", None - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - times = [0, 0, 0] - f0_up_key = int(f0_up_key) - audio_opt = vc.pipeline( - hubert_model, - net_g, - 0, - audio, - times, - f0_up_key, - f0_method, - file_index, - file_big_npy, - index_rate, - if_f0, - ) - print( - f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s" - ) - return "Success", (tgt_sr, audio_opt) - except: - info = traceback.format_exc() - print(info) - return info, (None, None) - return vc_fn - -def load_hubert(): - global hubert_model - models, _, _ = checkpoint_utils.load_model_ensemble_and_task( - ["hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(device) - if is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - -def change_to_tts_mode(tts_mode): - if tts_mode: - return gr.Audio.update(visible=False), gr.Textbox.update(visible=True), gr.Dropdown.update(visible=True) - else: - return gr.Audio.update(visible=True), gr.Textbox.update(visible=False), gr.Dropdown.update(visible=False) - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--api', action="store_true", default=False) - parser.add_argument("--share", action="store_true", default=False, help="share gradio app") - parser.add_argument("--files", action="store_true", default=False, help="load audio from path") - args, unknown = parser.parse_known_args() - load_hubert() - models = [] - tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices()) - voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list] - with open("weights/model_info.json", "r", encoding="utf-8") as f: - models_info = json.load(f) - for name, info in models_info.items(): - if not info['enable']: - continue - title = info['title'] - author = info.get("author", None) - cover = f"weights/{name}/{info['cover']}" - index = f"weights/{name}/{info['feature_retrieval_library']}" - npy = f"weights/{name}/{info['feature_file']}" - cpt = torch.load(f"weights/{name}/{name}.pth", map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - if_f0 = cpt.get("f0", 1) - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) # 不加这一行清不干净, 真奇葩 - net_g.eval().to(device) - if is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, device, is_half) - models.append((name, title, author, cover, create_vc_fn(tgt_sr, net_g, vc, if_f0, index, npy))) - with gr.Blocks() as app: - gr.Markdown( - "#
<center> RVC Models\n" - "## <center>
    The input audio should be clean and pure voice without background music.\n" - "![visitor badge](https://visitor-badge.glitch.me/badge?page_id=ardha27.Rvc-Models)\n\n" - "[![image](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/12rbZk9CoXD1m84dqBW5IKMBjiVY6tcoj?usp=share_link)\n\n" - "[![Duplicate this Space](https://huggingface.co/datasets/huggingface/badges/raw/main/duplicate-this-space-sm-dark.svg)](https://huggingface.co/spaces/ardha27pi/rvc-models?duplicate=true)\n\n" - "[![Train Own Voice](https://badgen.net/badge/icon/github?icon=github&label=Train%20Voice)](https://github.com/ardha27/AI-Song-Cover-RVC)\n\n" - "[![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/R6R7AH1FA)\n\n" - ) - with gr.Tabs(): - for (name, title, author, cover, vc_fn) in models: - with gr.TabItem(name): - with gr.Row(): - gr.Markdown( - '
<div align="center">' - f'<div>{title}</div>\n'+ - (f'<div>Model author: {author}</div>' if author else "")+ - (f'<img src="file/{cover}">' if cover else "")+ - '</div>
    ' - ) - with gr.Row(): - with gr.Column(): - if args.files: - vc_input = gr.Textbox(label="Input audio path") - else: - vc_input = gr.Audio(label="Input audio"+' (less than 20 seconds)' if limitation else '') - vc_transpose = gr.Number(label="Transpose", value=0) - vc_f0method = gr.Radio( - label="Pitch extraction algorithm, PM is fast but Harvest is better for low frequencies", - choices=["pm", "harvest"], - value="pm", - interactive=True, - ) - vc_index_ratio = gr.Slider( - minimum=0, - maximum=1, - label="Retrieval feature ratio", - value=0.6, - interactive=True, - ) - tts_mode = gr.Checkbox(label="tts (use edge-tts as input)", value=False) - tts_text = gr.Textbox(visible=False,label="TTS text (100 words limitation)" if limitation else "TTS text") - tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female") - vc_submit = gr.Button("Generate", variant="primary") - with gr.Column(): - vc_output1 = gr.Textbox(label="Output Message") - vc_output2 = gr.Audio(label="Output Audio") - vc_submit.click(vc_fn, [vc_input, vc_transpose, vc_f0method, vc_index_ratio, tts_mode, tts_text, tts_voice], [vc_output1, vc_output2]) - tts_mode.change(change_to_tts_mode, [tts_mode], [vc_input, tts_text, tts_voice]) - app.queue(concurrency_count=1, max_size=20, api_open=args.api).launch(share=args.share) \ No newline at end of file diff --git a/spaces/Vegecken/sovits4dzl/app.py b/spaces/Vegecken/sovits4dzl/app.py deleted file mode 100644 index 89666b58dcf8f195d4690b519fbecf3ae27339d8..0000000000000000000000000000000000000000 --- a/spaces/Vegecken/sovits4dzl/app.py +++ /dev/null @@ -1,71 +0,0 @@ -import io -import os - -os.system("wget -P hubert/ https://huggingface.co/spaces/innnky/nanami/resolve/main/checkpoint_best_legacy_500.pt") -import gradio as gr -import librosa -import numpy as np -import soundfile -from inference.infer_tool import Svc -import logging - -logging.getLogger('numba').setLevel(logging.WARNING) -logging.getLogger('markdown_it').setLevel(logging.WARNING) -logging.getLogger('urllib3').setLevel(logging.WARNING) -logging.getLogger('matplotlib').setLevel(logging.WARNING) - -model = Svc("logs/44k/G_71200.pth", "configs/config.json", cluster_model_path="logs/44k/kmeans_10000.pt") -print("OK") - - - -def vc_fn(sid, input_audio, vc_transform, auto_f0,cluster_ratio, noise_scale): - if input_audio is None: - return "You need to upload an audio", None - sampling_rate, audio = input_audio - # print(audio.shape,sampling_rate) - duration = audio.shape[0] / sampling_rate - if duration > 30: - return "这只是个DEMO只能有30s的长度", None - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - print(audio.shape) - out_wav_path = "temp.wav" - soundfile.write(out_wav_path, audio, 16000, format="wav") - print( cluster_ratio, auto_f0, noise_scale) - out_audio, out_sr = model.infer(sid, vc_transform, out_wav_path, - cluster_infer_ratio=cluster_ratio, - auto_predict_f0=auto_f0, - noice_scale=noise_scale - ) - return "Success", (44100, out_audio.numpy()) - - -app = gr.Blocks() -with app: - with gr.Tabs(): - with gr.TabItem("Basic"): - gr.Markdown(value=""" - sovits4.0 东知了在线demo - - - """) - spks = list(model.spk2id.keys()) - sid = gr.Dropdown(label="音色", choices=["dzl"], value="dzl") - vc_input3 = gr.Audio(label="上传音频(长度小于30秒)") - vc_transform = 
gr.Number(label="变调(整数,可以正负,半音数量,升高八度就是12)", value=0) - cluster_ratio = gr.Number(label="聚类模型混合比例,0-1之间,默认为0不启用聚类,能提升音色相似度,但会导致咬字下降(如果使用建议0.5左右)", value=0) - auto_f0 = gr.Checkbox(label="自动f0预测,配合聚类模型f0预测效果更好,会导致变调功能失效(仅限转换语音,歌声不要勾选此项会究极跑调)", value=False) - noise_scale = gr.Number(label="noise_scale 建议不要动,会影响音质,玄学参数", value=0.4) - vc_submit = gr.Button("转换", variant="primary") - vc_output1 = gr.Textbox(label="Output Message") - vc_output2 = gr.Audio(label="Output Audio") - vc_submit.click(vc_fn, [sid, vc_input3, vc_transform,auto_f0,cluster_ratio, noise_scale], [vc_output1, vc_output2]) - - app.launch() - - - diff --git a/spaces/XzJosh/Gun-Bert-VITS2/monotonic_align/__init__.py b/spaces/XzJosh/Gun-Bert-VITS2/monotonic_align/__init__.py deleted file mode 100644 index 75603d26cf2b8d6196f5a68a89f9e49d8e519bc8..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Gun-Bert-VITS2/monotonic_align/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -from numpy import zeros, int32, float32 -from torch import from_numpy - -from .core import maximum_path_jit - -def maximum_path(neg_cent, mask): - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(float32) - path = zeros(neg_cent.shape, dtype=int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32) - maximum_path_jit(path, neg_cent, t_t_max, t_s_max) - return from_numpy(path).to(device=device, dtype=dtype) diff --git a/spaces/XzJosh/JM-Bert-VITS2/attentions.py b/spaces/XzJosh/JM-Bert-VITS2/attentions.py deleted file mode 100644 index 1192dd7268c20c11010e73a6017ed09549695afe..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/JM-Bert-VITS2/attentions.py +++ /dev/null @@ -1,344 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import logging - -logger = logging.getLogger(__name__) - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, isflow = True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - #if isflow: - # cond_layer = torch.nn.Conv1d(256, 2*hidden_channels*n_layers, 1) - # self.cond_pre = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, 1) - # self.cond_layer = weight_norm(cond_layer, name='weight') - # self.gin_channels = 256 - self.cond_layer_idx = self.n_layers - if 'gin_channels' in kwargs: - self.gin_channels = kwargs['gin_channels'] - if self.gin_channels != 0: - self.spk_emb_linear = nn.Linear(self.gin_channels, self.hidden_channels) - # vits2 says 
3rd block, so idx is 2 by default - self.cond_layer_idx = kwargs['cond_layer_idx'] if 'cond_layer_idx' in kwargs else 2 - logging.debug(self.gin_channels, self.cond_layer_idx) - assert self.cond_layer_idx < self.n_layers, 'cond_layer_idx should be less than n_layers' - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - def forward(self, x, x_mask, g=None): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - if i == self.cond_layer_idx and g is not None: - g = self.spk_emb_linear(g.transpose(1, 2)) - g = g.transpose(1, 2) - x = x + g - x = x * x_mask - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, 
proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." 
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/grit/modeling/text/file_utils.py b/spaces/Yiqin/ChatVID/model/vision/grit_src/grit/modeling/text/file_utils.py deleted file mode 100644 index 51918cf3857471e4ffb5b617d73ee8b9eed0989e..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/vision/grit_src/grit/modeling/text/file_utils.py +++ /dev/null @@ -1,256 +0,0 @@ -# Utilities for working with the local dataset cache. -# This file is adapted from the AllenNLP library at https://github.com/allenai/allennlp -# Copyright by the AllenNLP authors. - -from __future__ import absolute_import, division, print_function, unicode_literals - -import sys -import json -import logging -import os -import shutil -import tempfile -import fnmatch -from functools import wraps -from hashlib import sha256 -from io import open - -import boto3 -import requests -from botocore.exceptions import ClientError -from tqdm import tqdm - -try: - from torch.hub import _get_torch_home - torch_cache_home = _get_torch_home() -except ImportError: - torch_cache_home = os.path.expanduser( - os.getenv('TORCH_HOME', os.path.join( - os.getenv('XDG_CACHE_HOME', '~/.cache'), 'torch'))) -default_cache_path = os.path.join(torch_cache_home, 'pytorch_transformers') - -try: - from urllib.parse import urlparse -except ImportError: - from urlparse import urlparse - -try: - from pathlib import Path - PYTORCH_PRETRAINED_BERT_CACHE = Path( - os.getenv('PYTORCH_PRETRAINED_BERT_CACHE', default_cache_path)) -except (AttributeError, ImportError): - PYTORCH_PRETRAINED_BERT_CACHE = os.getenv('PYTORCH_PRETRAINED_BERT_CACHE', - default_cache_path) - -logger = logging.getLogger(__name__) # pylint: disable=invalid-name - - -def url_to_filename(url, etag=None): - """ - Convert `url` into a hashed filename in a repeatable way. - If `etag` is specified, append its hash to the url's, delimited - by a period. 
- """ - url_bytes = url.encode('utf-8') - url_hash = sha256(url_bytes) - filename = url_hash.hexdigest() - - if etag: - etag_bytes = etag.encode('utf-8') - etag_hash = sha256(etag_bytes) - filename += '.' + etag_hash.hexdigest() - - return filename - - -def filename_to_url(filename, cache_dir=None): - """ - Return the url and etag (which may be ``None``) stored for `filename`. - Raise ``EnvironmentError`` if `filename` or its stored metadata do not exist. - """ - if cache_dir is None: - cache_dir = PYTORCH_PRETRAINED_BERT_CACHE - if sys.version_info[0] == 3 and isinstance(cache_dir, Path): - cache_dir = str(cache_dir) - - cache_path = os.path.join(cache_dir, filename) - if not os.path.exists(cache_path): - raise EnvironmentError("file {} not found".format(cache_path)) - - meta_path = cache_path + '.json' - if not os.path.exists(meta_path): - raise EnvironmentError("file {} not found".format(meta_path)) - - with open(meta_path, encoding="utf-8") as meta_file: - metadata = json.load(meta_file) - url = metadata['url'] - etag = metadata['etag'] - - return url, etag - - -def cached_path(url_or_filename, cache_dir=None): - """ - Given something that might be a URL (or might be a local path), - determine which. If it's a URL, download the file and cache it, and - return the path to the cached file. If it's already a local path, - make sure the file exists and then return the path. - """ - if cache_dir is None: - cache_dir = PYTORCH_PRETRAINED_BERT_CACHE - if sys.version_info[0] == 3 and isinstance(url_or_filename, Path): - url_or_filename = str(url_or_filename) - if sys.version_info[0] == 3 and isinstance(cache_dir, Path): - cache_dir = str(cache_dir) - - parsed = urlparse(url_or_filename) - - if parsed.scheme in ('http', 'https', 's3'): - # URL, so get it from the cache (downloading if necessary) - return get_from_cache(url_or_filename, cache_dir) - elif os.path.exists(url_or_filename): - # File, and it exists. - return url_or_filename - elif parsed.scheme == '': - # File, but it doesn't exist. - raise EnvironmentError("file {} not found".format(url_or_filename)) - else: - # Something unknown - raise ValueError("unable to parse {} as a URL or as a local path".format(url_or_filename)) - - -def split_s3_path(url): - """Split a full s3 path into the bucket name and path.""" - parsed = urlparse(url) - if not parsed.netloc or not parsed.path: - raise ValueError("bad s3 path {}".format(url)) - bucket_name = parsed.netloc - s3_path = parsed.path - # Remove '/' at beginning of path. - if s3_path.startswith("/"): - s3_path = s3_path[1:] - return bucket_name, s3_path - - -def s3_request(func): - """ - Wrapper function for s3 requests in order to create more helpful error - messages. 
- """ - - @wraps(func) - def wrapper(url, *args, **kwargs): - try: - return func(url, *args, **kwargs) - except ClientError as exc: - if int(exc.response["Error"]["Code"]) == 404: - raise EnvironmentError("file {} not found".format(url)) - else: - raise - - return wrapper - - -@s3_request -def s3_etag(url): - """Check ETag on S3 object.""" - s3_resource = boto3.resource("s3") - bucket_name, s3_path = split_s3_path(url) - s3_object = s3_resource.Object(bucket_name, s3_path) - return s3_object.e_tag - - -@s3_request -def s3_get(url, temp_file): - """Pull a file directly from S3.""" - s3_resource = boto3.resource("s3") - bucket_name, s3_path = split_s3_path(url) - s3_resource.Bucket(bucket_name).download_fileobj(s3_path, temp_file) - - -def http_get(url, temp_file): - req = requests.get(url, stream=True) - content_length = req.headers.get('Content-Length') - total = int(content_length) if content_length is not None else None - progress = tqdm(unit="B", total=total) - for chunk in req.iter_content(chunk_size=1024): - if chunk: # filter out keep-alive new chunks - progress.update(len(chunk)) - temp_file.write(chunk) - progress.close() - - -def get_from_cache(url, cache_dir=None): - """ - Given a URL, look for the corresponding dataset in the local cache. - If it's not there, download it. Then return the path to the cached file. - """ - if cache_dir is None: - cache_dir = PYTORCH_PRETRAINED_BERT_CACHE - if sys.version_info[0] == 3 and isinstance(cache_dir, Path): - cache_dir = str(cache_dir) - if sys.version_info[0] == 2 and not isinstance(cache_dir, str): - cache_dir = str(cache_dir) - - if not os.path.exists(cache_dir): - os.makedirs(cache_dir) - - # Get eTag to add to filename, if it exists. - if url.startswith("s3://"): - etag = s3_etag(url) - else: - try: - response = requests.head(url, allow_redirects=True) - if response.status_code != 200: - etag = None - else: - etag = response.headers.get("ETag") - except EnvironmentError: - etag = None - - if sys.version_info[0] == 2 and etag is not None: - etag = etag.decode('utf-8') - filename = url_to_filename(url, etag) - - # get cache path to put the file - cache_path = os.path.join(cache_dir, filename) - - # If we don't have a connection (etag is None) and can't identify the file - # try to get the last downloaded one - if not os.path.exists(cache_path) and etag is None: - matching_files = fnmatch.filter(os.listdir(cache_dir), filename + '.*') - matching_files = list(filter(lambda s: not s.endswith('.json'), matching_files)) - if matching_files: - cache_path = os.path.join(cache_dir, matching_files[-1]) - - if not os.path.exists(cache_path): - # Download to temporary file, then copy to cache dir once finished. - # Otherwise you get corrupt cache entries if the download gets interrupted. 
- with tempfile.NamedTemporaryFile() as temp_file: - logger.info("%s not found in cache, downloading to %s", url, temp_file.name) - - # GET file object - if url.startswith("s3://"): - s3_get(url, temp_file) - else: - http_get(url, temp_file) - - # we are copying the file before closing it, so flush to avoid truncation - temp_file.flush() - # shutil.copyfileobj() starts at the current position, so go to the start - temp_file.seek(0) - - logger.info("copying %s to cache at %s", temp_file.name, cache_path) - with open(cache_path, 'wb') as cache_file: - shutil.copyfileobj(temp_file, cache_file) - - logger.info("creating metadata file for %s", cache_path) - meta = {'url': url, 'etag': etag} - meta_path = cache_path + '.json' - with open(meta_path, 'w') as meta_file: - output_string = json.dumps(meta) - meta_file.write(output_string) - - logger.info("removing temp file %s", temp_file.name) - - return cache_path diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.py b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.py deleted file mode 100644 index 40844ddeb8d47ff58a6af49ab35bad84e14f5721..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.py +++ /dev/null @@ -1,8 +0,0 @@ -from ..common.optim import SGD as optimizer -from ..common.coco_schedule import lr_multiplier_1x as lr_multiplier -from ..common.data.coco import dataloader -from ..common.models.mask_rcnn_fpn import model -from ..common.train import train - -model.backbone.bottom_up.freeze_at = 2 -train.init_checkpoint = "detectron2://ImageNetPretrained/MSRA/R-50.pkl" diff --git a/spaces/aadnk/whisper-webui/app-local.py b/spaces/aadnk/whisper-webui/app-local.py deleted file mode 100644 index c7717d096ca5f95177f0dba03cd62ca729bae9f3..0000000000000000000000000000000000000000 --- a/spaces/aadnk/whisper-webui/app-local.py +++ /dev/null @@ -1,5 +0,0 @@ -# Run the app with no audio file restrictions -from app import create_ui -from src.config import ApplicationConfig - -create_ui(ApplicationConfig.create_default(input_audio_max_duration=-1)) \ No newline at end of file diff --git a/spaces/abby711/FaceRestoration/gfpgan/data/ffhq_degradation_dataset.py b/spaces/abby711/FaceRestoration/gfpgan/data/ffhq_degradation_dataset.py deleted file mode 100644 index 64e5755e1211f171cb2a883d47e8d253061f90aa..0000000000000000000000000000000000000000 --- a/spaces/abby711/FaceRestoration/gfpgan/data/ffhq_degradation_dataset.py +++ /dev/null @@ -1,230 +0,0 @@ -import cv2 -import math -import numpy as np -import os.path as osp -import torch -import torch.utils.data as data -from basicsr.data import degradations as degradations -from basicsr.data.data_util import paths_from_folder -from basicsr.data.transforms import augment -from basicsr.utils import FileClient, get_root_logger, imfrombytes, img2tensor -from basicsr.utils.registry import DATASET_REGISTRY -from torchvision.transforms.functional import (adjust_brightness, adjust_contrast, adjust_hue, adjust_saturation, - normalize) - - -@DATASET_REGISTRY.register() -class FFHQDegradationDataset(data.Dataset): - """FFHQ dataset for GFPGAN. - - It reads high resolution images, and then generate low-quality (LQ) images on-the-fly. - - Args: - opt (dict): Config for train datasets. 
It contains the following keys: - dataroot_gt (str): Data root path for gt. - io_backend (dict): IO backend type and other kwarg. - mean (list | tuple): Image mean. - std (list | tuple): Image std. - use_hflip (bool): Whether to horizontally flip. - Please see more options in the codes. - """ - - def __init__(self, opt): - super(FFHQDegradationDataset, self).__init__() - self.opt = opt - # file client (io backend) - self.file_client = None - self.io_backend_opt = opt['io_backend'] - - self.gt_folder = opt['dataroot_gt'] - self.mean = opt['mean'] - self.std = opt['std'] - self.out_size = opt['out_size'] - - self.crop_components = opt.get('crop_components', False) # facial components - self.eye_enlarge_ratio = opt.get('eye_enlarge_ratio', 1) # whether enlarge eye regions - - if self.crop_components: - # load component list from a pre-process pth files - self.components_list = torch.load(opt.get('component_path')) - - # file client (lmdb io backend) - if self.io_backend_opt['type'] == 'lmdb': - self.io_backend_opt['db_paths'] = self.gt_folder - if not self.gt_folder.endswith('.lmdb'): - raise ValueError(f"'dataroot_gt' should end with '.lmdb', but received {self.gt_folder}") - with open(osp.join(self.gt_folder, 'meta_info.txt')) as fin: - self.paths = [line.split('.')[0] for line in fin] - else: - # disk backend: scan file list from a folder - self.paths = paths_from_folder(self.gt_folder) - - # degradation configurations - self.blur_kernel_size = opt['blur_kernel_size'] - self.kernel_list = opt['kernel_list'] - self.kernel_prob = opt['kernel_prob'] - self.blur_sigma = opt['blur_sigma'] - self.downsample_range = opt['downsample_range'] - self.noise_range = opt['noise_range'] - self.jpeg_range = opt['jpeg_range'] - - # color jitter - self.color_jitter_prob = opt.get('color_jitter_prob') - self.color_jitter_pt_prob = opt.get('color_jitter_pt_prob') - self.color_jitter_shift = opt.get('color_jitter_shift', 20) - # to gray - self.gray_prob = opt.get('gray_prob') - - logger = get_root_logger() - logger.info(f'Blur: blur_kernel_size {self.blur_kernel_size}, sigma: [{", ".join(map(str, self.blur_sigma))}]') - logger.info(f'Downsample: downsample_range [{", ".join(map(str, self.downsample_range))}]') - logger.info(f'Noise: [{", ".join(map(str, self.noise_range))}]') - logger.info(f'JPEG compression: [{", ".join(map(str, self.jpeg_range))}]') - - if self.color_jitter_prob is not None: - logger.info(f'Use random color jitter. Prob: {self.color_jitter_prob}, shift: {self.color_jitter_shift}') - if self.gray_prob is not None: - logger.info(f'Use random gray. Prob: {self.gray_prob}') - self.color_jitter_shift /= 255. 
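For reference, a minimal sketch of how this dataset might be constructed (not part of the original file): the option keys below mirror those read in __init__ above, but every value and path is an illustrative assumption rather than the project's actual training configuration.

# Hypothetical usage sketch -- paths and numbers are placeholders, not the real GFPGAN config.
from gfpgan.data.ffhq_degradation_dataset import FFHQDegradationDataset

opt = {
    'dataroot_gt': 'datasets/ffhq_512',   # assumed folder of aligned high-resolution faces
    'io_backend': {'type': 'disk'},
    'out_size': 512,
    'mean': [0.5, 0.5, 0.5],
    'std': [0.5, 0.5, 0.5],
    'use_hflip': True,
    # degradation settings (illustrative ranges)
    'blur_kernel_size': 41,
    'kernel_list': ['iso', 'aniso'],
    'kernel_prob': [0.5, 0.5],
    'blur_sigma': [0.1, 10],
    'downsample_range': [0.8, 8],
    'noise_range': [0, 20],
    'jpeg_range': [60, 100],
    'color_jitter_prob': 0.3,
    'color_jitter_shift': 20,
    'gray_prob': 0.01,
}

dataset = FFHQDegradationDataset(opt)
sample = dataset[0]   # dict with 'lq' and 'gt' tensors plus 'gt_path'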
- - @staticmethod - def color_jitter(img, shift): - """jitter color: randomly jitter the RGB values, in numpy formats""" - jitter_val = np.random.uniform(-shift, shift, 3).astype(np.float32) - img = img + jitter_val - img = np.clip(img, 0, 1) - return img - - @staticmethod - def color_jitter_pt(img, brightness, contrast, saturation, hue): - """jitter color: randomly jitter the brightness, contrast, saturation, and hue, in torch Tensor formats""" - fn_idx = torch.randperm(4) - for fn_id in fn_idx: - if fn_id == 0 and brightness is not None: - brightness_factor = torch.tensor(1.0).uniform_(brightness[0], brightness[1]).item() - img = adjust_brightness(img, brightness_factor) - - if fn_id == 1 and contrast is not None: - contrast_factor = torch.tensor(1.0).uniform_(contrast[0], contrast[1]).item() - img = adjust_contrast(img, contrast_factor) - - if fn_id == 2 and saturation is not None: - saturation_factor = torch.tensor(1.0).uniform_(saturation[0], saturation[1]).item() - img = adjust_saturation(img, saturation_factor) - - if fn_id == 3 and hue is not None: - hue_factor = torch.tensor(1.0).uniform_(hue[0], hue[1]).item() - img = adjust_hue(img, hue_factor) - return img - - def get_component_coordinates(self, index, status): - """Get facial component (left_eye, right_eye, mouth) coordinates from a pre-loaded pth file""" - components_bbox = self.components_list[f'{index:08d}'] - if status[0]: # hflip - # exchange right and left eye - tmp = components_bbox['left_eye'] - components_bbox['left_eye'] = components_bbox['right_eye'] - components_bbox['right_eye'] = tmp - # modify the width coordinate - components_bbox['left_eye'][0] = self.out_size - components_bbox['left_eye'][0] - components_bbox['right_eye'][0] = self.out_size - components_bbox['right_eye'][0] - components_bbox['mouth'][0] = self.out_size - components_bbox['mouth'][0] - - # get coordinates - locations = [] - for part in ['left_eye', 'right_eye', 'mouth']: - mean = components_bbox[part][0:2] - half_len = components_bbox[part][2] - if 'eye' in part: - half_len *= self.eye_enlarge_ratio - loc = np.hstack((mean - half_len + 1, mean + half_len)) - loc = torch.from_numpy(loc).float() - locations.append(loc) - return locations - - def __getitem__(self, index): - if self.file_client is None: - self.file_client = FileClient(self.io_backend_opt.pop('type'), **self.io_backend_opt) - - # load gt image - # Shape: (h, w, c); channel order: BGR; image range: [0, 1], float32. 
- gt_path = self.paths[index] - img_bytes = self.file_client.get(gt_path) - img_gt = imfrombytes(img_bytes, float32=True) - - # random horizontal flip - img_gt, status = augment(img_gt, hflip=self.opt['use_hflip'], rotation=False, return_status=True) - h, w, _ = img_gt.shape - - # get facial component coordinates - if self.crop_components: - locations = self.get_component_coordinates(index, status) - loc_left_eye, loc_right_eye, loc_mouth = locations - - # ------------------------ generate lq image ------------------------ # - # blur - kernel = degradations.random_mixed_kernels( - self.kernel_list, - self.kernel_prob, - self.blur_kernel_size, - self.blur_sigma, - self.blur_sigma, [-math.pi, math.pi], - noise_range=None) - img_lq = cv2.filter2D(img_gt, -1, kernel) - # downsample - scale = np.random.uniform(self.downsample_range[0], self.downsample_range[1]) - img_lq = cv2.resize(img_lq, (int(w // scale), int(h // scale)), interpolation=cv2.INTER_LINEAR) - # noise - if self.noise_range is not None: - img_lq = degradations.random_add_gaussian_noise(img_lq, self.noise_range) - # jpeg compression - if self.jpeg_range is not None: - img_lq = degradations.random_add_jpg_compression(img_lq, self.jpeg_range) - - # resize to original size - img_lq = cv2.resize(img_lq, (w, h), interpolation=cv2.INTER_LINEAR) - - # random color jitter (only for lq) - if self.color_jitter_prob is not None and (np.random.uniform() < self.color_jitter_prob): - img_lq = self.color_jitter(img_lq, self.color_jitter_shift) - # random to gray (only for lq) - if self.gray_prob and np.random.uniform() < self.gray_prob: - img_lq = cv2.cvtColor(img_lq, cv2.COLOR_BGR2GRAY) - img_lq = np.tile(img_lq[:, :, None], [1, 1, 3]) - if self.opt.get('gt_gray'): # whether convert GT to gray images - img_gt = cv2.cvtColor(img_gt, cv2.COLOR_BGR2GRAY) - img_gt = np.tile(img_gt[:, :, None], [1, 1, 3]) # repeat the color channels - - # BGR to RGB, HWC to CHW, numpy to tensor - img_gt, img_lq = img2tensor([img_gt, img_lq], bgr2rgb=True, float32=True) - - # random color jitter (pytorch version) (only for lq) - if self.color_jitter_pt_prob is not None and (np.random.uniform() < self.color_jitter_pt_prob): - brightness = self.opt.get('brightness', (0.5, 1.5)) - contrast = self.opt.get('contrast', (0.5, 1.5)) - saturation = self.opt.get('saturation', (0, 1.5)) - hue = self.opt.get('hue', (-0.1, 0.1)) - img_lq = self.color_jitter_pt(img_lq, brightness, contrast, saturation, hue) - - # round and clip - img_lq = torch.clamp((img_lq * 255.0).round(), 0, 255) / 255. - - # normalize - normalize(img_gt, self.mean, self.std, inplace=True) - normalize(img_lq, self.mean, self.std, inplace=True) - - if self.crop_components: - return_dict = { - 'lq': img_lq, - 'gt': img_gt, - 'gt_path': gt_path, - 'loc_left_eye': loc_left_eye, - 'loc_right_eye': loc_right_eye, - 'loc_mouth': loc_mouth - } - return return_dict - else: - return {'lq': img_lq, 'gt': img_gt, 'gt_path': gt_path} - - def __len__(self): - return len(self.paths) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/ops/box_iou_rotated.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/ops/box_iou_rotated.py deleted file mode 100644 index 2d78015e9c2a9e7a52859b4e18f84a9aa63481a0..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/ops/box_iou_rotated.py +++ /dev/null @@ -1,45 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
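Before the next file: the low-quality branch of `FFHQDegradationDataset.__getitem__` above reduces to a blur → downsample → noise → JPEG → resize-back chain. The sketch below reproduces that chain on a dummy image with plain OpenCV/NumPy stand-ins for the basicsr `degradations` helpers; the real code samples a random mixed blur kernel rather than a fixed Gaussian.

```python
import cv2
import numpy as np

img_gt = np.random.rand(512, 512, 3).astype(np.float32)  # stand-in for a loaded GT face, range [0, 1]
h, w, _ = img_gt.shape

# blur (fixed Gaussian as a stand-in for degradations.random_mixed_kernels + cv2.filter2D)
img_lq = cv2.GaussianBlur(img_gt, (41, 41), sigmaX=5)

# downsample by a random factor drawn from downsample_range
scale = np.random.uniform(0.8, 8)
img_lq = cv2.resize(img_lq, (int(w // scale), int(h // scale)), interpolation=cv2.INTER_LINEAR)

# additive Gaussian noise drawn from noise_range (given in /255 units)
sigma = np.random.uniform(0, 20) / 255.0
img_lq = np.clip(img_lq + np.random.normal(0, sigma, img_lq.shape).astype(np.float32), 0, 1)

# JPEG compression round-trip at a random quality from jpeg_range
quality = int(np.random.uniform(60, 100))
_, buf = cv2.imencode('.jpg', (img_lq * 255).astype(np.uint8), [cv2.IMWRITE_JPEG_QUALITY, quality])
img_lq = cv2.imdecode(buf, cv2.IMREAD_COLOR).astype(np.float32) / 255.0

# resize back to the GT resolution, as in the original __getitem__
img_lq = cv2.resize(img_lq, (w, h), interpolation=cv2.INTER_LINEAR)
```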
-from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', ['box_iou_rotated']) - - -def box_iou_rotated(bboxes1, bboxes2, mode='iou', aligned=False): - """Return intersection-over-union (Jaccard index) of boxes. - - Both sets of boxes are expected to be in - (x_center, y_center, width, height, angle) format. - - If ``aligned`` is ``False``, then calculate the ious between each bbox - of bboxes1 and bboxes2, otherwise the ious between each aligned pair of - bboxes1 and bboxes2. - - Arguments: - boxes1 (Tensor): rotated bboxes 1. \ - It has shape (N, 5), indicating (x, y, w, h, theta) for each row. - Note that theta is in radian. - boxes2 (Tensor): rotated bboxes 2. \ - It has shape (M, 5), indicating (x, y, w, h, theta) for each row. - Note that theta is in radian. - mode (str): "iou" (intersection over union) or iof (intersection over - foreground). - - Returns: - ious(Tensor): shape (N, M) if aligned == False else shape (N,) - """ - assert mode in ['iou', 'iof'] - mode_dict = {'iou': 0, 'iof': 1} - mode_flag = mode_dict[mode] - rows = bboxes1.size(0) - cols = bboxes2.size(0) - if aligned: - ious = bboxes1.new_zeros(rows) - else: - ious = bboxes1.new_zeros((rows * cols)) - bboxes1 = bboxes1.contiguous() - bboxes2 = bboxes2.contiguous() - ext_module.box_iou_rotated( - bboxes1, bboxes2, ious, mode_flag=mode_flag, aligned=aligned) - if not aligned: - ious = ious.view(rows, cols) - return ious diff --git a/spaces/abrar-lohia/text-2-character-anim/VQTrans/models/quantize_cnn.py b/spaces/abrar-lohia/text-2-character-anim/VQTrans/models/quantize_cnn.py deleted file mode 100644 index b796772749efda9a225bdcb0e7262791a972a710..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/VQTrans/models/quantize_cnn.py +++ /dev/null @@ -1,415 +0,0 @@ -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F - -class QuantizeEMAReset(nn.Module): - def __init__(self, nb_code, code_dim, args): - super().__init__() - self.nb_code = nb_code - self.code_dim = code_dim - self.mu = args.mu - self.reset_codebook() - - def reset_codebook(self): - self.init = False - self.code_sum = None - self.code_count = None - if torch.cuda.is_available(): - self.register_buffer('codebook', torch.zeros(self.nb_code, self.code_dim).cuda()) - else: - self.register_buffer('codebook', torch.zeros(self.nb_code, self.code_dim)) - - def _tile(self, x): - nb_code_x, code_dim = x.shape - if nb_code_x < self.nb_code: - n_repeats = (self.nb_code + nb_code_x - 1) // nb_code_x - std = 0.01 / np.sqrt(code_dim) - out = x.repeat(n_repeats, 1) - out = out + torch.randn_like(out) * std - else : - out = x - return out - - def init_codebook(self, x): - out = self._tile(x) - self.codebook = out[:self.nb_code] - self.code_sum = self.codebook.clone() - self.code_count = torch.ones(self.nb_code, device=self.codebook.device) - self.init = True - - @torch.no_grad() - def compute_perplexity(self, code_idx) : - # Calculate new centres - code_onehot = torch.zeros(self.nb_code, code_idx.shape[0], device=code_idx.device) # nb_code, N * L - code_onehot.scatter_(0, code_idx.view(1, code_idx.shape[0]), 1) - - code_count = code_onehot.sum(dim=-1) # nb_code - prob = code_count / torch.sum(code_count) - perplexity = torch.exp(-torch.sum(prob * torch.log(prob + 1e-7))) - return perplexity - - @torch.no_grad() - def update_codebook(self, x, code_idx): - - code_onehot = torch.zeros(self.nb_code, x.shape[0], device=x.device) # nb_code, N * L - code_onehot.scatter_(0, code_idx.view(1, 
x.shape[0]), 1) - - code_sum = torch.matmul(code_onehot, x) # nb_code, w - code_count = code_onehot.sum(dim=-1) # nb_code - - out = self._tile(x) - code_rand = out[:self.nb_code] - - # Update centres - self.code_sum = self.mu * self.code_sum + (1. - self.mu) * code_sum # w, nb_code - self.code_count = self.mu * self.code_count + (1. - self.mu) * code_count # nb_code - - usage = (self.code_count.view(self.nb_code, 1) >= 1.0).float() - code_update = self.code_sum.view(self.nb_code, self.code_dim) / self.code_count.view(self.nb_code, 1) - - self.codebook = usage * code_update + (1 - usage) * code_rand - prob = code_count / torch.sum(code_count) - perplexity = torch.exp(-torch.sum(prob * torch.log(prob + 1e-7))) - - - return perplexity - - def preprocess(self, x): - # NCT -> NTC -> [NT, C] - x = x.permute(0, 2, 1).contiguous() - x = x.view(-1, x.shape[-1]) - return x - - def quantize(self, x): - # Calculate latent code x_l - k_w = self.codebook.t() - distance = torch.sum(x ** 2, dim=-1, keepdim=True) - 2 * torch.matmul(x, k_w) + torch.sum(k_w ** 2, dim=0, - keepdim=True) # (N * L, b) - _, code_idx = torch.min(distance, dim=-1) - return code_idx - - def dequantize(self, code_idx): - x = F.embedding(code_idx, self.codebook) - return x - - - def forward(self, x): - N, width, T = x.shape - - # Preprocess - x = self.preprocess(x) - - # Init codebook if not inited - if self.training and not self.init: - self.init_codebook(x) - - # quantize and dequantize through bottleneck - code_idx = self.quantize(x) - x_d = self.dequantize(code_idx) - - # Update embeddings - if self.training: - perplexity = self.update_codebook(x, code_idx) - else : - perplexity = self.compute_perplexity(code_idx) - - # Loss - commit_loss = F.mse_loss(x, x_d.detach()) - - # Passthrough - x_d = x + (x_d - x).detach() - - # Postprocess - x_d = x_d.view(N, T, -1).permute(0, 2, 1).contiguous() #(N, DIM, T) - - return x_d, commit_loss, perplexity - - - -class Quantizer(nn.Module): - def __init__(self, n_e, e_dim, beta): - super(Quantizer, self).__init__() - - self.e_dim = e_dim - self.n_e = n_e - self.beta = beta - - self.embedding = nn.Embedding(self.n_e, self.e_dim) - self.embedding.weight.data.uniform_(-1.0 / self.n_e, 1.0 / self.n_e) - - def forward(self, z): - - N, width, T = z.shape - z = self.preprocess(z) - assert z.shape[-1] == self.e_dim - z_flattened = z.contiguous().view(-1, self.e_dim) - - # B x V - d = torch.sum(z_flattened ** 2, dim=1, keepdim=True) + \ - torch.sum(self.embedding.weight**2, dim=1) - 2 * \ - torch.matmul(z_flattened, self.embedding.weight.t()) - # B x 1 - min_encoding_indices = torch.argmin(d, dim=1) - z_q = self.embedding(min_encoding_indices).view(z.shape) - - # compute loss for embedding - loss = torch.mean((z_q - z.detach())**2) + self.beta * \ - torch.mean((z_q.detach() - z)**2) - - # preserve gradients - z_q = z + (z_q - z).detach() - z_q = z_q.view(N, T, -1).permute(0, 2, 1).contiguous() #(N, DIM, T) - - min_encodings = F.one_hot(min_encoding_indices, self.n_e).type(z.dtype) - e_mean = torch.mean(min_encodings, dim=0) - perplexity = torch.exp(-torch.sum(e_mean*torch.log(e_mean + 1e-10))) - return z_q, loss, perplexity - - def quantize(self, z): - - assert z.shape[-1] == self.e_dim - - # B x V - d = torch.sum(z ** 2, dim=1, keepdim=True) + \ - torch.sum(self.embedding.weight ** 2, dim=1) - 2 * \ - torch.matmul(z, self.embedding.weight.t()) - # B x 1 - min_encoding_indices = torch.argmin(d, dim=1) - return min_encoding_indices - - def dequantize(self, indices): - - index_flattened = indices.view(-1) 
- z_q = self.embedding(index_flattened) - z_q = z_q.view(indices.shape + (self.e_dim, )).contiguous() - return z_q - - def preprocess(self, x): - # NCT -> NTC -> [NT, C] - x = x.permute(0, 2, 1).contiguous() - x = x.view(-1, x.shape[-1]) - return x - - - -class QuantizeReset(nn.Module): - def __init__(self, nb_code, code_dim, args): - super().__init__() - self.nb_code = nb_code - self.code_dim = code_dim - self.reset_codebook() - self.codebook = nn.Parameter(torch.randn(nb_code, code_dim)) - - def reset_codebook(self): - self.init = False - self.code_count = None - - def _tile(self, x): - nb_code_x, code_dim = x.shape - if nb_code_x < self.nb_code: - n_repeats = (self.nb_code + nb_code_x - 1) // nb_code_x - std = 0.01 / np.sqrt(code_dim) - out = x.repeat(n_repeats, 1) - out = out + torch.randn_like(out) * std - else : - out = x - return out - - def init_codebook(self, x): - out = self._tile(x) - self.codebook = nn.Parameter(out[:self.nb_code]) - self.code_count = torch.ones(self.nb_code, device=self.codebook.device) - self.init = True - - @torch.no_grad() - def compute_perplexity(self, code_idx) : - # Calculate new centres - code_onehot = torch.zeros(self.nb_code, code_idx.shape[0], device=code_idx.device) # nb_code, N * L - code_onehot.scatter_(0, code_idx.view(1, code_idx.shape[0]), 1) - - code_count = code_onehot.sum(dim=-1) # nb_code - prob = code_count / torch.sum(code_count) - perplexity = torch.exp(-torch.sum(prob * torch.log(prob + 1e-7))) - return perplexity - - def update_codebook(self, x, code_idx): - - code_onehot = torch.zeros(self.nb_code, x.shape[0], device=x.device) # nb_code, N * L - code_onehot.scatter_(0, code_idx.view(1, x.shape[0]), 1) - - code_count = code_onehot.sum(dim=-1) # nb_code - - out = self._tile(x) - code_rand = out[:self.nb_code] - - # Update centres - self.code_count = code_count # nb_code - usage = (self.code_count.view(self.nb_code, 1) >= 1.0).float() - - self.codebook.data = usage * self.codebook.data + (1 - usage) * code_rand - prob = code_count / torch.sum(code_count) - perplexity = torch.exp(-torch.sum(prob * torch.log(prob + 1e-7))) - - - return perplexity - - def preprocess(self, x): - # NCT -> NTC -> [NT, C] - x = x.permute(0, 2, 1).contiguous() - x = x.view(-1, x.shape[-1]) - return x - - def quantize(self, x): - # Calculate latent code x_l - k_w = self.codebook.t() - distance = torch.sum(x ** 2, dim=-1, keepdim=True) - 2 * torch.matmul(x, k_w) + torch.sum(k_w ** 2, dim=0, - keepdim=True) # (N * L, b) - _, code_idx = torch.min(distance, dim=-1) - return code_idx - - def dequantize(self, code_idx): - x = F.embedding(code_idx, self.codebook) - return x - - - def forward(self, x): - N, width, T = x.shape - # Preprocess - x = self.preprocess(x) - # Init codebook if not inited - if self.training and not self.init: - self.init_codebook(x) - # quantize and dequantize through bottleneck - code_idx = self.quantize(x) - x_d = self.dequantize(code_idx) - # Update embeddings - if self.training: - perplexity = self.update_codebook(x, code_idx) - else : - perplexity = self.compute_perplexity(code_idx) - - # Loss - commit_loss = F.mse_loss(x, x_d.detach()) - - # Passthrough - x_d = x + (x_d - x).detach() - - # Postprocess - x_d = x_d.view(N, T, -1).permute(0, 2, 1).contiguous() #(N, DIM, T) - - return x_d, commit_loss, perplexity - -class QuantizeEMA(nn.Module): - def __init__(self, nb_code, code_dim, args): - super().__init__() - self.nb_code = nb_code - self.code_dim = code_dim - self.mu = 0.99 - self.reset_codebook() - - def reset_codebook(self): - self.init 
= False - self.code_sum = None - self.code_count = None - self.register_buffer('codebook', torch.zeros(self.nb_code, self.code_dim).cuda()) - - def _tile(self, x): - nb_code_x, code_dim = x.shape - if nb_code_x < self.nb_code: - n_repeats = (self.nb_code + nb_code_x - 1) // nb_code_x - std = 0.01 / np.sqrt(code_dim) - out = x.repeat(n_repeats, 1) - out = out + torch.randn_like(out) * std - else : - out = x - return out - - def init_codebook(self, x): - out = self._tile(x) - self.codebook = out[:self.nb_code] - self.code_sum = self.codebook.clone() - self.code_count = torch.ones(self.nb_code, device=self.codebook.device) - self.init = True - - @torch.no_grad() - def compute_perplexity(self, code_idx) : - # Calculate new centres - code_onehot = torch.zeros(self.nb_code, code_idx.shape[0], device=code_idx.device) # nb_code, N * L - code_onehot.scatter_(0, code_idx.view(1, code_idx.shape[0]), 1) - - code_count = code_onehot.sum(dim=-1) # nb_code - prob = code_count / torch.sum(code_count) - perplexity = torch.exp(-torch.sum(prob * torch.log(prob + 1e-7))) - return perplexity - - @torch.no_grad() - def update_codebook(self, x, code_idx): - - code_onehot = torch.zeros(self.nb_code, x.shape[0], device=x.device) # nb_code, N * L - code_onehot.scatter_(0, code_idx.view(1, x.shape[0]), 1) - - code_sum = torch.matmul(code_onehot, x) # nb_code, w - code_count = code_onehot.sum(dim=-1) # nb_code - - # Update centres - self.code_sum = self.mu * self.code_sum + (1. - self.mu) * code_sum # w, nb_code - self.code_count = self.mu * self.code_count + (1. - self.mu) * code_count # nb_code - - code_update = self.code_sum.view(self.nb_code, self.code_dim) / self.code_count.view(self.nb_code, 1) - - self.codebook = code_update - prob = code_count / torch.sum(code_count) - perplexity = torch.exp(-torch.sum(prob * torch.log(prob + 1e-7))) - - return perplexity - - def preprocess(self, x): - # NCT -> NTC -> [NT, C] - x = x.permute(0, 2, 1).contiguous() - x = x.view(-1, x.shape[-1]) - return x - - def quantize(self, x): - # Calculate latent code x_l - k_w = self.codebook.t() - distance = torch.sum(x ** 2, dim=-1, keepdim=True) - 2 * torch.matmul(x, k_w) + torch.sum(k_w ** 2, dim=0, - keepdim=True) # (N * L, b) - _, code_idx = torch.min(distance, dim=-1) - return code_idx - - def dequantize(self, code_idx): - x = F.embedding(code_idx, self.codebook) - return x - - - def forward(self, x): - N, width, T = x.shape - - # Preprocess - x = self.preprocess(x) - - # Init codebook if not inited - if self.training and not self.init: - self.init_codebook(x) - - # quantize and dequantize through bottleneck - code_idx = self.quantize(x) - x_d = self.dequantize(code_idx) - - # Update embeddings - if self.training: - perplexity = self.update_codebook(x, code_idx) - else : - perplexity = self.compute_perplexity(code_idx) - - # Loss - commit_loss = F.mse_loss(x, x_d.detach()) - - # Passthrough - x_d = x + (x_d - x).detach() - - # Postprocess - x_d = x_d.view(N, T, -1).permute(0, 2, 1).contiguous() #(N, DIM, T) - - return x_d, commit_loss, perplexity \ No newline at end of file diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/graphics/shader.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/graphics/shader.py deleted file mode 100644 index f14d0f638492577cb3cf856e7cb84fd38f7ea0b5..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/graphics/shader.py +++ 
/dev/null @@ -1,1020 +0,0 @@ -from ctypes import * -from weakref import proxy - -import pyglet - -from pyglet.gl import * -from pyglet.graphics.vertexbuffer import BufferObject - - -_debug_gl_shaders = pyglet.options['debug_gl_shaders'] - - -class ShaderException(BaseException): - pass - - -_c_types = { - GL_BYTE: c_byte, - GL_UNSIGNED_BYTE: c_ubyte, - GL_SHORT: c_short, - GL_UNSIGNED_SHORT: c_ushort, - GL_INT: c_int, - GL_UNSIGNED_INT: c_uint, - GL_FLOAT: c_float, - GL_DOUBLE: c_double, -} - -_shader_types = { - 'compute': GL_COMPUTE_SHADER, - 'fragment': GL_FRAGMENT_SHADER, - 'geometry': GL_GEOMETRY_SHADER, - 'tesscontrol': GL_TESS_CONTROL_SHADER, - 'tessevaluation': GL_TESS_EVALUATION_SHADER, - 'vertex': GL_VERTEX_SHADER, -} - -_uniform_getters = { - GLint: glGetUniformiv, - GLfloat: glGetUniformfv, - GLboolean: glGetUniformiv, -} - -_uniform_setters = { - # uniform: gl_type, legacy_setter, setter, length, count - GL_BOOL: (GLint, glUniform1iv, glProgramUniform1iv, 1, 1), - GL_BOOL_VEC2: (GLint, glUniform1iv, glProgramUniform1iv, 2, 1), - GL_BOOL_VEC3: (GLint, glUniform1iv, glProgramUniform1iv, 3, 1), - GL_BOOL_VEC4: (GLint, glUniform1iv, glProgramUniform1iv, 4, 1), - - GL_INT: (GLint, glUniform1iv, glProgramUniform1iv, 1, 1), - GL_INT_VEC2: (GLint, glUniform2iv, glProgramUniform2iv, 2, 1), - GL_INT_VEC3: (GLint, glUniform3iv, glProgramUniform3iv, 3, 1), - GL_INT_VEC4: (GLint, glUniform4iv, glProgramUniform4iv, 4, 1), - - GL_FLOAT: (GLfloat, glUniform1fv, glProgramUniform1fv, 1, 1), - GL_FLOAT_VEC2: (GLfloat, glUniform2fv, glProgramUniform2fv, 2, 1), - GL_FLOAT_VEC3: (GLfloat, glUniform3fv, glProgramUniform3fv, 3, 1), - GL_FLOAT_VEC4: (GLfloat, glUniform4fv, glProgramUniform4fv, 4, 1), - - GL_SAMPLER_1D: (GLint, glUniform1iv, glProgramUniform1iv, 1, 1), - GL_SAMPLER_2D: (GLint, glUniform1iv, glProgramUniform1iv, 1, 1), - GL_SAMPLER_2D_ARRAY: (GLint, glUniform1iv, glProgramUniform1iv, 1, 1), - - GL_SAMPLER_3D: (GLint, glUniform1iv, glProgramUniform1iv, 1, 1), - - GL_FLOAT_MAT2: (GLfloat, glUniformMatrix2fv, glProgramUniformMatrix2fv, 4, 1), - GL_FLOAT_MAT3: (GLfloat, glUniformMatrix3fv, glProgramUniformMatrix3fv, 6, 1), - GL_FLOAT_MAT4: (GLfloat, glUniformMatrix4fv, glProgramUniformMatrix4fv, 16, 1), - - # TODO: test/implement these: - # GL_FLOAT_MAT2x3: glUniformMatrix2x3fv, glProgramUniformMatrix2x3fv, - # GL_FLOAT_MAT2x4: glUniformMatrix2x4fv, glProgramUniformMatrix2x4fv, - # GL_FLOAT_MAT3x2: glUniformMatrix3x2fv, glProgramUniformMatrix3x2fv, - # GL_FLOAT_MAT3x4: glUniformMatrix3x4fv, glProgramUniformMatrix3x4fv, - # GL_FLOAT_MAT4x2: glUniformMatrix4x2fv, glProgramUniformMatrix4x2fv, - # GL_FLOAT_MAT4x3: glUniformMatrix4x3fv, glProgramUniformMatrix4x3fv, - - GL_IMAGE_1D: (GLint, glUniform1iv, glProgramUniform1iv, 1, 1), - GL_IMAGE_2D: (GLint, glUniform1iv, glProgramUniform1iv, 2, 1), - GL_IMAGE_2D_RECT: (GLint, glUniform1iv, glProgramUniform1iv, 3, 1), - GL_IMAGE_3D: (GLint, glUniform1iv, glProgramUniform1iv, 3, 1), - - GL_IMAGE_1D_ARRAY: (GLint, glUniform1iv, glProgramUniform1iv, 2, 1), - GL_IMAGE_2D_ARRAY: (GLint, glUniform1iv, glProgramUniform1iv, 3, 1), - - GL_IMAGE_2D_MULTISAMPLE: (GLint, glUniform1iv, glProgramUniform1iv, 2, 1), - GL_IMAGE_2D_MULTISAMPLE_ARRAY: (GLint, glUniform1iv, glProgramUniform1iv, 3, 1), - - GL_IMAGE_BUFFER: (GLint, glUniform1iv, glProgramUniform1iv, 3, 1), - GL_IMAGE_CUBE: (GLint, glUniform1iv, glProgramUniform1iv, 1, 1), - GL_IMAGE_CUBE_MAP_ARRAY: (GLint, glUniform1iv, glProgramUniform1iv, 3, 1), -} - -_attribute_types = { - GL_BOOL: (1, '?'), - 
GL_BOOL_VEC2: (2, '?'), - GL_BOOL_VEC3: (3, '?'), - GL_BOOL_VEC4: (4, '?'), - - GL_INT: (1, 'i'), - GL_INT_VEC2: (2, 'i'), - GL_INT_VEC3: (3, 'i'), - GL_INT_VEC4: (4, 'i'), - - GL_UNSIGNED_INT: (1, 'I'), - GL_UNSIGNED_INT_VEC2: (2, 'I'), - GL_UNSIGNED_INT_VEC3: (3, 'I'), - GL_UNSIGNED_INT_VEC4: (4, 'I'), - - GL_FLOAT: (1, 'f'), - GL_FLOAT_VEC2: (2, 'f'), - GL_FLOAT_VEC3: (3, 'f'), - GL_FLOAT_VEC4: (4, 'f'), - - GL_DOUBLE: (1, 'd'), - GL_DOUBLE_VEC2: (2, 'd'), - GL_DOUBLE_VEC3: (3, 'd'), - GL_DOUBLE_VEC4: (4, 'd'), -} - - -# Accessor classes: - -class Attribute: - """Abstract accessor for an attribute in a mapped buffer.""" - - def __init__(self, name, location, count, gl_type, normalize): - """Create the attribute accessor. - - :Parameters: - `name` : str - Name of the vertex attribute. - `location` : int - Location (index) of the vertex attribute. - `count` : int - Number of components in the attribute. - `gl_type` : int - OpenGL type enumerant; for example, ``GL_FLOAT`` - `normalize`: bool - True if OpenGL should normalize the values - - """ - self.name = name - self.location = location - self.count = count - - self.gl_type = gl_type - self.c_type = _c_types[gl_type] - self.normalize = normalize - - self.align = sizeof(self.c_type) - self.size = count * self.align - self.stride = self.size - - def enable(self): - """Enable the attribute.""" - glEnableVertexAttribArray(self.location) - - def set_pointer(self, ptr): - """Setup this attribute to point to the currently bound buffer at - the given offset. - - ``offset`` should be based on the currently bound buffer's ``ptr`` - member. - - :Parameters: - `offset` : int - Pointer offset to the currently bound buffer for this - attribute. - - """ - glVertexAttribPointer(self.location, self.count, self.gl_type, self.normalize, self.stride, ptr) - - def get_region(self, buffer, start, count): - """Map a buffer region using this attribute as an accessor. - - The returned region consists of a contiguous array of component - data elements. For example, if this attribute uses 3 floats per - vertex, and the `count` parameter is 4, the number of floats mapped - will be ``3 * 4 = 12``. - - :Parameters: - `buffer` : `AbstractMappable` - The buffer to map. - `start` : int - Offset of the first vertex to map. - `count` : int - Number of vertices to map - - :rtype: `AbstractBufferRegion` - """ - byte_start = self.stride * start - byte_size = self.stride * count - array_count = self.count * count - ptr_type = POINTER(self.c_type * array_count) - return buffer.get_region(byte_start, byte_size, ptr_type) - - def set_region(self, buffer, start, count, data): - """Set the data over a region of the buffer. - - :Parameters: - `buffer` : AbstractMappable` - The buffer to modify. - `start` : int - Offset of the first vertex to set. - `count` : int - Number of vertices to set. - `data` : A sequence of data components. 
- """ - byte_start = self.stride * start - byte_size = self.stride * count - array_count = self.count * count - data = (self.c_type * array_count)(*data) - buffer.set_data_region(data, byte_start, byte_size) - - def __repr__(self): - return f"Attribute(name='{self.name}', location={self.location}, count={self.count})" - - -class _Uniform: - __slots__ = 'program', 'name', 'type', 'location', 'length', 'count', 'get', 'set' - - def __init__(self, program, name, uniform_type, location, dsa): - self.program = program - self.name = name - self.type = uniform_type - self.location = location - - gl_type, gl_setter_legacy, gl_setter_dsa, length, count = _uniform_setters[uniform_type] - gl_setter = gl_setter_dsa if dsa else gl_setter_legacy - gl_getter = _uniform_getters[gl_type] - - self.length = length - self.count = count - - is_matrix = uniform_type in (GL_FLOAT_MAT2, GL_FLOAT_MAT2x3, GL_FLOAT_MAT2x4, - GL_FLOAT_MAT3, GL_FLOAT_MAT3x2, GL_FLOAT_MAT3x4, - GL_FLOAT_MAT4, GL_FLOAT_MAT4x2, GL_FLOAT_MAT4x3) - - c_array = (gl_type * length)() - ptr = cast(c_array, POINTER(gl_type)) - - self.get = self._create_getter_func(program, location, gl_getter, c_array, length) - self.set = self._create_setter_func(program, location, gl_setter, c_array, length, count, ptr, is_matrix, dsa) - - @staticmethod - def _create_getter_func(program, location, gl_getter, c_array, length): - """Factory function for creating simplified Uniform getters""" - - if length == 1: - def getter_func(): - gl_getter(program, location, c_array) - return c_array[0] - else: - def getter_func(): - gl_getter(program, location, c_array) - return c_array[:] - - return getter_func - - @staticmethod - def _create_setter_func(program, location, gl_setter, c_array, length, count, ptr, is_matrix, dsa): - """Factory function for creating simplified Uniform setters""" - if dsa: # Bindless updates: - - if is_matrix: - def setter_func(value): - c_array[:] = value - gl_setter(program, location, count, GL_FALSE, ptr) - elif length == 1 and count == 1: - def setter_func(value): - c_array[0] = value - gl_setter(program, location, count, ptr) - elif length > 1 and count == 1: - def setter_func(values): - c_array[:] = values - gl_setter(program, location, count, ptr) - else: - raise ShaderException("Uniform type not yet supported.") - - return setter_func - - else: - - if is_matrix: - def setter_func(value): - glUseProgram(program) - c_array[:] = value - gl_setter(location, count, GL_FALSE, ptr) - elif length == 1 and count == 1: - def setter_func(value): - glUseProgram(program) - c_array[0] = value - gl_setter(location, count, ptr) - elif length > 1 and count == 1: - def setter_func(values): - glUseProgram(program) - c_array[:] = values - gl_setter(location, count, ptr) - else: - raise ShaderException("Uniform type not yet supported.") - - return setter_func - - def __repr__(self): - return f"Uniform('{self.name}', location={self.location}, length={self.length}, count={self.count})" - - -class UniformBlock: - __slots__ = 'program', 'name', 'index', 'size', 'uniforms', 'view_cls' - - def __init__(self, program, name, index, size, uniforms): - self.program = proxy(program) - self.name = name - self.index = index - self.size = size - self.uniforms = uniforms - self.view_cls = None - - def create_ubo(self, index=0): - """ - Create a new UniformBufferObject from this uniform block. - - :Parameters: - `index` : int - The uniform buffer index the returned UBO will bind itself to. - By default, this is 0. 
- - :rtype: :py:class:`~pyglet.graphics.shader.UniformBufferObject` - """ - if self.view_cls is None: - self.view_cls = self._introspect_uniforms() - return UniformBufferObject(self.view_cls, self.size, index) - - def _introspect_uniforms(self): - """Introspect the block's structure and return a ctypes struct for - manipulating the uniform block's members. - """ - p_id = self.program.id - index = self.index - - active_count = len(self.uniforms) - - # Query the uniform index order and each uniform's offset: - indices = (GLuint * active_count)() - offsets = (GLint * active_count)() - indices_ptr = cast(addressof(indices), POINTER(GLint)) - offsets_ptr = cast(addressof(offsets), POINTER(GLint)) - glGetActiveUniformBlockiv(p_id, index, GL_UNIFORM_BLOCK_ACTIVE_UNIFORM_INDICES, indices_ptr) - glGetActiveUniformsiv(p_id, active_count, indices, GL_UNIFORM_OFFSET, offsets_ptr) - - # Offsets may be returned in non-ascending order, sort them with the corresponding index: - _oi = sorted(zip(offsets, indices), key=lambda x: x[0]) - offsets = [x[0] for x in _oi] + [self.size] - indices = (GLuint * active_count)(*(x[1] for x in _oi)) - - # # Query other uniform information: - # gl_types = (GLint * active_count)() - # mat_stride = (GLint * active_count)() - # gl_types_ptr = cast(addressof(gl_types), POINTER(GLint)) - # stride_ptr = cast(addressof(mat_stride), POINTER(GLint)) - # glGetActiveUniformsiv(p_id, active_count, indices, GL_UNIFORM_TYPE, gl_types_ptr) - # glGetActiveUniformsiv(p_id, active_count, indices, GL_UNIFORM_MATRIX_STRIDE, stride_ptr) - - view_fields = [] - for i in range(active_count): - u_name, gl_type, length = self.uniforms[indices[i]] - size = offsets[i+1] - offsets[i] - c_type_size = sizeof(gl_type) - actual_size = c_type_size * length - padding = size - actual_size - - # TODO: handle stride for multiple matrixes in the same UBO (crashes now) - # m_stride = mat_stride[i] - - arg = (u_name, gl_type * length) if length > 1 else (u_name, gl_type) - view_fields.append(arg) - - if padding > 0: - padding_bytes = padding // c_type_size - view_fields.append((f'_padding{i}', gl_type * padding_bytes)) - - # Custom ctypes Structure for Uniform access: - class View(Structure): - _fields_ = view_fields - - def __repr__(self): - return str(dict(self._fields_)) - - return View - - def __repr__(self): - return f"{self.__class__.__name__}(name={self.name}, index={self.index})" - - -class UniformBufferObject: - __slots__ = 'buffer', 'view', '_view_ptr', 'index' - - def __init__(self, view_class, buffer_size, index): - self.buffer = BufferObject(buffer_size) - self.view = view_class() - self._view_ptr = pointer(self.view) - self.index = index - - @property - def id(self): - return self.buffer.id - - def bind(self, index=None): - glBindBufferBase(GL_UNIFORM_BUFFER, self.index if index is None else index, self.buffer.id) - - def read(self): - """Read the byte contents of the buffer""" - glBindBuffer(GL_ARRAY_BUFFER, self.buffer.id) - ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, self.buffer.size, GL_MAP_READ_BIT) - data = string_at(ptr, size=self.buffer.size) - glUnmapBuffer(GL_ARRAY_BUFFER) - return data - - def __enter__(self): - # Return the view to the user in a `with` context: - return self.view - - def __exit__(self, exc_type, exc_val, exc_tb): - self.bind() - self.buffer.set_data(self._view_ptr) - - def __repr__(self): - return "{0}(id={1})".format(self.__class__.__name__, self.buffer.id) - - -# Utility functions: - -def _get_number(program_id: int, variable_type: int) -> int: - """Get the number of 
active variables of the passed GL type.""" - number = GLint(0) - glGetProgramiv(program_id, variable_type, byref(number)) - return number.value - - -def _query_attribute(program_id: int, index: int): - """Query the name, type, and size of an Attribute by index.""" - asize = GLint() - atype = GLenum() - buf_size = 192 - aname = create_string_buffer(buf_size) - try: - glGetActiveAttrib(program_id, index, buf_size, None, asize, atype, aname) - return aname.value.decode(), atype.value, asize.value - except GLException as exc: - raise ShaderException from exc - - -def _introspect_attributes(program_id: int) -> dict: - """Introspect a Program's Attributes, and return a dict of accessors.""" - attributes = {} - - for index in range(_get_number(program_id, GL_ACTIVE_ATTRIBUTES)): - a_name, a_type, a_size = _query_attribute(program_id, index) - loc = glGetAttribLocation(program_id, create_string_buffer(a_name.encode('utf-8'))) - count, fmt = _attribute_types[a_type] - attributes[a_name] = dict(type=a_type, size=a_size, location=loc, count=count, format=fmt) - - if _debug_gl_shaders: - for attribute in attributes.values(): - print(f" Found attribute: {attribute}") - - return attributes - - -def _link_program(*shaders) -> int: - """Link one or more Shaders into a ShaderProgram.""" - program_id = glCreateProgram() - for shader in shaders: - glAttachShader(program_id, shader.id) - glLinkProgram(program_id) - - # Check the link status of program - status = c_int() - glGetProgramiv(program_id, GL_LINK_STATUS, byref(status)) - if not status.value: - length = c_int() - glGetProgramiv(program_id, GL_INFO_LOG_LENGTH, length) - log = c_buffer(length.value) - glGetProgramInfoLog(program_id, len(log), None, log) - raise ShaderException("Error linking shader program:\n{}".format(log.value.decode())) - - # Shader objects no longer needed - for shader in shaders: - glDetachShader(program_id, shader.id) - - return program_id - - -def _get_program_log(program_id: int) -> str: - """Query a ShaderProgram link logs.""" - result = c_int(0) - glGetProgramiv(program_id, GL_INFO_LOG_LENGTH, byref(result)) - result_str = create_string_buffer(result.value) - glGetProgramInfoLog(program_id, result, None, result_str) - - if result_str.value: - return f"OpenGL returned the following message when linking the program: \n{result_str.value}" - else: - return f"Program '{program_id}' linked successfully." 
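The helpers above (`_introspect_attributes`, `_link_program`, `_get_program_log`) sit behind pyglet's public shader API. A minimal usage sketch follows; it assumes an active pyglet window/GL context, and the GLSL source is illustrative only.

```python
import pyglet
from pyglet.graphics.shader import Shader, ShaderProgram

window = pyglet.window.Window(visible=False)  # creates the GL context the shaders need

vert_src = """#version 330 core
in vec3 position;
void main() { gl_Position = vec4(position, 1.0); }
"""
frag_src = """#version 330 core
uniform vec4 color;
out vec4 frag_color;
void main() { frag_color = color; }
"""

program = ShaderProgram(Shader(vert_src, 'vertex'), Shader(frag_src, 'fragment'))
print(program.attributes)   # populated by _introspect_attributes()
print(program.uniforms)     # populated by _introspect_uniforms()
program['color'] = (1.0, 0.0, 0.0, 1.0)   # routed through a _Uniform setter
```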
- - -def _query_uniform(program_id: int, index: int): - """Query the name, type, and size of a Uniform by index.""" - usize = GLint() - utype = GLenum() - buf_size = 192 - uname = create_string_buffer(buf_size) - try: - glGetActiveUniform(program_id, index, buf_size, None, usize, utype, uname) - return uname.value.decode(), utype.value, usize.value - - except GLException as exc: - raise ShaderException from exc - - -def _introspect_uniforms(program_id: int, have_dsa: bool) -> dict: - """Introspect a Program's uniforms, and return a dict of accessors.""" - uniforms = {} - - for index in range(_get_number(program_id, GL_ACTIVE_UNIFORMS)): - u_name, u_type, u_size = _query_uniform(program_id, index) - loc = glGetUniformLocation(program_id, create_string_buffer(u_name.encode('utf-8'))) - if loc == -1: # Skip uniforms that may be inside a Uniform Block - continue - uniforms[u_name] = _Uniform(program_id, u_name, u_type, loc, have_dsa) - - if _debug_gl_shaders: - for uniform in uniforms.values(): - print(f" Found uniform: {uniform}") - - return uniforms - - -def _get_uniform_block_name(program_id: int, index: int) -> str: - """Query the name of a Uniform Block, by index""" - buf_size = 128 - size = c_int(0) - name_buf = create_string_buffer(buf_size) - try: - glGetActiveUniformBlockName(program_id, index, buf_size, size, name_buf) - return name_buf.value.decode() - except GLException: - raise ShaderException(f"Unable to query UniformBlock name at index: {index}") - - -def _introspect_uniform_blocks(program) -> dict: - uniform_blocks = {} - program_id = program.id - - for index in range(_get_number(program_id, GL_ACTIVE_UNIFORM_BLOCKS)): - name = _get_uniform_block_name(program_id, index) - - num_active = GLint() - block_data_size = GLint() - - glGetActiveUniformBlockiv(program_id, index, GL_UNIFORM_BLOCK_ACTIVE_UNIFORMS, num_active) - glGetActiveUniformBlockiv(program_id, index, GL_UNIFORM_BLOCK_DATA_SIZE, block_data_size) - - indices = (GLuint * num_active.value)() - indices_ptr = cast(addressof(indices), POINTER(GLint)) - glGetActiveUniformBlockiv(program_id, index, GL_UNIFORM_BLOCK_ACTIVE_UNIFORM_INDICES, indices_ptr) - - uniforms = {} - - for block_uniform_index in indices: - uniform_name, u_type, u_size = _query_uniform(program_id, block_uniform_index) - - # Separate uniform name from block name (Only if instance name is provided on the Uniform Block) - try: - _, uniform_name = uniform_name.split(".") - except ValueError: - pass - - gl_type, _, _, length, _ = _uniform_setters[u_type] - uniforms[block_uniform_index] = (uniform_name, gl_type, length) - - uniform_blocks[name] = UniformBlock(program, name, index, block_data_size.value, uniforms) - # This might cause an error if index > GL_MAX_UNIFORM_BUFFER_BINDINGS, but surely no - # one would be crazy enough to use more than 36 uniform blocks, right? - glUniformBlockBinding(program_id, index, index) - - if _debug_gl_shaders: - for block in uniform_blocks.values(): - print(f" Found uniform block: {block}") - - return uniform_blocks - - -# Program definitions: - -class ShaderSource: - """GLSL source container for making source parsing simpler. - - We support locating out attributes and applying #defines values. - - NOTE: We do assume the source is neat enough to be parsed - this way and don't contain several statements in one line. 
- """ - - def __init__(self, source: str, source_type: GLenum): - """Create a shader source wrapper.""" - self._lines = source.strip().splitlines() - self._type = source_type - - if not self._lines: - raise ShaderException("Shader source is empty") - - self._version = self._find_glsl_version() - - if pyglet.gl.current_context.get_info().get_opengl_api() == "gles": - self._lines[0] = "#version 310 es" - self._lines.insert(1, "precision mediump float;") - - if self._type == GL_GEOMETRY_SHADER: - self._lines.insert(1, "#extension GL_EXT_geometry_shader : require") - - if self._type == GL_COMPUTE_SHADER: - self._lines.insert(1, "precision mediump image2D;") - - self._version = self._find_glsl_version() - - def validate(self) -> str: - """Return the validated shader source.""" - return "\n".join(self._lines) - - def _find_glsl_version(self) -> int: - if self._lines[0].strip().startswith("#version"): - try: - return int(self._lines[0].split()[1]) - except (ValueError, IndexError): - pass - - source = "\n".join(f"{str(i+1).zfill(3)}: {line} " for i, line in enumerate(self._lines)) - - raise ShaderException(("Cannot find #version flag in shader source. " - "A #version statement is required on the first line.\n" - "------------------------------------\n" - f"{source}")) - - -class Shader: - """OpenGL shader. - - Shader objects are compiled on instantiation. - You can reuse a Shader object in multiple ShaderPrograms. - - `shader_type` is one of ``'compute'``, ``'fragment'``, ``'geometry'``, - ``'tesscontrol'``, ``'tessevaluation'``, or ``'vertex'``. - """ - - def __init__(self, source_string: str, shader_type: str): - self._id = None - self.type = shader_type - - try: - shader_type = _shader_types[shader_type] - except KeyError as err: - raise ShaderException(f"shader_type '{shader_type}' is invalid." - f"Valid types are: {list(_shader_types)}") from err - - source_string = ShaderSource(source_string, shader_type).validate() - shader_source_utf8 = source_string.encode("utf8") - source_buffer_pointer = cast(c_char_p(shader_source_utf8), POINTER(c_char)) - source_length = c_int(len(shader_source_utf8)) - - shader_id = glCreateShader(shader_type) - glShaderSource(shader_id, 1, byref(source_buffer_pointer), source_length) - glCompileShader(shader_id) - - status = c_int(0) - glGetShaderiv(shader_id, GL_COMPILE_STATUS, byref(status)) - - if status.value != GL_TRUE: - source = self._get_shader_source(shader_id) - source_lines = "{0}".format("\n".join(f"{str(i+1).zfill(3)}: {line} " - for i, line in enumerate(source.split("\n")))) - - raise ShaderException(f"Shader compilation failed.\n" - f"{self._get_shader_log(shader_id)}" - "------------------------------------------------------------\n" - f"{source_lines}\n" - "------------------------------------------------------------") - - elif _debug_gl_shaders: - print(self._get_shader_log(shader_id)) - - self._id = shader_id - - @property - def id(self): - return self._id - - def _get_shader_log(self, shader_id): - log_length = c_int(0) - glGetShaderiv(shader_id, GL_INFO_LOG_LENGTH, byref(log_length)) - result_str = create_string_buffer(log_length.value) - glGetShaderInfoLog(shader_id, log_length, None, result_str) - if result_str.value: - return ("OpenGL returned the following message when compiling the '{0}' shader: " - "\n{1}".format(self.type, result_str.value.decode('utf8'))) - else: - return f"{self.type.capitalize()} Shader '{shader_id}' compiled successfully." 
- - @staticmethod - def _get_shader_source(shader_id): - """Get the shader source from the shader object""" - source_length = c_int(0) - glGetShaderiv(shader_id, GL_SHADER_SOURCE_LENGTH, source_length) - source_str = create_string_buffer(source_length.value) - glGetShaderSource(shader_id, source_length, None, source_str) - return source_str.value.decode('utf8') - - def __del__(self): - try: - glDeleteShader(self._id) - if _debug_gl_shaders: - print(f"Destroyed {self.type} Shader '{self._id}'") - - except Exception: - # Interpreter is shutting down, - # or Shader failed to compile. - pass - - def __repr__(self): - return "{0}(id={1}, type={2})".format(self.__class__.__name__, self.id, self.type) - - -class ShaderProgram: - """OpenGL shader program.""" - - __slots__ = '_id', '_context', '_attributes', '_uniforms', '_uniform_blocks', '__weakref__' - - def __init__(self, *shaders: Shader): - assert shaders, "At least one Shader object is required." - self._id = _link_program(*shaders) - self._context = pyglet.gl.current_context - - if _debug_gl_shaders: - print(_get_program_log(self._id)) - - # Query if Direct State Access is available: - have_dsa = gl_info.have_version(4, 1) or gl_info.have_extension("GL_ARB_separate_shader_objects") - self._attributes = _introspect_attributes(self._id) - self._uniforms = _introspect_uniforms(self._id, have_dsa) - self._uniform_blocks = _introspect_uniform_blocks(self) - - @property - def id(self): - return self._id - - @property - def attributes(self): - return self._attributes - - @property - def uniforms(self): - return self._uniforms - - @property - def uniform_blocks(self): - return self._uniform_blocks - - def use(self): - glUseProgram(self._id) - - @staticmethod - def stop(): - glUseProgram(0) - - __enter__ = use - bind = use - unbind = stop - - def __exit__(self, *_): - glUseProgram(0) - - def __del__(self): - try: - self._context.delete_shader_program(self.id) - except Exception: - # Interpreter is shutting down, - # or ShaderProgram failed to link. - pass - - def __setitem__(self, key, value): - try: - uniform = self._uniforms[key] - except KeyError as err: - raise ShaderException(f"A Uniform with the name `{key}` was not found.\n" - f"The spelling may be incorrect, or if not in use it " - f"may have been optimized out by the OpenGL driver.") from err - try: - uniform.set(value) - except GLException as err: - raise ShaderException from err - - def __getitem__(self, item): - try: - uniform = self._uniforms[item] - except KeyError as err: - raise ShaderException(f"A Uniform with the name `{item}` was not found.\n" - f"The spelling may be incorrect, or if not in use it " - f"may have been optimized out by the OpenGL driver.") from err - try: - return uniform.get() - except GLException as err: - raise ShaderException from err - - def vertex_list(self, count, mode, batch=None, group=None, **data): - """Create a VertexList. - - :Parameters: - `count` : int - The number of vertices in the list. - `mode` : int - OpenGL drawing mode enumeration; for example, one of - ``GL_POINTS``, ``GL_LINES``, ``GL_TRIANGLES``, etc. - This determines how the list is drawn in the given batch. - `batch` : `~pyglet.graphics.Batch` - Batch to add the VertexList to, or ``None`` if a Batch will not be used. - Using a Batch is strongly recommended. - `group` : `~pyglet.graphics.Group` - Group to add the VertexList to, or ``None`` if no group is required. - `**data` : str or tuple - Attribute formats and initial data for the vertex list. 
- - :rtype: :py:class:`~pyglet.graphics.vertexdomain.VertexList` - """ - attributes = self._attributes.copy() - initial_arrays = [] - - for name, fmt in data.items(): - try: - if isinstance(fmt, tuple): - fmt, array = fmt - initial_arrays.append((name, array)) - attributes[name] = {**attributes[name], **{'format': fmt}} - except KeyError: - raise ShaderException(f"\nThe attribute `{name}` doesn't exist. Valid names: \n{list(attributes)}") - - batch = batch or pyglet.graphics.get_default_batch() - domain = batch.get_domain(False, mode, group, self, attributes) - - # Create vertex list and initialize - vlist = domain.create(count) - - for name, array in initial_arrays: - vlist.set_attribute_data(name, array) - - return vlist - - def vertex_list_indexed(self, count, mode, indices, batch=None, group=None, **data): - """Create a IndexedVertexList. - - :Parameters: - `count` : int - The number of vertices in the list. - `mode` : int - OpenGL drawing mode enumeration; for example, one of - ``GL_POINTS``, ``GL_LINES``, ``GL_TRIANGLES``, etc. - This determines how the list is drawn in the given batch. - `indices` : sequence of int - Sequence of integers giving indices into the vertex list. - `batch` : `~pyglet.graphics.Batch` - Batch to add the VertexList to, or ``None`` if a Batch will not be used. - Using a Batch is strongly recommended. - `group` : `~pyglet.graphics.Group` - Group to add the VertexList to, or ``None`` if no group is required. - `**data` : str or tuple - Attribute formats and initial data for the vertex list. - - :rtype: :py:class:`~pyglet.graphics.vertexdomain.IndexedVertexList` - """ - attributes = self._attributes.copy() - initial_arrays = [] - - for name, fmt in data.items(): - try: - if isinstance(fmt, tuple): - fmt, array = fmt - initial_arrays.append((name, array)) - attributes[name] = {**attributes[name], **{'format': fmt}} - except KeyError: - raise ShaderException(f"\nThe attribute `{name}` doesn't exist. Valid names: \n{list(attributes)}") - - batch = batch or pyglet.graphics.get_default_batch() - domain = batch.get_domain(True, mode, group, self, attributes) - - # Create vertex list and initialize - vlist = domain.create(count, len(indices)) - start = vlist.start - vlist.indices = [i + start for i in indices] - - for name, array in initial_arrays: - vlist.set_attribute_data(name, array) - - return vlist - - def __repr__(self): - return "{0}(id={1})".format(self.__class__.__name__, self.id) - - -class ComputeShaderProgram: - """OpenGL Compute Shader Program""" - - __slots__ = '_shader', '_id', '_context', '_uniforms', '_uniform_blocks', '__weakref__', 'limits' - - def __init__(self, source: str): - """Create an OpenGL ComputeShaderProgram from source.""" - if not (gl_info.have_version(4, 3) or gl_info.have_extension("GL_ARB_compute_shader")): - raise ShaderException("Compute Shader not supported. 
OpenGL Context version must be at least " - "4.3 or higher, or 4.2 with the 'GL_ARB_compute_shader' extension.") - - self._shader = Shader(source, 'compute') - self._context = pyglet.gl.current_context - self._id = _link_program(self._shader) - - if _debug_gl_shaders: - print(_get_program_log(self._id)) - - self._uniforms = _introspect_uniforms(self._id, True) - self._uniform_blocks = _introspect_uniform_blocks(self) - - self.limits = { - 'work_group_count': self._get_tuple(GL_MAX_COMPUTE_WORK_GROUP_COUNT), - 'work_group_size': self._get_tuple(GL_MAX_COMPUTE_WORK_GROUP_SIZE), - 'work_group_invocations': self._get_value(GL_MAX_COMPUTE_WORK_GROUP_INVOCATIONS), - 'shared_memory_size': self._get_value(GL_MAX_COMPUTE_SHARED_MEMORY_SIZE), - } - - @staticmethod - def _get_tuple(parameter: int): - val_x = GLint() - val_y = GLint() - val_z = GLint() - for i, value in enumerate((val_x, val_y, val_z)): - glGetIntegeri_v(parameter, i, byref(value)) - return val_x.value, val_y.value, val_z.value - - @staticmethod - def _get_value(parameter: int) -> int: - val = GLint() - glGetIntegerv(parameter, byref(val)) - return val.value - - @staticmethod - def dispatch(x: int = 1, y: int = 1, z: int = 1, barrier: int = GL_ALL_BARRIER_BITS) -> None: - """Launch one or more compute work groups. - - The ComputeShaderProgram should be active (bound) before calling - this method. The x, y, and z parameters specify the number of local - work groups that will be dispatched in the X, Y and Z dimensions. - """ - glDispatchCompute(x, y, z) - if barrier: - glMemoryBarrier(barrier) - - @property - def id(self) -> int: - return self._id - - @property - def uniforms(self) -> dict: - return self._uniforms - - @property - def uniform_blocks(self) -> dict: - return self._uniform_blocks - - def use(self) -> None: - glUseProgram(self._id) - - @staticmethod - def stop(): - glUseProgram(0) - - __enter__ = use - bind = use - unbind = stop - - def __exit__(self, *_): - glUseProgram(0) - - def __del__(self): - try: - self._context.delete_shader_program(self.id) - except Exception: - # Interpreter is shutting down, - # or ShaderProgram failed to link. 
- pass - - def __setitem__(self, key, value): - try: - uniform = self._uniforms[key] - except KeyError as err: - raise ShaderException(f"A Uniform with the name `{key}` was not found.\n" - f"The spelling may be incorrect, or if not in use it " - f"may have been optimized out by the OpenGL driver.") from err - try: - uniform.set(value) - except GLException as err: - raise ShaderException from err - - def __getitem__(self, item): - try: - uniform = self._uniforms[item] - except KeyError as err: - raise ShaderException(f"A Uniform with the name `{item}` was not found.\n" - f"The spelling may be incorrect, or if not in use it " - f"may have been optimized out by the OpenGL driver.") from err - try: - return uniform.get() - except GLException as err: - raise ShaderException from err diff --git a/spaces/achyuth1344/stable-diffusion-web-ui/env_patch.py b/spaces/achyuth1344/stable-diffusion-web-ui/env_patch.py deleted file mode 100644 index bd0e40dd64274ce8679905df4e1ca9ff454de06d..0000000000000000000000000000000000000000 --- a/spaces/achyuth1344/stable-diffusion-web-ui/env_patch.py +++ /dev/null @@ -1,3 +0,0 @@ - -is_spaces = True if "SPACE_ID" in os.environ else False -is_shared_ui = True if "IS_SHARED_UI" in os.environ else False diff --git a/spaces/ahuang11/mapnstreets/app.py b/spaces/ahuang11/mapnstreets/app.py deleted file mode 100644 index a1d967c9f3f543b121946a1a3f5ba30fa1d9abdf..0000000000000000000000000000000000000000 --- a/spaces/ahuang11/mapnstreets/app.py +++ /dev/null @@ -1,171 +0,0 @@ -import os -from pathlib import Path -from urllib.request import urlretrieve - -import cartopy.crs as ccrs -import fugue.api as fa -import geopandas as gpd -import geoviews as gv -import panel as pn -import pandas as pd -import pyarrow as pa -from datasets import load_dataset_builder -from holoviews.streams import RangeXY -from shapely import wkt - -gv.extension("bokeh") -pn.extension("tabulator") - -INTRO = """ - *Have you ever looked at a street name and wondered how common it is?* - - Put your curiosity to rest with MapnStreets! By simply entering a name - in the provided box, you can discover the prevalence of a street name. - The map will display the locations of all streets with that name, - and for more detailed information, you can click on the table to - highlight their exact whereabouts. - - Uses [TIGER/Line® Edges](https://www2.census.gov/geo/tiger/TIGER_RD18/LAYER/EDGES/) - data provided by the US Census Bureau. - - Powered by OSS: - [Fugue](https://fugue-tutorials.readthedocs.io), - [Panel](https://panel.holoviz.org/), - [GeoPandas](https://geopandas.org/), - [GeoViews](https://geoviews.org/), - [Parquet](https://parquet.apache.org/), - [DuckDB](https://duckdb.org/), - [Ray](https://ray.io/), - and all their supporting dependencies. 
-""" - -DATA_DIR = Path.home() / ".cache" / "huggingface" / "datasets" -DATA_PATH = DATA_DIR / "edges.parquet" - -QUERY_FMT = """ - df = LOAD "{{data_path}}" - df_sel = SELECT STATEFP, COUNTYFP, FULLNAME, geometry \ - FROM df WHERE FULLNAME == '{{name}}' -""" - - -def download_hf(path: str, **kwargs): - builder = load_dataset_builder("ahuang11/tiger_layer_edges") - builder.download_and_prepare(DATA_PATH, file_format="parquet") - - -class MapnStreets: - def __init__(self): - self.gdf = None - self.name_input = pn.widgets.TextInput( - value="*Andrew St", - placeholder="Enter a name...", - margin=(9, 5, 5, 25), - ) - pn.bind(self.process_name, self.name_input, watch=True) - - features = gv.tile_sources.CartoDark() - self.holoviews_pane = pn.pane.HoloViews( - features, sizing_mode="stretch_both", min_height=800 - ) - self.tabulator = pn.widgets.Tabulator(width=225, disabled=True) - self.records_text = pn.widgets.StaticText(value="
0 records found

    ") - pn.state.onload(self.onload) - - def onload(self): - download_hf("ahuang11/tiger_layer_edges") - self.name_input.param.trigger("value") - - range_xy = RangeXY() - line_strings = gv.DynamicMap( - self.refresh_line_strings, streams=[range_xy] - ).opts(responsive=True) - range_xy.source = line_strings - - points = gv.DynamicMap( - pn.bind(self.refresh_points, self.tabulator.param.selection) - ).opts(responsive=True) - - self.holoviews_pane.object *= line_strings * points - - def serialize_geom(self, df): - df["geometry"] = df["geometry"].apply(wkt.loads) - gdf = gpd.GeoDataFrame(df) - centroids = gdf["geometry"].centroid - gdf["Longitude"] = centroids.x - gdf["Latitude"] = centroids.y - return gdf - - def process_name(self, name): - try: - name = name.strip() - self.holoviews_pane.loading = True - query_fmt = QUERY_FMT - if "*" in name or "%" in name: - name = name.replace("*", "%") - query_fmt = query_fmt.replace("==", "LIKE") - if name == "%": - return - df = fa.as_pandas( - fa.fugue_sql( - query_fmt, - data_path=str(DATA_PATH.absolute()), - name=name, - engine="duckdb", - as_local=True, - ) - ) - self.gdf = self.serialize_geom(df) - county_gdf = self.gdf.drop_duplicates( - subset=["STATEFP", "COUNTYFP", "FULLNAME"] - ) - self.records_text.value = f"
{len(county_gdf)} records found
    " - self.tabulator.value = ( - county_gdf["FULLNAME"] - .value_counts() - .rename_axis("Name") - .rename("Count") - .to_frame() - ) - self.refresh_line_strings() - finally: - self.holoviews_pane.loading = False - - def refresh_line_strings(self, x_range=None, y_range=None): - line_strings = gv.Polygons( - self.gdf[["geometry"]], - crs=ccrs.PlateCarree(), - ).opts(fill_alpha=0, line_color="white", line_width=8, alpha=0.6) - return line_strings.select(x=x_range, y=y_range) - - def refresh_points(self, selection): - gdf_selection = self.gdf[ - ["Longitude", "Latitude", "STATEFP", "COUNTYFP", "FULLNAME"] - ] - if self.tabulator.selection: - names = self.tabulator.value.iloc[selection].index.tolist() - gdf_selection = gdf_selection.loc[gdf_selection["FULLNAME"].isin(names)] - points = gv.Points( - gdf_selection, - kdims=["Longitude", "Latitude"], - vdims=["STATEFP", "COUNTYFP", "FULLNAME"], - crs=ccrs.PlateCarree(), - ).opts(marker="x", tools=["hover"], color="#FF4136", size=8) - return points - - def view(self): - template = pn.template.FastListTemplate( - header=[pn.Row(self.name_input, self.records_text)], - sidebar=[INTRO, self.tabulator], - main=[ - self.holoviews_pane, - ], - theme="dark", - title="MapnStreets", - sidebar_width=225, - ) - return template.servable() - - -mapn_streets = MapnStreets() -mapn_streets.view() \ No newline at end of file diff --git a/spaces/akhaliq/BlendGAN/op/upfirdn2d.cpp b/spaces/akhaliq/BlendGAN/op/upfirdn2d.cpp deleted file mode 100644 index d2e633dc896433c205e18bc3e455539192ff968e..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/BlendGAN/op/upfirdn2d.cpp +++ /dev/null @@ -1,23 +0,0 @@ -#include - - -torch::Tensor upfirdn2d_op(const torch::Tensor& input, const torch::Tensor& kernel, - int up_x, int up_y, int down_x, int down_y, - int pad_x0, int pad_x1, int pad_y0, int pad_y1); - -#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x) - -torch::Tensor upfirdn2d(const torch::Tensor& input, const torch::Tensor& kernel, - int up_x, int up_y, int down_x, int down_y, - int pad_x0, int pad_x1, int pad_y0, int pad_y1) { - CHECK_CUDA(input); - CHECK_CUDA(kernel); - - return upfirdn2d_op(input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("upfirdn2d", &upfirdn2d, "upfirdn2d (CUDA)"); -} \ No newline at end of file diff --git a/spaces/akhaliq/SummerTime/model/multi_doc/multi_doc_separate_model.py b/spaces/akhaliq/SummerTime/model/multi_doc/multi_doc_separate_model.py deleted file mode 100644 index 5eab2288cf9b44580726360c9989b9c0214ab4c1..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/SummerTime/model/multi_doc/multi_doc_separate_model.py +++ /dev/null @@ -1,49 +0,0 @@ -from .base_multi_doc_model import MultiDocSummModel -from model.base_model import SummModel -from model.single_doc import TextRankModel -from typing import Union, List - - -class MultiDocSeparateModel(MultiDocSummModel): - - model_name = "Multi-document separate" - is_multi_document = True - - def __init__(self, model_backend: SummModel = TextRankModel, **kwargs): - super(MultiDocSeparateModel, self).__init__() - model = model_backend(**kwargs) - self.model = model - - def summarize( - self, - corpus: Union[List[str], List[List[str]]], - query: Union[List[str], List[List[str]]] = None, - ) -> List[str]: - 
self.assert_summ_input_type(corpus, None) - summaries = [] - for instance in corpus: - instance_summaries = self.model.summarize(instance) - summaries.append(" ".join(instance_summaries)) - - return summaries - - @classmethod - def generate_basic_description(cls) -> str: - basic_description = ( - "MultiDocSeparateModel performs multi-document summarization by" - " first performing single-document summarization on each document," - " and then concatenating the results." - ) - return basic_description - - @classmethod - def show_capability(cls): - basic_description = cls.generate_basic_description() - more_details = ( - "A multi-document summarization model." - " Allows for custom model backend selection at initialization." - " Performs single-document summarization on each document in corpus and returns concatenated result.\n" - "Strengths: \n - Allows for control of backend model.\n" - "Weaknesses: \n - Assumes all documents are equally weighted.\n - May produce redundant information for similar documents.\n" - ) - print(f"{basic_description}\n{'#' * 20}\n{more_details}") diff --git a/spaces/akhaliq/bizarre-pose-estimator/README.md b/spaces/akhaliq/bizarre-pose-estimator/README.md deleted file mode 100644 index 82f0bac7bcbd2c31c8ef1eb4d14da6f693af0372..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/bizarre-pose-estimator/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Bizarre Pose Estimator -emoji: 📚 -colorFrom: red -colorTo: yellow -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/akhaliq/lama/saicinpainting/training/visualizers/__init__.py b/spaces/akhaliq/lama/saicinpainting/training/visualizers/__init__.py deleted file mode 100644 index 4770d1f15a6790ab9606c7b9881f798c8e2d9545..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/lama/saicinpainting/training/visualizers/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -import logging - -from saicinpainting.training.visualizers.directory import DirectoryVisualizer -from saicinpainting.training.visualizers.noop import NoopVisualizer - - -def make_visualizer(kind, **kwargs): - logging.info(f'Make visualizer {kind}') - - if kind == 'directory': - return DirectoryVisualizer(**kwargs) - if kind == 'noop': - return NoopVisualizer() - - raise ValueError(f'Unknown visualizer kind {kind}') diff --git a/spaces/alex-mindspace/gpt-agents/swarmai/utils/memory/DictSharedMemory.py b/spaces/alex-mindspace/gpt-agents/swarmai/utils/memory/DictSharedMemory.py deleted file mode 100644 index ce8874f14900be885c552418dcf9ac5da3d66904..0000000000000000000000000000000000000000 --- a/spaces/alex-mindspace/gpt-agents/swarmai/utils/memory/DictSharedMemory.py +++ /dev/null @@ -1,115 +0,0 @@ -import os -import threading -import json -import uuid -from pathlib import Path -import datetime -import pandas as pd -import matplotlib.pyplot as plt -import matplotlib -matplotlib.use('Agg') # need a different backend for multithreading -import numpy as np - -class DictSharedMemory(): - """The simplest most stupid shared memory implementation that uses json to store the entries. - """ - - def __init__(self, file_loc=None): - """Initialize the shared memory. In the current architecture the memory always consists of a set of soltuions or evaluations. - Moreover, the project is designed around LLMs for the proof of concepts, so we treat all entry content as a string. - """ - if file_loc is not None: - self.file_loc = Path(file_loc) - if not self.file_loc.exists(): - self.file_loc.touch() - - self.lock = threading.Lock() - - def add_entry(self, score, agent_id, agent_cycle, entry): - """Add an entry to the internal memory. - """ - with self.lock: - entry_id = str(uuid.uuid4()) - data = {} - epoch = datetime.datetime.utcfromtimestamp(0) - epoch = (datetime.datetime.utcnow() - epoch).total_seconds() - data[entry_id] = {"agent":agent_id, "epoch": epoch, "score": score, "cycle": agent_cycle, "content": entry} - status = self.write_to_file(data) - self.plot_performance() - return status - - def get_top_n(self, n): - """Get the top n entries from the internal memory. - """ - raise NotImplementedError - - def write_to_file(self, data): - """Write the internal memory to a file. - """ - if self.file_loc is not None: - with open(self.file_loc, "r") as f: - try: - file_data = json.load(f) - except: - file_data = {} - - file_data = file_data | data - with open(self.file_loc, "w") as f: - json.dump(file_data, f, indent=4) - - f.flush() - os.fsync(f.fileno()) - - - return True - - def plot_performance(self): - """Plot the performance of the swarm. 
- TODO: move it to the logger - """ - with open(self.file_loc, "r") as f: - shared_memory = json.load(f) - # f.flush() - # os.fsync(f.fileno()) - - df = pd.DataFrame.from_dict(shared_memory, orient="index") - df["agent"] = df["agent"].astype(int) - df["epoch"] = df["epoch"].astype(float) - df["score"] = df["score"].astype(float) - df["cycle"] = df["cycle"].astype(int) - df["content"] = df["content"].astype(str) - - fig = plt.figure(figsize=(20, 5)) - df = df.sort_values(by="epoch") - df = df.sort_values(by="epoch") - - x = df["epoch"].values - df["epoch"].min() - y = df["score"].values - - # apply moving average - if len(y) < 20: - window_size = len(y) - else: - window_size = len(y)//10 - try: - y_padded = np.pad(y, (window_size//2, window_size//2), mode="reflect") - y_ma = np.convolve(y_padded, np.ones(window_size)/window_size, mode="same") - y_ma = y_ma[window_size//2:-window_size//2] - - #moving max - y_max_t = [np.max(y[:i]) for i in range(1, len(y)+1)] - - plt.plot(x, y_ma, label="Average score of recently submitted solutions") - plt.plot(x, y_max_t, label="Best at time t") - plt.plot() - plt.ylim([0, 1.02]) - plt.xlabel("Time (s)") - plt.ylabel("Score") - plt.legend() - plt.title("Average score of recently submitted solutions") - plt.tight_layout() - plt.savefig(self.file_loc.parent / "performance.png") - except: - pass - - plt.close(fig) diff --git a/spaces/aliabid94/reverse_audio/run.py b/spaces/aliabid94/reverse_audio/run.py deleted file mode 100644 index 1ea48063d32d5fa6f37af327788594edb6821674..0000000000000000000000000000000000000000 --- a/spaces/aliabid94/reverse_audio/run.py +++ /dev/null @@ -1,38 +0,0 @@ -import os -import numpy as np -import gradio as gr - -import subprocess -def get_ffmpeg_version(): - output = subprocess.check_output(['ffmpeg', '-version'], stderr=subprocess.STDOUT) - output = output.decode('utf-8') # Convert bytes to string - return output -print(get_ffmpeg_version()) - - - -def handle_audio(audio): - sr, y = audio - return sr, y.shape, audio - -with gr.Blocks() as demo: - with gr.Column(variant="panel"): - a1 = gr.Audio(source="microphone", type="numpy") - up1 = gr.Button() - with gr.Row(): - sr1 = gr.Textbox(label="sr") - len1 = gr.Textbox(label="len") - a1out = gr.Audio() - up1.click(handle_audio, a1, [sr1, len1, a1out]) - - with gr.Column(variant="panel"): - a2 = gr.Audio(source="upload", type="numpy") - up2 = gr.Button() - with gr.Row(): - sr2 = gr.Textbox(label="sr") - len2 = gr.Textbox(label="len") - a2out = gr.Audio() - up2.click(handle_audio, a2, [sr2, len2, a2out]) - -if __name__ == "__main__": - demo.queue().launch() \ No newline at end of file diff --git a/spaces/amagastya/SPARK/notebooks/chainlit_pinecone_demo.py b/spaces/amagastya/SPARK/notebooks/chainlit_pinecone_demo.py deleted file mode 100644 index 393f06590f221b16d25b26eb8611897c1dcaba42..0000000000000000000000000000000000000000 --- a/spaces/amagastya/SPARK/notebooks/chainlit_pinecone_demo.py +++ /dev/null @@ -1,81 +0,0 @@ -import os -from langchain.embeddings.cohere import CohereEmbeddings -from langchain.vectorstores import Pinecone -from langchain.chains import RetrievalQAWithSourcesChain -from langchain.chat_models import ChatOpenAI -import pinecone -import chainlit as cl - -pinecone.init( - api_key=os.environ.get("PINECONE_API_KEY"), - environment=os.environ.get("PINECONE_ENV"), -) - - -index_name = "spark" - -# Optional -namespace = None - -embeddings = CohereEmbeddings(model='embed-english-light-v2.0',cohere_api_key=os.environ.get("COHERE_API_KEY")) - -welcome_message = "Welcome 
to the Chainlit Pinecone demo! Ask anything about documents you vectorized and stored in your Pinecone DB." - - -@cl.langchain_factory(use_async=True) -async def langchain_factory(): - await cl.Message(content=welcome_message).send() - docsearch = Pinecone.from_existing_index( - index_name=index_name, embedding=embeddings, namespace=namespace - ) - - chain = RetrievalQAWithSourcesChain.from_chain_type( - ChatOpenAI(temperature=0, streaming=True, verbose=True), - chain_type="stuff", - retriever=docsearch.as_retriever(max_tokens_limit=4097), - return_source_documents=True, - verbose=True - ) - return chain - - -@cl.langchain_postprocess -async def process_response(res): - answer = res["answer"] - sources = res.get("sources", "").strip() # Use the get method with a default value - source_elements = [] - docs = res.get("source_documents", None) - - print('sources', sources) - if docs: - metadatas = [doc.metadata for doc in docs] - # Get the source names from the metadata - all_sources = [m["source"] for m in metadatas] - - if sources: - found_sources = [] - # For each source mentioned by the LLM - for source_index, source in enumerate(sources.split(",")): - # Remove the period and any whitespace - orig_source_name = source.strip().replace(".", "") - # The name that will be displayed in the UI - clean_source_name = f"source {source_index}" - try: - # Find the mentioned source in the list of all sources - found_index = all_sources.index(orig_source_name) - except ValueError: - continue - # Get the text from the source document - text = docs[found_index].page_content - - found_sources.append(clean_source_name) - source_elements.append(cl.Text(content=text, name=clean_source_name)) - - if found_sources: - # Add the sources to the answer, referencing the text elements - answer += f"\nSources: {', '.join(found_sources)}" - else: - answer += "\nNo sources found" - - # Send the answer and the text elements to the UI - await cl.Message(content=answer, elements=source_elements).send() \ No newline at end of file diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/test/patest_converters.c b/spaces/amarchheda/ChordDuplicate/portaudio/test/patest_converters.c deleted file mode 100644 index f76738c381a2666e5014cb7b49d656acbd0b4c2a..0000000000000000000000000000000000000000 --- a/spaces/amarchheda/ChordDuplicate/portaudio/test/patest_converters.c +++ /dev/null @@ -1,395 +0,0 @@ -/** @file patest_converters.c - @ingroup test_src - @brief Tests the converter functions in pa_converters.c - @author Ross Bencina - - Link with pa_dither.c and pa_converters.c - - see http://www.portaudio.com/trac/wiki/V19ConvertersStatus for a discussion of this. -*/ -/* - * $Id: $ - * - * This program uses the PortAudio Portable Audio Library. - * For more information see: http://www.portaudio.com/ - * Copyright (c) 1999-2008 Ross Bencina and Phil Burk - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. 
- * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ -#include -#include -#include -#include - -#include "portaudio.h" -#include "pa_converters.h" -#include "pa_dither.h" -#include "pa_types.h" -#include "pa_endianness.h" - -#ifndef M_PI -#define M_PI (3.14159265) -#endif - -#define MAX_PER_CHANNEL_FRAME_COUNT (2048) -#define MAX_CHANNEL_COUNT (8) - - -#define SAMPLE_FORMAT_COUNT (6) - -static PaSampleFormat sampleFormats_[ SAMPLE_FORMAT_COUNT ] = - { paFloat32, paInt32, paInt24, paInt16, paInt8, paUInt8 }; /* all standard PA sample formats */ - -static const char* sampleFormatNames_[SAMPLE_FORMAT_COUNT] = - { "paFloat32", "paInt32", "paInt24", "paInt16", "paInt8", "paUInt8" }; - - -static const char* abbreviatedSampleFormatNames_[SAMPLE_FORMAT_COUNT] = - { "f32", "i32", "i24", "i16", " i8", "ui8" }; - - -PaError My_Pa_GetSampleSize( PaSampleFormat format ); - -/* - available flags are paClipOff and paDitherOff - clipping is usually applied for float -> int conversions - dither is usually applied for all downconversions (ie anything but 8bit->8bit conversions -*/ - -static int CanClip( PaSampleFormat sourceFormat, PaSampleFormat destinationFormat ) -{ - if( sourceFormat == paFloat32 && destinationFormat != sourceFormat ) - return 1; - else - return 0; -} - -static int CanDither( PaSampleFormat sourceFormat, PaSampleFormat destinationFormat ) -{ - if( sourceFormat < destinationFormat && sourceFormat != paInt8 ) - return 1; - else - return 0; -} - -static void GenerateOneCycleSineReference( double *out, int frameCount, int strideFrames ) -{ - int i; - for( i=0; i < frameCount; ++i ){ - *out = sin( ((double)i/(double)frameCount) * 2. * M_PI ); - out += strideFrames; - } -} - - -static void GenerateOneCycleSine( PaSampleFormat format, void *buffer, int frameCount, int strideFrames ) -{ - switch( format ){ - - case paFloat32: - { - int i; - float *out = (float*)buffer; - for( i=0; i < frameCount; ++i ){ - *out = (float).9 * sin( ((double)i/(double)frameCount) * 2. * M_PI ); - out += strideFrames; - } - } - break; - case paInt32: - { - int i; - PaInt32 *out = (PaInt32*)buffer; - for( i=0; i < frameCount; ++i ){ - *out = (PaInt32)(.9 * sin( ((double)i/(double)frameCount) * 2. * M_PI ) * 0x7FFFFFFF); - out += strideFrames; - } - } - break; - case paInt24: - { - int i; - unsigned char *out = (unsigned char*)buffer; - for( i=0; i < frameCount; ++i ){ - signed long temp = (PaInt32)(.9 * sin( ((double)i/(double)frameCount) * 2. 
* M_PI ) * 0x7FFFFFFF); - - #if defined(PA_LITTLE_ENDIAN) - out[0] = (unsigned char)(temp >> 8) & 0xFF; - out[1] = (unsigned char)(temp >> 16) & 0xFF; - out[2] = (unsigned char)(temp >> 24) & 0xFF; - #elif defined(PA_BIG_ENDIAN) - out[0] = (unsigned char)(temp >> 24) & 0xFF; - out[1] = (unsigned char)(temp >> 16) & 0xFF; - out[2] = (unsigned char)(temp >> 8) & 0xFF; - #endif - out += 3; - } - } - break; - case paInt16: - { - int i; - PaInt16 *out = (PaInt16*)buffer; - for( i=0; i < frameCount; ++i ){ - *out = (PaInt16)(.9 * sin( ((double)i/(double)frameCount) * 2. * M_PI ) * 0x7FFF ); - out += strideFrames; - } - } - break; - case paInt8: - { - int i; - signed char *out = (signed char*)buffer; - for( i=0; i < frameCount; ++i ){ - *out = (signed char)(.9 * sin( ((double)i/(double)frameCount) * 2. * M_PI ) * 0x7F ); - out += strideFrames; - } - } - break; - case paUInt8: - { - int i; - unsigned char *out = (unsigned char*)buffer; - for( i=0; i < frameCount; ++i ){ - *out = (unsigned char)( .5 * (1. + (.9 * sin( ((double)i/(double)frameCount) * 2. * M_PI ))) * 0xFF ); - out += strideFrames; - } - } - break; - } -} - -int TestNonZeroPresent( void *buffer, int size ) -{ - char *p = (char*)buffer; - int i; - - for( i=0; i < size; ++i ){ - - if( *p != 0 ) - return 1; - ++p; - } - - return 0; -} - -float MaximumAbsDifference( float* sourceBuffer, float* referenceBuffer, int count ) -{ - float result = 0; - float difference; - while( count-- ){ - difference = fabs( *sourceBuffer++ - *referenceBuffer++ ); - if( difference > result ) - result = difference; - } - - return result; -} - -int main( const char **argv, int argc ) -{ - PaUtilTriangularDitherGenerator ditherState; - PaUtilConverter *converter; - void *destinationBuffer, *sourceBuffer; - double *referenceBuffer; - int sourceFormatIndex, destinationFormatIndex; - PaSampleFormat sourceFormat, destinationFormat; - PaStreamFlags flags; - int passFailMatrix[SAMPLE_FORMAT_COUNT][SAMPLE_FORMAT_COUNT]; // [source][destination] - float noiseAmplitudeMatrix[SAMPLE_FORMAT_COUNT][SAMPLE_FORMAT_COUNT]; // [source][destination] - float amp; - -#define FLAG_COMBINATION_COUNT (4) - PaStreamFlags flagCombinations[FLAG_COMBINATION_COUNT] = { paNoFlag, paClipOff, paDitherOff, paClipOff | paDitherOff }; - const char *flagCombinationNames[FLAG_COMBINATION_COUNT] = { "paNoFlag", "paClipOff", "paDitherOff", "paClipOff | paDitherOff" }; - int flagCombinationIndex; - - PaUtil_InitializeTriangularDitherState( &ditherState ); - - /* allocate more than enough space, we use sizeof(float) but we need to fit any 32 bit datum */ - - destinationBuffer = (void*)malloc( MAX_PER_CHANNEL_FRAME_COUNT * MAX_CHANNEL_COUNT * sizeof(float) ); - sourceBuffer = (void*)malloc( MAX_PER_CHANNEL_FRAME_COUNT * MAX_CHANNEL_COUNT * sizeof(float) ); - referenceBuffer = (void*)malloc( MAX_PER_CHANNEL_FRAME_COUNT * MAX_CHANNEL_COUNT * sizeof(float) ); - - - /* the first round of tests simply iterates through the buffer combinations testing - that putting something in gives something out */ - - printf( "= Sine wave in, something out =\n" ); - - printf( "\n" ); - - GenerateOneCycleSine( paFloat32, referenceBuffer, MAX_PER_CHANNEL_FRAME_COUNT, 1 ); - - for( flagCombinationIndex = 0; flagCombinationIndex < FLAG_COMBINATION_COUNT; ++flagCombinationIndex ){ - flags = flagCombinations[flagCombinationIndex]; - - printf( "\n" ); - printf( "== flags = %s ==\n", flagCombinationNames[flagCombinationIndex] ); - - for( sourceFormatIndex = 0; sourceFormatIndex < SAMPLE_FORMAT_COUNT; ++sourceFormatIndex ){ - 
for( destinationFormatIndex = 0; destinationFormatIndex < SAMPLE_FORMAT_COUNT; ++destinationFormatIndex ){ - sourceFormat = sampleFormats_[sourceFormatIndex]; - destinationFormat = sampleFormats_[destinationFormatIndex]; - //printf( "%s -> %s ", sampleFormatNames_[ sourceFormatIndex ], sampleFormatNames_[ destinationFormatIndex ] ); - - converter = PaUtil_SelectConverter( sourceFormat, destinationFormat, flags ); - - /* source is a sinewave */ - GenerateOneCycleSine( sourceFormat, sourceBuffer, MAX_PER_CHANNEL_FRAME_COUNT, 1 ); - - /* zero destination */ - memset( destinationBuffer, 0, MAX_PER_CHANNEL_FRAME_COUNT * My_Pa_GetSampleSize( destinationFormat ) ); - - (*converter)( destinationBuffer, 1, sourceBuffer, 1, MAX_PER_CHANNEL_FRAME_COUNT, &ditherState ); - - /* - Other ways we could test this would be: - - pass a constant, check for a constant (wouldn't work with dither) - - pass alternating +/-, check for the same... - */ - if( TestNonZeroPresent( destinationBuffer, MAX_PER_CHANNEL_FRAME_COUNT * My_Pa_GetSampleSize( destinationFormat ) ) ){ - //printf( "PASSED\n" ); - passFailMatrix[sourceFormatIndex][destinationFormatIndex] = 1; - }else{ - //printf( "FAILED\n" ); - passFailMatrix[sourceFormatIndex][destinationFormatIndex] = 0; - } - - - /* try to measure the noise floor (comparing output signal to a float32 sine wave) */ - - if( passFailMatrix[sourceFormatIndex][destinationFormatIndex] ){ - - /* convert destination back to paFloat32 into source */ - converter = PaUtil_SelectConverter( destinationFormat, paFloat32, paNoFlag ); - - memset( sourceBuffer, 0, MAX_PER_CHANNEL_FRAME_COUNT * My_Pa_GetSampleSize( paFloat32 ) ); - (*converter)( sourceBuffer, 1, destinationBuffer, 1, MAX_PER_CHANNEL_FRAME_COUNT, &ditherState ); - - if( TestNonZeroPresent( sourceBuffer, MAX_PER_CHANNEL_FRAME_COUNT * My_Pa_GetSampleSize( paFloat32 ) ) ){ - - noiseAmplitudeMatrix[sourceFormatIndex][destinationFormatIndex] = MaximumAbsDifference( (float*)sourceBuffer, (float*)referenceBuffer, MAX_PER_CHANNEL_FRAME_COUNT ); - - }else{ - /* can't test noise floor because there is no conversion from dest format to float available */ - noiseAmplitudeMatrix[sourceFormatIndex][destinationFormatIndex] = -1; // mark as failed - } - }else{ - noiseAmplitudeMatrix[sourceFormatIndex][destinationFormatIndex] = -1; // mark as failed - } - } - } - - printf( "\n" ); - printf( "=== Output contains non-zero data ===\n" ); - printf( "Key: . - pass, X - fail\n" ); - printf( "{{{\n" ); // trac preformated text tag - printf( "in| out: " ); - for( destinationFormatIndex = 0; destinationFormatIndex < SAMPLE_FORMAT_COUNT; ++destinationFormatIndex ){ - printf( " %s ", abbreviatedSampleFormatNames_[destinationFormatIndex] ); - } - printf( "\n" ); - - for( sourceFormatIndex = 0; sourceFormatIndex < SAMPLE_FORMAT_COUNT; ++sourceFormatIndex ){ - printf( "%s ", abbreviatedSampleFormatNames_[sourceFormatIndex] ); - for( destinationFormatIndex = 0; destinationFormatIndex < SAMPLE_FORMAT_COUNT; ++destinationFormatIndex ){ - printf( " %s ", (passFailMatrix[sourceFormatIndex][destinationFormatIndex])? " ." 
: " X" ); - } - printf( "\n" ); - } - printf( "}}}\n" ); // trac preformated text tag - - printf( "\n" ); - printf( "=== Combined dynamic range (src->dest->float32) ===\n" ); - printf( "Key: Noise amplitude in dBfs, X - fail (either above failed or dest->float32 failed)\n" ); - printf( "{{{\n" ); // trac preformated text tag - printf( "in| out: " ); - for( destinationFormatIndex = 0; destinationFormatIndex < SAMPLE_FORMAT_COUNT; ++destinationFormatIndex ){ - printf( " %s ", abbreviatedSampleFormatNames_[destinationFormatIndex] ); - } - printf( "\n" ); - - for( sourceFormatIndex = 0; sourceFormatIndex < SAMPLE_FORMAT_COUNT; ++sourceFormatIndex ){ - printf( " %s ", abbreviatedSampleFormatNames_[sourceFormatIndex] ); - for( destinationFormatIndex = 0; destinationFormatIndex < SAMPLE_FORMAT_COUNT; ++destinationFormatIndex ){ - amp = noiseAmplitudeMatrix[sourceFormatIndex][destinationFormatIndex]; - if( amp < 0. ) - printf( " X " ); - else - printf( " % 6.1f ", 20.*log10(amp) ); - } - printf( "\n" ); - } - printf( "}}}\n" ); // trac preformated text tag - } - - - free( destinationBuffer ); - free( sourceBuffer ); - free( referenceBuffer ); -} - -// copied here for now otherwise we need to include the world just for this function. -PaError My_Pa_GetSampleSize( PaSampleFormat format ) -{ - int result; - - switch( format & ~paNonInterleaved ) - { - - case paUInt8: - case paInt8: - result = 1; - break; - - case paInt16: - result = 2; - break; - - case paInt24: - result = 3; - break; - - case paFloat32: - case paInt32: - result = 4; - break; - - default: - result = paSampleFormatNotSupported; - break; - } - - return (PaError) result; -} diff --git a/spaces/anaclaudia13ct/insect_detection/models/common.py b/spaces/anaclaudia13ct/insect_detection/models/common.py deleted file mode 100644 index 8b5ec1c786d8efbfdffa268a4d13b02a47338f8c..0000000000000000000000000000000000000000 --- a/spaces/anaclaudia13ct/insect_detection/models/common.py +++ /dev/null @@ -1,860 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Common modules -""" - -import ast -import contextlib -import json -import math -import platform -import warnings -import zipfile -from collections import OrderedDict, namedtuple -from copy import copy -from pathlib import Path -from urllib.parse import urlparse - -import cv2 -import numpy as np -import pandas as pd -import requests -import torch -import torch.nn as nn -from IPython.display import display -from PIL import Image -from torch.cuda import amp - -from utils import TryExcept -from utils.dataloaders import exif_transpose, letterbox -from utils.general import (LOGGER, ROOT, Profile, check_requirements, check_suffix, check_version, colorstr, - increment_path, is_notebook, make_divisible, non_max_suppression, scale_boxes, xywh2xyxy, - xyxy2xywh, yaml_load) -from utils.plots import Annotator, colors, save_one_box -from utils.torch_utils import copy_attr, smart_inference_mode - - -def autopad(k, p=None, d=1): # kernel, padding, dilation - # Pad to 'same' shape outputs - if d > 1: - k = d * (k - 1) + 1 if isinstance(k, int) else [d * (x - 1) + 1 for x in k] # actual kernel-size - if p is None: - p = k // 2 if isinstance(k, int) else [x // 2 for x in k] # auto-pad - return p - - -class Conv(nn.Module): - # Standard convolution with args(ch_in, ch_out, kernel, stride, padding, groups, dilation, activation) - default_act = nn.SiLU() # default activation - - def __init__(self, c1, c2, k=1, s=1, p=None, g=1, d=1, act=True): - super().__init__() - self.conv = nn.Conv2d(c1, c2, k, s, 
autopad(k, p, d), groups=g, dilation=d, bias=False) - self.bn = nn.BatchNorm2d(c2) - self.act = self.default_act if act is True else act if isinstance(act, nn.Module) else nn.Identity() - - def forward(self, x): - return self.act(self.bn(self.conv(x))) - - def forward_fuse(self, x): - return self.act(self.conv(x)) - - -class DWConv(Conv): - # Depth-wise convolution - def __init__(self, c1, c2, k=1, s=1, d=1, act=True): # ch_in, ch_out, kernel, stride, dilation, activation - super().__init__(c1, c2, k, s, g=math.gcd(c1, c2), d=d, act=act) - - -class DWConvTranspose2d(nn.ConvTranspose2d): - # Depth-wise transpose convolution - def __init__(self, c1, c2, k=1, s=1, p1=0, p2=0): # ch_in, ch_out, kernel, stride, padding, padding_out - super().__init__(c1, c2, k, s, p1, p2, groups=math.gcd(c1, c2)) - - -class TransformerLayer(nn.Module): - # Transformer layer https://arxiv.org/abs/2010.11929 (LayerNorm layers removed for better performance) - def __init__(self, c, num_heads): - super().__init__() - self.q = nn.Linear(c, c, bias=False) - self.k = nn.Linear(c, c, bias=False) - self.v = nn.Linear(c, c, bias=False) - self.ma = nn.MultiheadAttention(embed_dim=c, num_heads=num_heads) - self.fc1 = nn.Linear(c, c, bias=False) - self.fc2 = nn.Linear(c, c, bias=False) - - def forward(self, x): - x = self.ma(self.q(x), self.k(x), self.v(x))[0] + x - x = self.fc2(self.fc1(x)) + x - return x - - -class TransformerBlock(nn.Module): - # Vision Transformer https://arxiv.org/abs/2010.11929 - def __init__(self, c1, c2, num_heads, num_layers): - super().__init__() - self.conv = None - if c1 != c2: - self.conv = Conv(c1, c2) - self.linear = nn.Linear(c2, c2) # learnable position embedding - self.tr = nn.Sequential(*(TransformerLayer(c2, num_heads) for _ in range(num_layers))) - self.c2 = c2 - - def forward(self, x): - if self.conv is not None: - x = self.conv(x) - b, _, w, h = x.shape - p = x.flatten(2).permute(2, 0, 1) - return self.tr(p + self.linear(p)).permute(1, 2, 0).reshape(b, self.c2, w, h) - - -class Bottleneck(nn.Module): - # Standard bottleneck - def __init__(self, c1, c2, shortcut=True, g=1, e=0.5): # ch_in, ch_out, shortcut, groups, expansion - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_, c2, 3, 1, g=g) - self.add = shortcut and c1 == c2 - - def forward(self, x): - return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x)) - - -class BottleneckCSP(nn.Module): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = nn.Conv2d(c1, c_, 1, 1, bias=False) - self.cv3 = nn.Conv2d(c_, c_, 1, 1, bias=False) - self.cv4 = Conv(2 * c_, c2, 1, 1) - self.bn = nn.BatchNorm2d(2 * c_) # applied to cat(cv2, cv3) - self.act = nn.SiLU() - self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n))) - - def forward(self, x): - y1 = self.cv3(self.m(self.cv1(x))) - y2 = self.cv2(x) - return self.cv4(self.act(self.bn(torch.cat((y1, y2), 1)))) - - -class CrossConv(nn.Module): - # Cross Convolution Downsample - def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False): - # ch_in, ch_out, kernel, stride, groups, expansion, shortcut - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, (1, k), (1, s)) - self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g) - self.add = 
shortcut and c1 == c2 - - def forward(self, x): - return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x)) - - -class C3(nn.Module): - # CSP Bottleneck with 3 convolutions - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c1, c_, 1, 1) - self.cv3 = Conv(2 * c_, c2, 1) # optional act=FReLU(c2) - self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n))) - - def forward(self, x): - return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), 1)) - - -class C3x(C3): - # C3 module with cross-convolutions - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) - self.m = nn.Sequential(*(CrossConv(c_, c_, 3, 1, g, 1.0, shortcut) for _ in range(n))) - - -class C3TR(C3): - # C3 module with TransformerBlock() - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) - self.m = TransformerBlock(c_, c_, 4, n) - - -class C3SPP(C3): - # C3 module with SPP() - def __init__(self, c1, c2, k=(5, 9, 13), n=1, shortcut=True, g=1, e=0.5): - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) - self.m = SPP(c_, c_, k) - - -class C3Ghost(C3): - # C3 module with GhostBottleneck() - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*(GhostBottleneck(c_, c_) for _ in range(n))) - - -class SPP(nn.Module): - # Spatial Pyramid Pooling (SPP) layer https://arxiv.org/abs/1406.4729 - def __init__(self, c1, c2, k=(5, 9, 13)): - super().__init__() - c_ = c1 // 2 # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_ * (len(k) + 1), c2, 1, 1) - self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k]) - - def forward(self, x): - x = self.cv1(x) - with warnings.catch_warnings(): - warnings.simplefilter('ignore') # suppress torch 1.9.0 max_pool2d() warning - return self.cv2(torch.cat([x] + [m(x) for m in self.m], 1)) - - -class SPPF(nn.Module): - # Spatial Pyramid Pooling - Fast (SPPF) layer for YOLOv5 by Glenn Jocher - def __init__(self, c1, c2, k=5): # equivalent to SPP(k=(5, 9, 13)) - super().__init__() - c_ = c1 // 2 # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_ * 4, c2, 1, 1) - self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) - - def forward(self, x): - x = self.cv1(x) - with warnings.catch_warnings(): - warnings.simplefilter('ignore') # suppress torch 1.9.0 max_pool2d() warning - y1 = self.m(x) - y2 = self.m(y1) - return self.cv2(torch.cat((x, y1, y2, self.m(y2)), 1)) - - -class Focus(nn.Module): - # Focus wh information into c-space - def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups - super().__init__() - self.conv = Conv(c1 * 4, c2, k, s, p, g, act=act) - # self.contract = Contract(gain=2) - - def forward(self, x): # x(b,c,w,h) -> y(b,4c,w/2,h/2) - return self.conv(torch.cat((x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]), 1)) - # return self.conv(self.contract(x)) - - -class GhostConv(nn.Module): - # Ghost Convolution https://github.com/huawei-noah/ghostnet - def __init__(self, c1, c2, k=1, s=1, g=1, act=True): # ch_in, ch_out, kernel, stride, groups - 
super().__init__() - c_ = c2 // 2 # hidden channels - self.cv1 = Conv(c1, c_, k, s, None, g, act=act) - self.cv2 = Conv(c_, c_, 5, 1, None, c_, act=act) - - def forward(self, x): - y = self.cv1(x) - return torch.cat((y, self.cv2(y)), 1) - - -class GhostBottleneck(nn.Module): - # Ghost Bottleneck https://github.com/huawei-noah/ghostnet - def __init__(self, c1, c2, k=3, s=1): # ch_in, ch_out, kernel, stride - super().__init__() - c_ = c2 // 2 - self.conv = nn.Sequential( - GhostConv(c1, c_, 1, 1), # pw - DWConv(c_, c_, k, s, act=False) if s == 2 else nn.Identity(), # dw - GhostConv(c_, c2, 1, 1, act=False)) # pw-linear - self.shortcut = nn.Sequential(DWConv(c1, c1, k, s, act=False), Conv(c1, c2, 1, 1, - act=False)) if s == 2 else nn.Identity() - - def forward(self, x): - return self.conv(x) + self.shortcut(x) - - -class Contract(nn.Module): - # Contract width-height into channels, i.e. x(1,64,80,80) to x(1,256,40,40) - def __init__(self, gain=2): - super().__init__() - self.gain = gain - - def forward(self, x): - b, c, h, w = x.size() # assert (h / s == 0) and (W / s == 0), 'Indivisible gain' - s = self.gain - x = x.view(b, c, h // s, s, w // s, s) # x(1,64,40,2,40,2) - x = x.permute(0, 3, 5, 1, 2, 4).contiguous() # x(1,2,2,64,40,40) - return x.view(b, c * s * s, h // s, w // s) # x(1,256,40,40) - - -class Expand(nn.Module): - # Expand channels into width-height, i.e. x(1,64,80,80) to x(1,16,160,160) - def __init__(self, gain=2): - super().__init__() - self.gain = gain - - def forward(self, x): - b, c, h, w = x.size() # assert C / s ** 2 == 0, 'Indivisible gain' - s = self.gain - x = x.view(b, s, s, c // s ** 2, h, w) # x(1,2,2,16,80,80) - x = x.permute(0, 3, 4, 1, 5, 2).contiguous() # x(1,16,80,2,80,2) - return x.view(b, c // s ** 2, h * s, w * s) # x(1,16,160,160) - - -class Concat(nn.Module): - # Concatenate a list of tensors along dimension - def __init__(self, dimension=1): - super().__init__() - self.d = dimension - - def forward(self, x): - return torch.cat(x, self.d) - - -class DetectMultiBackend(nn.Module): - # YOLOv5 MultiBackend class for python inference on various backends - def __init__(self, weights='yolov5s.pt', device=torch.device('cpu'), dnn=False, data=None, fp16=False, fuse=True): - # Usage: - # PyTorch: weights = *.pt - # TorchScript: *.torchscript - # ONNX Runtime: *.onnx - # ONNX OpenCV DNN: *.onnx --dnn - # OpenVINO: *_openvino_model - # CoreML: *.mlmodel - # TensorRT: *.engine - # TensorFlow SavedModel: *_saved_model - # TensorFlow GraphDef: *.pb - # TensorFlow Lite: *.tflite - # TensorFlow Edge TPU: *_edgetpu.tflite - # PaddlePaddle: *_paddle_model - from models.experimental import attempt_download, attempt_load # scoped to avoid circular import - - super().__init__() - w = str(weights[0] if isinstance(weights, list) else weights) - pt, jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, paddle, triton = self._model_type(w) - fp16 &= pt or jit or onnx or engine # FP16 - nhwc = coreml or saved_model or pb or tflite or edgetpu # BHWC formats (vs torch BCWH) - stride = 32 # default stride - cuda = torch.cuda.is_available() and device.type != 'cpu' # use CUDA - if not (pt or triton): - w = attempt_download(w) # download if not local - - if pt: # PyTorch - model = attempt_load(weights if isinstance(weights, list) else w, device=device, inplace=True, fuse=fuse) - stride = max(int(model.stride.max()), 32) # model stride - names = model.module.names if hasattr(model, 'module') else model.names # get class names - model.half() if fp16 else model.float() - 
self.model = model # explicitly assign for to(), cpu(), cuda(), half() - elif jit: # TorchScript - LOGGER.info(f'Loading {w} for TorchScript inference...') - extra_files = {'config.txt': ''} # model metadata - model = torch.jit.load(w, _extra_files=extra_files, map_location=device) - model.half() if fp16 else model.float() - if extra_files['config.txt']: # load metadata dict - d = json.loads(extra_files['config.txt'], - object_hook=lambda d: {int(k) if k.isdigit() else k: v - for k, v in d.items()}) - stride, names = int(d['stride']), d['names'] - elif dnn: # ONNX OpenCV DNN - LOGGER.info(f'Loading {w} for ONNX OpenCV DNN inference...') - check_requirements('opencv-python>=4.5.4') - net = cv2.dnn.readNetFromONNX(w) - elif onnx: # ONNX Runtime - LOGGER.info(f'Loading {w} for ONNX Runtime inference...') - check_requirements(('onnx', 'onnxruntime-gpu' if cuda else 'onnxruntime')) - import onnxruntime - providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] if cuda else ['CPUExecutionProvider'] - session = onnxruntime.InferenceSession(w, providers=providers) - output_names = [x.name for x in session.get_outputs()] - meta = session.get_modelmeta().custom_metadata_map # metadata - if 'stride' in meta: - stride, names = int(meta['stride']), eval(meta['names']) - elif xml: # OpenVINO - LOGGER.info(f'Loading {w} for OpenVINO inference...') - check_requirements('openvino') # requires openvino-dev: https://pypi.org/project/openvino-dev/ - from openvino.runtime import Core, Layout, get_batch - ie = Core() - if not Path(w).is_file(): # if not *.xml - w = next(Path(w).glob('*.xml')) # get *.xml file from *_openvino_model dir - network = ie.read_model(model=w, weights=Path(w).with_suffix('.bin')) - if network.get_parameters()[0].get_layout().empty: - network.get_parameters()[0].set_layout(Layout("NCHW")) - batch_dim = get_batch(network) - if batch_dim.is_static: - batch_size = batch_dim.get_length() - executable_network = ie.compile_model(network, device_name="CPU") # device_name="MYRIAD" for Intel NCS2 - stride, names = self._load_metadata(Path(w).with_suffix('.yaml')) # load metadata - elif engine: # TensorRT - LOGGER.info(f'Loading {w} for TensorRT inference...') - import tensorrt as trt # https://developer.nvidia.com/nvidia-tensorrt-download - check_version(trt.__version__, '7.0.0', hard=True) # require tensorrt>=7.0.0 - if device.type == 'cpu': - device = torch.device('cuda:0') - Binding = namedtuple('Binding', ('name', 'dtype', 'shape', 'data', 'ptr')) - logger = trt.Logger(trt.Logger.INFO) - with open(w, 'rb') as f, trt.Runtime(logger) as runtime: - model = runtime.deserialize_cuda_engine(f.read()) - context = model.create_execution_context() - bindings = OrderedDict() - output_names = [] - fp16 = False # default updated below - dynamic = False - for i in range(model.num_bindings): - name = model.get_binding_name(i) - dtype = trt.nptype(model.get_binding_dtype(i)) - if model.binding_is_input(i): - if -1 in tuple(model.get_binding_shape(i)): # dynamic - dynamic = True - context.set_binding_shape(i, tuple(model.get_profile_shape(0, i)[2])) - if dtype == np.float16: - fp16 = True - else: # output - output_names.append(name) - shape = tuple(context.get_binding_shape(i)) - im = torch.from_numpy(np.empty(shape, dtype=dtype)).to(device) - bindings[name] = Binding(name, dtype, shape, im, int(im.data_ptr())) - binding_addrs = OrderedDict((n, d.ptr) for n, d in bindings.items()) - batch_size = bindings['images'].shape[0] # if dynamic, this is instead max batch size - elif coreml: # CoreML - 
LOGGER.info(f'Loading {w} for CoreML inference...') - import coremltools as ct - model = ct.models.MLModel(w) - elif saved_model: # TF SavedModel - LOGGER.info(f'Loading {w} for TensorFlow SavedModel inference...') - import tensorflow as tf - keras = False # assume TF1 saved_model - model = tf.keras.models.load_model(w) if keras else tf.saved_model.load(w) - elif pb: # GraphDef https://www.tensorflow.org/guide/migrate#a_graphpb_or_graphpbtxt - LOGGER.info(f'Loading {w} for TensorFlow GraphDef inference...') - import tensorflow as tf - - def wrap_frozen_graph(gd, inputs, outputs): - x = tf.compat.v1.wrap_function(lambda: tf.compat.v1.import_graph_def(gd, name=""), []) # wrapped - ge = x.graph.as_graph_element - return x.prune(tf.nest.map_structure(ge, inputs), tf.nest.map_structure(ge, outputs)) - - def gd_outputs(gd): - name_list, input_list = [], [] - for node in gd.node: # tensorflow.core.framework.node_def_pb2.NodeDef - name_list.append(node.name) - input_list.extend(node.input) - return sorted(f'{x}:0' for x in list(set(name_list) - set(input_list)) if not x.startswith('NoOp')) - - gd = tf.Graph().as_graph_def() # TF GraphDef - with open(w, 'rb') as f: - gd.ParseFromString(f.read()) - frozen_func = wrap_frozen_graph(gd, inputs="x:0", outputs=gd_outputs(gd)) - elif tflite or edgetpu: # https://www.tensorflow.org/lite/guide/python#install_tensorflow_lite_for_python - try: # https://coral.ai/docs/edgetpu/tflite-python/#update-existing-tf-lite-code-for-the-edge-tpu - from tflite_runtime.interpreter import Interpreter, load_delegate - except ImportError: - import tensorflow as tf - Interpreter, load_delegate = tf.lite.Interpreter, tf.lite.experimental.load_delegate, - if edgetpu: # TF Edge TPU https://coral.ai/software/#edgetpu-runtime - LOGGER.info(f'Loading {w} for TensorFlow Lite Edge TPU inference...') - delegate = { - 'Linux': 'libedgetpu.so.1', - 'Darwin': 'libedgetpu.1.dylib', - 'Windows': 'edgetpu.dll'}[platform.system()] - interpreter = Interpreter(model_path=w, experimental_delegates=[load_delegate(delegate)]) - else: # TFLite - LOGGER.info(f'Loading {w} for TensorFlow Lite inference...') - interpreter = Interpreter(model_path=w) # load TFLite model - interpreter.allocate_tensors() # allocate - input_details = interpreter.get_input_details() # inputs - output_details = interpreter.get_output_details() # outputs - # load metadata - with contextlib.suppress(zipfile.BadZipFile): - with zipfile.ZipFile(w, "r") as model: - meta_file = model.namelist()[0] - meta = ast.literal_eval(model.read(meta_file).decode("utf-8")) - stride, names = int(meta['stride']), meta['names'] - elif tfjs: # TF.js - raise NotImplementedError('ERROR: YOLOv5 TF.js inference is not supported') - elif paddle: # PaddlePaddle - LOGGER.info(f'Loading {w} for PaddlePaddle inference...') - check_requirements('paddlepaddle-gpu' if cuda else 'paddlepaddle') - import paddle.inference as pdi - if not Path(w).is_file(): # if not *.pdmodel - w = next(Path(w).rglob('*.pdmodel')) # get *.pdmodel file from *_paddle_model dir - weights = Path(w).with_suffix('.pdiparams') - config = pdi.Config(str(w), str(weights)) - if cuda: - config.enable_use_gpu(memory_pool_init_size_mb=2048, device_id=0) - predictor = pdi.create_predictor(config) - input_handle = predictor.get_input_handle(predictor.get_input_names()[0]) - output_names = predictor.get_output_names() - elif triton: # NVIDIA Triton Inference Server - LOGGER.info(f'Using {w} as Triton Inference Server...') - check_requirements('tritonclient[all]') - from utils.triton import 
TritonRemoteModel - model = TritonRemoteModel(url=w) - nhwc = model.runtime.startswith("tensorflow") - else: - raise NotImplementedError(f'ERROR: {w} is not a supported format') - - # class names - if 'names' not in locals(): - names = yaml_load(data)['names'] if data else {i: f'class{i}' for i in range(999)} - if names[0] == 'n01440764' and len(names) == 1000: # ImageNet - names = yaml_load(ROOT / 'data/ImageNet.yaml')['names'] # human-readable names - - self.__dict__.update(locals()) # assign all variables to self - - def forward(self, im, augment=False, visualize=False): - # YOLOv5 MultiBackend inference - b, ch, h, w = im.shape # batch, channel, height, width - if self.fp16 and im.dtype != torch.float16: - im = im.half() # to FP16 - if self.nhwc: - im = im.permute(0, 2, 3, 1) # torch BCHW to numpy BHWC shape(1,320,192,3) - - if self.pt: # PyTorch - y = self.model(im, augment=augment, visualize=visualize) if augment or visualize else self.model(im) - elif self.jit: # TorchScript - y = self.model(im) - elif self.dnn: # ONNX OpenCV DNN - im = im.cpu().numpy() # torch to numpy - self.net.setInput(im) - y = self.net.forward() - elif self.onnx: # ONNX Runtime - im = im.cpu().numpy() # torch to numpy - y = self.session.run(self.output_names, {self.session.get_inputs()[0].name: im}) - elif self.xml: # OpenVINO - im = im.cpu().numpy() # FP32 - y = list(self.executable_network([im]).values()) - elif self.engine: # TensorRT - if self.dynamic and im.shape != self.bindings['images'].shape: - i = self.model.get_binding_index('images') - self.context.set_binding_shape(i, im.shape) # reshape if dynamic - self.bindings['images'] = self.bindings['images']._replace(shape=im.shape) - for name in self.output_names: - i = self.model.get_binding_index(name) - self.bindings[name].data.resize_(tuple(self.context.get_binding_shape(i))) - s = self.bindings['images'].shape - assert im.shape == s, f"input size {im.shape} {'>' if self.dynamic else 'not equal to'} max model size {s}" - self.binding_addrs['images'] = int(im.data_ptr()) - self.context.execute_v2(list(self.binding_addrs.values())) - y = [self.bindings[x].data for x in sorted(self.output_names)] - elif self.coreml: # CoreML - im = im.cpu().numpy() - im = Image.fromarray((im[0] * 255).astype('uint8')) - # im = im.resize((192, 320), Image.ANTIALIAS) - y = self.model.predict({'image': im}) # coordinates are xywh normalized - if 'confidence' in y: - box = xywh2xyxy(y['coordinates'] * [[w, h, w, h]]) # xyxy pixels - conf, cls = y['confidence'].max(1), y['confidence'].argmax(1).astype(np.float) - y = np.concatenate((box, conf.reshape(-1, 1), cls.reshape(-1, 1)), 1) - else: - y = list(reversed(y.values())) # reversed for segmentation models (pred, proto) - elif self.paddle: # PaddlePaddle - im = im.cpu().numpy().astype(np.float32) - self.input_handle.copy_from_cpu(im) - self.predictor.run() - y = [self.predictor.get_output_handle(x).copy_to_cpu() for x in self.output_names] - elif self.triton: # NVIDIA Triton Inference Server - y = self.model(im) - else: # TensorFlow (SavedModel, GraphDef, Lite, Edge TPU) - im = im.cpu().numpy() - if self.saved_model: # SavedModel - y = self.model(im, training=False) if self.keras else self.model(im) - elif self.pb: # GraphDef - y = self.frozen_func(x=self.tf.constant(im)) - else: # Lite or Edge TPU - input = self.input_details[0] - int8 = input['dtype'] == np.uint8 # is TFLite quantized uint8 model - if int8: - scale, zero_point = input['quantization'] - im = (im / scale + zero_point).astype(np.uint8) # de-scale - 
self.interpreter.set_tensor(input['index'], im) - self.interpreter.invoke() - y = [] - for output in self.output_details: - x = self.interpreter.get_tensor(output['index']) - if int8: - scale, zero_point = output['quantization'] - x = (x.astype(np.float32) - zero_point) * scale # re-scale - y.append(x) - y = [x if isinstance(x, np.ndarray) else x.numpy() for x in y] - y[0][..., :4] *= [w, h, w, h] # xywh normalized to pixels - - if isinstance(y, (list, tuple)): - return self.from_numpy(y[0]) if len(y) == 1 else [self.from_numpy(x) for x in y] - else: - return self.from_numpy(y) - - def from_numpy(self, x): - return torch.from_numpy(x).to(self.device) if isinstance(x, np.ndarray) else x - - def warmup(self, imgsz=(1, 3, 640, 640)): - # Warmup model by running inference once - warmup_types = self.pt, self.jit, self.onnx, self.engine, self.saved_model, self.pb, self.triton - if any(warmup_types) and (self.device.type != 'cpu' or self.triton): - im = torch.empty(*imgsz, dtype=torch.half if self.fp16 else torch.float, device=self.device) # input - for _ in range(2 if self.jit else 1): # - self.forward(im) # warmup - - @staticmethod - def _model_type(p='path/to/model.pt'): - # Return model type from model path, i.e. path='path/to/model.onnx' -> type=onnx - # types = [pt, jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, paddle] - from export import export_formats - from utils.downloads import is_url - sf = list(export_formats().Suffix) # export suffixes - if not is_url(p, check=False): - check_suffix(p, sf) # checks - url = urlparse(p) # if url may be Triton inference server - types = [s in Path(p).name for s in sf] - types[8] &= not types[9] # tflite &= not edgetpu - triton = not any(types) and all([any(s in url.scheme for s in ["http", "grpc"]), url.netloc]) - return types + [triton] - - @staticmethod - def _load_metadata(f=Path('path/to/meta.yaml')): - # Load metadata from meta.yaml if it exists - if f.exists(): - d = yaml_load(f) - return d['stride'], d['names'] # assign stride, names - return None, None - - -class AutoShape(nn.Module): - # YOLOv5 input-robust model wrapper for passing cv2/np/PIL/torch inputs. Includes preprocessing, inference and NMS - conf = 0.25 # NMS confidence threshold - iou = 0.45 # NMS IoU threshold - agnostic = False # NMS class-agnostic - multi_label = False # NMS multiple labels per box - classes = None # (optional list) filter by class, i.e. = [0, 15, 16] for COCO persons, cats and dogs - max_det = 1000 # maximum number of detections per image - amp = False # Automatic Mixed Precision (AMP) inference - - def __init__(self, model, verbose=True): - super().__init__() - if verbose: - LOGGER.info('Adding AutoShape... 
') - copy_attr(self, model, include=('yaml', 'nc', 'hyp', 'names', 'stride', 'abc'), exclude=()) # copy attributes - self.dmb = isinstance(model, DetectMultiBackend) # DetectMultiBackend() instance - self.pt = not self.dmb or model.pt # PyTorch model - self.model = model.eval() - if self.pt: - m = self.model.model.model[-1] if self.dmb else self.model.model[-1] # Detect() - m.inplace = False # Detect.inplace=False for safe multithread inference - m.export = True # do not output loss values - - def _apply(self, fn): - # Apply to(), cpu(), cuda(), half() to model tensors that are not parameters or registered buffers - self = super()._apply(fn) - if self.pt: - m = self.model.model.model[-1] if self.dmb else self.model.model[-1] # Detect() - m.stride = fn(m.stride) - m.grid = list(map(fn, m.grid)) - if isinstance(m.anchor_grid, list): - m.anchor_grid = list(map(fn, m.anchor_grid)) - return self - - @smart_inference_mode() - def forward(self, ims, size=640, augment=False, profile=False): - # Inference from various sources. For size(height=640, width=1280), RGB images example inputs are: - # file: ims = 'data/images/zidane.jpg' # str or PosixPath - # URI: = 'https://ultralytics.com/images/zidane.jpg' - # OpenCV: = cv2.imread('image.jpg')[:,:,::-1] # HWC BGR to RGB x(640,1280,3) - # PIL: = Image.open('image.jpg') or ImageGrab.grab() # HWC x(640,1280,3) - # numpy: = np.zeros((640,1280,3)) # HWC - # torch: = torch.zeros(16,3,320,640) # BCHW (scaled to size=640, 0-1 values) - # multiple: = [Image.open('image1.jpg'), Image.open('image2.jpg'), ...] # list of images - - dt = (Profile(), Profile(), Profile()) - with dt[0]: - if isinstance(size, int): # expand - size = (size, size) - p = next(self.model.parameters()) if self.pt else torch.empty(1, device=self.model.device) # param - autocast = self.amp and (p.device.type != 'cpu') # Automatic Mixed Precision (AMP) inference - if isinstance(ims, torch.Tensor): # torch - with amp.autocast(autocast): - return self.model(ims.to(p.device).type_as(p), augment=augment) # inference - - # Pre-process - n, ims = (len(ims), list(ims)) if isinstance(ims, (list, tuple)) else (1, [ims]) # number, list of images - shape0, shape1, files = [], [], [] # image and inference shapes, filenames - for i, im in enumerate(ims): - f = f'image{i}' # filename - if isinstance(im, (str, Path)): # filename or uri - im, f = Image.open(requests.get(im, stream=True).raw if str(im).startswith('http') else im), im - im = np.asarray(exif_transpose(im)) - elif isinstance(im, Image.Image): # PIL Image - im, f = np.asarray(exif_transpose(im)), getattr(im, 'filename', f) or f - files.append(Path(f).with_suffix('.jpg').name) - if im.shape[0] < 5: # image in CHW - im = im.transpose((1, 2, 0)) # reverse dataloader .transpose(2, 0, 1) - im = im[..., :3] if im.ndim == 3 else cv2.cvtColor(im, cv2.COLOR_GRAY2BGR) # enforce 3ch input - s = im.shape[:2] # HWC - shape0.append(s) # image shape - g = max(size) / max(s) # gain - shape1.append([int(y * g) for y in s]) - ims[i] = im if im.data.contiguous else np.ascontiguousarray(im) # update - shape1 = [make_divisible(x, self.stride) for x in np.array(shape1).max(0)] # inf shape - x = [letterbox(im, shape1, auto=False)[0] for im in ims] # pad - x = np.ascontiguousarray(np.array(x).transpose((0, 3, 1, 2))) # stack and BHWC to BCHW - x = torch.from_numpy(x).to(p.device).type_as(p) / 255 # uint8 to fp16/32 - - with amp.autocast(autocast): - # Inference - with dt[1]: - y = self.model(x, augment=augment) # forward - - # Post-process - with dt[2]: - y = 
non_max_suppression(y if self.dmb else y[0], - self.conf, - self.iou, - self.classes, - self.agnostic, - self.multi_label, - max_det=self.max_det) # NMS - for i in range(n): - scale_boxes(shape1, y[i][:, :4], shape0[i]) - - return Detections(ims, y, files, dt, self.names, x.shape) - - -class Detections: - # YOLOv5 detections class for inference results - def __init__(self, ims, pred, files, times=(0, 0, 0), names=None, shape=None): - super().__init__() - d = pred[0].device # device - gn = [torch.tensor([*(im.shape[i] for i in [1, 0, 1, 0]), 1, 1], device=d) for im in ims] # normalizations - self.ims = ims # list of images as numpy arrays - self.pred = pred # list of tensors pred[0] = (xyxy, conf, cls) - self.names = names # class names - self.files = files # image filenames - self.times = times # profiling times - self.xyxy = pred # xyxy pixels - self.xywh = [xyxy2xywh(x) for x in pred] # xywh pixels - self.xyxyn = [x / g for x, g in zip(self.xyxy, gn)] # xyxy normalized - self.xywhn = [x / g for x, g in zip(self.xywh, gn)] # xywh normalized - self.n = len(self.pred) # number of images (batch size) - self.t = tuple(x.t / self.n * 1E3 for x in times) # timestamps (ms) - self.s = tuple(shape) # inference BCHW shape - - def _run(self, pprint=False, show=False, save=False, crop=False, render=False, labels=True, save_dir=Path('')): - s, crops = '', [] - for i, (im, pred) in enumerate(zip(self.ims, self.pred)): - s += f'\nimage {i + 1}/{len(self.pred)}: {im.shape[0]}x{im.shape[1]} ' # string - if pred.shape[0]: - for c in pred[:, -1].unique(): - n = (pred[:, -1] == c).sum() # detections per class - s += f"{n} {self.names[int(c)]}{'s' * (n > 1)}, " # add to string - s = s.rstrip(', ') - if show or save or render or crop: - annotator = Annotator(im, example=str(self.names)) - for *box, conf, cls in reversed(pred): # xyxy, confidence, class - label = f'{self.names[int(cls)]} {conf:.2f}' - if crop: - file = save_dir / 'crops' / self.names[int(cls)] / self.files[i] if save else None - crops.append({ - 'box': box, - 'conf': conf, - 'cls': cls, - 'label': label, - 'im': save_one_box(box, im, file=file, save=save)}) - else: # all others - annotator.box_label(box, label if labels else '', color=colors(cls)) - im = annotator.im - else: - s += '(no detections)' - - im = Image.fromarray(im.astype(np.uint8)) if isinstance(im, np.ndarray) else im # from np - if show: - display(im) if is_notebook() else im.show(self.files[i]) - if save: - f = self.files[i] - im.save(save_dir / f) # save - if i == self.n - 1: - LOGGER.info(f"Saved {self.n} image{'s' * (self.n > 1)} to {colorstr('bold', save_dir)}") - if render: - self.ims[i] = np.asarray(im) - if pprint: - s = s.lstrip('\n') - return f'{s}\nSpeed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {self.s}' % self.t - if crop: - if save: - LOGGER.info(f'Saved results to {save_dir}\n') - return crops - - @TryExcept('Showing images is not supported in this environment') - def show(self, labels=True): - self._run(show=True, labels=labels) # show results - - def save(self, labels=True, save_dir='runs/detect/exp', exist_ok=False): - save_dir = increment_path(save_dir, exist_ok, mkdir=True) # increment save_dir - self._run(save=True, labels=labels, save_dir=save_dir) # save results - - def crop(self, save=True, save_dir='runs/detect/exp', exist_ok=False): - save_dir = increment_path(save_dir, exist_ok, mkdir=True) if save else None - return self._run(crop=True, save=save, save_dir=save_dir) # crop results - - def render(self, labels=True): - 
self._run(render=True, labels=labels) # render results - return self.ims - - def pandas(self): - # return detections as pandas DataFrames, i.e. print(results.pandas().xyxy[0]) - new = copy(self) # return copy - ca = 'xmin', 'ymin', 'xmax', 'ymax', 'confidence', 'class', 'name' # xyxy columns - cb = 'xcenter', 'ycenter', 'width', 'height', 'confidence', 'class', 'name' # xywh columns - for k, c in zip(['xyxy', 'xyxyn', 'xywh', 'xywhn'], [ca, ca, cb, cb]): - a = [[x[:5] + [int(x[5]), self.names[int(x[5])]] for x in x.tolist()] for x in getattr(self, k)] # update - setattr(new, k, [pd.DataFrame(x, columns=c) for x in a]) - return new - - def tolist(self): - # return a list of Detections objects, i.e. 'for result in results.tolist():' - r = range(self.n) # iterable - x = [Detections([self.ims[i]], [self.pred[i]], [self.files[i]], self.times, self.names, self.s) for i in r] - # for d in x: - # for k in ['ims', 'pred', 'xyxy', 'xyxyn', 'xywh', 'xywhn']: - # setattr(d, k, getattr(d, k)[0]) # pop out of list - return x - - def print(self): - LOGGER.info(self.__str__()) - - def __len__(self): # override len(results) - return self.n - - def __str__(self): # override print(results) - return self._run(pprint=True) # print results - - def __repr__(self): - return f'YOLOv5 {self.__class__} instance\n' + self.__str__() - - -class Proto(nn.Module): - # YOLOv5 mask Proto module for segmentation models - def __init__(self, c1, c_=256, c2=32): # ch_in, number of protos, number of masks - super().__init__() - self.cv1 = Conv(c1, c_, k=3) - self.upsample = nn.Upsample(scale_factor=2, mode='nearest') - self.cv2 = Conv(c_, c_, k=3) - self.cv3 = Conv(c_, c2) - - def forward(self, x): - return self.cv3(self.cv2(self.upsample(self.cv1(x)))) - - -class Classify(nn.Module): - # YOLOv5 classification head, i.e. 
x(b,c1,20,20) to x(b,c2) - def __init__(self, c1, c2, k=1, s=1, p=None, g=1): # ch_in, ch_out, kernel, stride, padding, groups - super().__init__() - c_ = 1280 # efficientnet_b0 size - self.conv = Conv(c1, c_, k, s, autopad(k, p), g) - self.pool = nn.AdaptiveAvgPool2d(1) # to x(b,c_,1,1) - self.drop = nn.Dropout(p=0.0, inplace=True) - self.linear = nn.Linear(c_, c2) # to x(b,c2) - - def forward(self, x): - if isinstance(x, list): - x = torch.cat(x, 1) - return self.linear(self.drop(self.pool(self.conv(x)).flatten(1))) diff --git a/spaces/anon9i9/finetuned_diffusion_test/README.md b/spaces/anon9i9/finetuned_diffusion_test/README.md deleted file mode 100644 index bca1a00cd251d1c13fc3fe72baad06e256245d3e..0000000000000000000000000000000000000000 --- a/spaces/anon9i9/finetuned_diffusion_test/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Finetuned Diffusion -emoji: 🪄🖼️ -colorFrom: red -colorTo: pink -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: true -license: mit -duplicated_from: anzorq/finetuned_diffusion ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/arkaprav0/gpt-transcript-plugin/README.md b/spaces/arkaprav0/gpt-transcript-plugin/README.md deleted file mode 100644 index d1bc9529ee68cb209c1efc72baedf5c059e8381e..0000000000000000000000000000000000000000 --- a/spaces/arkaprav0/gpt-transcript-plugin/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Gpt Llm -emoji: 🏆 -colorFrom: gray -colorTo: blue -sdk: gradio -sdk_version: 3.44.3 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/feed_forward/decoder.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/feed_forward/decoder.py deleted file mode 100644 index 0376e2e3926e65254c3a81d085d48c97df033958..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/feed_forward/decoder.py +++ /dev/null @@ -1,228 +0,0 @@ -import torch -from torch import nn - -from TTS.tts.layers.generic.res_conv_bn import Conv1dBN, Conv1dBNBlock, ResidualConv1dBNBlock -from TTS.tts.layers.generic.transformer import FFTransformerBlock -from TTS.tts.layers.generic.wavenet import WNBlocks -from TTS.tts.layers.glow_tts.transformer import RelativePositionTransformer - - -class WaveNetDecoder(nn.Module): - """WaveNet based decoder with a prenet and a postnet. - - prenet: conv1d_1x1 - postnet: 3 x [conv1d_1x1 -> relu] -> conv1d_1x1 - - TODO: Integrate speaker conditioning vector. - - Note: - default wavenet parameters; - params = { - "num_blocks": 12, - "hidden_channels":192, - "kernel_size": 5, - "dilation_rate": 1, - "num_layers": 4, - "dropout_p": 0.05 - } - - Args: - in_channels (int): number of input channels. - out_channels (int): number of output channels. - hidden_channels (int): number of hidden channels for prenet and postnet. - params (dict): dictionary for residual convolutional blocks. 
- """ - - def __init__(self, in_channels, out_channels, hidden_channels, c_in_channels, params): - super().__init__() - # prenet - self.prenet = torch.nn.Conv1d(in_channels, params["hidden_channels"], 1) - # wavenet layers - self.wn = WNBlocks(params["hidden_channels"], c_in_channels=c_in_channels, **params) - # postnet - self.postnet = [ - torch.nn.Conv1d(params["hidden_channels"], hidden_channels, 1), - torch.nn.ReLU(), - torch.nn.Conv1d(hidden_channels, hidden_channels, 1), - torch.nn.ReLU(), - torch.nn.Conv1d(hidden_channels, hidden_channels, 1), - torch.nn.ReLU(), - torch.nn.Conv1d(hidden_channels, out_channels, 1), - ] - self.postnet = nn.Sequential(*self.postnet) - - def forward(self, x, x_mask=None, g=None): - x = self.prenet(x) * x_mask - x = self.wn(x, x_mask, g) - o = self.postnet(x) * x_mask - return o - - -class RelativePositionTransformerDecoder(nn.Module): - """Decoder with Relative Positional Transformer. - - Note: - Default params - params={ - 'hidden_channels_ffn': 128, - 'num_heads': 2, - "kernel_size": 3, - "dropout_p": 0.1, - "num_layers": 8, - "rel_attn_window_size": 4, - "input_length": None - } - - Args: - in_channels (int): number of input channels. - out_channels (int): number of output channels. - hidden_channels (int): number of hidden channels including Transformer layers. - params (dict): dictionary for residual convolutional blocks. - """ - - def __init__(self, in_channels, out_channels, hidden_channels, params): - super().__init__() - self.prenet = Conv1dBN(in_channels, hidden_channels, 1, 1) - self.rel_pos_transformer = RelativePositionTransformer(in_channels, out_channels, hidden_channels, **params) - - def forward(self, x, x_mask=None, g=None): # pylint: disable=unused-argument - o = self.prenet(x) * x_mask - o = self.rel_pos_transformer(o, x_mask) - return o - - -class FFTransformerDecoder(nn.Module): - """Decoder with FeedForwardTransformer. - - Default params - params={ - 'hidden_channels_ffn': 1024, - 'num_heads': 2, - "dropout_p": 0.1, - "num_layers": 6, - } - - Args: - in_channels (int): number of input channels. - out_channels (int): number of output channels. - hidden_channels (int): number of hidden channels including Transformer layers. - params (dict): dictionary for residual convolutional blocks. - """ - - def __init__(self, in_channels, out_channels, params): - super().__init__() - self.transformer_block = FFTransformerBlock(in_channels, **params) - self.postnet = nn.Conv1d(in_channels, out_channels, 1) - - def forward(self, x, x_mask=None, g=None): # pylint: disable=unused-argument - # TODO: handle multi-speaker - x_mask = 1 if x_mask is None else x_mask - o = self.transformer_block(x) * x_mask - o = self.postnet(o) * x_mask - return o - - -class ResidualConv1dBNDecoder(nn.Module): - """Residual Convolutional Decoder as in the original Speedy Speech paper - - TODO: Integrate speaker conditioning vector. - - Note: - Default params - params = { - "kernel_size": 4, - "dilations": 4 * [1, 2, 4, 8] + [1], - "num_conv_blocks": 2, - "num_res_blocks": 17 - } - - Args: - in_channels (int): number of input channels. - out_channels (int): number of output channels. - hidden_channels (int): number of hidden channels including ResidualConv1dBNBlock layers. - params (dict): dictionary for residual convolutional blocks. 
- """ - - def __init__(self, in_channels, out_channels, hidden_channels, params): - super().__init__() - self.res_conv_block = ResidualConv1dBNBlock(in_channels, hidden_channels, hidden_channels, **params) - self.post_conv = nn.Conv1d(hidden_channels, hidden_channels, 1) - self.postnet = nn.Sequential( - Conv1dBNBlock( - hidden_channels, hidden_channels, hidden_channels, params["kernel_size"], 1, num_conv_blocks=2 - ), - nn.Conv1d(hidden_channels, out_channels, 1), - ) - - def forward(self, x, x_mask=None, g=None): # pylint: disable=unused-argument - o = self.res_conv_block(x, x_mask) - o = self.post_conv(o) + x - return self.postnet(o) * x_mask - - -class Decoder(nn.Module): - """Decodes the expanded phoneme encoding into spectrograms - Args: - out_channels (int): number of output channels. - in_hidden_channels (int): input and hidden channels. Model keeps the input channels for the intermediate layers. - decoder_type (str): decoder layer types. 'transformers' or 'residual_conv_bn'. Default 'residual_conv_bn'. - decoder_params (dict): model parameters for specified decoder type. - c_in_channels (int): number of channels for conditional input. - - Shapes: - - input: (B, C, T) - """ - - # pylint: disable=dangerous-default-value - def __init__( - self, - out_channels, - in_hidden_channels, - decoder_type="residual_conv_bn", - decoder_params={ - "kernel_size": 4, - "dilations": 4 * [1, 2, 4, 8] + [1], - "num_conv_blocks": 2, - "num_res_blocks": 17, - }, - c_in_channels=0, - ): - super().__init__() - - if decoder_type.lower() == "relative_position_transformer": - self.decoder = RelativePositionTransformerDecoder( - in_channels=in_hidden_channels, - out_channels=out_channels, - hidden_channels=in_hidden_channels, - params=decoder_params, - ) - elif decoder_type.lower() == "residual_conv_bn": - self.decoder = ResidualConv1dBNDecoder( - in_channels=in_hidden_channels, - out_channels=out_channels, - hidden_channels=in_hidden_channels, - params=decoder_params, - ) - elif decoder_type.lower() == "wavenet": - self.decoder = WaveNetDecoder( - in_channels=in_hidden_channels, - out_channels=out_channels, - hidden_channels=in_hidden_channels, - c_in_channels=c_in_channels, - params=decoder_params, - ) - elif decoder_type.lower() == "fftransformer": - self.decoder = FFTransformerDecoder(in_hidden_channels, out_channels, decoder_params) - else: - raise ValueError(f"[!] 
Unknown decoder type - {decoder_type}") - - def forward(self, x, x_mask, g=None): # pylint: disable=unused-argument - """ - Args: - x: [B, C, T] - x_mask: [B, 1, T] - g: [B, C_g, 1] - """ - # TODO: implement multi-speaker - o = self.decoder(x, x_mask, g) - return o diff --git a/spaces/artificialguybr/video-dubbing/TTS/tests/tts_tests2/test_feed_forward_layers.py b/spaces/artificialguybr/video-dubbing/TTS/tests/tts_tests2/test_feed_forward_layers.py deleted file mode 100644 index 6b26b88f382a1876fd197b632c9bd2b4aca1e06f..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/tests/tts_tests2/test_feed_forward_layers.py +++ /dev/null @@ -1,107 +0,0 @@ -import torch - -from TTS.tts.layers.feed_forward.decoder import Decoder -from TTS.tts.layers.feed_forward.encoder import Encoder -from TTS.tts.utils.helpers import sequence_mask - -device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") - - -def test_encoder(): - input_dummy = torch.rand(8, 14, 37).to(device) - input_lengths = torch.randint(31, 37, (8,)).long().to(device) - input_lengths[-1] = 37 - input_mask = torch.unsqueeze(sequence_mask(input_lengths, input_dummy.size(2)), 1).to(device) - # relative positional transformer encoder - layer = Encoder( - out_channels=11, - in_hidden_channels=14, - encoder_type="relative_position_transformer", - encoder_params={ - "hidden_channels_ffn": 768, - "num_heads": 2, - "kernel_size": 3, - "dropout_p": 0.1, - "num_layers": 6, - "rel_attn_window_size": 4, - "input_length": None, - }, - ).to(device) - output = layer(input_dummy, input_mask) - assert list(output.shape) == [8, 11, 37] - # residual conv bn encoder - layer = Encoder( - out_channels=11, - in_hidden_channels=14, - encoder_type="residual_conv_bn", - encoder_params={"kernel_size": 4, "dilations": 4 * [1, 2, 4] + [1], "num_conv_blocks": 2, "num_res_blocks": 13}, - ).to(device) - output = layer(input_dummy, input_mask) - assert list(output.shape) == [8, 11, 37] - # FFTransformer encoder - layer = Encoder( - out_channels=14, - in_hidden_channels=14, - encoder_type="fftransformer", - encoder_params={"hidden_channels_ffn": 31, "num_heads": 2, "num_layers": 2, "dropout_p": 0.1}, - ).to(device) - output = layer(input_dummy, input_mask) - assert list(output.shape) == [8, 14, 37] - - -def test_decoder(): - input_dummy = torch.rand(8, 128, 37).to(device) - input_lengths = torch.randint(31, 37, (8,)).long().to(device) - input_lengths[-1] = 37 - - input_mask = torch.unsqueeze(sequence_mask(input_lengths, input_dummy.size(2)), 1).to(device) - # residual bn conv decoder - layer = Decoder(out_channels=11, in_hidden_channels=128).to(device) - output = layer(input_dummy, input_mask) - assert list(output.shape) == [8, 11, 37] - # transformer decoder - layer = Decoder( - out_channels=11, - in_hidden_channels=128, - decoder_type="relative_position_transformer", - decoder_params={ - "hidden_channels_ffn": 128, - "num_heads": 2, - "kernel_size": 3, - "dropout_p": 0.1, - "num_layers": 8, - "rel_attn_window_size": 4, - "input_length": None, - }, - ).to(device) - output = layer(input_dummy, input_mask) - assert list(output.shape) == [8, 11, 37] - # wavenet decoder - layer = Decoder( - out_channels=11, - in_hidden_channels=128, - decoder_type="wavenet", - decoder_params={ - "num_blocks": 12, - "hidden_channels": 192, - "kernel_size": 5, - "dilation_rate": 1, - "num_layers": 4, - "dropout_p": 0.05, - }, - ).to(device) - output = layer(input_dummy, input_mask) - # FFTransformer decoder - layer = Decoder( - out_channels=11, - 
in_hidden_channels=128, - decoder_type="fftransformer", - decoder_params={ - "hidden_channels_ffn": 31, - "num_heads": 2, - "dropout_p": 0.1, - "num_layers": 2, - }, - ).to(device) - output = layer(input_dummy, input_mask) - assert list(output.shape) == [8, 11, 37] diff --git a/spaces/artificialguybr/video-dubbing/Wav2Lip/preprocess.py b/spaces/artificialguybr/video-dubbing/Wav2Lip/preprocess.py deleted file mode 100644 index 5322012ac60e91fefa47338d0e253c3f912ab7f2..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/Wav2Lip/preprocess.py +++ /dev/null @@ -1,113 +0,0 @@ -import sys - -if sys.version_info[0] < 3 and sys.version_info[1] < 2: - raise Exception("Must be using >= Python 3.2") - -from os import listdir, path - -if not path.isfile('face_detection/detection/sfd/s3fd.pth'): - raise FileNotFoundError('Save the s3fd model to face_detection/detection/sfd/s3fd.pth \ - before running this script!') - -import multiprocessing as mp -from concurrent.futures import ThreadPoolExecutor, as_completed -import numpy as np -import argparse, os, cv2, traceback, subprocess -from tqdm import tqdm -from glob import glob -import audio -from hparams import hparams as hp - -import face_detection - -parser = argparse.ArgumentParser() - -parser.add_argument('--ngpu', help='Number of GPUs across which to run in parallel', default=1, type=int) -parser.add_argument('--batch_size', help='Single GPU Face detection batch size', default=32, type=int) -parser.add_argument("--data_root", help="Root folder of the LRS2 dataset", required=True) -parser.add_argument("--preprocessed_root", help="Root folder of the preprocessed dataset", required=True) - -args = parser.parse_args() - -fa = [face_detection.FaceAlignment(face_detection.LandmarksType._2D, flip_input=False, - device='cuda:{}'.format(id)) for id in range(args.ngpu)] - -template = 'ffmpeg -loglevel panic -y -i {} -strict -2 {}' -# template2 = 'ffmpeg -hide_banner -loglevel panic -threads 1 -y -i {} -async 1 -ac 1 -vn -acodec pcm_s16le -ar 16000 {}' - -def process_video_file(vfile, args, gpu_id): - video_stream = cv2.VideoCapture(vfile) - - frames = [] - while 1: - still_reading, frame = video_stream.read() - if not still_reading: - video_stream.release() - break - frames.append(frame) - - vidname = os.path.basename(vfile).split('.')[0] - dirname = vfile.split('/')[-2] - - fulldir = path.join(args.preprocessed_root, dirname, vidname) - os.makedirs(fulldir, exist_ok=True) - - batches = [frames[i:i + args.batch_size] for i in range(0, len(frames), args.batch_size)] - - i = -1 - for fb in batches: - preds = fa[gpu_id].get_detections_for_batch(np.asarray(fb)) - - for j, f in enumerate(preds): - i += 1 - if f is None: - continue - - x1, y1, x2, y2 = f - cv2.imwrite(path.join(fulldir, '{}.jpg'.format(i)), fb[j][y1:y2, x1:x2]) - -def process_audio_file(vfile, args): - vidname = os.path.basename(vfile).split('.')[0] - dirname = vfile.split('/')[-2] - - fulldir = path.join(args.preprocessed_root, dirname, vidname) - os.makedirs(fulldir, exist_ok=True) - - wavpath = path.join(fulldir, 'audio.wav') - - command = template.format(vfile, wavpath) - subprocess.call(command, shell=True) - - -def mp_handler(job): - vfile, args, gpu_id = job - try: - process_video_file(vfile, args, gpu_id) - except KeyboardInterrupt: - exit(0) - except: - traceback.print_exc() - -def main(args): - print('Started processing for {} with {} GPUs'.format(args.data_root, args.ngpu)) - - filelist = glob(path.join(args.data_root, '*/*.mp4')) - - jobs = [(vfile, args, 
i%args.ngpu) for i, vfile in enumerate(filelist)] - p = ThreadPoolExecutor(args.ngpu) - futures = [p.submit(mp_handler, j) for j in jobs] - _ = [r.result() for r in tqdm(as_completed(futures), total=len(futures))] - - print('Dumping audios...') - - for vfile in tqdm(filelist): - try: - process_audio_file(vfile, args) - except KeyboardInterrupt: - exit(0) - except: - traceback.print_exc() - continue - -if __name__ == '__main__': - main(args) \ No newline at end of file diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Hash/test_SHA384.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Hash/test_SHA384.py deleted file mode 100644 index c682eb439145314e62ee814b8caf1d5e6d76bc8f..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Hash/test_SHA384.py +++ /dev/null @@ -1,61 +0,0 @@ -# -*- coding: utf-8 -*- -# -# SelfTest/Hash/test_SHA.py: Self-test for the SHA-384 hash function -# -# Written in 2008 by Dwayne C. Litzenberger -# -# =================================================================== -# The contents of this file are dedicated to the public domain. To -# the extent that dedication to the public domain is not available, -# everyone is granted a worldwide, perpetual, royalty-free, -# non-exclusive license to exercise all rights associated with the -# contents of this file for any purpose whatsoever. -# No rights are reserved. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS -# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN -# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN -# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. -# =================================================================== - -"""Self-test suite for Crypto.Hash.SHA384""" - -# Test vectors from various sources -# This is a list of (expected_result, input[, description]) tuples. 
-test_data = [ - - # RFC 4634: Section Page 8.4, "Test 1" - ('cb00753f45a35e8bb5a03d699ac65007272c32ab0eded1631a8b605a43ff5bed8086072ba1e7cc2358baeca134c825a7', 'abc'), - - # RFC 4634: Section Page 8.4, "Test 2.2" - ('09330c33f71147e83d192fc782cd1b4753111b173b3b05d22fa08086e3b0f712fcc7c71a557e2db966c3e9fa91746039', 'abcdefghbcdefghicdefghijdefghijkefghijklfghijklmghijklmnhijklmnoijklmnopjklmnopqklmnopqrlmnopqrsmnopqrstnopqrstu'), - - # RFC 4634: Section Page 8.4, "Test 3" - ('9d0e1809716474cb086e834e310a4a1ced149e9c00f248527972cec5704c2a5b07b8b3dc38ecc4ebae97ddd87f3d8985', 'a' * 10**6, "'a' * 10**6"), - - # Taken from http://de.wikipedia.org/wiki/Secure_Hash_Algorithm - ('38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b', ''), - - # Example from http://de.wikipedia.org/wiki/Secure_Hash_Algorithm - ('71e8383a4cea32d6fd6877495db2ee353542f46fa44bc23100bca48f3366b84e809f0708e81041f427c6d5219a286677', - 'Franz jagt im komplett verwahrlosten Taxi quer durch Bayern'), - -] - -def get_tests(config={}): - from Crypto.Hash import SHA384 - from .common import make_hash_tests - return make_hash_tests(SHA384, "SHA384", test_data, - digest_size=48, - oid='2.16.840.1.101.3.4.2.2') - -if __name__ == '__main__': - import unittest - suite = lambda: unittest.TestSuite(get_tests()) - unittest.main(defaultTest='suite') - -# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/binarizer.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/binarizer.py deleted file mode 100644 index 6f03d7a2cbb16db6aa218713211c1323adbc7d45..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/binarizer.py +++ /dev/null @@ -1,381 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import logging -import os -import typing as tp -from abc import ABC, abstractmethod -from collections import Counter -from dataclasses import dataclass -from multiprocessing import Pool - -import torch - -from fairseq.data import Dictionary, indexed_dataset -from fairseq.file_chunker_utils import Chunker, find_offsets -from fairseq.file_io import PathManager -from fairseq.tokenizer import tokenize_line - -logger = logging.getLogger("binarizer") - - -@dataclass -class BinarizeSummary: - """ - Keep track of what's going on in the binarizer - """ - - num_seq: int = 0 - replaced: tp.Optional[Counter] = None - num_tok: int = 0 - - @property - def num_replaced(self) -> int: - if self.replaced is None: - return 0 - return sum(self.replaced.values()) - - @property - def replaced_percent(self) -> float: - return 100 * self.num_replaced / self.num_tok - - def __str__(self) -> str: - base = f"{self.num_seq} sents, {self.num_tok} tokens" - if self.replaced is None: - return base - - return f"{base}, {self.replaced_percent:.3}% replaced" - - def merge(self, other: "BinarizeSummary"): - replaced = None - if self.replaced is not None: - replaced = self.replaced - if other.replaced is not None: - if replaced is None: - replaced = other.replaced - else: - replaced += other.replaced - self.replaced = replaced - self.num_seq += other.num_seq - self.num_tok += other.num_tok - - -class Binarizer(ABC): - """ - a binarizer describes how to take a string and build a tensor out of it - """ - - @abstractmethod - def binarize_line( - self, - line: str, - summary: BinarizeSummary, - ) -> torch.IntTensor: - ... - - -def _worker_prefix(output_prefix: str, worker_id: int): - return f"{output_prefix}.pt{worker_id}" - - -class FileBinarizer: - """ - An file binarizer can take a file, tokenize it, and binarize each line to a tensor - """ - - @classmethod - def multiprocess_dataset( - cls, - input_file: str, - dataset_impl: str, - binarizer: Binarizer, - output_prefix: str, - vocab_size=None, - num_workers=1, - ) -> BinarizeSummary: - final_summary = BinarizeSummary() - - offsets = find_offsets(input_file, num_workers) - # find_offsets returns a list of position [pos1, pos2, pos3, pos4] but we would want pairs: - # [(pos1, pos2), (pos2, pos3), (pos3, pos4)] to process the chunks with start/end info - # we zip the list with itself shifted by one to get all the pairs. 
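# Illustration (hypothetical byte offsets, not from the source): offsets = [0, 120, 260, 400]
# zip(offsets, offsets[1:]) -> (0, 120), (120, 260), (260, 400)
# The first pair is binarized in the main process below; the remaining pairs are dispatched to the worker pool.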
- (first_chunk, *more_chunks) = zip(offsets, offsets[1:]) - pool = None - if num_workers > 1: - pool = Pool(processes=num_workers - 1) - worker_results = [ - pool.apply_async( - cls._binarize_chunk_and_finalize, - args=( - binarizer, - input_file, - start_offset, - end_offset, - _worker_prefix( - output_prefix, - worker_id, - ), - dataset_impl, - ), - kwds={ - "vocab_size": vocab_size, - } - if vocab_size is not None - else {}, - ) - for worker_id, (start_offset, end_offset) in enumerate( - more_chunks, start=1 - ) - ] - - pool.close() - pool.join() - for r in worker_results: - summ = r.get() - final_summary.merge(summ) - - # do not close the bin file as we need to merge the worker results in - final_ds, summ = cls._binarize_file_chunk( - binarizer, - input_file, - offset_start=first_chunk[0], - offset_end=first_chunk[1], - output_prefix=output_prefix, - dataset_impl=dataset_impl, - vocab_size=vocab_size if vocab_size is not None else None, - ) - final_summary.merge(summ) - - if num_workers > 1: - for worker_id in range(1, num_workers): - # merge the worker outputs - worker_output_prefix = _worker_prefix( - output_prefix, - worker_id, - ) - final_ds.merge_file_(worker_output_prefix) - try: - os.remove(indexed_dataset.data_file_path(worker_output_prefix)) - os.remove(indexed_dataset.index_file_path(worker_output_prefix)) - except Exception as e: - logger.error( - f"couldn't remove {worker_output_prefix}.*", exc_info=e - ) - - # now we can close the file - idx_file = indexed_dataset.index_file_path(output_prefix) - final_ds.finalize(idx_file) - return final_summary - - @staticmethod - def _binarize_file_chunk( - binarizer: Binarizer, - filename: str, - offset_start: int, - offset_end: int, - output_prefix: str, - dataset_impl: str, - vocab_size=None, - ) -> tp.Tuple[tp.Any, BinarizeSummary]: # (dataset builder, BinarizeSummary) - """ - creates a dataset builder and append binarized items to it. This function does not - finalize the builder, this is useful if you want to do other things with your bin file - like appending/merging other files - """ - bin_file = indexed_dataset.data_file_path(output_prefix) - ds = indexed_dataset.make_builder( - bin_file, - impl=dataset_impl, - vocab_size=vocab_size, - ) - summary = BinarizeSummary() - - with Chunker( - PathManager.get_local_path(filename), offset_start, offset_end - ) as line_iterator: - for line in line_iterator: - ds.add_item(binarizer.binarize_line(line, summary)) - - return ds, summary - - @classmethod - def _binarize_chunk_and_finalize( - cls, - binarizer: Binarizer, - filename: str, - offset_start: int, - offset_end: int, - output_prefix: str, - dataset_impl: str, - vocab_size=None, - ): - """ - same as above, but also finalizes the builder - """ - ds, summ = cls._binarize_file_chunk( - binarizer, - filename, - offset_start, - offset_end, - output_prefix, - dataset_impl, - vocab_size=vocab_size, - ) - - idx_file = indexed_dataset.index_file_path(output_prefix) - ds.finalize(idx_file) - - return summ - - -class VocabularyDatasetBinarizer(Binarizer): - """ - Takes a Dictionary/Vocabulary, assign ids to each - token using the dictionary encode_line function. 
- """ - - def __init__( - self, - dict: Dictionary, - tokenize: tp.Callable[[str], tp.List[str]] = tokenize_line, - append_eos: bool = True, - reverse_order: bool = False, - already_numberized: bool = False, - ) -> None: - self.dict = dict - self.tokenize = tokenize - self.append_eos = append_eos - self.reverse_order = reverse_order - self.already_numberized = already_numberized - super().__init__() - - def binarize_line( - self, - line: str, - summary: BinarizeSummary, - ): - if summary.replaced is None: - summary.replaced = Counter() - - def replaced_consumer(word, idx): - if idx == self.dict.unk_index and word != self.dict.unk_word: - summary.replaced.update([word]) - - if self.already_numberized: - id_strings = line.strip().split() - id_list = [int(id_string) for id_string in id_strings] - if self.reverse_order: - id_list.reverse() - if self.append_eos: - id_list.append(self.dict.eos()) - ids = torch.IntTensor(id_list) - else: - ids = self.dict.encode_line( - line=line, - line_tokenizer=self.tokenize, - add_if_not_exist=False, - consumer=replaced_consumer, - append_eos=self.append_eos, - reverse_order=self.reverse_order, - ) - - summary.num_seq += 1 - summary.num_tok += len(ids) - return ids - - -class AlignmentDatasetBinarizer(Binarizer): - """ - binarize by parsing a set of alignments and packing - them in a tensor (see utils.parse_alignment) - """ - - def __init__( - self, - alignment_parser: tp.Callable[[str], torch.IntTensor], - ) -> None: - super().__init__() - self.alignment_parser = alignment_parser - - def binarize_line( - self, - line: str, - summary: BinarizeSummary, - ): - ids = self.alignment_parser(line) - summary.num_seq += 1 - summary.num_tok += len(ids) - return ids - - -class LegacyBinarizer: - @classmethod - def binarize( - cls, - filename: str, - dico: Dictionary, - consumer: tp.Callable[[torch.IntTensor], None], - tokenize: tp.Callable[[str], tp.List[str]] = tokenize_line, - append_eos: bool = True, - reverse_order: bool = False, - offset: int = 0, - end: int = -1, - already_numberized: bool = False, - ) -> tp.Dict[str, int]: - binarizer = VocabularyDatasetBinarizer( - dict=dico, - tokenize=tokenize, - append_eos=append_eos, - reverse_order=reverse_order, - already_numberized=already_numberized, - ) - return cls._consume_file( - filename, - binarizer, - consumer, - offset_start=offset, - offset_end=end, - ) - - @classmethod - def binarize_alignments( - cls, - filename: str, - alignment_parser: tp.Callable[[str], torch.IntTensor], - consumer: tp.Callable[[torch.IntTensor], None], - offset: int = 0, - end: int = -1, - ) -> tp.Dict[str, int]: - binarizer = AlignmentDatasetBinarizer(alignment_parser) - return cls._consume_file( - filename, - binarizer, - consumer, - offset_start=offset, - offset_end=end, - ) - - @staticmethod - def _consume_file( - filename: str, - binarizer: Binarizer, - consumer: tp.Callable[[torch.IntTensor], None], - offset_start: int, - offset_end: int, - ) -> tp.Dict[str, int]: - summary = BinarizeSummary() - - with Chunker( - PathManager.get_local_path(filename), offset_start, offset_end - ) as line_iterator: - for line in line_iterator: - consumer(binarizer.binarize_line(line, summary)) - - return { - "nseq": summary.num_seq, - "nunk": summary.num_replaced, - "ntok": summary.num_tok, - "replaced": summary.replaced, - } diff --git a/spaces/astoken/weather_checker/app.py b/spaces/astoken/weather_checker/app.py deleted file mode 100644 index a93645310ad90e9360d393f61d89bdac9cc820af..0000000000000000000000000000000000000000 --- 
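For context, a minimal sketch of how the FileBinarizer / VocabularyDatasetBinarizer classes above might be driven. The corpus path, dictionary file, output prefix and worker count are hypothetical, and "mmap" is assumed as the dataset_impl; this is an illustrative sketch, not part of the deleted file.

from fairseq.binarizer import FileBinarizer, VocabularyDatasetBinarizer
from fairseq.data import Dictionary

dico = Dictionary.load("dict.txt")                       # hypothetical fairseq dictionary file
binarizer = VocabularyDatasetBinarizer(dico, append_eos=True)

summary = FileBinarizer.multiprocess_dataset(
    input_file="corpus.txt",                             # one sentence per line (hypothetical)
    dataset_impl="mmap",                                 # assumed indexed-dataset backend
    binarizer=binarizer,
    output_prefix="corpus",                              # writes corpus.bin / corpus.idx
    vocab_size=len(dico),
    num_workers=4,
)
print(summary)                                           # e.g. "N sents, M tokens, x% replaced"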
a/spaces/astoken/weather_checker/app.py +++ /dev/null @@ -1,256 +0,0 @@ -import requests -import pandas as pd -import datetime -import time -import gradio as gr -import os - -########### -# other API's of interest: https://medium.com/@imdipto/best-free-alternatives-to-the-wunderground-weather-api-21acb22450e6 -########## -OPENWEATHER_API_KEY = os.environ.get('OPENWEATHER_API_KEY') -WEATHERAPI_KEY = os.environ.get('WEATHERAPI_KEY') - - -def openweather_to_result(lat, lon, gmt_time): - """ - API docs: https://openweathermap.org/api/one-call-api#current - - Parameters - ------------ - lat [float]: decimal valued latitude - lon [float]: decimal valued longitude - gmt_time [datetime object]: time of desired forecast, in gmt and as python datetime object - - Returns - -------- - cloud_pct Tuple(List, List): list of cloud percent and corresponding time for times within 1.5 hours of input GMT time - """ - exclude_parts = 'current,minutely,daily,alerts' - request_url = f'https://api.openweathermap.org/data/2.5/onecall?lat={lat}&lon={lon}&exclude={exclude_parts}&appid={OPENWEATHER_API_KEY}' - - response = requests.get(request_url) - - data = response.json() - - cloud_pct = [] - forecast_times = [] - - # timeframe around input time to check cloud % for - timeframe = datetime.timedelta(hours=1, minutes=30) - for hour in data['hourly']: - # dt property is unix utc time of forecasted data - convert this to python datetime object - forecast_time = datetime.datetime.fromtimestamp( - hour['dt'], tz=datetime.timezone.utc) - if abs(forecast_time - gmt_time) <= timeframe: - # cloud pct is stored in each hour at top level - cloud_pct.append(hour['clouds']) - forecast_times.append(forecast_time) - - return cloud_pct, forecast_times - - -def weatherapi_to_result(lat, lon, gmt_time): - """ - API docs: https://www.weatherapi.com/docs/ - TODO: implement wrapper instead https://github.com/weatherapicom/weatherapi-Python - - Parameters - ------------ - lat [float]: decimal valued latitude - lon [float]: decimal values longitude - gmt_time [datetime object]: time of desired forecast, in gmt and as python datetime object - - Returns - -------- - cloud_pct Tuple(List, List): list of cloud percent and corresponding time for times within 1.5 hours of input GMT time - """ - request_url = f'http://api.weatherapi.com/v1/forecast.json?key={WEATHERAPI_KEY}&q={lat},{lon}&days=2&alerts=no' - response = requests.get(request_url) - - data = response.json() - - timezone = data['location']['tz_id'] - - cloud_pct = [] - forecast_times = [] - - # quick error handling to make sure input time python object has "timezone" property attached - try: - gmt_time = gmt_time.astimezone(datetime.timezone.utc) - except: - gmt_time = gmt_time.tz_localize('utc') - - # timeframe around input time to check cloud % for - timeframe = datetime.timedelta(hours=1, minutes=30) - - # this api is first divided into days, then hours - for day in data['forecast']['forecastday']: - for hour in day['hour']: - # time_epoch contains unix epoch time in GMT/UTC - #forecast_time = datetime.datetime.fromtimestamp(hour['time_epoch'], ZoneInfo(timezone)) - forecast_time = datetime.datetime.fromtimestamp( - hour['time_epoch'], datetime.timezone.utc) - if abs(forecast_time - gmt_time) <= timeframe: - cloud_pct.append(hour['cloud']) - forecast_times.append( - forecast_time.astimezone(datetime.timezone.utc)) - - return cloud_pct, forecast_times - - -def met_to_result(lat, lon, gmt_time): - """ - API doc: https://api.met.no/weatherapi/locationforecast/2.0/documentation - How 
to: https://api.met.no/doc/locationforecast/HowTO - - Parameters - ------------ - lat [float]: decimal valued latitude - lon [float]: decimal values longitude - gmt_time [datetime object]: time of desired forecast, in gmt and as python datetime object - - Returns - -------- - cloud_pct Tuple(List, List): list of cloud percent and corresponding time for times within 1.5 hours of input GMT time - """ - - # set user agent https://stackoverflow.com/questions/10606133/sending-user-agent-using-requests-library-in-python - # must be unique per API Terms of Service https://api.met.no/doc/TermsOfService - headers = { - 'User-Agent': 'NASAEarthScienceRemoteSensingUnit alex.h.stoken@nasa.gov'} - - request_url = f'https://api.met.no/weatherapi/locationforecast/2.0/compact?lat={lat}&lon={lon}' - - response = requests.get(request_url, headers=headers) - - data = response.json() - - cloud_pct = [] - forecast_times = [] - - # timeframe around input time to check cloud % for - timeframe = datetime.timedelta(hours=1, minutes=30) - - # walk through json return - for hour in data['properties']['timeseries']: - # time is utc formatted time https://api.met.no/doc/locationforecast/FAQ - forecast_time = datetime.datetime.strptime( - hour['time'], '%Y-%m-%dT%H:%M:%SZ').replace(tzinfo=datetime.timezone.utc) - # check if time of forecast is withing "timeframe" of desired time - if abs(forecast_time - gmt_time) <= timeframe: - # grab cloud pct from location within the nested json, add to list - cloud_pct.append(hour['data']['instant'] - ['details']['cloud_area_fraction']) - # add time of forecast to list. Should be an "on the hour" time - forecast_times.append(forecast_time) - - return cloud_pct, forecast_times - -################ -# generate text -################ - - -def file_to_cloud_listing(input_file, services): - """ - - Args: - input_file (Union[str, gradio FileType]): input csv file with LAT, LON, SITE, GMT cols - services (List): list of weather api servies to check - - Returns: - str: formatted string with weather predictions for locations - """ - # this works if the input is from gradio. Then the file has an name property - try: - sites = pd.read_csv(input_file.name, parse_dates=['GMT']) - using_gradio = True - except: - # this is for input from a script or command line - sites = pd.read_csv(input_file, parse_dates=['GMT']) - using_gradio = False - start = time.perf_counter() - date_format = "%H:%M" - text = '' - # each row is a site. Get weather data and then print it for each service for each site. - for row_idx, row in sites.iterrows(): - #time_of_interest = datetime.datetime.strptime(row.GMT, '%m/%d/%y %H:%M') - text += check_row(row, services, date_format) - text += f'{"="*60}\n' - - return text - - -def check_row(row, services, date_format="%H:%M"): - """Check a row of data (a pd.Series with LAT, LON, GMT, SITE cols) - - Args: - row (pd.Series): pd.Series with LAT, LON, GMT, SITE cols) - services (List): List of weather services (['OpenWeather', 'MET (Norwegian)', 'WeatherAPI'] or subset) - date_format (str, optional): Format for printing time of site pass over. Defaults to "%H:%M". 
- - Returns: - str: formatted str of text for weather vals - """ - text = "" - - text += f'{"Location":13}:\t\t{row.SITE} @ {row["GMT"].strftime(date_format)} GMT\n' - - if not isinstance(row.GMT, datetime.datetime): - GMT = row["GMT"].to_pydatetime() - else: - GMT = row["GMT"] - GMT = GMT.replace(tzinfo=datetime.timezone.utc) - if 'OpenWeather' in services: - try: - cldp, times = openweather_to_result(row.LAT, row.LON, GMT) - text += format_cldp_and_time("OpenWeather", cldp=cldp, times=times) - except Exception as e: - text += f'OpenWeather:\t\tError {e} in API processing\n' - if 'MET (Norwegian)' in services: - try: - cldp, times = met_to_result(row.LAT, row.LON, GMT) - text += format_cldp_and_time("Norwegian", cldp=cldp) - except Exception as e: - text += f'Norwegian:\t\tError {e} in API processing\n' - if 'WeatherAPI' in services: - try: - cldp, times = weatherapi_to_result(row.LAT, row.LON, GMT) - text += format_cldp_and_time("WeatherAPI", cldp=cldp) - except Exception as e: - text += f'WeatherAPI:\t\tError {e} in API processing\n' - - return text - - -def format_cldp_and_time(api_name, cldp, times=None): - """Formats output text for lists of cloud percents and forecast times - - Args: - api_name ([type]): Name of weather source. - cldp (List): List of floating point cloud percentage values. - times (List, optional): List of forecast times, as datetime objects. Defaults to None. - - Returns: - str: formatted text for printing - """ - text = '' - date_format = "%H:%M" - if times is not None: - text += f'{"Forecast Time:":13}\t\t' + ' '.join(time.strftime(date_format) - for time in times) + "\n" - - text += f'{api_name:13}:\t\t{" ".join(f"{p:<6.0f}" for p in cldp)}\n' - return text - - -inputs = [gr.inputs.File(label='Site File with Lat/Lon and GMT Time'), gr.inputs.CheckboxGroup(label='Weather Services', - choices=['OpenWeather', 'MET (Norwegian)', 'WeatherAPI'], default=['OpenWeather', 'MET (Norwegian)'])] -outputs = gr.outputs.Textbox(label ='Cloud % for hour before, hour of, hour after') -css = """* {font-family: "Lucida Console", "Courier New", monospace !important;/* <-- fonts */ - }""" - -gr.Interface(fn=file_to_cloud_listing, inputs=inputs, css=css, outputs=outputs, - allow_screenshot=False).launch() - diff --git a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Samuel Bradley-Kelly.html b/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Samuel Bradley-Kelly.html deleted file mode 100644 index 4c4776a496b125b44ba07f5eac29b3bb2202c09c..0000000000000000000000000000000000000000 --- a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Samuel Bradley-Kelly.html +++ /dev/null @@ -1,134 +0,0 @@ - - - - Samuel Bradley-Kelly - - - - -
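For reference, a minimal driver sketch for the weather_checker app above, assuming file_to_cloud_listing from that app is in scope and that the OPENWEATHER_API_KEY / WEATHERAPI_KEY environment variables are set; the site row below is made up.

import pandas as pd

pd.DataFrame({
    "SITE": ["Houston"],                 # free-text site name
    "LAT": [29.76],                      # decimal latitude
    "LON": [-95.37],                     # decimal longitude
    "GMT": ["2023-06-01 14:00"],         # pass time; read with parse_dates=['GMT']
}).to_csv("sites.csv", index=False)

report = file_to_cloud_listing("sites.csv", services=["OpenWeather", "MET (Norwegian)"])
print(report)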
    -

    Samuel Bradley-Kelly

    - -
    -

I have been a professional mentor to both undergraduate & graduate students for the past 7 years. The students I have mentored have entered careers in software engineering, data science / machine learning, and product (tech focus). Based on my previous success with mentees landing internships & full-time jobs, I believe partnering with a mentor like myself would greatly benefit a serious mentee candidate who wants to maximize their chances of success :)

    Interview


    How did you hear about SM?
• popped up somewhere on LinkedIn and researched it
    Career
    • DS for 8 years
    • HBO max - recommender systems for the past 3-4 years
• actively interviewing to level up
    Mentorship experience?
• I've been mentoring college students for 8 years at UoW and UChicago
    • part of some grad programs
    • mentor 2 or 3 students every year, help them prep and get a job after college
    • some informal mentorships, but mostly gets paired
    • about 6 months, apply, interview prep, resume review, get some projects off the ground
    What are beginners lacking?
    • getting in the habit of programming / working on something
    • creating a habit of coding every day!! 
    • and getting in the ritual/habit of applying for jobs 
      • come up with some agreement with yourself
      • hold yourself accountable
    • This last person applied through N applications
    And how can you add value as a mentor?
    • always there if they want interview prep (tech, behavioural)
    • resume review
• "I'm happy if you want to share a job position or two with me for an honest assessment"
• assess their qualifications for certain jobs
    • try to refer mentees to jobs at his company
    • likes long term relationships!!
    -
    -

    Questions about SM?
    • Can I reach out to potential mentees?
    • Does the relationship end after the mentorship period?
    • Can mentees come back when they want another job? 
    • What if the relationship dies out?
    • What is SM working on now?
    -
    - -
    - - - \ No newline at end of file diff --git a/spaces/augmentedimaginationhackathon/paperstocode/frontend/src/main.ts b/spaces/augmentedimaginationhackathon/paperstocode/frontend/src/main.ts deleted file mode 100644 index c58dc05cbc6d094a9ed44203c6b69b74e5294452..0000000000000000000000000000000000000000 --- a/spaces/augmentedimaginationhackathon/paperstocode/frontend/src/main.ts +++ /dev/null @@ -1,7 +0,0 @@ -import { platformBrowserDynamic } from '@angular/platform-browser-dynamic'; - -import { AppModule } from './app/app.module'; - - -platformBrowserDynamic().bootstrapModule(AppModule) - .catch(err => console.error(err)); diff --git a/spaces/awacke1/AI.Dashboard.Mermaid.Model.HTML5/index.html b/spaces/awacke1/AI.Dashboard.Mermaid.Model.HTML5/index.html deleted file mode 100644 index 66c7ac0516cb47848e339006985c57cfc0c153c4..0000000000000000000000000000000000000000 --- a/spaces/awacke1/AI.Dashboard.Mermaid.Model.HTML5/index.html +++ /dev/null @@ -1,97 +0,0 @@ - - - - - - My static Space - - - - - - - - - - - - - - - - - - - -
-journey
-    title Create AI
-    section Training
-      Format DataSet Inputs Files, Data Splits: 5: Teacher
-      Model Build w/ SKLearn, TF, Pytorch: 3: Student
-      Determine Model Performance: 1: Teacher, Student
-    section Deploy
-      Web Deploy Local and Cloud: 5: Teacher
-      Architecture Spaces Gradio Streamlit Heroku AWS Azure and GCCP: 5: Teacher
-    section Testing
-      Test Model with Input Datasets: 5: Teacher
-      Examples. Inputs that Work, Inputs That Break Model: 5: Teacher
-    Governance
-      Analyze, Publish Fairness, Equity, Bias for Datasets and Outputs: 5: Teacher
-
    - -
-sequenceDiagram
-    participant Alice
-    participant Bob
-    Alice->>John: Hello John, how are you?
-    loop Healthcheck
-    John->>John: Fight against hypochondria
-    end
-    Note right of John: Rational thoughts prevail...
-    John-->>Alice: Great!
-    John->>Bob: How about you?
-    Bob-->>John: Jolly good!
-
    - -
    -

    Welcome to the Mermaid Modeler Tip Sheet

    -

- You can use Mermaid inside HTML5 by including the script and a div with the class of mermaid. -
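As a rough, self-contained illustration (not code from this Space), a short Python sketch that writes such a page: a div with class mermaid plus the Mermaid script tag, with the CDN URL pinned to mermaid@9, which ships a UMD build.

from pathlib import Path

# Minimal page: one mermaid div and the library loaded from the public CDN.
page = """<!DOCTYPE html>
<html>
  <body>
    <div class="mermaid">
      graph TD; Train --> Deploy; Deploy --> Test;
    </div>
    <script src="https://cdn.jsdelivr.net/npm/mermaid@9/dist/mermaid.min.js"></script>
    <script>mermaid.initialize({ startOnLoad: true });</script>
  </body>
</html>
"""
Path("mermaid_demo.html").write_text(page)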

    -

    - Documentation is located here: - Mermaid documentation. -

    -
    - - - diff --git a/spaces/awacke1/NLPAutoAI/app.py b/spaces/awacke1/NLPAutoAI/app.py deleted file mode 100644 index 817cfcf0a4dedcc813ac1625b020f84cca72d3ff..0000000000000000000000000000000000000000 --- a/spaces/awacke1/NLPAutoAI/app.py +++ /dev/null @@ -1,80 +0,0 @@ -import streamlit as st -import firebase_admin -from firebase_admin import credentials -from firebase_admin import firestore -import datetime -from transformers import pipeline -import gradio as gr - -@st.experimental_singleton -def get_db_firestore(): - cred = credentials.Certificate('test.json') - firebase_admin.initialize_app(cred, {'projectId': u'clinical-nlp-b9117',}) - db = firestore.client() - return db - - -db = get_db_firestore() -asr = pipeline("automatic-speech-recognition", "facebook/wav2vec2-base-960h") - -def transcribe(audio): - text = asr(audio)["text"] - return text - -classifier = pipeline("text-classification") - -def speech_to_text(speech): - text = asr(speech)["text"] - return text - -def text_to_sentiment(text): - sentiment = classifier(text)[0]["label"] - return sentiment - -def upsert(text): - date_time =str(datetime.datetime.today()) - doc_ref = db.collection('Text2SpeechSentimentSave').document(date_time) - doc_ref.set({u'firefield': 'Recognize Speech', u'first': 'https://huggingface.co/spaces/awacke1/Text2SpeechSentimentSave', u'last': text, u'born': date_time,}) - saved = select('Text2SpeechSentimentSave', date_time) - # check it here: https://console.firebase.google.com/u/0/project/clinical-nlp-b9117/firestore/data/~2FStreamlitSpaces - return saved - -def select(collection, document): - doc_ref = db.collection(collection).document(document) - doc = doc_ref.get() - docid = ("The id is: ", doc.id) - contents = ("The contents are: ", doc.to_dict()) - return contents - -def selectall(text): - docs = db.collection('Text2SpeechSentimentSave').stream() - doclist='' - for doc in docs: - #docid=doc.id - #dict=doc.to_dict() - #doclist+=doc.to_dict() - r=(f'{doc.id} => {doc.to_dict()}') - doclist += r - return doclist - -demo = gr.Blocks() - -with demo: - #audio_file = gr.Audio(type="filepath") - audio_file = gr.inputs.Audio(source="microphone", type="filepath") - text = gr.Textbox() - label = gr.Label() - saved = gr.Textbox() - savedAll = gr.Textbox() - - b1 = gr.Button("Recognize Speech") - b2 = gr.Button("Classify Sentiment") - b3 = gr.Button("Save Speech to Text") - b4 = gr.Button("Retrieve All") - - b1.click(speech_to_text, inputs=audio_file, outputs=text) - b2.click(text_to_sentiment, inputs=text, outputs=label) - b3.click(upsert, inputs=text, outputs=saved) - b4.click(selectall, inputs=text, outputs=savedAll) - -demo.launch(share=True) \ No newline at end of file diff --git a/spaces/awacke1/StreamlitPydeckMapVisualViewStateForLatitudeLongitude/app.py b/spaces/awacke1/StreamlitPydeckMapVisualViewStateForLatitudeLongitude/app.py deleted file mode 100644 index 7e7481b0ea4cb98d472184f87215f64bc92eb47d..0000000000000000000000000000000000000000 --- a/spaces/awacke1/StreamlitPydeckMapVisualViewStateForLatitudeLongitude/app.py +++ /dev/null @@ -1,118 +0,0 @@ -import streamlit as st -import pydeck as pdk - -# Define a GeoJSON data source -geojson_data = { - "type": "FeatureCollection", - "features": [ - { - "type": "Feature", - "geometry": { - "type": "Point", - "coordinates": [15.8277, -0.2280] # Republic of Congo latitude and longitude - }, - "properties": { - "name": "Republic of Congo" - } - } - ] -} - -# Define the line geometry -line_geojson_data = { - "type": "FeatureCollection", - "features": [ - { 
- "type": "Feature", - "geometry": { - "type": "LineString", - "coordinates": [ - [8.7811, -0.7193], # Port-Gentil latitude and longitude - [15.8277, -0.2280] # Republic of Congo latitude and longitude - ] - } - } - ] -} - -# Define the polygon geometry -polygon_geojson_data = { - "type": "FeatureCollection", - "features": [ - { - "type": "Feature", - "geometry": { - "type": "Polygon", - "coordinates": [ - [ - [16.0315, -0.3797], # Polygon coordinates - [16.0315, -0.4515], - [15.9199, -0.4515], - [15.9199, -0.3797], - [16.0315, -0.3797] - ] - ] - }, - "properties": { - "name": "Population: 200,000" - } - } - ] -} - - -# Define the PyDeck layer -layer = pdk.Layer( - "GeoJsonLayer", - data=geojson_data, - get_position="geometry.coordinates", - get_radius=100000, - get_fill_color=[255, 0, 0], - pickable=True -) - -# Define the PyDeck layer for the line geometry -line_layer = pdk.Layer( - "GeoJsonLayer", - data=line_geojson_data, - get_source_position="geometry.coordinates", - get_target_position=lambda feature: feature["geometry"]["coordinates"][-1], - get_color=[255, 165, 0], - get_width=30000, - pickable=True -) - -# Define the PyDeck layer for the polygon geometry -polygon_layer = pdk.Layer( - "GeoJsonLayer", - data=polygon_geojson_data, - get_fill_color=[0, 255, 255, 128], - get_line_color=[0, 0, 0], - get_line_width=3, - get_polygon="geometry.coordinates", - get_text="properties.name", - get_text_anchor="middle", - get_text_offset=[0, 20], - get_text_color=[255, 255, 255], - pickable=True -) - -# Define the PyDeck view state -view_state = pdk.ViewState( - latitude=geojson_data['features'][0]['geometry']['coordinates'][1], - longitude=geojson_data['features'][0]['geometry']['coordinates'][0], - zoom=5 -) - -# Set the Mapbox API key -pdk.settings.api_key = "pk.eyJ1IjoiYWFyb253YWNrZXIiLCJhIjoiY2xlOGV2enN3MGV0YzN2bzZjMm96eXhyOSJ9.SqZugs5uIpIBvMM_Hioyvg" - -# Define the PyDeck deck -deck = pdk.Deck( - layers=[layer], - initial_view_state=view_state, - map_style="mapbox://styles/mapbox/light-v9" -) - -# Render the PyDeck deck using Streamlit -st.pydeck_chart(deck) \ No newline at end of file diff --git a/spaces/awacke1/WikipediaProfilerTestforDatasets/index.html b/spaces/awacke1/WikipediaProfilerTestforDatasets/index.html deleted file mode 100644 index f6c76f60a8b275ee570b8be9fce21011b7496117..0000000000000000000000000000000000000000 --- a/spaces/awacke1/WikipediaProfilerTestforDatasets/index.html +++ /dev/null @@ -1,3074 +0,0 @@ -WikipediaProfilerTestforDatasets Report

    Overview

    -

    Download Stata 14 for Mac: A Comprehensive Guide

    -

    If you are looking for a powerful and versatile software for data analysis, statistics, and graphics, you might want to consider downloading Stata 14 for Mac. Stata 14 is one of the most popular and widely used software packages in the fields of economics, sociology, political science, biostatistics, epidemiology, and many others. It offers a range of features and benefits that can help you perform complex data manipulation, estimation, testing, forecasting, simulation, and visualization tasks with ease and accuracy.

    -

In this article, we will provide you with a comprehensive guide on how to download Stata 14 for Mac. We will also explain what Stata 14 is and why you need it, what its main features and benefits are, and how to install and use it on your Mac computer, and we will answer some frequently asked questions about it. By the end of this article, you will have a clear idea of whether Stata 14 is the right software for you and how to get started with it.

    -

    Download Stata 14 For Mac


    Downloadhttps://byltly.com/2uKx8D



    -

    What is Stata 14 and why do you need it?

    -

    Stata 14 is a software package that was released in April 2015 by StataCorp, a company that has been developing and distributing statistical software since 1985. Stata 14 is the latest version of Stata as of June 2021, although there have been several updates and bug fixes since then. The current update is Stata 14.2.

    -

    Stata 14 is a software that can handle both cross-sectional and longitudinal data, as well as panel data and multilevel data. It can also deal with both continuous and discrete variables, as well as categorical and ordinal variables. It can perform various types of analysis, such as linear and nonlinear regression, ANOVA, logistic regression, survival analysis, time series analysis, factor analysis, cluster analysis, structural equation modeling (SEM), item response theory (IRT), Bayesian analysis, power and sample size calculation, Markov-switching models, treatment effects models, multilevel survival models, fractional outcome regression models, and many more.

    -

    Stata 14 also has a user-friendly interface that allows you to interact with the software using either menus or commands. You can also customize your preferences and settings according to your needs. You can also create your own commands or programs using the built-in programming language of Stata. You can also access thousands of user-written commands or programs from the internet or from the official Stata Journal.

    -

    Stata 14 also has a powerful graphics engine that can produce high-quality graphs and charts that can be customized in various ways. You can also export your graphs to different formats such as PDF, PNG, EPS, SVG, etc. You can also integrate your graphs with other applications such as Microsoft Word or PowerPoint.

    -

Stata 14 also has comprehensive documentation that includes manuals, tutorials, examples, FAQs, glossaries, references, etc. You can also get support from the official website of StataCorp or from the online community of Stata users called Statalist. You can also get training courses or webinars from StataCorp or from other authorized providers.

    -

    Stata 14 is a software that can help you with your data analysis needs, whether you are a student, a researcher, a teacher, a consultant, or a professional. It can help you save time and effort, improve your accuracy and reliability, enhance your presentation and communication, and expand your knowledge and skills. It can also help you collaborate with other Stata users around the world and share your insights and discoveries.

    -

    Features and benefits of Stata 14

    -

    Stata 14 has many features and benefits that make it a superior software for data analysis. Here are some of the most notable ones:

    -

    Bayesian analysis

    -

    Stata 14 introduces a new command called bayes that allows you to perform Bayesian analysis using Markov chain Monte Carlo (MCMC) methods. You can specify any likelihood function and any prior distribution for the parameters, and Stata will generate posterior samples and summaries for you. You can also use predefined models such as linear regression, logistic regression, Poisson regression, etc. You can also compare models using Bayes factors or posterior predictive checks. You can also visualize your results using trace plots, density plots, interval plots, etc.

    -

    -

    IRT (item response theory)

    -

    Stata 14 also introduces a new command called irt that allows you to perform item response theory (IRT) analysis using maximum likelihood estimation (MLE) methods. You can fit various IRT models such as Rasch model, one-parameter logistic model (1PL), two-parameter logistic model (2PL), three-parameter logistic model (3PL), graded response model (GRM), partial credit model (PCM), etc. You can also test the assumptions of IRT models such as unidimensionality, local independence, monotonicity, etc. You can also assess the reliability and validity of your instruments using Cronbach's alpha, test information function (TIF), item information function (IIF), etc.

    -

    Unicode

    -

    Stata 14 supports Unicode encoding, which means that you can use any character set or language in your data, commands, output, graphs, etc. You can also import and export data files that use Unicode encoding. You can also use Unicode characters in your variable names, labels, values, etc. This feature makes Stata 14 more accessible and compatible with different cultures and languages.

    -

    Integration with Excel

    -

    Stata 14 has improved its integration with Excel, which means that you can easily import and export data between Stata and Excel. You can also use the new command called import excel to import data from Excel files directly into Stata without saving them as CSV files first. You can also use the new command called export excel to export data from Stata to Excel files with various options such as sheet name, cell range, variable names, labels, formats, etc.
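Stata's own import excel / export excel syntax is not reproduced here; purely as an analogous round-trip in Python (a sketch with hypothetical file and sheet names, requiring pandas with an Excel engine such as openpyxl):

import pandas as pd

# Read one sheet from a hypothetical workbook, keeping the header row as variable names.
df = pd.read_excel("survey.xlsx", sheet_name="wave1")

# ... clean or transform df here ...

# Write the result to a new workbook without the pandas index column.
df.to_excel("survey_clean.xlsx", sheet_name="wave1", index=False)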

    -

    Treatment effects

    -

    Stata 14 has expanded its treatment effects capabilities by adding new commands such as teffects ipwra for inverse probability weighting with regression adjustment (IPWRA), teffects ipw for inverse probability weighting (IPW), teffects psmatch for propensity score matching (PSM), teffects nnmatch for nearest neighbor matching (NNM), teffects overlap for overlap weights (OW), teffects ra for regression adjustment (RA), teffects endogenous for endogenous treatment effects models (ETE), etc. These commands allow you to estimate the causal effects of treatments or interventions on outcomes using various methods that account for selection bias or confounding factors.
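    As a hedged sketch, suppose you want the effect of a hypothetical job-training indicator train on wage, adjusting for age and education in both the outcome and treatment models; the IPWRA and matching estimators might be called like this (all variable names are assumptions):

```
* Outcome model and treatment model in one IPWRA call
teffects ipwra (wage age education) (train age education)

* Propensity-score matching estimate of the same effect
teffects psmatch (wage) (train age education)

* Check covariate overlap after estimation
teffects overlap
```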

    -

    Multilevel survival models

    -

    Stata 14 has added a new command, mestreg, for multilevel survival models with random effects at different levels of a hierarchy. You can specify various types of random effects, such as intercepts, slopes, and frailties, and various survival distributions, such as exponential, Weibull, lognormal, log-logistic, gamma, and Gompertz. You can also test hypotheses and assumptions using likelihood ratio tests, Wald tests, Schoenfeld residuals, etc.
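    For instance, a Weibull survival model with a random intercept for hospitals might be set up as follows; time, died, treatment, and hospital are hypothetical variables, and the data must be stset first.

```
* Hypothetical example: multilevel parametric survival model
stset time, failure(died)
mestreg age i.treatment || hospital:, distribution(weibull)
```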

    -

    SEM (structural equation modeling)

    -

    Stata 14 has improved its SEM capabilities by adding new features such as latent class analysis (LCA), latent transition analysis (LTA), latent profile analysis (LPA), latent growth curve models (LGCM), multilevel SEM, generalized SEM, dynamic SEM, etc. You can also use the new command called sembuilder to create and modify SEM diagrams using a graphical user interface (GUI). You can also use the new command called estat gof to calculate various goodness-of-fit measures such as chi-square, RMSEA, CFI, TLI, SRMR, etc.
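    A minimal sketch of a measurement model with one latent variable, followed by the goodness-of-fit report, might look like this; Ability, q1-q3, and ses are made-up names used only for illustration.

```
* Hypothetical example: one latent factor measured by three items,
* with an observed predictor of the factor
sem (Ability -> q1 q2 q3) (Ability <- ses)

* Goodness-of-fit statistics (chi-square, RMSEA, CFI/TLI, SRMR, ...)
estat gof, stats(all)
```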

    -

    Power and sample size

    -

    Stata 14 has enhanced its power and sample size capabilities by adding new commands such as power twoproportions for two-sample tests of proportions, power logrank for log-rank tests of survival curves, power cox for Cox proportional hazards models, power oneway for one-way ANOVA, power repeated for repeated-measures ANOVA, power cluster for cluster randomized trials, power bootstrap for bootstrap-based power analysis, etc. These commands allow you to calculate the required sample size or the achieved power for various types of statistical tests or models.
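    For example, two of these commands might be called as follows; the effect sizes and sample sizes are purely illustrative assumptions.

```
* Sample size to detect a difference between proportions of 0.30 and 0.40
* with 80% power at the 5% significance level
power twoproportions 0.30 0.40, power(0.8) alpha(0.05)

* Power of a log-rank test for given survival probabilities and total sample size
power logrank 0.5 0.6, n(200)
```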

    -

    Markov-switching models

    -

    Stata 14 has introduced a new command called mswitch that allows you to estimate Markov-switching models for time series data. These models allow you to capture regime changes or structural breaks in the data by allowing the parameters to switch between different states or regimes according to a Markov process. You can specify various types of Markov-switching models such as Hamilton's model, Kim's model, Goldfeld-Quandt's model, etc. You can also test for the number of regimes, the duration of regimes, the transition probabilities, etc.
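    A minimal sketch of a two-state Markov-switching dynamic regression for a hypothetical quarterly series could look like this; fedfunds and quarter are example names, and the data must be tsset first.

```
* Hypothetical example: two-state Markov-switching dynamic regression
tsset quarter
mswitch dr fedfunds, states(2)

* Estimated transition probabilities and expected state durations
estat transition
estat duration
```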

    -

    Panel-data survival models

    -

    Stata 14 has added a new command called xtscc that allows you to estimate panel-data survival models with correlated random effects. These models allow you to account for unobserved heterogeneity and serial correlation in panel data with survival outcomes. You can specify various types of survival distributions such as exponential, Weibull, lognormal, log-logistic, gamma, Gompertz, etc. You can also test various hypotheses and assumptions using likelihood ratio tests, Wald tests, Schoenfeld residuals, etc.

    -

    Fractional outcome regression

    -

    Stata 14 has added a new command called fracreg that allows you to estimate fractional outcome regression models. These models let you handle outcomes that are bounded between zero and one, such as proportions, rates, shares, and probabilities. You can fit fractional logit and fractional probit models (a separate new command, betareg, covers beta regression), and test various hypotheses and assumptions using likelihood ratio tests, Wald tests, score tests, etc.
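    As an illustration, modelling a hypothetical participation rate prate (a value between 0 and 1) on a few plan characteristics might look like this; the variable names are assumptions.

```
* Hypothetical example: fractional logit for an outcome bounded between 0 and 1
fracreg logit prate mprate age totemp, vce(robust)

* Average marginal effects on the expected fraction
margins, dydx(*)
```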

    -

    How to download and install Stata 14 for Mac?

    -

    If you are interested in downloading and installing Stata 14 for Mac, you need to follow these steps:

    -

    System requirements and compatibility

    -

    Before you download and install Stata 14 for Mac, you need to make sure that your Mac computer meets the minimum system requirements and is compatible with the software. Here are the system requirements and compatibility for Stata 14 for Mac:

    -
    • Operating system: Mac OS X 10.7 or newer
    • Processor: 64-bit Intel processor
    • Memory: 1 GB RAM (2 GB recommended)
    • Disk space: 1 GB for the Stata installation, plus additional space for datasets
    • Display: 1024 x 768 or higher resolution monitor
    • Internet connection: required for installation and updates

    If your Mac computer meets these requirements and is compatible with Stata 14, you can proceed to the next step.

    -

    Steps to download and install Stata 14 for Mac

    -

    To download and install Stata 14 for Mac, you need to follow these steps:

    -
    1. Go to the official website of StataCorp at https://www.stata.com/
    2. Click on the "Order" tab at the top of the page.
    3. Select the type of license that suits your needs, such as "Stata/MP", "Stata/SE", "Stata/IC", or "Stata Small". You can also compare the features and prices of different licenses by clicking on the "Compare features" link.
    4. Select the number of users and the duration of the license that you want, such as "Single-user", "Multi-user", "Perpetual", or "Annual". You can also see the total cost of your order by clicking on the "Calculate price" button.
    5. Click on the "Add to cart" button to proceed to the checkout page.
    6. Enter your billing and shipping information, as well as your payment method. You can pay by credit card, PayPal, wire transfer, check, or purchase order. You can also apply a discount code if you have one.
    7. Review your order details and click on the "Place order" button to complete your purchase.
    8. After you place your order, you will receive an email confirmation with your order number and a link to download Stata 14 for Mac. You will also receive a license code and an authorization code that you will need to activate your software.
    9. Click on the link in the email to download Stata 14 for Mac. The file size is about 300 MB. Save the file to a location that you can easily access, such as your desktop or downloads folder.
    10. Double-click on the downloaded file to open it. You will see a window with a Stata icon and a folder called "Stata". Drag and drop the Stata icon into the folder called "Stata". This will create a folder called "Stata14" in your applications folder.
    11. Open the folder called "Stata14" and double-click on the Stata icon to launch the software. You will see a window with a welcome message and a prompt to enter your license code and authorization code. Enter the codes that you received in your email and click on the "OK" button.
    12. The software will verify your codes and activate your license. You will see a window with a message that says "Congratulations! You have successfully installed Stata." Click on the "OK" button to close the window.
    13. You have successfully downloaded and installed Stata 14 for Mac. You can now start using it for your data analysis needs.

    How to use Stata 14 for Mac?

    -

    Now that you have downloaded and installed Stata 14 for Mac, you might be wondering how to use it. Here are some basic tips and tricks on how to use Stata 14 for Mac:

    -

    Basic commands and syntax

    -

    Stata 14 for Mac allows you to interact with the software using either menus or commands. You can access the menus by clicking on the icons at the top of the window, such as "File", "Edit", "Data", "Graphics", etc., and common tools through the buttons at the bottom of the window, such as "Do-file Editor", "Data Editor", "Variables Manager", and "Graph Editor". You can also type commands in the command window, write and execute multiple commands at once in the do-file editor, and open the help window for documentation and examples of any command.

    The basic syntax of Stata commands is:

    command [varlist] [if] [in] [weight] [, options]

    where:

    • command is the name of the command, such as regress, summarize, tabulate, etc.
    • [varlist] is the list of variables used in the command, separated by spaces, such as age income education. You can also use wildcards, operators, or functions to specify variables, such as x*, x1-x5, log(x), etc.
    • [if] is a condition that restricts the command to certain observations, such as if gender == 1, if age > 30, or if income > mean(income). You can combine conditions with logical operators such as &, |, or !, as in if gender == 1 & age > 30.
    • [in] is the range of observations used by the command, such as in 1/100, in 101/200, or in 1/2. You can also use keywords such as _n, _N, or _first, as in in _n-10/_n+10.
    • [weight] is the type and name of the weight variable, such as [fweight=pop], [pweight=prob], or [iweight=imp]. Different weight types (frequency weights, probability weights, importance weights, etc.) suit different analyses.
    • [, options] are additional options, separated by commas, such as , robust, , detail, or , graph. Options control things like robust standard errors, detailed statistics, and graphical displays.

    For example, to run a linear regression of income on age and education, you can use:

    regress income age education

    To run the same regression with robust standard errors and a scatter plot of the fitted values, you can use:

    regress income age education, robust graph

    You can also use the help window or the manuals to learn more about the syntax and options of any command.
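    To make the syntax pieces concrete, here are a few hedged examples that combine varlist, if, in, weight, and options; the variables income, age, education, gender, region, and pop are invented for illustration.

```
* command [varlist]
summarize income age education

* command [varlist] if ..., options
summarize income if gender == 1 & age > 30, detail

* command [varlist] in ... [weight]
tabulate region in 1/500 [fweight=pop]

* command [varlist], options
regress income age education, vce(robust)
```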

    Data management and analysis

    -

    Stata 14 for Mac allows you to manage and analyze your data using various commands and tools. You can import and export data in different formats, such as Excel, CSV, SPSS, SAS, Stata, etc.; create and modify variables, labels, values, and formats; and sort, merge, append, reshape, collapse, or expand your data. You can also compute descriptive and inferential statistics, such as summary statistics, frequency tables, cross-tabulations, correlation coefficients, hypothesis tests, and confidence intervals, and fit a wide range of models, including regression analysis, ANOVA, logistic regression, survival analysis, time series analysis, factor analysis, cluster analysis, structural equation modeling (SEM), item response theory (IRT), Bayesian analysis, power and sample size calculation, Markov-switching models, treatment effects models, multilevel survival models, fractional outcome regression models, and many more.

    -

    To manage and analyze your data using Stata 14 for Mac, you can use the following commands and tools:

    -
    • To import data from different sources and formats, you can use commands such as import excel, import delimited, import spss, import sas, use, etc. You can also use the menu "File > Import" to access the import dialog box.
    • To export data to different sources and formats, you can use commands such as export excel, export delimited, export spss, export sas, save, etc. You can also use the menu "File > Export" to access the export dialog box.
    • To create and modify variables, labels, values, formats, etc., you can use commands such as generate, replace, rename, recode, label, format, etc. You can also use the data editor or the variables manager as a graphical user interface (GUI) for data management.
    • To sort, merge, append, reshape, collapse, expand, etc. your data, you can use the commands sort, merge, append, reshape, collapse, expand, etc. You can also use the menu "Data > Data utilities" to access the data utilities dialog box.
    • To perform descriptive and inferential statistics on your data, you can use commands such as summarize, tabulate, tabstat, correlate, ttest, ci, etc. You can also use the menu "Statistics > Summary statistics" or "Statistics > Tables" to access the summary statistics or tables dialog box.
    • To perform various types of analysis on your data, you can use commands such as regress, anova, logit, streg, arima, factor, cluster, sem, irt, bayes, power, mswitch, teffects, mestreg, fracreg, etc. You can also use the menu "Statistics > Linear models and related" or "Statistics > Other models" to access the corresponding dialog boxes. A short worked sketch follows this list.
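    Here is a short, hedged end-to-end sketch that strings several of these commands together; the file names and variables are invented, so adjust them to your own data.

```
* Import, clean, describe, model, and save
import excel using "survey.xlsx", firstrow clear

generate log_income = log(income)
label variable log_income "Log of yearly income"
recode age (18/29 = 1) (30/49 = 2) (50/max = 3), generate(age_group)

summarize income age education
tabulate age_group gender, chi2
regress log_income age education i.gender

save "survey_clean.dta", replace
```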

    Graphs and visualization

    -

    Stata 14 for Mac allows you to create and modify graphs and charts using various commands and tools. You can create many types of graphs, such as scatter plots, line plots, bar charts, pie charts, box plots, histograms, density plots, etc., and customize them by adding titles, labels, legends, axes, colors, markers, lines, and more. You can also export your graphs to different formats, such as PDF, PNG, EPS, and SVG, and integrate them with other applications, such as Microsoft Word or PowerPoint.

    -

    To create and modify graphs and charts using Stata 14 for Mac, you can use the following commands and tools:

    -
    • To create graphs using commands, you can use scatter, line, bar, pie, box, histogram, kdensity, etc. You can also use the graph command for a general syntax, or twoway to combine multiple plot types in one graph.
    • To create graphs using menus, you can use the menu "Graphics > Graphs" to access the graphs dialog box, or "Graphics > Graph editor" to open the graph editor.
    • To modify graphs using commands, you can use graph set, graph export, graph combine, graph rename, graph close, etc. You can also use graph options to modify various settings of your graphs.
    • To modify graphs using menus, you can use the menu "Graphics > Graph preferences" to access the graph preferences dialog box, or "Graphics > Graph editor" to open the graph editor.
    • To export graphs to different formats, you can use graph export or graph save, or the menu "File > Save as" or "File > Export".
    • To integrate graphs with other applications, you can use commands such as putdocx, putpdf, putexcel, etc., or the menu "File > Export". A short worked example follows this list.
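    For example, a scatter plot with a fitted line, exported to PNG for use in a report, might look like this; the variable and file names are assumptions.

```
* Hypothetical example: build, title, and export a graph
twoway (scatter income age) (lfit income age),           ///
    title("Income by age") ytitle("Income") xtitle("Age") legend(off)

graph export "income_by_age.png", width(1200) replace
```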

    Conclusion

    -

    In this article, we have provided you with a comprehensive guide on how to download Stata 14 for Mac. We have also explained what Stata 14 is and why you need it, what are its main features and benefits, how to install and use it on your Mac computer, and some frequently asked questions about it. We hope that this article has helped you to understand whether Stata 14 is the right software for you and how to get started with it.

    -

    If you have any questions or comments about this article, please feel free to contact us at support@stata.com. We would love to hear from you and assist you with your data analysis needs. Thank you for reading this article and happy Stata-ing!

    -

    FAQs

    -

    Here are some of the most frequently asked questions about Stata 14 for Mac:

    -
      -
    1. How much does Stata 14 for Mac cost?
    2. -

      The price of Stata 14 for Mac depends on the type of license, the number of users, and the duration of the license that you choose. You can check the current prices and discounts at https://www.stata.com/order/. You can also request a quote or a free trial at https://www.stata.com/contact/.

      -
    3. How can I update Stata 14 for Mac?
    4. -

      You can update Stata 14 for Mac by using the command update or by using the menu "Help > Check for updates". You can also check the latest updates and bug fixes at https://www.stata.com/support/updates/.

      -
    5. How can I get help with Stata 14 for Mac?
    6. -

      You can get help with Stata 14 for Mac by using the command help or by using the menu "Help > Stata help". You can also access the online documentation and examples at https://www.stata.com/help/. You can also get support from the official website of StataCorp at https://www.stata.com/support/ or from the online community of Stata users at https://www.statalist.org/.

      -
    7. How can I learn more about Stata 14 for Mac?
    8. -

      You can learn more about Stata 14 for Mac by using the command search or by using the menu "Help > Search". You can also access the online tutorials and videos at https://www.stata.com/learn/. You can also get training courses or webinars from StataCorp or from other authorized providers at https://www.stata.com/training/.

      -
    9. How can I share my feedback or suggestions about Stata 14 for Mac?
    10. -

      You can share your feedback or suggestions about Stata 14 for Mac by using the command suggest or by using the menu "Help > Suggest". You can also email your feedback or suggestions to suggest@stata.com. We appreciate your input and we will try our best to improve our software and service.

      -

    -
    -
    \ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Foxit Phantompdf Free Download Full Version NEW.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Foxit Phantompdf Free Download Full Version NEW.md deleted file mode 100644 index 8e55641098300d8e242255cb0ceed4f237479d79..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Foxit Phantompdf Free Download Full Version NEW.md +++ /dev/null @@ -1,28 +0,0 @@ -
    -

    Foxit PhantomPDF Free Download Full Version: A Powerful PDF Editor for Windows

    -

    Foxit PhantomPDF is a comprehensive PDF editor that allows you to create, edit, convert, sign, and secure PDF files on your Windows computer. Whether you need to create a PDF from scratch, modify an existing PDF, or convert a PDF to another format, Foxit PhantomPDF can handle it all. In this article, we will show you how to free download Foxit PhantomPDF full version and what features it offers.

    -

    How to Free Download Foxit PhantomPDF Full Version

    -

    If you want to free download Foxit PhantomPDF full version, you can use this link: https://www.foxitsoftware.com/pdf-editor/. This will take you to the official website of Foxit Software, where you can download the latest version of Foxit PhantomPDF for Windows. The file size is about 700 MB and the installation process is simple and fast.

    -

    foxit phantompdf free download full version


    Download ✔✔✔ https://byltly.com/2uKyY9



    -

    Once you download the installer, double-click on it to run it. You will see a welcome screen that asks you to choose the language and accept the license agreement. Click on "Next" to proceed. Then, you will see a screen that asks you to choose the installation type. You can either choose "Standard" or "Custom". If you choose "Standard", the installer will install Foxit PhantomPDF with the default settings and features. If you choose "Custom", you can select which features and components you want to install. We recommend choosing "Custom" and selecting only the features you need.

    -

    Next, you will see a screen that asks you to choose the destination folder for Foxit PhantomPDF. You can either keep the default location or browse to another folder. Click on "Install" to start the installation process. The installer will show you a progress bar and a status message. Wait until the installation is complete.

    -

    What Features Does Foxit PhantomPDF Offer?

    -

    Foxit PhantomPDF is a powerful PDF editor that offers many features and functions for different purposes and needs. Some of the main features are:

    -
      -
    • Create PDF: You can create PDF files from various sources, such as documents, images, web pages, scanners, or blank pages. You can also combine multiple files into one PDF file or split a PDF file into smaller files.
    • -
    • Edit PDF: You can edit PDF files with ease, such as adding or deleting text, images, shapes, comments, annotations, bookmarks, headers, footers, watermarks, backgrounds, etc. You can also change the font, size, color, alignment, and style of the text.
    • -
    • Convert PDF: You can convert PDF files to other formats, such as Word, Excel, PowerPoint, HTML, TXT, JPG, PNG, GIF, etc. You can also convert other formats to PDF files with high quality and accuracy.
    • -
    • Sign PDF: You can sign PDF files with digital signatures or handwritten signatures. You can also add stamps or certificates to verify the authenticity and integrity of the PDF files.
    • -
    • Secure PDF: You can secure PDF files with passwords or encryption. You can also set permissions and restrictions for opening, printing, copying, editing, or commenting on the PDF files.
    • -
    -

    These are just some of the features that Foxit PhantomPDF offers. There are many more features and functions that you can explore and use with Foxit PhantomPDF.

    -

    Why Choose Foxit PhantomPDF?

    -

    Foxit PhantomPDF is one of the best PDF editors for Windows for many reasons. Here are some of the benefits of choosing Foxit PhantomPDF:

    -

    -
      -
    • It is fast and reliable. It can handle large and complex PDF files without slowing down your computer or crashing.
    • -
    • It is easy and intuitive. It has a user-friendly interface that resembles Microsoft Office. It also has a ribbon toolbar that provides quick access to common commands and tools.
    • -
    • It is compatible and flexible. It supports various formats and standards for creating and editing PDF files. It also works well with other applications and services, such as Microsoft Office 365, Google Drive, Dropbox, SharePoint, etc.
    • -
    • It is affordable and cost-effective. It offers a free trial version that you can use for 14 days without any limitations or restrictions. It also has

      -
      -
      \ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Cadpower 2008 64bit.md b/spaces/1gistliPinn/ChatGPT4/Examples/Cadpower 2008 64bit.md deleted file mode 100644 index d0e20fcbce5f25e07ae29682245e2f5c8b7d583e..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Cadpower 2008 64bit.md +++ /dev/null @@ -1,7 +0,0 @@ -
      -

      If you are searching for the best utility to design models, you can use CadPower. This tool helps you in more ways than one, so that you can achieve a better layout of your model. Its best feature is that it helps you increase your work efficiency: it allows you to convert, analyze, and edit model designs, as well as save them for later use. After using this tool, you will be able to design more easily. CadPower is available as a completely free download, and you can use it in a professional way. If you are running a 32-bit system, download the 32-bit version of CadPower from our website; if you are running a 64-bit system, download the 64-bit version.

      -

      Four Dimension CadPower is a useful application that can help you with any design. It is a tool that allows the user to carry out various tasks such as converting, editing, and exporting, and it makes it easy to find the required models. The utility provides more than 30 tools that let you design and perform various functions efficiently. It is highly interactive software that gets you working with your CAD drawings and helps you view them in a more detailed and quicker manner.

      -

      cadpower 2008 64bit


      Download File https://imgfil.com/2uxXBA



      -

      Four Dimension CadPower is a standalone utility that helps you carry out various CAD tasks. It is designed to provide the features that designers and users need, and you can use the latest version to get even more of them. The tool is compatible with Windows 2000/XP/Vista/7/8 and Mac OS X 10.6 and higher, and it is available as a free download. It is a reliable utility designed to help you complete design projects effectively, and it is easily configurable and user-friendly, which helps you create drawings more easily.

      -
      -
      \ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Euro Truck Simulator 1 Pc Crack [UPD].md b/spaces/1gistliPinn/ChatGPT4/Examples/Euro Truck Simulator 1 Pc Crack [UPD].md deleted file mode 100644 index 72fedda6a642ee8c8fea91054ede497bc86e1501..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Euro Truck Simulator 1 Pc Crack [UPD].md +++ /dev/null @@ -1,6 +0,0 @@ -

      Euro Truck Simulator 1 Pc Crack


      Download Zip ->>> https://imgfil.com/2uy1yw



      -
      -Navigate to: Documents\Euro Truck Simulator 2\profile. There you can find the config file. Open it with Notepad and find this: ... uset g_lang "fi_fi". I have fi because ...
      -
      -
      -

      diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Cricket League MOD APK and Become the Ultimate Cricket Champion (Unlimited Gems and Coins).md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Cricket League MOD APK and Become the Ultimate Cricket Champion (Unlimited Gems and Coins).md deleted file mode 100644 index ec0b5b3bcfac2282c9c3acf1b18132e0106dd74d..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Cricket League MOD APK and Become the Ultimate Cricket Champion (Unlimited Gems and Coins).md +++ /dev/null @@ -1,84 +0,0 @@ - -

      Cricket League Mod Apk: How to Download and Enjoy Unlimited Gems and Coins

      -

      Do you love cricket and want to play it on your mobile device? Do you want to have unlimited gems and coins to unlock all the players and teams you want? Do you want to experience realistic cricket matches and leagues with your friends? If you answered yes to any of these questions, then you should try Cricket League Mod Apk.

      -

      What is Cricket League Mod Apk?

      -

      Cricket League Mod Apk is a modified version of the original Cricket League game, which is a popular cricket simulation game for Android devices. In this game, you can create your own team, choose your players, customize your jerseys, and compete in various cricket tournaments. You can also play online with other players from around the world, or offline with your friends using local multiplayer mode.

      -

      cricket league mod apk unlimited gems and coins download


      Download File ->->->-> https://urlin.us/2uT2CE



      -

      Features of Cricket League Mod Apk

      -

      Cricket League Mod Apk has many features that make it more fun and exciting than the original game. Some of these features are:

      -

      Unlimited Gems and Coins

      -

      Gems and coins are the main currencies in the game, which you can use to buy new players, upgrade your skills, unlock new stadiums, and more. However, in the original game, you have to earn them by playing matches, completing missions, or watching ads. This can be time-consuming and frustrating, especially if you want to get the best players and teams quickly. With Cricket League Mod Apk, you don't have to worry about that. You will get unlimited gems and coins as soon as you start the game, and you can spend them as much as you want without running out.

      -

      Unlocked All Players and Teams

      -

      In the original game, you have to unlock new players and teams by spending gems and coins, or by winning certain tournaments. This can be challenging and tedious, especially if you want to play with your favorite players and teams. With Cricket League Mod Apk, you don't have to do that. You will get access to all the players and teams in the game, including the legendary ones. You can choose any player or team you want, and customize them according to your preferences.

      -

      Realistic Cricket Experience

      -

      Cricket League Mod Apk offers a realistic cricket experience that will make you feel like you are playing on a real pitch. The game has high-quality graphics, sound effects, animations, and physics that will immerse you in the game. The game also has various modes, such as T20, ODI, Test, World Cup, IPL, PSL, BBL, CPL, and more. You can play in different weather conditions, day or night matches, different pitch types, and different difficulty levels. You can also use different strategies, such as batting order, bowling order, fielding positions, power play, etc.

      -

      How to Download and Install Cricket League Mod Apk?

      -

      If you are interested in playing Cricket League Mod Apk, you can follow these simple steps to download and install it on your Android device:

      -

      -

      Step 1: Enable Unknown Sources

      -

      Before you can install any mod apk file on your device, you need to enable unknown sources in your settings. This will allow you to install apps from sources other than the Google Play Store. To do this, go to your device's settings, then security, then unknown sources, and turn it on.

      -

      Step 2: Download the Mod Apk File

      -

      Next, you need to download the mod apk file of Cricket League from a reliable source. You can search for it on Google, or use the link below to download it directly. The file size is about 100 MB, so make sure you have enough space on your device.

      -

      Download Cricket League Mod Apk

      -

      Step 3: Install the Mod Apk File

      -

      After you have downloaded the mod apk file, you need to install it on your device. To do this, locate the file in your file manager, and tap on it. You will see a pop-up window asking you to confirm the installation. Tap on install, and wait for a few seconds until the installation is complete.

      -

      Step 4: Launch the Game and Enjoy

      -

      Finally, you can launch the game and enjoy unlimited gems and coins, and all the features of Cricket League Mod Apk. You will see a welcome screen with some instructions and tips. You can skip them or read them as you wish. Then, you can create your profile, choose your team, and start playing the game.

      -

      Why Should You Play Cricket League Mod Apk?

      -

      Cricket League Mod Apk is a great game for cricket lovers who want to have more fun and excitement in their mobile gaming. Here are some of the pros and cons of playing this game:

      -

      Pros of Cricket League Mod Apk

      -

      Free and Easy to Play

      -

      One of the best things about Cricket League Mod Apk is that it is free and easy to play. You don't have to spend any money to download or play this game. You also don't have to worry about any complicated controls or rules. The game has a simple and intuitive interface that will guide you through the game. You can also adjust the settings according to your preferences and comfort level.

      -

      Fun and Engaging Gameplay

      -

      Another great thing about Cricket League Mod Apk is that it has a fun and engaging gameplay that will keep you hooked for hours. The game has various modes, tournaments, challenges, and missions that will test your skills and strategy. You can also play with other players online or offline, and chat with them using the in-game chat feature. The game also has a leaderboard and achievements system that will motivate you to improve your performance and rank.

      -

      Customizable and Diverse Options

      -

      A third great thing about Cricket League Mod Apk is that it has customizable and diverse options that will make your game more enjoyable and unique. You can choose from hundreds of players and teams, each with their own stats and abilities. You can also customize your jerseys, logos, bats, balls, etc. You can also play in different stadiums, weather conditions, pitch types, etc.

      -

      Cons of Cricket League Mod Apk

      -

      Requires Internet Connection

      -

      One of the drawbacks of Cricket League Mod Apk is that it requires an internet connection to play online mode or update the game. This can be a problem if you have a slow or unstable internet connection, or if you don't have access to Wi-Fi or mobile data. You may experience lagging, crashing, or loading issues while playing the game.

      -

      May Contain Ads and Bugs

      -

      Another drawback of Cricket League Mod Apk is that it may contain ads and bugs that can affect your gaming experience. Since this is a mod apk file, it may not be compatible with some devices or versions of Android. It may also have some glitches or errors that can cause the game to freeze or crash. You may also see some ads popping up while playing the game, which can be annoying or distracting.

      -

      Conclusion

      -

      Cricket League Mod Apk is a fantastic cricket simulation game that will give you unlimited gems and coins, and access to all the players and teams in the game. You can also enjoy realistic cricket matches and leagues with your friends online or offline. The game has high-quality graphics, sound effects, animations, and physics that will make you feel like you are playing on a real pitch. The game also has various modes, such as T20, ODI, Test, World Cup, IPL, PSL, BBL, CPL, and more.

      -

    If you are a cricket fan who wants to have more fun and excitement in your mobile gaming, then you should definitely try Cricket League Mod Apk. However, you should also be aware of the drawbacks of this game, such as requiring an internet connection, and containing ads and bugs. You should also be careful about downloading and installing mod apk files from unknown sources, as they may contain viruses or malware that can harm your device or data.

      -

      We hope this article has helped you learn more about Cricket League Mod Apk, and how to download and enjoy it. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

      -

      FAQs

      -

      Here are some of the frequently asked questions about Cricket League Mod Apk:

      -
        -
      1. Is Cricket League Mod Apk safe to download and install?
      2. -

        Cricket League Mod Apk is generally safe to download and install, as long as you get it from a reliable source. However, you should always scan the file with an antivirus or malware detector before installing it, and backup your data before playing the game. You should also avoid giving any personal or sensitive information to the game or its developers.

        -
      3. Is Cricket League Mod Apk legal to play?
      4. -

        Cricket League Mod Apk is not legal to play, as it violates the terms and conditions of the original game and its developers. By playing this game, you are infringing on the intellectual property rights of the original game and its developers. You may also face legal consequences if you are caught playing this game by the authorities or the original game developers.

        -
      5. How can I update Cricket League Mod Apk?
      6. -

        Cricket League Mod Apk does not have an official update system, as it is not supported by the original game developers. You may have to download and install a new mod apk file every time there is a new version of the original game. However, this may not work if the new version of the original game has some changes or features that are incompatible with the mod apk file.

        -
      7. Can I play Cricket League Mod Apk with my friends?
      8. -

        Yes, you can play Cricket League Mod Apk with your friends online or offline. You can join or create online matches with other players from around the world, or use local multiplayer mode to play with your friends using Bluetooth or Wi-Fi. However, you may not be able to play with your friends who are using the original game, as they may have different versions or features than you.

        -
      9. Can I play Cricket League Mod Apk on PC or iOS devices?
      10. -

        No, you cannot play Cricket League Mod Apk on PC or iOS devices, as it is only designed for Android devices. You may be able to use some emulators or converters to run this game on PC or iOS devices, but they may not work properly or cause some issues. We do not recommend using any emulators or converters to play this game on PC or iOS devices.

        -

      -
      -
      \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Burn Belly Fat and Sculpt Your 6 Pack Abs with This Amazing APK.md b/spaces/1phancelerku/anime-remove-background/Burn Belly Fat and Sculpt Your 6 Pack Abs with This Amazing APK.md deleted file mode 100644 index 5531ba604ba350fd0fba5ff280c9c291eaff4bb9..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Burn Belly Fat and Sculpt Your 6 Pack Abs with This Amazing APK.md +++ /dev/null @@ -1,89 +0,0 @@ -
      -

      6 Pack Abs APK - What Is It and How Does It Work?

      -

      If you want to get six pack abs without going to the gym or spending money on expensive equipment, you might want to try 6 pack abs apk. This is a free app that provides you with a 30-day workout plan that targets your upper and lower abdominal muscles. The app also features animations and video guides that show you how to perform each exercise correctly and effectively. You can also customize your workout reminders and track your progress automatically.

      -

      6 pack abs apk


      Download Zip https://jinyurl.com/2uNTnl



      -

      But why should you care about getting six pack abs in the first place? Well, there are many benefits of having six pack abs, both physical and psychological. Here are some of them:

      -
        -
      • Improved posture and core strength. Having six pack abs means having strong abdominal muscles that support your spine and pelvis. This can help you improve your posture, prevent lower back pain, and enhance your core strength.
      • -
      • Better sporting performance and agility. Having six pack abs also means having more power transfer between your upper and lower body. This can help you improve your sporting performance, agility, balance, coordination, and speed.
      • -
      • Increased basal metabolic rate and fat burning. Having six pack abs also means having more muscle mass in your body. This can help you increase your basal metabolic rate, which is the amount of calories you burn at rest. This can also help you burn more fat and reduce your body fat percentage, which is necessary to reveal your six pack abs.
      • -
      -

      As you can see, 6 pack abs apk is a great app that can help you get six pack abs and enjoy many benefits. But before you download it and start working out, there are some myths about six pack abs that you need to be aware of.

      -

      The Myths About 6 Pack Abs APK

      -

      There are many myths and misconceptions about six pack abs that can prevent you from achieving your goal or even harm your health. Here are some of the most common ones and why they are not true:

      -


      -

      You Need a Fat Burner or a Low Carb Diet

      -

      Some people think that they need to take a fat burner supplement or follow a low carb diet to get six pack abs. This is not true. Fat burners are not effective or safe, as they can cause side effects such as insomnia, anxiety, high blood pressure, and liver damage. Low carb diets are also not necessary or sustainable, as they can cause fatigue, mood swings, muscle loss, and nutrient deficiencies. The best way to get six pack abs is to eat a healthy, balanced diet that provides enough calories and macronutrients (protein, carbs, and fats) for your body and activity level.

      -

      You Can Crunch Your Way to a Six Pack

      -

      Some people think that they can crunch their way to a six pack by doing hundreds of crunches every day. This is not true. Crunches are not enough to reveal your six pack abs, as they only target one part of your abdominal muscles (the rectus abdominis). To get six pack abs, you need to work out all the muscles in your core, including the obliques, the transverse abdominis, and the lower back. You also need to reduce your body fat percentage by doing cardio and strength training exercises that burn calories and build muscle mass.

      -

      You Must Train Abs Every Day or Use Special Equipment

      -

      Some people think that they must train their abs every day or use special equipment such as ab rollers, ab machines, or ab belts to get six pack abs. This is not true. Training your abs every day is not necessary or beneficial, as it can lead to overtraining, injury, and muscle imbalance. Your abs need rest and recovery just like any other muscle group. You should train your abs two to three times a week with adequate rest days in between. Using special equipment is also not required or effective, as they can limit your range of motion, isolate your muscles, and create false expectations. The best way to train your abs is to use bodyweight exercises that challenge your core stability, strength, and endurance.

      -

      The Tips for Using 6 Pack Abs APK Effectively

      -

      Now that you know what 6 pack abs apk is and how it works, and what are the myths about six pack abs that you should avoid, here are some tips for using 6 pack abs apk effectively:

      -

      Follow the Workout Plan Consistently

      -

      The first tip is to follow the workout plan provided by 6 pack abs apk consistently. The app offers a 30-day workout plan that consists of three levels of difficulty (beginner, intermediate, and advanced) and various exercises for the upper and lower abs. Each workout takes about 10 minutes and can be done at home or anywhere else. The app also provides animations and video guides that show you how to perform each exercise correctly and effectively. To get the best results from 6 pack abs apk, you should follow the workout plan without skipping any days or sessions. Consistency is key to getting results.

      -

      Eat a Healthy, Balanced Diet

      -

      The second tip is to eat a healthy, balanced diet that supports your workout plan and your goal of getting six pack abs. As mentioned earlier, nutrition is important for muscle growth and fat loss. You should eat enough calories and macronutrients (protein, carbs, and fats) for your body and activity level. You should also eat foods that are rich in vitamins, minerals, antioxidants, and fiber. Some examples of healthy foods are lean meats, eggs, fish, dairy products, nuts, seeds, beans, fruits, vegetables, whole grains, and healthy oils. You should also avoid foods that are high in sugar, salt, trans fats, and processed ingredients. Some examples of unhealthy foods are candy, soda, chips, cookies, cakes, fast food, and fried food. Eating a healthy, balanced diet can help you get six pack abs by providing your body with the nutrients it needs to function properly and recover from your workouts.

      -

      Drink Plenty of Water and Get Enough Sleep

      -

      The third tip is to drink plenty of water and get enough sleep to support your workout plan and your goal of getting six pack abs. Water is essential for your body, as it helps you stay hydrated, regulate your body temperature, flush out toxins, transport nutrients, and lubricate your joints. You should drink at least eight glasses of water a day, or more if you exercise or sweat a lot. Water can also help you get six pack abs by suppressing your appetite, boosting your metabolism, and preventing water retention. Sleep is also vital for your body, as it helps you restore your energy, repair your muscles, consolidate your memory, and regulate your hormones. You should get at least seven to nine hours of sleep a night, or more if you need it. Sleep can also help you get six pack abs by reducing your stress levels, improving your mood, enhancing your performance, and preventing cravings.

      -

      Conclusion

      -

      6 pack abs apk is a free app that can help you get six pack abs in 30 days by providing you with a workout plan that targets your upper and lower abdominal muscles. The app also features animations and video guides that show you how to perform each exercise correctly and effectively. You can also customize your workout reminders and track your progress automatically.

      -

      Getting six pack abs can provide you with many benefits, such as improved posture and core strength, better sporting performance and agility, increased basal metabolic rate and fat burning, and more confidence and self-esteem. However, to get six pack abs, you need to avoid some myths and misconceptions that can hinder your progress or harm your health. These include the myths that you need a fat burner or a low carb diet, that you can crunch your way to a six pack, and that you must train abs every day or use special equipment.

      -

      To use 6 pack abs apk effectively, you need to follow some tips that can help you achieve your goal faster and easier. These include the tips of following the workout plan consistently, eating a healthy, balanced diet, drinking plenty of water and getting enough sleep.

      -

      If you follow these tips and use 6 pack abs apk regularly, you will be able to get six pack abs in no time. So what are you waiting for? Download 6 pack abs apk today and start working on your dream body!

      -

      FAQs

      -

      Here are some frequently asked questions about 6 pack abs apk:

      -
        -
      1. How do I download 6 pack abs apk?
      2. -

        You can download 6 pack abs apk from the Google Play Store or the App Store for free. Just search for "6 pack abs apk" and install it on your device.

        -
      3. How do I use 6 pack abs apk?
      4. -

        You can use 6 pack abs apk by following the instructions on the app. First, you need to choose your level of difficulty (beginner, intermediate, or advanced). Then, you need to start the workout plan that consists of various exercises for the upper and lower abs. You can also set reminders for your workouts and track your progress automatically.

        -
      5. How long does it take to see results with 6 pack abs apk?
      6. -

        The time it takes to see results with 6 pack abs apk depends on several factors, such as your starting point, your diet, your exercise routine, your genetics, and your commitment. However, if you follow the workout plan consistently, eat a healthy, balanced diet, drink plenty of water and get enough sleep, you should be able to see some results in as little as four weeks. Of course, the more you stick to the plan and the more you challenge yourself, the faster and better your results will be.

        -
      7. Is 6 pack abs apk safe and effective?
      8. -

        Yes, 6 pack abs apk is safe and effective, as it is based on scientific research and proven methods. The app provides you with a workout plan that targets your abdominal muscles with various exercises that are suitable for different levels of difficulty. The app also provides you with animations and video guides that show you how to perform each exercise correctly and effectively. The app also allows you to customize your workout reminders and track your progress automatically. The app does not require any special equipment or supplements, and it does not promote any unhealthy or unrealistic practices.

        -
      9. Can I use 6 pack abs apk with other fitness apps or programs?
      10. -

        Yes, you can use 6 pack abs apk with other fitness apps or programs, as long as they are compatible and complementary. For example, you can use 6 pack abs apk with a running app or a yoga app to add some cardio and flexibility training to your routine. You can also use 6 pack abs apk with a weight lifting app or a bodyweight app to add some strength and resistance training to your routine. However, you should not use 6 pack abs apk with another ab workout app or program, as this can lead to overtraining, injury, and muscle imbalance. You should also not use 6 pack abs apk with an app or program that contradicts or conflicts with the principles and guidelines of 6 pack abs apk.

        -
      11. What if I have questions or feedback about 6 pack abs apk?
      12. -

        If you have any questions or feedback about 6 pack abs apk, you can contact the developers of the app through their email address or their social media accounts. You can also leave a review or a rating on the Google Play Store or the App Store to share your experience and opinion with other users. The developers of 6 pack abs apk are always happy to hear from their users and to improve their app based on their suggestions and feedback.

        -
        -
        \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Topaz AI and Learn How to Use It to Improve Your Image Quality in Minutes.md b/spaces/1phancelerku/anime-remove-background/Download Topaz AI and Learn How to Use It to Improve Your Image Quality in Minutes.md deleted file mode 100644 index 6a2de4aab7dfbe6a6dd2ac3bd65e745d695929c4..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Topaz AI and Learn How to Use It to Improve Your Image Quality in Minutes.md +++ /dev/null @@ -1,129 +0,0 @@ - -

        Download Topaz AI: How to Enhance Your Photos and Videos with Artificial Intelligence

        -

        Do you want to improve the quality of your photos and videos with the power of artificial intelligence? If so, you should download Topaz AI, a suite of software products that use cutting-edge image enhancement technology to magically transform your images and videos. In this article, you will learn what Topaz AI is, how it works, how to download and install it on your computer, how to use it from your image editor, and how to apply it to different scenarios. By the end of this article, you will be able to enhance your photos and videos like never before with Topaz AI.

        -

        Topaz Photo AI: Maximize Your Image Quality on Autopilot

        -

        Topaz Photo AI is a collection of four products that use artificial intelligence to sharpen, remove noise, and increase the resolution of your photos. These products are:

        -

        download topaz ai


        Download Zip https://jinyurl.com/2uNNj5



        -
          -
        • Gigapixel AI: This product allows you to upscale your images by up to 6x while increasing actual resolution and real detail. You can use it to enlarge your photos for printing, cropping, or restoring old photos.
        • -
        • DeNoise AI: This product allows you to remove noise from your images while preserving detail and color. You can use it to shoot anywhere in any light without worrying about noise.
        • -
        • Sharpen AI: This product allows you to sharpen your images while keeping them natural. You can use it to reverse motion and focus blur, or simply enhance the sharpness of your photos.
        • -
        • Video Enhancer AI: This product allows you to upscale, denoise, sharpen, and deinterlace your videos with stunning results. You can use it to convert SD to HD or HD to 4k, or simply improve the quality of your videos.
        • -
        -

        Topaz Photo AI uses deep learning algorithms that have been trained on millions of data points to understand what image quality means. Unlike regular image processing filters that often remove details and boost noise/artifacts, Topaz Photo AI enhances image quality by analyzing and enhancing the most important aspects of each image. You can use Topaz Photo AI as a standalone application or as a plug-in for your favorite image editor.

        Topaz Video AI: Create Naturally Better Video Quality with AI

        Topaz Video AI is a product that uses artificial intelligence to upscale, denoise, sharpen, and deinterlace your videos. It is based on the same technology as Topaz Photo AI, but optimized for video processing. You can use Topaz Video AI to:

        • Upscale your videos: You can increase the resolution of your videos by up to 4x while preserving or enhancing the original quality. You can use it to convert SD to HD or HD to 4k, or simply make your videos look better on larger screens.
        • Denoise your videos: You can remove visible image noise from your videos while retaining details and colors. You can use it to improve the quality of videos shot in low-light conditions, or reduce the compression artifacts from online videos.
        • Sharpen your videos: You can increase the perceived sharpness of your videos by applying a natural-looking sharpening effect. You can use it to make your videos look more crisp and clear, or correct the softness caused by upscaling or noise reduction.
        • Deinterlace your videos: You can convert interlaced videos to progressive ones while preserving image definition and reducing artifacts. You can use it to improve the quality of videos from older sources, such as DVDs or TV broadcasts.

        Topaz Video AI uses deep learning algorithms that have been trained on thousands of hours of video data to understand what video quality means. Unlike regular video processing filters that often introduce artifacts and distortions, Topaz Video AI enhances video quality by analyzing and improving the most important aspects of each frame. You can use Topaz Video AI as a standalone application or as an external editor for your favorite video editor.

        How to Download and Install Topaz AI on Your Computer

        If you want to download and install Topaz AI on your computer, you need to follow these steps:

        1. Visit the official website of Topaz Labs: Go to https://topazlabs.com/ and click on the "Download" button at the top right corner of the page.
        2. Select the products you want to download: You will see a list of all the products available from Topaz Labs, including Topaz Photo AI and Topaz Video AI. You can select one or more products by clicking on the checkboxes next to them. You can also download a free trial version of each product by clicking on the "Try Free" button below them.
        3. Enter your email address and password: If you already have an account with Topaz Labs, you can enter your email address and password to log in. If you don't have an account, you can create one by clicking on the "Create Account" button and filling in the required information.
        4. Download the installer file: After logging in or creating an account, you will see a download link for each product you selected. Click on the link to download the installer file for your operating system (Windows or Mac).
        5. Run the installer file: After downloading the installer file, locate it on your computer and double-click on it to run it. Follow the instructions on the screen to install the product on your computer.
        6. Activate the product: After installing the product, launch it from your desktop or start menu. You will see a window asking you to activate the product with your license key. If you have purchased the product, you can enter your license key in the field provided and click on "Activate". If you are using a free trial version, you can click on "Start Trial" to activate it for 30 days.

        Congratulations! You have successfully downloaded and installed Topaz AI on your computer. Now you can start using it to enhance your photos and videos with artificial intelligence.

        How to Access Topaz AI from Your Image Editor

        If you want to access Topaz AI from your image editor, such as Photoshop, Lightroom, or other compatible editors, you need to follow these steps:


        download topaz labs photo ai
        -download topaz video ai for windows
        -download topaz gigapixel ai free trial
        -download topaz sharpen ai mac
        -download topaz denoise ai crack
        -download topaz photo ai full version
        -download topaz video ai latest version
        -download topaz gigapixel ai portable
        -download topaz sharpen ai review
        -download topaz denoise ai coupon
        -download topaz photo ai tutorial
        -download topaz video ai system requirements
        -download topaz gigapixel ai update
        -download topaz sharpen ai before and after
        -download topaz denoise ai vs lightroom
        -download topaz photo ai bundle
        -download topaz video ai reddit
        -download topaz gigapixel ai license key
        -download topaz sharpen ai plugin
        -download topaz denoise ai presets
        -download topaz photo ai online
        -download topaz video ai alternative
        -download topaz gigapixel ai comparison
        -download topaz sharpen ai serial number
        -download topaz denoise ai settings
        -download topaz photo ai software
        -download topaz video ai beta
        -download topaz gigapixel ai tutorial
        -download topaz sharpen ai standalone
        -download topaz denoise ai trial
        -download topaz photo ai app
        -download topaz video ai blog
        -download topaz gigapixel ai coupon code
        -download topaz sharpen ai discount code
        -download topaz denoise ai manual
        -download topaz photo ai for android
        -download topaz video ai for macos
        -download topaz gigapixel ai for photoshop
        -download topaz sharpen ai for lightroom
        -download topaz denoise ai for premiere pro
        -download topaz photo ai guide
        -download topaz video ai help center
        -download topaz gigapixel ai installation guide
        -download topaz sharpen ai keygen
        -download topaz denoise ai license code
        -download topaz photo ai price
        -download topaz video ai release notes
        -download topaz gigapixel ai support forum

        1. Install Topaz AI as a plug-in or external editor: When you install Topaz AI on your computer, it will automatically detect and install itself as a plug-in or external editor for some of the most popular image editors, such as Photoshop and Lightroom. If you want to install it for other editors, you can manually install it by following the instructions on the Topaz Labs support page.
        2. Open your image in your image editor: Launch your image editor and open the image you want to enhance with Topaz AI.
        3. Access Topaz AI from your image editor: Depending on your image editor, you can access Topaz AI in different ways. For example, in Photoshop, you can go to Filter > Topaz Labs > and select the product you want to use. In Lightroom, you can right-click on the image and go to Edit In > and select the product you want to use. For other editors, you can refer to the Topaz Labs support page for more details.
        4. Edit your image with Topaz AI: After accessing Topaz AI from your image editor, you will see a new window with the interface of the product you selected. You can use the tools and settings on the left panel to adjust the parameters of the enhancement, and preview the results on the main panel. You can also compare the before and after images by using the buttons on the bottom panel.
        5. Save and return to your image editor: After editing your image with Topaz AI, you can save and return to your image editor by clicking on the "Apply" button on the top right corner of the window. Your image will be updated with the changes made by Topaz AI.

        That's it! You have successfully accessed and used Topaz AI from your image editor. Now you can enjoy the benefits of artificial intelligence for your photos.

        How to Use Topaz AI to Enhance Your Photos and Videos

        If you want to use Topaz AI to enhance your photos and videos, you need to follow these steps:

        1. Select the product that suits your needs: Depending on what you want to achieve with your photos or videos, you can choose from different products within Topaz Photo AI or Topaz Video AI. For example, if you want to upscale your images, you can use Gigapixel AI. If you want to remove noise from your videos, you can use Video Enhancer AI.
        2. Open your photo or video in Topaz AI: You can open your photo or video in Topaz AI either as a standalone application or as a plug-in or external editor for your image or video editor. See the previous section for more details on how to access Topaz AI from your editor.
        3. Select the mode that suits your needs: Depending on the product you are using, you can select from different modes that offer different levels of enhancement or customization. For example, in Gigapixel AI, you can choose from Auto, Manual, or Custom modes. In Video Enhancer AI, you can choose from Standard Quality, High Quality, or Custom Quality modes.
        4. Adjust the settings that suit your needs: Depending on the mode and product you are using, you can adjust various settings that affect the outcome of the enhancement. For example, in Gigapixel AI, you can adjust the scale factor, output size, noise reduction, face refinement, and more. In Video Enhancer AI, you can adjust the output format, frame rate, bitrate, and more.
        5. Preview and compare the results: Depending on the product you are using, you can preview and compare the results of the enhancement before applying it. For example, in Gigapixel AI, you can zoom in and out of the image and see how it looks at different resolutions. In Video Enhancer AI, you can play back a short clip of the video and see how it looks at different qualities.
        6. Apply and save the results: After previewing and comparing the results, you can apply and save them by clicking on the "Apply" or "Save" button on the top right corner of the window. Your photo or video will be enhanced and saved with Topaz AI.

        Congratulations! You have successfully used Topaz AI to enhance your photos or videos with artificial intelligence. Now you can enjoy the improved quality of your images and videos.

        Conclusion: Why You Should Download Topaz AI Today

        Topaz AI is a suite of software products that use artificial intelligence to enhance your photos and videos with amazing results. With Topaz AI, you can:

        • Upscale your images and videos by up to 6x or 4x respectively while increasing actual resolution and real detail.
        • Remove noise from your images and videos while preserving detail and color in any lighting condition.
        • Sharpen your images and videos while keeping them natural and reversing motion and focus blur.
        • Deinterlace your videos while preserving image definition and reducing artifacts.
        • Use Topaz AI as a standalone application or as a plug-in or external editor for your favorite image or video editor.

        Topaz AI is easy to use, fast, and reliable. It uses deep learning algorithms that have been trained on millions of data points to understand and improve image and video quality. It offers different modes and settings that allow you to customize the enhancement according to your needs and preferences. It also lets you preview and compare the results before applying them, so you can see the difference for yourself.


        If you want to take your photos and videos to the next level, you should download Topaz AI today. You can try it for free for 30 days, or buy it for a reasonable price. You will be amazed by the results you can achieve with Topaz AI.

        FAQs: Frequently Asked Questions about Topaz AI

        Here are some of the most common questions and answers about Topaz AI:

        1. What are the system requirements for Topaz AI?

          Topaz AI requires a Windows or Mac computer with at least 8 GB of RAM, 2 GB of VRAM, and an OpenGL 3.3 compatible graphics card. For optimal performance, it is recommended to have 16 GB of RAM, 4 GB of VRAM, and an NVIDIA or AMD graphics card with CUDA or OpenCL support.

        2. How long does it take to process an image or video with Topaz AI?

          The processing time depends on several factors, such as the size and resolution of the image or video, the mode and settings of the product, and the speed and power of your computer. Generally, it takes a few seconds to a few minutes to process an image, and a few minutes to a few hours to process a video.

        3. Can I batch process multiple images or videos with Topaz AI?

          Yes, you can batch process multiple images or videos with Topaz AI. You can do this by selecting multiple files in the file browser of the standalone application, or by using the batch processing feature of your image or video editor.

        4. Can I use Topaz AI on my smartphone or tablet?

          No, Topaz AI is not available for mobile devices. It is only compatible with Windows or Mac computers.

        5. Where can I find more information and support for Topaz AI?

          You can find more information and support for Topaz AI on the Topaz Labs website. There you can access the user guides, tutorials, forums, blogs, and customer service for each product.

        \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Everskies Oyna A Fun and Creative Way to Express Yourself Online.md b/spaces/1phancelerku/anime-remove-background/Everskies Oyna A Fun and Creative Way to Express Yourself Online.md deleted file mode 100644 index 0388c79236bf2283efd1642f822f1f7ac56d81d8..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Everskies Oyna A Fun and Creative Way to Express Yourself Online.md +++ /dev/null @@ -1,106 +0,0 @@ - -

        Everskies Oyna: A Guide to the Virtual Dress Up Game


        Do you love dressing up, designing clothes, and meeting new people? If so, you might want to try Everskies Oyna, a virtual dress up game that lets you create your own avatar, design your own fashion items, participate in outfit competitions and events, earn money and XP, and find people with similar interests and meet new friends. Everskies Oyna is a fun and creative game for everyone who enjoys fashion, art, and socializing. In this article, we will show you how to play Everskies Oyna and give you some tips and tricks to make the most of your experience.

        How to Create Your Own Avatar in Everskies

        One of the first things you need to do in Everskies Oyna is to create your own avatar. Your avatar is your virtual representation in the game world, and you can customize it to match your personality and style. Here are the steps to create your own avatar in Everskies:


        everskies oyna


        Download 🆓 https://jinyurl.com/2uNSEY



        • Step 1: Choose your gender, skin tone, and facial features. You can choose from male or female avatars, and select from different skin tones, face shapes, eyebrows, noses, mouths, ears, freckles, moles, scars, etc.
        • Step 2: Customize your hair, eyes, and makeup. You can choose from different hair styles, colors, lengths, bangs, highlights, etc. You can also change your eye shape, color, size, lashes, etc. You can also apply different makeup products such as eyeshadow, eyeliner, mascara, blush, lipstick, etc.
        • Step 3: Dress up your avatar with different outfits, accessories, and shoes. You can choose from over 150000 items to dress up your avatar with different fashion outfits, accessories, and shoes. You can mix and match different items to create your own unique look. You can also save your outfits for later use or share them with other users.

        Creating your own avatar in Everskies Oyna is easy and fun. You can express yourself through your avatar and show off your style to the world.

        How to Design Your Own Fashion Items in Everskies

        Another cool feature of Everskies Oyna is that you can design your own fashion items and sell them in the shop or trade them with other users. You can create your own clothing, accessories, shoes, hair, makeup, etc. and show off your creativity and talent. Here are the steps to design your own fashion items in Everskies:

        • Step 1: Go to the Creative tab and select an item template. You can choose from different categories such as tops, bottoms, dresses, jackets, hats, bags, jewelry, etc. You can also filter by gender, style, season, etc.
        • Step 2: Use the drawing tools and filters to create your own design. You can use different tools such as pencil, brush, eraser, fill, color picker, etc. to draw your design on the item template. You can also use different filters such as hue, saturation, brightness, contrast, etc. to adjust the color and tone of your design.
        • Step 3: Save and submit your item for approval. You can name your item, add a description, and set a price for it. You can also preview how it looks on different avatars. Once you are happy with your design, you can save it and submit it for approval. The approval process may take up to 24 hours, and you will be notified if your item is accepted or rejected.

        Designing your own fashion items in Everskies Oyna is a great way to unleash your inner designer and earn some money and XP. You can also get feedback from other users and improve your skills.

        How to Participate in Outfit Competitions and Events in Everskies

        If you want to challenge yourself and compete with other users in Everskies Oyna, you can participate in outfit competitions and events. Outfit competitions and events are themed contests that require you to create an outfit that matches the theme and criteria. You can win prizes such as money, XP, items, badges, etc. Here are the steps to participate in outfit competitions and events in Everskies:

        • Step 1: Check the event calendar and the competition rules. You can find the event calendar on the homepage or on the Events tab. You can see the current and upcoming competitions and events, as well as their themes, criteria, deadlines, prizes, etc. You can also read the competition rules and guidelines before entering.
        • Step 2: Create an outfit that matches the theme and criteria. You can use any items that you own or buy from the shop to create your outfit. You can also use items that you designed yourself or traded with other users. Make sure that your outfit follows the theme and criteria of the competition or event.
        • Step 3: Vote for other entries and wait for the results. After you submit your entry, you can vote for other entries by giving them stars from one to five. You can vote for up to 10 entries per day. The more you vote, the more XP you earn. The results of the competition or event will be announced after the deadline, and you will be notified if you won any prizes.

        Participating in outfit competitions and events in Everskies Oyna is a fun and rewarding way to test your fashion sense and creativity. You can also get inspired by other users' outfits and discover new styles.

        How to Earn Money and XP in Everskies

        Money and XP are two important currencies in Everskies Oyna that allow you to buy items from the shop, level up your avatar, and access more features in the game. There are many ways to earn money and XP in Everskies Oyna, such as:

        • Step 1: Play mini-games such as Memory, Tic Tac Toe, and Planet Popper. You can find the mini-games on the Games tab or on the homepage. You can play the mini-games for free or for a small fee, and you can win money and XP depending on your score and performance.
        • Step 2: Sell your fashion items in the shop or trade them with other users. You can sell fashion items that you designed yourself or bought from the shop, either in the shop or in the trade center. You can set your own price for your items, and you can earn money and XP when someone buys or trades them.
        • Step 3: Join clubs, forums, chat rooms, and group messages to socialize and get tips. You can join or create clubs, forums, chat rooms, and group messages that match your interests and hobbies. You can interact with other users, share your outfits, give feedback, and have fun. You can also get tips and tricks from other users on how to play Everskies Oyna better.

        Earning money and XP in Everskies Oyna is easy and enjoyable. You can use your money and XP to buy more items, level up your avatar, and unlock more features in the game.


        everskies oyna online
        -everskies oyna ücretsiz
        -everskies oyna mobil
        -everskies oyna nasıl
        -everskies oyna türkçe
        -everskies oyna apk
        -everskies oyna indir
        -everskies oyna kaydol
        -everskies oyna giriş yap
        -everskies oyna hileleri
        -everskies oyna kıyafet yarışması
        -everskies oyna avatar oluştur
        -everskies oyna mini oyunlar
        -everskies oyna forumlar
        -everskies oyna kulüpler
        -everskies oyna sohbet odaları
        -everskies oyna grup mesajları
        -everskies oyna sanal para kazan
        -everskies oyna tasarım sat
        -everskies oyna öğeler al sat
        -everskies oyna benzeri oyunlar
        -everskies oyna yorumlar
        -everskies oyna ipuçları
        -everskies oyna rehberi
        -everskies oyna sorunları çözümü
        -everskies oyna güncellemeleri
        -everskies oyna haberleri
        -everskies oyna etkinlik takvimi
        -everskies oyna özel setler
        -everskies oyna starpass nedir
        -everskies oyna burç yarışması
        -everskies oyna doğum taşı yarışması
        -everskies oyna doğa korkutucu yarışması
        -everskies oyna gurur ayı kutlama
        -everskies oyna satranç zamanı yarışması
        -everskies oyna moda bebekleri yarışması
        -everskies oyna deniz kızı belki yarışması
        -everskies oyna zümrüt yarışması
        -everskies oyna everchanted orman yarışması
        -everskies oyna elmas yarışması
        -everskies oyna altskyler yarışması
        -everskies oyna akvaryum yarışması
        -everskies oyna hissediyorum tatlı yarışması
        -everskies oyna kırmızı halı yarışması
        -everskies oyna dükkan güncellemeleri

        How to Find People with Similar Interests and Meet New Friends in Everskies

        One of the best things about Everskies Oyna is that you can find people with similar interests and meet new friends from all over the world. Everskies Oyna is a friendly and welcoming community that supports diversity and creativity. You can connect with other users who share your passion for fashion, art, music, games, etc. Here are the steps to find people with similar interests and meet new friends in Everskies:

        • Step 1: Browse the clubs, forums, chat rooms, and group messages by category or keyword. You can find the clubs, forums, chat rooms, and group messages on the Community tab or on the homepage. You can browse them by category such as fashion, art, music, games, etc. or by keyword such as anime, kpop, harry potter, etc.
        • Step 2: Join or create a club, forum, chat room, or group message that suits your interests. You can join or create a club, forum, chat room, or group message that matches your interests and hobbies. You can also invite other users to join or create them with you.
        • Step 3: Interact with other users, share your outfits, give feedback, and have fun. You can interact with other users who are members of the same club, forum, chat room, or group message as you. You can share your outfits, give feedback, and have fun. You can also send private messages to other users, add them as friends, or block them if you don't like them.

        Finding people with similar interests and meeting new friends in Everskies Oyna is a wonderful way to expand your social circle and enjoy the game more. You can also learn from other users and discover new things.

        Conclusion: Everskies Oyna is a Fun and Creative Game for Everyone

        Everskies Oyna is a virtual dress up game that lets you create your own avatar, design your own fashion items, participate in outfit competitions and events, earn money and XP, and find people with similar interests and meet new friends. Everskies Oyna is a fun and creative game for everyone who loves fashion, art, and socializing. You can play Everskies Oyna for free on your browser or download the app on your mobile device. You can also follow Everskies Oyna on social media platforms such as Instagram, Twitter, Facebook, etc. to get the latest news and updates. If you are looking for a game that allows you to express yourself, show off your style, and make new friends, you should definitely try Everskies Oyna today!

        FAQs

        • Q: What is Everskies Oyna?
        • A: Everskies Oyna is a virtual dress up game that lets you create your own avatar, design your own fashion items, participate in outfit competitions and events, earn money and XP, and find people with similar interests and meet new friends.
        • Q: How can I play Everskies Oyna?
        • A: You can play Everskies Oyna for free on your browser or download the app on your mobile device. You can also follow Everskies Oyna on social media platforms such as Instagram, Twitter, Facebook, etc. to get the latest news and updates.
        • Q: How can I create my own avatar in Everskies Oyna?
        • A: You can create your own avatar in Everskies Oyna by choosing your gender, skin tone, facial features, hair, eyes, makeup, outfits, accessories, and shoes. You can customize your avatar to match your personality and style.
        • Q: How can I design my own fashion items in Everskies Oyna?
        • A: You can design your own fashion items in Everskies Oyna by going to the Creative tab and selecting an item template. You can use the drawing tools and filters to create your own design. You can save and submit your item for approval.
        • Q: How can I participate in outfit competitions and events in Everskies Oyna?
        • A: You can participate in outfit competitions and events in Everskies Oyna by checking the event calendar and the competition rules. You can create an outfit that matches the theme and criteria. You can vote for other entries and wait for the results.

        \ No newline at end of file diff --git a/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/utils/utils_config.py b/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/utils/utils_config.py deleted file mode 100644 index 0c02eaf70fc0140aca7925f621c29a496f491cae..0000000000000000000000000000000000000000 --- a/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/utils/utils_config.py +++ /dev/null @@ -1,16 +0,0 @@ -import importlib -import os.path as osp - - -def get_config(config_file): - assert config_file.startswith('configs/'), 'config file setting must start with configs/' - temp_config_name = osp.basename(config_file) - temp_module_name = osp.splitext(temp_config_name)[0] - config = importlib.import_module("configs.base") - cfg = config.config - config = importlib.import_module("configs.%s" % temp_module_name) - job_cfg = config.config - cfg.update(job_cfg) - if cfg.output is None: - cfg.output = osp.join('work_dirs', temp_module_name) - return cfg \ No newline at end of file diff --git a/spaces/4eJIoBek/Stable_Diffusion_1.4_openvino/stable_diffusion_engine.py b/spaces/4eJIoBek/Stable_Diffusion_1.4_openvino/stable_diffusion_engine.py deleted file mode 100644 index 04629a8d863c3a3a05a4665c5d3e3fe534aa6fd3..0000000000000000000000000000000000000000 --- a/spaces/4eJIoBek/Stable_Diffusion_1.4_openvino/stable_diffusion_engine.py +++ /dev/null @@ -1,212 +0,0 @@ -import inspect -import numpy as np -# openvino -from openvino.runtime import Core -# tokenizer -from transformers import CLIPTokenizer -# utils -from tqdm import tqdm -from huggingface_hub import hf_hub_download -from diffusers import LMSDiscreteScheduler, PNDMScheduler -import cv2 - - -def result(var): - return next(iter(var.values())) - - -class StableDiffusionEngine: - def __init__( - self, - scheduler, - model="4eJIoBek/stable-diffusion-v1-4-openvino-fp32", - tokenizer="openai/clip-vit-large-patch14", - device="CPU" - ): - self.tokenizer = CLIPTokenizer.from_pretrained(tokenizer) - self.scheduler = scheduler - # models - self.core = Core() - # text features - self._text_encoder = self.core.read_model( - hf_hub_download(repo_id=model, filename="text_encoder.xml"), - hf_hub_download(repo_id=model, filename="text_encoder.bin") - ) - self.text_encoder = self.core.compile_model(self._text_encoder, device) - # diffusion - self._unet = self.core.read_model( - hf_hub_download(repo_id=model, filename="unet.xml"), - hf_hub_download(repo_id=model, filename="unet.bin") - ) - self.unet = self.core.compile_model(self._unet, device) - self.latent_shape = tuple(self._unet.inputs[0].shape)[1:] - # decoder - self._vae_decoder = self.core.read_model( - hf_hub_download(repo_id=model, filename="vae_decoder.xml"), - hf_hub_download(repo_id=model, filename="vae_decoder.bin") - ) - self.vae_decoder = self.core.compile_model(self._vae_decoder, device) - # encoder - self._vae_encoder = self.core.read_model( - hf_hub_download(repo_id=model, filename="vae_encoder.xml"), - hf_hub_download(repo_id=model, filename="vae_encoder.bin") - ) - self.vae_encoder = self.core.compile_model(self._vae_encoder, device) - self.init_image_shape = tuple(self._vae_encoder.inputs[0].shape)[2:] - - def _preprocess_mask(self, mask): - h, w = mask.shape - if h != self.init_image_shape[0] and w != self.init_image_shape[1]: - mask = cv2.resize( - mask, - (self.init_image_shape[1], self.init_image_shape[0]), - interpolation = cv2.INTER_NEAREST - ) - mask = cv2.resize( - mask, - (self.init_image_shape[1] // 8, self.init_image_shape[0] // 8), - interpolation = 
cv2.INTER_NEAREST - ) - mask = mask.astype(np.float32) / 255.0 - mask = np.tile(mask, (4, 1, 1)) - mask = mask[None].transpose(0, 1, 2, 3) - mask = 1 - mask - return mask - - def _preprocess_image(self, image): - image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) - h, w = image.shape[1:] - if h != self.init_image_shape[0] and w != self.init_image_shape[1]: - image = cv2.resize( - image, - (self.init_image_shape[1], self.init_image_shape[0]), - interpolation=cv2.INTER_LANCZOS4 - ) - # normalize - image = image.astype(np.float32) / 255.0 - image = 2.0 * image - 1.0 - # to batch - image = image[None].transpose(0, 3, 1, 2) - return image - - def _encode_image(self, init_image): - moments = result(self.vae_encoder.infer_new_request({ - "init_image": self._preprocess_image(init_image) - })) - mean, logvar = np.split(moments, 2, axis=1) - std = np.exp(logvar * 0.5) - latent = (mean + std * np.random.randn(*mean.shape)) * 0.18215 - return latent - - def __call__( - self, - prompt, - init_image = None, - mask = None, - strength = 0.5, - num_inference_steps = 32, - guidance_scale = 7.5, - eta = 0.0 - ): - # extract condition - tokens = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True - ).input_ids - text_embeddings = result( - self.text_encoder.infer_new_request({"tokens": np.array([tokens])}) - ) - - # do classifier free guidance - if guidance_scale > 1.0: - tokens_uncond = self.tokenizer( - "", - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True - ).input_ids - uncond_embeddings = result( - self.text_encoder.infer_new_request({"tokens": np.array([tokens_uncond])}) - ) - text_embeddings = np.concatenate((uncond_embeddings, text_embeddings), axis=0) - - # set timesteps - accepts_offset = "offset" in set(inspect.signature(self.scheduler.set_timesteps).parameters.keys()) - extra_set_kwargs = {} - offset = 0 - if accepts_offset: - offset = 1 - extra_set_kwargs["offset"] = 1 - - self.scheduler.set_timesteps(num_inference_steps, **extra_set_kwargs) - - # initialize latent latent - if init_image is None: - latents = np.random.randn(*self.latent_shape) - init_timestep = num_inference_steps - else: - init_latents = self._encode_image(init_image) - init_timestep = int(num_inference_steps * strength) + offset - init_timestep = min(init_timestep, num_inference_steps) - timesteps = np.array([[self.scheduler.timesteps[-init_timestep]]]).astype(np.long) - noise = np.random.randn(*self.latent_shape) - latents = self.scheduler.add_noise(init_latents, noise, timesteps)[0] - - if init_image is not None and mask is not None: - mask = self._preprocess_mask(mask) - else: - mask = None - - # if we use LMSDiscreteScheduler, let's make sure latents are mulitplied by sigmas - if isinstance(self.scheduler, LMSDiscreteScheduler): - latents = latents * self.scheduler.sigmas[0] - - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. 
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - t_start = max(num_inference_steps - init_timestep + offset, 0) - for i, t in tqdm(enumerate(self.scheduler.timesteps[t_start:])): - # expand the latents if we are doing classifier free guidance - latent_model_input = np.stack([latents, latents], 0) if guidance_scale > 1.0 else latents[None] - if isinstance(self.scheduler, LMSDiscreteScheduler): - sigma = self.scheduler.sigmas[i] - latent_model_input = latent_model_input / ((sigma**2 + 1) ** 0.5) - - # predict the noise residual - noise_pred = result(self.unet.infer_new_request({ - "latent_model_input": latent_model_input, - "t": t, - "encoder_hidden_states": text_embeddings - })) - - # perform guidance - if guidance_scale > 1.0: - noise_pred = noise_pred[0] + guidance_scale * (noise_pred[1] - noise_pred[0]) - - # compute the previous noisy sample x_t -> x_t-1 - if isinstance(self.scheduler, LMSDiscreteScheduler): - latents = self.scheduler.step(noise_pred, i, latents, **extra_step_kwargs)["prev_sample"] - else: - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs)["prev_sample"] - - # masking for inapinting - if mask is not None: - init_latents_proper = self.scheduler.add_noise(init_latents, noise, t) - latents = ((init_latents_proper * mask) + (latents * (1 - mask)))[0] - - image = result(self.vae_decoder.infer_new_request({ - "latents": np.expand_dims(latents, 0) - })) - - # convert tensor to opencv's image format - image = (image / 2 + 0.5).clip(0, 1) - image = (image[0].transpose(1, 2, 0)[:, :, ::-1] * 255).astype(np.uint8) - return image diff --git a/spaces/7thHeaven/GPT2WordPress/constraints.md b/spaces/7thHeaven/GPT2WordPress/constraints.md deleted file mode 100644 index 4096a6fa8b70514623b1164e67df99ad2c3408a7..0000000000000000000000000000000000000000 --- a/spaces/7thHeaven/GPT2WordPress/constraints.md +++ /dev/null @@ -1,8 +0,0 @@ -# 制約 - -- あなたはブログ記事生成アシスタントです -- あなたはユーザーが与えるプロンプトをブログ記事のタイトルとして解釈し、ブログ記事本文を生成します -- 返信はブログ記事本文のみです -- あなたは優しい性格のブロガーです -- あなたは好奇心旺盛で、人々が見逃してしまいそうな小さな幸せを発見することが得意です。作成する記事も、そのような特色が現れます -- あなたは、なんでもITに紐づけてしまう癖を持っています diff --git a/spaces/AIConsultant/MusicGen/CONTRIBUTING.md b/spaces/AIConsultant/MusicGen/CONTRIBUTING.md deleted file mode 100644 index a3e9507643d4439f509a8fc8b87dc73417ef9822..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/CONTRIBUTING.md +++ /dev/null @@ -1,35 +0,0 @@ -# Contributing to AudioCraft - -We want to make contributing to this project as easy and transparent as -possible. - -## Pull Requests - -AudioCraft is the implementation of a research paper. -Therefore, we do not plan on accepting many pull requests for new features. -We certainly welcome them for bug fixes. - -1. Fork the repo and create your branch from `main`. -2. If you've added code that should be tested, add tests. -3. If you've changed APIs, update the documentation. -4. Ensure the test suite passes. -5. Make sure your code lints. -6. If you haven't already, complete the Contributor License Agreement ("CLA"). - -## Contributor License Agreement ("CLA") -In order to accept your pull request, we need you to submit a CLA. You only need -to do this once to work on any of Meta's open source projects. - -Complete your CLA here: - -## Issues -We use GitHub issues to track public bugs. 
Please ensure your description is -clear and has sufficient instructions to be able to reproduce the issue. - -Meta has a [bounty program](https://www.facebook.com/whitehat/) for the safe -disclosure of security bugs. In those cases, please go through the process -outlined on that page and do not file a public issue. - -## License -By contributing to encodec, you agree that your contributions will be licensed -under the LICENSE file in the root directory of this source tree. diff --git a/spaces/AIFILMS/StyleGANEX/models/stylegan2/op_ori/fused_act.py b/spaces/AIFILMS/StyleGANEX/models/stylegan2/op_ori/fused_act.py deleted file mode 100644 index 973a84fffde53668d31397da5fb993bbc95f7be0..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/StyleGANEX/models/stylegan2/op_ori/fused_act.py +++ /dev/null @@ -1,85 +0,0 @@ -import os - -import torch -from torch import nn -from torch.autograd import Function -from torch.utils.cpp_extension import load - -module_path = os.path.dirname(__file__) -fused = load( - 'fused', - sources=[ - os.path.join(module_path, 'fused_bias_act.cpp'), - os.path.join(module_path, 'fused_bias_act_kernel.cu'), - ], -) - - -class FusedLeakyReLUFunctionBackward(Function): - @staticmethod - def forward(ctx, grad_output, out, negative_slope, scale): - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - empty = grad_output.new_empty(0) - - grad_input = fused.fused_bias_act( - grad_output, empty, out, 3, 1, negative_slope, scale - ) - - dim = [0] - - if grad_input.ndim > 2: - dim += list(range(2, grad_input.ndim)) - - grad_bias = grad_input.sum(dim).detach() - - return grad_input, grad_bias - - @staticmethod - def backward(ctx, gradgrad_input, gradgrad_bias): - out, = ctx.saved_tensors - gradgrad_out = fused.fused_bias_act( - gradgrad_input, gradgrad_bias, out, 3, 1, ctx.negative_slope, ctx.scale - ) - - return gradgrad_out, None, None, None - - -class FusedLeakyReLUFunction(Function): - @staticmethod - def forward(ctx, input, bias, negative_slope, scale): - empty = input.new_empty(0) - out = fused.fused_bias_act(input, bias, empty, 3, 0, negative_slope, scale) - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - return out - - @staticmethod - def backward(ctx, grad_output): - out, = ctx.saved_tensors - - grad_input, grad_bias = FusedLeakyReLUFunctionBackward.apply( - grad_output, out, ctx.negative_slope, ctx.scale - ) - - return grad_input, grad_bias, None, None - - -class FusedLeakyReLU(nn.Module): - def __init__(self, channel, negative_slope=0.2, scale=2 ** 0.5): - super().__init__() - - self.bias = nn.Parameter(torch.zeros(channel)) - self.negative_slope = negative_slope - self.scale = scale - - def forward(self, input): - return fused_leaky_relu(input, self.bias, self.negative_slope, self.scale) - - -def fused_leaky_relu(input, bias, negative_slope=0.2, scale=2 ** 0.5): - return FusedLeakyReLUFunction.apply(input, bias, negative_slope, scale) diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/tts_utils.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/tts_utils.py deleted file mode 100644 index 47e654c03eaf9c50ae0bb3c97ecd661666a1a6b1..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/tts_utils.py +++ /dev/null @@ -1,54 +0,0 @@ -import importlib - -from text_to_speech.data_gen.tts.base_binarizer import BaseBinarizer -from text_to_speech.data_gen.tts.base_preprocess import BasePreprocessor -from 
text_to_speech.data_gen.tts.txt_processors.base_text_processor import get_txt_processor_cls -from text_to_speech.utils.commons.hparams import hparams - - -def parse_dataset_configs(): - max_tokens = hparams['max_tokens'] - max_sentences = hparams['max_sentences'] - max_valid_tokens = hparams['max_valid_tokens'] - if max_valid_tokens == -1: - hparams['max_valid_tokens'] = max_valid_tokens = max_tokens - max_valid_sentences = hparams['max_valid_sentences'] - if max_valid_sentences == -1: - hparams['max_valid_sentences'] = max_valid_sentences = max_sentences - return max_tokens, max_sentences, max_valid_tokens, max_valid_sentences - - -def parse_mel_losses(): - mel_losses = hparams['mel_losses'].split("|") - loss_and_lambda = {} - for i, l in enumerate(mel_losses): - if l == '': - continue - if ':' in l: - l, lbd = l.split(":") - lbd = float(lbd) - else: - lbd = 1.0 - loss_and_lambda[l] = lbd - print("| Mel losses:", loss_and_lambda) - return loss_and_lambda - - -def load_data_preprocessor(): - preprocess_cls = hparams["preprocess_cls"] - pkg = ".".join(preprocess_cls.split(".")[:-1]) - cls_name = preprocess_cls.split(".")[-1] - preprocessor: BasePreprocessor = getattr(importlib.import_module(pkg), cls_name)() - preprocess_args = {} - preprocess_args.update(hparams['preprocess_args']) - return preprocessor, preprocess_args - - -def load_data_binarizer(): - binarizer_cls = hparams['binarizer_cls'] - pkg = ".".join(binarizer_cls.split(".")[:-1]) - cls_name = binarizer_cls.split(".")[-1] - binarizer: BaseBinarizer = getattr(importlib.import_module(pkg), cls_name)() - binarization_args = {} - binarization_args.update(hparams['binarization_args']) - return binarizer, binarization_args diff --git a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/image_degradation/bsrgan_light.py b/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/image_degradation/bsrgan_light.py deleted file mode 100644 index 9e1f823996bf559e9b015ea9aa2b3cd38dd13af1..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/image_degradation/bsrgan_light.py +++ /dev/null @@ -1,650 +0,0 @@ -# -*- coding: utf-8 -*- -import numpy as np -import cv2 -import torch - -from functools import partial -import random -from scipy import ndimage -import scipy -import scipy.stats as ss -from scipy.interpolate import interp2d -from scipy.linalg import orth -import albumentations - -import ldm.modules.image_degradation.utils_image as util - -""" -# -------------------------------------------- -# Super-Resolution -# -------------------------------------------- -# -# Kai Zhang (cskaizhang@gmail.com) -# https://github.com/cszn -# From 2019/03--2021/08 -# -------------------------------------------- -""" - - -def modcrop_np(img, sf): - ''' - Args: - img: numpy image, WxH or WxHxC - sf: scale factor - Return: - cropped image - ''' - w, h = img.shape[:2] - im = np.copy(img) - return im[:w - w % sf, :h - h % sf, ...] 
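# Illustrative usage sketch (assumed example): modcrop_np above trims an image so
# that its height and width are both divisible by the scale factor, which the
# blur/downsampling helpers below expect before downscaling. For instance, a
# hypothetical 513x769 RGB array cropped with sf=4 becomes 512x768:
#
#     demo = np.random.rand(513, 769, 3).astype(np.float32)  # hypothetical HxWxC image in [0, 1]
#     demo = modcrop_np(demo, sf=4)                           # demo.shape == (512, 768, 3)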
- - -""" -# -------------------------------------------- -# anisotropic Gaussian kernels -# -------------------------------------------- -""" - - -def analytic_kernel(k): - """Calculate the X4 kernel from the X2 kernel (for proof see appendix in paper)""" - k_size = k.shape[0] - # Calculate the big kernels size - big_k = np.zeros((3 * k_size - 2, 3 * k_size - 2)) - # Loop over the small kernel to fill the big one - for r in range(k_size): - for c in range(k_size): - big_k[2 * r:2 * r + k_size, 2 * c:2 * c + k_size] += k[r, c] * k - # Crop the edges of the big kernel to ignore very small values and increase run time of SR - crop = k_size // 2 - cropped_big_k = big_k[crop:-crop, crop:-crop] - # Normalize to 1 - return cropped_big_k / cropped_big_k.sum() - - -def anisotropic_Gaussian(ksize=15, theta=np.pi, l1=6, l2=6): - """ generate an anisotropic Gaussian kernel - Args: - ksize : e.g., 15, kernel size - theta : [0, pi], rotation angle range - l1 : [0.1,50], scaling of eigenvalues - l2 : [0.1,l1], scaling of eigenvalues - If l1 = l2, will get an isotropic Gaussian kernel. - Returns: - k : kernel - """ - - v = np.dot(np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]), np.array([1., 0.])) - V = np.array([[v[0], v[1]], [v[1], -v[0]]]) - D = np.array([[l1, 0], [0, l2]]) - Sigma = np.dot(np.dot(V, D), np.linalg.inv(V)) - k = gm_blur_kernel(mean=[0, 0], cov=Sigma, size=ksize) - - return k - - -def gm_blur_kernel(mean, cov, size=15): - center = size / 2.0 + 0.5 - k = np.zeros([size, size]) - for y in range(size): - for x in range(size): - cy = y - center + 1 - cx = x - center + 1 - k[y, x] = ss.multivariate_normal.pdf([cx, cy], mean=mean, cov=cov) - - k = k / np.sum(k) - return k - - -def shift_pixel(x, sf, upper_left=True): - """shift pixel for super-resolution with different scale factors - Args: - x: WxHxC or WxH - sf: scale factor - upper_left: shift direction - """ - h, w = x.shape[:2] - shift = (sf - 1) * 0.5 - xv, yv = np.arange(0, w, 1.0), np.arange(0, h, 1.0) - if upper_left: - x1 = xv + shift - y1 = yv + shift - else: - x1 = xv - shift - y1 = yv - shift - - x1 = np.clip(x1, 0, w - 1) - y1 = np.clip(y1, 0, h - 1) - - if x.ndim == 2: - x = interp2d(xv, yv, x)(x1, y1) - if x.ndim == 3: - for i in range(x.shape[-1]): - x[:, :, i] = interp2d(xv, yv, x[:, :, i])(x1, y1) - - return x - - -def blur(x, k): - ''' - x: image, NxcxHxW - k: kernel, Nx1xhxw - ''' - n, c = x.shape[:2] - p1, p2 = (k.shape[-2] - 1) // 2, (k.shape[-1] - 1) // 2 - x = torch.nn.functional.pad(x, pad=(p1, p2, p1, p2), mode='replicate') - k = k.repeat(1, c, 1, 1) - k = k.view(-1, 1, k.shape[2], k.shape[3]) - x = x.view(1, -1, x.shape[2], x.shape[3]) - x = torch.nn.functional.conv2d(x, k, bias=None, stride=1, padding=0, groups=n * c) - x = x.view(n, c, x.shape[2], x.shape[3]) - - return x - - -def gen_kernel(k_size=np.array([15, 15]), scale_factor=np.array([4, 4]), min_var=0.6, max_var=10., noise_level=0): - """" - # modified version of https://github.com/assafshocher/BlindSR_dataset_generator - # Kai Zhang - # min_var = 0.175 * sf # variance of the gaussian kernel will be sampled between min_var and max_var - # max_var = 2.5 * sf - """ - # Set random eigen-vals (lambdas) and angle (theta) for COV matrix - lambda_1 = min_var + np.random.rand() * (max_var - min_var) - lambda_2 = min_var + np.random.rand() * (max_var - min_var) - theta = np.random.rand() * np.pi # random theta - noise = -noise_level + np.random.rand(*k_size) * noise_level * 2 - - # Set COV matrix using Lambdas and Theta - LAMBDA = 
np.diag([lambda_1, lambda_2]) - Q = np.array([[np.cos(theta), -np.sin(theta)], - [np.sin(theta), np.cos(theta)]]) - SIGMA = Q @ LAMBDA @ Q.T - INV_SIGMA = np.linalg.inv(SIGMA)[None, None, :, :] - - # Set expectation position (shifting kernel for aligned image) - MU = k_size // 2 - 0.5 * (scale_factor - 1) # - 0.5 * (scale_factor - k_size % 2) - MU = MU[None, None, :, None] - - # Create meshgrid for Gaussian - [X, Y] = np.meshgrid(range(k_size[0]), range(k_size[1])) - Z = np.stack([X, Y], 2)[:, :, :, None] - - # Calcualte Gaussian for every pixel of the kernel - ZZ = Z - MU - ZZ_t = ZZ.transpose(0, 1, 3, 2) - raw_kernel = np.exp(-0.5 * np.squeeze(ZZ_t @ INV_SIGMA @ ZZ)) * (1 + noise) - - # shift the kernel so it will be centered - # raw_kernel_centered = kernel_shift(raw_kernel, scale_factor) - - # Normalize the kernel and return - # kernel = raw_kernel_centered / np.sum(raw_kernel_centered) - kernel = raw_kernel / np.sum(raw_kernel) - return kernel - - -def fspecial_gaussian(hsize, sigma): - hsize = [hsize, hsize] - siz = [(hsize[0] - 1.0) / 2.0, (hsize[1] - 1.0) / 2.0] - std = sigma - [x, y] = np.meshgrid(np.arange(-siz[1], siz[1] + 1), np.arange(-siz[0], siz[0] + 1)) - arg = -(x * x + y * y) / (2 * std * std) - h = np.exp(arg) - h[h < scipy.finfo(float).eps * h.max()] = 0 - sumh = h.sum() - if sumh != 0: - h = h / sumh - return h - - -def fspecial_laplacian(alpha): - alpha = max([0, min([alpha, 1])]) - h1 = alpha / (alpha + 1) - h2 = (1 - alpha) / (alpha + 1) - h = [[h1, h2, h1], [h2, -4 / (alpha + 1), h2], [h1, h2, h1]] - h = np.array(h) - return h - - -def fspecial(filter_type, *args, **kwargs): - ''' - python code from: - https://github.com/ronaldosena/imagens-medicas-2/blob/40171a6c259edec7827a6693a93955de2bd39e76/Aulas/aula_2_-_uniform_filter/matlab_fspecial.py - ''' - if filter_type == 'gaussian': - return fspecial_gaussian(*args, **kwargs) - if filter_type == 'laplacian': - return fspecial_laplacian(*args, **kwargs) - - -""" -# -------------------------------------------- -# degradation models -# -------------------------------------------- -""" - - -def bicubic_degradation(x, sf=3): - ''' - Args: - x: HxWxC image, [0, 1] - sf: down-scale factor - Return: - bicubicly downsampled LR image - ''' - x = util.imresize_np(x, scale=1 / sf) - return x - - -def srmd_degradation(x, k, sf=3): - ''' blur + bicubic downsampling - Args: - x: HxWxC image, [0, 1] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - Reference: - @inproceedings{zhang2018learning, - title={Learning a single convolutional super-resolution network for multiple degradations}, - author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei}, - booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, - pages={3262--3271}, - year={2018} - } - ''' - x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap') # 'nearest' | 'mirror' - x = bicubic_degradation(x, sf=sf) - return x - - -def dpsr_degradation(x, k, sf=3): - ''' bicubic downsampling + blur - Args: - x: HxWxC image, [0, 1] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - Reference: - @inproceedings{zhang2019deep, - title={Deep Plug-and-Play Super-Resolution for Arbitrary Blur Kernels}, - author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei}, - booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, - pages={1671--1681}, - year={2019} - } - ''' - x = bicubic_degradation(x, sf=sf) - x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap') - return x - - -def 
classical_degradation(x, k, sf=3): - ''' blur + downsampling - Args: - x: HxWxC image, [0, 1]/[0, 255] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - ''' - x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap') - # x = filters.correlate(x, np.expand_dims(np.flip(k), axis=2)) - st = 0 - return x[st::sf, st::sf, ...] - - -def add_sharpening(img, weight=0.5, radius=50, threshold=10): - """USM sharpening. borrowed from real-ESRGAN - Input image: I; Blurry image: B. - 1. K = I + weight * (I - B) - 2. Mask = 1 if abs(I - B) > threshold, else: 0 - 3. Blur mask: - 4. Out = Mask * K + (1 - Mask) * I - Args: - img (Numpy array): Input image, HWC, BGR; float32, [0, 1]. - weight (float): Sharp weight. Default: 1. - radius (float): Kernel size of Gaussian blur. Default: 50. - threshold (int): - """ - if radius % 2 == 0: - radius += 1 - blur = cv2.GaussianBlur(img, (radius, radius), 0) - residual = img - blur - mask = np.abs(residual) * 255 > threshold - mask = mask.astype('float32') - soft_mask = cv2.GaussianBlur(mask, (radius, radius), 0) - - K = img + weight * residual - K = np.clip(K, 0, 1) - return soft_mask * K + (1 - soft_mask) * img - - -def add_blur(img, sf=4): - wd2 = 4.0 + sf - wd = 2.0 + 0.2 * sf - - wd2 = wd2/4 - wd = wd/4 - - if random.random() < 0.5: - l1 = wd2 * random.random() - l2 = wd2 * random.random() - k = anisotropic_Gaussian(ksize=random.randint(2, 11) + 3, theta=random.random() * np.pi, l1=l1, l2=l2) - else: - k = fspecial('gaussian', random.randint(2, 4) + 3, wd * random.random()) - img = ndimage.filters.convolve(img, np.expand_dims(k, axis=2), mode='mirror') - - return img - - -def add_resize(img, sf=4): - rnum = np.random.rand() - if rnum > 0.8: # up - sf1 = random.uniform(1, 2) - elif rnum < 0.7: # down - sf1 = random.uniform(0.5 / sf, 1) - else: - sf1 = 1.0 - img = cv2.resize(img, (int(sf1 * img.shape[1]), int(sf1 * img.shape[0])), interpolation=random.choice([1, 2, 3])) - img = np.clip(img, 0.0, 1.0) - - return img - - -# def add_Gaussian_noise(img, noise_level1=2, noise_level2=25): -# noise_level = random.randint(noise_level1, noise_level2) -# rnum = np.random.rand() -# if rnum > 0.6: # add color Gaussian noise -# img += np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) -# elif rnum < 0.4: # add grayscale Gaussian noise -# img += np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) -# else: # add noise -# L = noise_level2 / 255. -# D = np.diag(np.random.rand(3)) -# U = orth(np.random.rand(3, 3)) -# conv = np.dot(np.dot(np.transpose(U), D), U) -# img += np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) -# img = np.clip(img, 0.0, 1.0) -# return img - -def add_Gaussian_noise(img, noise_level1=2, noise_level2=25): - noise_level = random.randint(noise_level1, noise_level2) - rnum = np.random.rand() - if rnum > 0.6: # add color Gaussian noise - img = img + np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) - elif rnum < 0.4: # add grayscale Gaussian noise - img = img + np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) - else: # add noise - L = noise_level2 / 255. 
- D = np.diag(np.random.rand(3)) - U = orth(np.random.rand(3, 3)) - conv = np.dot(np.dot(np.transpose(U), D), U) - img = img + np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) - img = np.clip(img, 0.0, 1.0) - return img - - -def add_speckle_noise(img, noise_level1=2, noise_level2=25): - noise_level = random.randint(noise_level1, noise_level2) - img = np.clip(img, 0.0, 1.0) - rnum = random.random() - if rnum > 0.6: - img += img * np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) - elif rnum < 0.4: - img += img * np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) - else: - L = noise_level2 / 255. - D = np.diag(np.random.rand(3)) - U = orth(np.random.rand(3, 3)) - conv = np.dot(np.dot(np.transpose(U), D), U) - img += img * np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) - img = np.clip(img, 0.0, 1.0) - return img - - -def add_Poisson_noise(img): - img = np.clip((img * 255.0).round(), 0, 255) / 255. - vals = 10 ** (2 * random.random() + 2.0) # [2, 4] - if random.random() < 0.5: - img = np.random.poisson(img * vals).astype(np.float32) / vals - else: - img_gray = np.dot(img[..., :3], [0.299, 0.587, 0.114]) - img_gray = np.clip((img_gray * 255.0).round(), 0, 255) / 255. - noise_gray = np.random.poisson(img_gray * vals).astype(np.float32) / vals - img_gray - img += noise_gray[:, :, np.newaxis] - img = np.clip(img, 0.0, 1.0) - return img - - -def add_JPEG_noise(img): - quality_factor = random.randint(80, 95) - img = cv2.cvtColor(util.single2uint(img), cv2.COLOR_RGB2BGR) - result, encimg = cv2.imencode('.jpg', img, [int(cv2.IMWRITE_JPEG_QUALITY), quality_factor]) - img = cv2.imdecode(encimg, 1) - img = cv2.cvtColor(util.uint2single(img), cv2.COLOR_BGR2RGB) - return img - - -def random_crop(lq, hq, sf=4, lq_patchsize=64): - h, w = lq.shape[:2] - rnd_h = random.randint(0, h - lq_patchsize) - rnd_w = random.randint(0, w - lq_patchsize) - lq = lq[rnd_h:rnd_h + lq_patchsize, rnd_w:rnd_w + lq_patchsize, :] - - rnd_h_H, rnd_w_H = int(rnd_h * sf), int(rnd_w * sf) - hq = hq[rnd_h_H:rnd_h_H + lq_patchsize * sf, rnd_w_H:rnd_w_H + lq_patchsize * sf, :] - return lq, hq - - -def degradation_bsrgan(img, sf=4, lq_patchsize=72, isp_model=None): - """ - This is the degradation model of BSRGAN from the paper - "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution" - ---------- - img: HXWXC, [0, 1], its size should be large than (lq_patchsizexsf)x(lq_patchsizexsf) - sf: scale factor - isp_model: camera ISP model - Returns - ------- - img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1] - hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1] - """ - isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25 - sf_ori = sf - - h1, w1 = img.shape[:2] - img = img.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] 
# mod crop - h, w = img.shape[:2] - - if h < lq_patchsize * sf or w < lq_patchsize * sf: - raise ValueError(f'img size ({h1}X{w1}) is too small!') - - hq = img.copy() - - if sf == 4 and random.random() < scale2_prob: # downsample1 - if np.random.rand() < 0.5: - img = cv2.resize(img, (int(1 / 2 * img.shape[1]), int(1 / 2 * img.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - img = util.imresize_np(img, 1 / 2, True) - img = np.clip(img, 0.0, 1.0) - sf = 2 - - shuffle_order = random.sample(range(7), 7) - idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3) - if idx1 > idx2: # keep downsample3 last - shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1] - - for i in shuffle_order: - - if i == 0: - img = add_blur(img, sf=sf) - - elif i == 1: - img = add_blur(img, sf=sf) - - elif i == 2: - a, b = img.shape[1], img.shape[0] - # downsample2 - if random.random() < 0.75: - sf1 = random.uniform(1, 2 * sf) - img = cv2.resize(img, (int(1 / sf1 * img.shape[1]), int(1 / sf1 * img.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf)) - k_shifted = shift_pixel(k, sf) - k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel - img = ndimage.filters.convolve(img, np.expand_dims(k_shifted, axis=2), mode='mirror') - img = img[0::sf, 0::sf, ...] # nearest downsampling - img = np.clip(img, 0.0, 1.0) - - elif i == 3: - # downsample3 - img = cv2.resize(img, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3])) - img = np.clip(img, 0.0, 1.0) - - elif i == 4: - # add Gaussian noise - img = add_Gaussian_noise(img, noise_level1=2, noise_level2=8) - - elif i == 5: - # add JPEG noise - if random.random() < jpeg_prob: - img = add_JPEG_noise(img) - - elif i == 6: - # add processed camera sensor noise - if random.random() < isp_prob and isp_model is not None: - with torch.no_grad(): - img, hq = isp_model.forward(img.copy(), hq) - - # add final JPEG compression noise - img = add_JPEG_noise(img) - - # random crop - img, hq = random_crop(img, hq, sf_ori, lq_patchsize) - - return img, hq - - -# todo no isp_model? -def degradation_bsrgan_variant(image, sf=4, isp_model=None): - """ - This is the degradation model of BSRGAN from the paper - "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution" - ---------- - sf: scale factor - isp_model: camera ISP model - Returns - ------- - img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1] - hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1] - """ - image = util.uint2single(image) - isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25 - sf_ori = sf - - h1, w1 = image.shape[:2] - image = image.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] 
# mod crop - h, w = image.shape[:2] - - hq = image.copy() - - if sf == 4 and random.random() < scale2_prob: # downsample1 - if np.random.rand() < 0.5: - image = cv2.resize(image, (int(1 / 2 * image.shape[1]), int(1 / 2 * image.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - image = util.imresize_np(image, 1 / 2, True) - image = np.clip(image, 0.0, 1.0) - sf = 2 - - shuffle_order = random.sample(range(7), 7) - idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3) - if idx1 > idx2: # keep downsample3 last - shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1] - - for i in shuffle_order: - - if i == 0: - image = add_blur(image, sf=sf) - - # elif i == 1: - # image = add_blur(image, sf=sf) - - if i == 0: - pass - - elif i == 2: - a, b = image.shape[1], image.shape[0] - # downsample2 - if random.random() < 0.8: - sf1 = random.uniform(1, 2 * sf) - image = cv2.resize(image, (int(1 / sf1 * image.shape[1]), int(1 / sf1 * image.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf)) - k_shifted = shift_pixel(k, sf) - k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel - image = ndimage.filters.convolve(image, np.expand_dims(k_shifted, axis=2), mode='mirror') - image = image[0::sf, 0::sf, ...] # nearest downsampling - - image = np.clip(image, 0.0, 1.0) - - elif i == 3: - # downsample3 - image = cv2.resize(image, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3])) - image = np.clip(image, 0.0, 1.0) - - elif i == 4: - # add Gaussian noise - image = add_Gaussian_noise(image, noise_level1=1, noise_level2=2) - - elif i == 5: - # add JPEG noise - if random.random() < jpeg_prob: - image = add_JPEG_noise(image) - # - # elif i == 6: - # # add processed camera sensor noise - # if random.random() < isp_prob and isp_model is not None: - # with torch.no_grad(): - # img, hq = isp_model.forward(img.copy(), hq) - - # add final JPEG compression noise - image = add_JPEG_noise(image) - image = util.single2uint(image) - example = {"image": image} - return example - - - - -if __name__ == '__main__': - print("hey") - img = util.imread_uint('utils/test.png', 3) - img = img[:448, :448] - h = img.shape[0] // 4 - print("resizing to", h) - sf = 4 - deg_fn = partial(degradation_bsrgan_variant, sf=sf) - for i in range(20): - print(i) - img_hq = img - img_lq = deg_fn(img)["image"] - img_hq, img_lq = util.uint2single(img_hq), util.uint2single(img_lq) - print(img_lq) - img_lq_bicubic = albumentations.SmallestMaxSize(max_size=h, interpolation=cv2.INTER_CUBIC)(image=img_hq)["image"] - print(img_lq.shape) - print("bicubic", img_lq_bicubic.shape) - print(img_hq.shape) - lq_nearest = cv2.resize(util.single2uint(img_lq), (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])), - interpolation=0) - lq_bicubic_nearest = cv2.resize(util.single2uint(img_lq_bicubic), - (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])), - interpolation=0) - img_concat = np.concatenate([lq_bicubic_nearest, lq_nearest, util.single2uint(img_hq)], axis=1) - util.imsave(img_concat, str(i) + '.png') diff --git a/spaces/AIZ2H/05-SOTA-Question-Answer-From-TextFileContext/app.py b/spaces/AIZ2H/05-SOTA-Question-Answer-From-TextFileContext/app.py deleted file mode 100644 index c66d3925b6805866e5bead78cee8fdfacd2c9638..0000000000000000000000000000000000000000 --- a/spaces/AIZ2H/05-SOTA-Question-Answer-From-TextFileContext/app.py +++ /dev/null @@ -1,20 +0,0 @@ -import gradio as gr -import os - -context = "This 
could be any large text corpus to use as subject matter to ask questions about. You can load it as well from text file to isolate it from code changes like in the next line" - -with open('Context.txt', 'r') as file: - context = file.read() - -question = "What should be documented in a care plan?" - -API_KEY = os.environ.get("HF_TOKEN") -gr.Interface.load( - "huggingface/deepset/roberta-base-squad2", - api_key=API_KEY, - theme="default", - css=".footer{display:none !important}", - inputs=[gr.inputs.Textbox(lines=12, default=context, label="Context paragraph"), gr.inputs.Textbox(lines=3, default=question, label="Question")], - outputs=[gr.outputs.Textbox(label="Answer"), gr.outputs.Textbox(label="Score")], - title=None, - description="Provide your own paragraph and ask any question about the text. How well does the model answer?").launch() \ No newline at end of file diff --git a/spaces/Aadi1149/Arkenbrien-text-to-image-Arkenbrien/app.py b/spaces/Aadi1149/Arkenbrien-text-to-image-Arkenbrien/app.py deleted file mode 100644 index 2d86cf2d2784e40969af85bc3ed6a35fd525b5ac..0000000000000000000000000000000000000000 --- a/spaces/Aadi1149/Arkenbrien-text-to-image-Arkenbrien/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/Arkenbrien/text-to-image-Arkenbrien").launch() \ No newline at end of file diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/generated/client/nodes/6.js b/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/generated/client/nodes/6.js deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/colorinput/colorinput/methods/ConfigurationMethods.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/colorinput/colorinput/methods/ConfigurationMethods.js deleted file mode 100644 index 22f9b489453fb0b1b5cedb6ea5a3fbdb4a99e231..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/colorinput/colorinput/methods/ConfigurationMethods.js +++ /dev/null @@ -1,107 +0,0 @@ -var methods = { - // Color picker - setCreateColorPickerBackgroundCallback(callback) { - this.colorPickerCreateBackgroundCallback = callback; - return this; - }, - - setColorPickerHPalettePosition(position) { - this.colorPickerHPalettePosition = position; - return this; - }, - - setColorPickerExpandDirection(direction) { - if (typeof (direction) === 'string') { - direction = ColorPickerExpandDirections[direction]; - } - this.colorPickerExpandDirection = direction; - return this; - }, - - setColorPickerEaseInDuration(duration) { - if (duration === undefined) { - duration = 0; - } - this.colorPickerEaseInDuration = duration; - return this; - }, - - setColorPickerEaseOutDuration(duration) { - if (duration === undefined) { - duration = 0; - } - this.colorPickerEaseOutDuration = duration; - return this; - }, - - setColorPickerTransitInCallback(callback) { - this.colorPickerTransitInCallback = callback; - // callback = function(gameObject, duration) {} - return this; - }, - - setColorPickerTransitOutCallback(callback) { - this.colorPickerTransitOutCallback = callback; - // callback = function(gameObject, duration) {} - return this; - }, - - setColorPickerBounds(bounds) { - this.colorPickerBounds = bounds; - return this; - }, - - setColorPickerWidth(width) { - this.colorPickerWidth = width; - return this; - }, - - setColorPickerHeight(height) { - this.colorPickerHeight = 
height; - return this; - }, - - setColorPickerSize(width, height) { - this.setColorPickerWidth(width).setColorPickerHeight(height); - return this; - }, - - setColorPickerSpace(space) { - if (space === undefined) { - space = {}; - } - this.colorPickerSpace = space; - return this; - }, - - // Color components - setColorComponentsHeight(height) { - this.colorComponentsHeight = height; - return this; - }, - - setColorComponentsFormatLabelConfig(config) { - this.colorComponentsFormatLabelConfig = config; - return this; - }, - - setColorComponentsInputTextConfig(config) { - this.colorComponentsInputTextConfig = config; - return this; - }, - - setColorComponentsSpace(space) { - if (space === undefined) { - space = {}; - } - this.colorComponentsSpace = space; - return this; - }, -} - -const ColorPickerExpandDirections = { - down: 0, - up: 1 -} - -export default methods; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/slider/GetThumbAlignPoint.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/slider/GetThumbAlignPoint.js deleted file mode 100644 index e8f10277175c4dccc6b3f1402ee4838eae6e5089..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/slider/GetThumbAlignPoint.js +++ /dev/null @@ -1,23 +0,0 @@ -import AlignIn from '../../../plugins/utils/actions/AlignIn.js'; - -var GetThumbAlignPoint = function (align, out) { - if (out === undefined) { - out = tmpPoint; - } - var thumb = this.childrenMap.thumb; - var currentX = thumb.x; - var currentY = thumb.y; - - AlignIn(thumb, this.innerLeft, this.innerTop, this.innerWidth, this.innerHeight, align); - out.x = thumb.x; - out.y = thumb.y; - - thumb.x = currentX; - thumb.y = currentY; - - return out; -} - -var tmpPoint = {}; - -export default GetThumbAlignPoint; \ No newline at end of file diff --git a/spaces/AlawnCN/webui-docker/oh-no.py b/spaces/AlawnCN/webui-docker/oh-no.py deleted file mode 100644 index e8c0f3bd8d72805b4ee69d4d0fd9133347d00f92..0000000000000000000000000000000000000000 --- a/spaces/AlawnCN/webui-docker/oh-no.py +++ /dev/null @@ -1,14 +0,0 @@ -import gradio as gr - -block = gr.Blocks() - -def run(): - with block: - gr.Markdown( - """ -

-        oh no 😐 something wrong with the 🤗 hugging face servers 😐 hopefully, it will be fixed soon

        - """) - block.launch(server_name="0.0.0.0", server_port=7860) - -if __name__ == "__main__": - run() \ No newline at end of file diff --git a/spaces/Ali-Omrani/CCR/README.md b/spaces/Ali-Omrani/CCR/README.md deleted file mode 100644 index 5a0f1c8d1b4c92cd6abf2a8b4771db87c831bcbf..0000000000000000000000000000000000000000 --- a/spaces/Ali-Omrani/CCR/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: CCR -emoji: 🚀 -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 3.1.4 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Alpaca233/SadTalker/src/facerender/sync_batchnorm/batchnorm.py b/spaces/Alpaca233/SadTalker/src/facerender/sync_batchnorm/batchnorm.py deleted file mode 100644 index 5f4e763f0366dffa10320116413f8c7181a8aeb1..0000000000000000000000000000000000000000 --- a/spaces/Alpaca233/SadTalker/src/facerender/sync_batchnorm/batchnorm.py +++ /dev/null @@ -1,315 +0,0 @@ -# -*- coding: utf-8 -*- -# File : batchnorm.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. -# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. - -import collections - -import torch -import torch.nn.functional as F - -from torch.nn.modules.batchnorm import _BatchNorm -from torch.nn.parallel._functions import ReduceAddCoalesced, Broadcast - -from .comm import SyncMaster - -__all__ = ['SynchronizedBatchNorm1d', 'SynchronizedBatchNorm2d', 'SynchronizedBatchNorm3d'] - - -def _sum_ft(tensor): - """sum over the first and last dimention""" - return tensor.sum(dim=0).sum(dim=-1) - - -def _unsqueeze_ft(tensor): - """add new dementions at the front and the tail""" - return tensor.unsqueeze(0).unsqueeze(-1) - - -_ChildMessage = collections.namedtuple('_ChildMessage', ['sum', 'ssum', 'sum_size']) -_MasterMessage = collections.namedtuple('_MasterMessage', ['sum', 'inv_std']) - - -class _SynchronizedBatchNorm(_BatchNorm): - def __init__(self, num_features, eps=1e-5, momentum=0.1, affine=True): - super(_SynchronizedBatchNorm, self).__init__(num_features, eps=eps, momentum=momentum, affine=affine) - - self._sync_master = SyncMaster(self._data_parallel_master) - - self._is_parallel = False - self._parallel_id = None - self._slave_pipe = None - - def forward(self, input): - # If it is not parallel computation or is in evaluation mode, use PyTorch's implementation. - if not (self._is_parallel and self.training): - return F.batch_norm( - input, self.running_mean, self.running_var, self.weight, self.bias, - self.training, self.momentum, self.eps) - - # Resize the input to (B, C, -1). - input_shape = input.size() - input = input.view(input.size(0), self.num_features, -1) - - # Compute the sum and square-sum. - sum_size = input.size(0) * input.size(2) - input_sum = _sum_ft(input) - input_ssum = _sum_ft(input ** 2) - - # Reduce-and-broadcast the statistics. - if self._parallel_id == 0: - mean, inv_std = self._sync_master.run_master(_ChildMessage(input_sum, input_ssum, sum_size)) - else: - mean, inv_std = self._slave_pipe.run_slave(_ChildMessage(input_sum, input_ssum, sum_size)) - - # Compute the output. - if self.affine: - # MJY:: Fuse the multiplication for speed. - output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std * self.weight) + _unsqueeze_ft(self.bias) - else: - output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std) - - # Reshape it. 
- return output.view(input_shape) - - def __data_parallel_replicate__(self, ctx, copy_id): - self._is_parallel = True - self._parallel_id = copy_id - - # parallel_id == 0 means master device. - if self._parallel_id == 0: - ctx.sync_master = self._sync_master - else: - self._slave_pipe = ctx.sync_master.register_slave(copy_id) - - def _data_parallel_master(self, intermediates): - """Reduce the sum and square-sum, compute the statistics, and broadcast it.""" - - # Always using same "device order" makes the ReduceAdd operation faster. - # Thanks to:: Tete Xiao (http://tetexiao.com/) - intermediates = sorted(intermediates, key=lambda i: i[1].sum.get_device()) - - to_reduce = [i[1][:2] for i in intermediates] - to_reduce = [j for i in to_reduce for j in i] # flatten - target_gpus = [i[1].sum.get_device() for i in intermediates] - - sum_size = sum([i[1].sum_size for i in intermediates]) - sum_, ssum = ReduceAddCoalesced.apply(target_gpus[0], 2, *to_reduce) - mean, inv_std = self._compute_mean_std(sum_, ssum, sum_size) - - broadcasted = Broadcast.apply(target_gpus, mean, inv_std) - - outputs = [] - for i, rec in enumerate(intermediates): - outputs.append((rec[0], _MasterMessage(*broadcasted[i*2:i*2+2]))) - - return outputs - - def _compute_mean_std(self, sum_, ssum, size): - """Compute the mean and standard-deviation with sum and square-sum. This method - also maintains the moving average on the master device.""" - assert size > 1, 'BatchNorm computes unbiased standard-deviation, which requires size > 1.' - mean = sum_ / size - sumvar = ssum - sum_ * mean - unbias_var = sumvar / (size - 1) - bias_var = sumvar / size - - self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean.data - self.running_var = (1 - self.momentum) * self.running_var + self.momentum * unbias_var.data - - return mean, bias_var.clamp(self.eps) ** -0.5 - - -class SynchronizedBatchNorm1d(_SynchronizedBatchNorm): - r"""Applies Synchronized Batch Normalization over a 2d or 3d input that is seen as a - mini-batch. - - .. math:: - - y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta - - This module differs from the built-in PyTorch BatchNorm1d as the mean and - standard-deviation are reduced across all devices during training. - - For example, when one uses `nn.DataParallel` to wrap the network during - training, PyTorch's implementation normalize the tensor on each device using - the statistics only on that device, which accelerated the computation and - is also easy to implement, but the statistics might be inaccurate. - Instead, in this synchronized version, the statistics will be computed - over all training samples distributed on multiple devices. - - Note that, for one-GPU or CPU-only case, this module behaves exactly same - as the built-in PyTorch implementation. - - The mean and standard-deviation are calculated per-dimension over - the mini-batches and gamma and beta are learnable parameter vectors - of size C (where C is the input size). - - During training, this layer keeps a running estimate of its computed mean - and variance. The running sum is kept with a default momentum of 0.1. - - During evaluation, this running mean/variance is used for normalization. 
- - Because the BatchNorm is done over the `C` dimension, computing statistics - on `(N, L)` slices, it's common terminology to call this Temporal BatchNorm - - Args: - num_features: num_features from an expected input of size - `batch_size x num_features [x width]` - eps: a value added to the denominator for numerical stability. - Default: 1e-5 - momentum: the value used for the running_mean and running_var - computation. Default: 0.1 - affine: a boolean value that when set to ``True``, gives the layer learnable - affine parameters. Default: ``True`` - - Shape: - - Input: :math:`(N, C)` or :math:`(N, C, L)` - - Output: :math:`(N, C)` or :math:`(N, C, L)` (same shape as input) - - Examples: - >>> # With Learnable Parameters - >>> m = SynchronizedBatchNorm1d(100) - >>> # Without Learnable Parameters - >>> m = SynchronizedBatchNorm1d(100, affine=False) - >>> input = torch.autograd.Variable(torch.randn(20, 100)) - >>> output = m(input) - """ - - def _check_input_dim(self, input): - if input.dim() != 2 and input.dim() != 3: - raise ValueError('expected 2D or 3D input (got {}D input)' - .format(input.dim())) - super(SynchronizedBatchNorm1d, self)._check_input_dim(input) - - -class SynchronizedBatchNorm2d(_SynchronizedBatchNorm): - r"""Applies Batch Normalization over a 4d input that is seen as a mini-batch - of 3d inputs - - .. math:: - - y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta - - This module differs from the built-in PyTorch BatchNorm2d as the mean and - standard-deviation are reduced across all devices during training. - - For example, when one uses `nn.DataParallel` to wrap the network during - training, PyTorch's implementation normalize the tensor on each device using - the statistics only on that device, which accelerated the computation and - is also easy to implement, but the statistics might be inaccurate. - Instead, in this synchronized version, the statistics will be computed - over all training samples distributed on multiple devices. - - Note that, for one-GPU or CPU-only case, this module behaves exactly same - as the built-in PyTorch implementation. - - The mean and standard-deviation are calculated per-dimension over - the mini-batches and gamma and beta are learnable parameter vectors - of size C (where C is the input size). - - During training, this layer keeps a running estimate of its computed mean - and variance. The running sum is kept with a default momentum of 0.1. - - During evaluation, this running mean/variance is used for normalization. - - Because the BatchNorm is done over the `C` dimension, computing statistics - on `(N, H, W)` slices, it's common terminology to call this Spatial BatchNorm - - Args: - num_features: num_features from an expected input of - size batch_size x num_features x height x width - eps: a value added to the denominator for numerical stability. - Default: 1e-5 - momentum: the value used for the running_mean and running_var - computation. Default: 0.1 - affine: a boolean value that when set to ``True``, gives the layer learnable - affine parameters. 
Default: ``True`` - - Shape: - - Input: :math:`(N, C, H, W)` - - Output: :math:`(N, C, H, W)` (same shape as input) - - Examples: - >>> # With Learnable Parameters - >>> m = SynchronizedBatchNorm2d(100) - >>> # Without Learnable Parameters - >>> m = SynchronizedBatchNorm2d(100, affine=False) - >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45)) - >>> output = m(input) - """ - - def _check_input_dim(self, input): - if input.dim() != 4: - raise ValueError('expected 4D input (got {}D input)' - .format(input.dim())) - super(SynchronizedBatchNorm2d, self)._check_input_dim(input) - - -class SynchronizedBatchNorm3d(_SynchronizedBatchNorm): - r"""Applies Batch Normalization over a 5d input that is seen as a mini-batch - of 4d inputs - - .. math:: - - y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta - - This module differs from the built-in PyTorch BatchNorm3d as the mean and - standard-deviation are reduced across all devices during training. - - For example, when one uses `nn.DataParallel` to wrap the network during - training, PyTorch's implementation normalize the tensor on each device using - the statistics only on that device, which accelerated the computation and - is also easy to implement, but the statistics might be inaccurate. - Instead, in this synchronized version, the statistics will be computed - over all training samples distributed on multiple devices. - - Note that, for one-GPU or CPU-only case, this module behaves exactly same - as the built-in PyTorch implementation. - - The mean and standard-deviation are calculated per-dimension over - the mini-batches and gamma and beta are learnable parameter vectors - of size C (where C is the input size). - - During training, this layer keeps a running estimate of its computed mean - and variance. The running sum is kept with a default momentum of 0.1. - - During evaluation, this running mean/variance is used for normalization. - - Because the BatchNorm is done over the `C` dimension, computing statistics - on `(N, D, H, W)` slices, it's common terminology to call this Volumetric BatchNorm - or Spatio-temporal BatchNorm - - Args: - num_features: num_features from an expected input of - size batch_size x num_features x depth x height x width - eps: a value added to the denominator for numerical stability. - Default: 1e-5 - momentum: the value used for the running_mean and running_var - computation. Default: 0.1 - affine: a boolean value that when set to ``True``, gives the layer learnable - affine parameters. 
Default: ``True`` - - Shape: - - Input: :math:`(N, C, D, H, W)` - - Output: :math:`(N, C, D, H, W)` (same shape as input) - - Examples: - >>> # With Learnable Parameters - >>> m = SynchronizedBatchNorm3d(100) - >>> # Without Learnable Parameters - >>> m = SynchronizedBatchNorm3d(100, affine=False) - >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45, 10)) - >>> output = m(input) - """ - - def _check_input_dim(self, input): - if input.dim() != 5: - raise ValueError('expected 5D input (got {}D input)' - .format(input.dim())) - super(SynchronizedBatchNorm3d, self)._check_input_dim(input) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/tutorials/tutorial_overview.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/tutorials/tutorial_overview.md deleted file mode 100644 index 0cec9a317ddbef7488204f9e8cd6c7f07aca6b79..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/tutorials/tutorial_overview.md +++ /dev/null @@ -1,23 +0,0 @@ - - -# Overview - -Welcome to 🧨 Diffusers! If you're new to diffusion models and generative AI, and want to learn more, then you've come to the right place. These beginner-friendly tutorials are designed to provide a gentle introduction to diffusion models and help you understand the library fundamentals - the core components and how 🧨 Diffusers is meant to be used. - -You'll learn how to use a pipeline for inference to rapidly generate things, and then deconstruct that pipeline to really understand how to use the library as a modular toolbox for building your own diffusion systems. In the next lesson, you'll learn how to train your own diffusion model to generate what you want. - -After completing the tutorials, you'll have gained the necessary skills to start exploring the library on your own and see how to use it for your own projects and applications. - -Feel free to join our community on [Discord](https://discord.com/invite/JfAtkvEtRb) or the [forums](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63) to connect and collaborate with other users and developers! - -Let's start diffusing! 
🧨 \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_euler_ancestral.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_euler_ancestral.py deleted file mode 100644 index 9866bd12d6af863469fa7369245dce5843d69080..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_euler_ancestral.py +++ /dev/null @@ -1,118 +0,0 @@ -import torch - -from diffusers import EulerAncestralDiscreteScheduler -from diffusers.utils import torch_device - -from .test_schedulers import SchedulerCommonTest - - -class EulerAncestralDiscreteSchedulerTest(SchedulerCommonTest): - scheduler_classes = (EulerAncestralDiscreteScheduler,) - num_inference_steps = 10 - - def get_scheduler_config(self, **kwargs): - config = { - "num_train_timesteps": 1100, - "beta_start": 0.0001, - "beta_end": 0.02, - "beta_schedule": "linear", - } - - config.update(**kwargs) - return config - - def test_timesteps(self): - for timesteps in [10, 50, 100, 1000]: - self.check_over_configs(num_train_timesteps=timesteps) - - def test_betas(self): - for beta_start, beta_end in zip([0.00001, 0.0001, 0.001], [0.0002, 0.002, 0.02]): - self.check_over_configs(beta_start=beta_start, beta_end=beta_end) - - def test_schedules(self): - for schedule in ["linear", "scaled_linear"]: - self.check_over_configs(beta_schedule=schedule) - - def test_prediction_type(self): - for prediction_type in ["epsilon", "v_prediction"]: - self.check_over_configs(prediction_type=prediction_type) - - def test_full_loop_no_noise(self): - scheduler_class = self.scheduler_classes[0] - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - - scheduler.set_timesteps(self.num_inference_steps) - - generator = torch.manual_seed(0) - - model = self.dummy_model() - sample = self.dummy_sample_deter * scheduler.init_noise_sigma.cpu() - sample = sample.to(torch_device) - - for i, t in enumerate(scheduler.timesteps): - sample = scheduler.scale_model_input(sample, t) - - model_output = model(sample, t) - - output = scheduler.step(model_output, t, sample, generator=generator) - sample = output.prev_sample - - result_sum = torch.sum(torch.abs(sample)) - result_mean = torch.mean(torch.abs(sample)) - - assert abs(result_sum.item() - 152.3192) < 1e-2 - assert abs(result_mean.item() - 0.1983) < 1e-3 - - def test_full_loop_with_v_prediction(self): - scheduler_class = self.scheduler_classes[0] - scheduler_config = self.get_scheduler_config(prediction_type="v_prediction") - scheduler = scheduler_class(**scheduler_config) - - scheduler.set_timesteps(self.num_inference_steps) - - generator = torch.manual_seed(0) - - model = self.dummy_model() - sample = self.dummy_sample_deter * scheduler.init_noise_sigma - sample = sample.to(torch_device) - - for i, t in enumerate(scheduler.timesteps): - sample = scheduler.scale_model_input(sample, t) - - model_output = model(sample, t) - - output = scheduler.step(model_output, t, sample, generator=generator) - sample = output.prev_sample - - result_sum = torch.sum(torch.abs(sample)) - result_mean = torch.mean(torch.abs(sample)) - - assert abs(result_sum.item() - 108.4439) < 1e-2 - assert abs(result_mean.item() - 0.1412) < 1e-3 - - def test_full_loop_device(self): - scheduler_class = self.scheduler_classes[0] - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - - 
scheduler.set_timesteps(self.num_inference_steps, device=torch_device) - generator = torch.manual_seed(0) - - model = self.dummy_model() - sample = self.dummy_sample_deter * scheduler.init_noise_sigma.cpu() - sample = sample.to(torch_device) - - for t in scheduler.timesteps: - sample = scheduler.scale_model_input(sample, t) - - model_output = model(sample, t) - - output = scheduler.step(model_output, t, sample, generator=generator) - sample = output.prev_sample - - result_sum = torch.sum(torch.abs(sample)) - result_mean = torch.mean(torch.abs(sample)) - - assert abs(result_sum.item() - 152.3192) < 1e-2 - assert abs(result_mean.item() - 0.1983) < 1e-3 diff --git a/spaces/Andy1621/uniformer_image_detection/configs/mask_rcnn/mask_rcnn_r50_caffe_c4_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/mask_rcnn/mask_rcnn_r50_caffe_c4_1x_coco.py deleted file mode 100644 index a44c01831b508da0a5e1ca3720bb437bcea086d1..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/mask_rcnn/mask_rcnn_r50_caffe_c4_1x_coco.py +++ /dev/null @@ -1,39 +0,0 @@ -_base_ = [ - '../_base_/models/mask_rcnn_r50_caffe_c4.py', - '../_base_/datasets/coco_instance.py', - '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py' -] -# use caffe img_norm -img_norm_cfg = dict( - mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict(type='Resize', img_scale=(1333, 800), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) -# optimizer -optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/dense_heads/pisa_retinanet_head.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/dense_heads/pisa_retinanet_head.py deleted file mode 100644 index bd87b9aeb07e05ff94b444ac8999eca3f616711a..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/dense_heads/pisa_retinanet_head.py +++ /dev/null @@ -1,154 +0,0 @@ -import torch -from mmcv.runner import force_fp32 - -from mmdet.core import images_to_levels -from ..builder import HEADS -from ..losses import carl_loss, isr_p -from .retina_head import RetinaHead - - -@HEADS.register_module() -class PISARetinaHead(RetinaHead): - """PISA Retinanet Head. - - The head owns the same structure with Retinanet Head, but differs in two - aspects: - 1. Importance-based Sample Reweighting Positive (ISR-P) is applied to - change the positive loss weights. - 2. Classification-aware regression loss is adopted as a third loss. 
- """ - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes of each image - with shape (num_obj, 4). - gt_labels (list[Tensor]): Ground truth labels of each image - with shape (num_obj, 4). - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor]): Ignored gt bboxes of each image. - Default: None. - - Returns: - dict: Loss dict, comprise classification loss, regression loss and - carl loss. - """ - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.anchor_generator.num_levels - - device = cls_scores[0].device - - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels, - return_sampling_results=True) - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg, sampling_results_list) = cls_reg_targets - num_total_samples = ( - num_total_pos + num_total_neg if self.sampling else num_total_pos) - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - # concat all level anchors and flags to a single tensor - concat_anchor_list = [] - for i in range(len(anchor_list)): - concat_anchor_list.append(torch.cat(anchor_list[i])) - all_anchor_list = images_to_levels(concat_anchor_list, - num_level_anchors) - - num_imgs = len(img_metas) - flatten_cls_scores = [ - cls_score.permute(0, 2, 3, 1).reshape(num_imgs, -1, label_channels) - for cls_score in cls_scores - ] - flatten_cls_scores = torch.cat( - flatten_cls_scores, dim=1).reshape(-1, - flatten_cls_scores[0].size(-1)) - flatten_bbox_preds = [ - bbox_pred.permute(0, 2, 3, 1).reshape(num_imgs, -1, 4) - for bbox_pred in bbox_preds - ] - flatten_bbox_preds = torch.cat( - flatten_bbox_preds, dim=1).view(-1, flatten_bbox_preds[0].size(-1)) - flatten_labels = torch.cat(labels_list, dim=1).reshape(-1) - flatten_label_weights = torch.cat( - label_weights_list, dim=1).reshape(-1) - flatten_anchors = torch.cat(all_anchor_list, dim=1).reshape(-1, 4) - flatten_bbox_targets = torch.cat( - bbox_targets_list, dim=1).reshape(-1, 4) - flatten_bbox_weights = torch.cat( - bbox_weights_list, dim=1).reshape(-1, 4) - - # Apply ISR-P - isr_cfg = self.train_cfg.get('isr', None) - if isr_cfg is not None: - all_targets = (flatten_labels, flatten_label_weights, - flatten_bbox_targets, flatten_bbox_weights) - with torch.no_grad(): - all_targets = isr_p( - flatten_cls_scores, - flatten_bbox_preds, - all_targets, - flatten_anchors, - sampling_results_list, - bbox_coder=self.bbox_coder, - loss_cls=self.loss_cls, - num_class=self.num_classes, - **self.train_cfg.isr) - (flatten_labels, flatten_label_weights, flatten_bbox_targets, - flatten_bbox_weights) = all_targets - - # For convenience we compute loss 
once instead separating by fpn level, - # so that we don't need to separate the weights by level again. - # The result should be the same - losses_cls = self.loss_cls( - flatten_cls_scores, - flatten_labels, - flatten_label_weights, - avg_factor=num_total_samples) - losses_bbox = self.loss_bbox( - flatten_bbox_preds, - flatten_bbox_targets, - flatten_bbox_weights, - avg_factor=num_total_samples) - loss_dict = dict(loss_cls=losses_cls, loss_bbox=losses_bbox) - - # CARL Loss - carl_cfg = self.train_cfg.get('carl', None) - if carl_cfg is not None: - loss_carl = carl_loss( - flatten_cls_scores, - flatten_labels, - flatten_bbox_preds, - flatten_bbox_targets, - self.loss_bbox, - **self.train_cfg.carl, - avg_factor=num_total_pos, - sigmoid=True, - num_class=self.num_classes) - loss_dict.update(loss_carl) - - return loss_dict diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d16-mg124_512x1024_40k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d16-mg124_512x1024_40k_cityscapes.py deleted file mode 100644 index f20f260e23a95dfee9dfdceef9badab992246f53..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d16-mg124_512x1024_40k_cityscapes.py +++ /dev/null @@ -1,11 +0,0 @@ -_base_ = './deeplabv3_r50-d8_512x1024_40k_cityscapes.py' -model = dict( - pretrained='open-mmlab://resnet101_v1c', - backbone=dict( - depth=101, - dilations=(1, 1, 1, 2), - strides=(1, 2, 2, 1), - multi_grid=(1, 2, 4)), - decode_head=dict( - dilations=(1, 6, 12, 18), - sampler=dict(type='OHEMPixelSampler', min_kept=100000))) diff --git a/spaces/AnnonSubmission/xai-cl/utils.py b/spaces/AnnonSubmission/xai-cl/utils.py deleted file mode 100644 index 59470f9bb1276013f1db5cd6ce2a3c69410da9be..0000000000000000000000000000000000000000 --- a/spaces/AnnonSubmission/xai-cl/utils.py +++ /dev/null @@ -1,101 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -import numpy as np -from PIL import Image -import random -import cv2 -import io -from ssl_models.simclr2 import get_simclr2_model -from ssl_models.barlow_twins import get_barlow_twins_model -from ssl_models.simsiam import get_simsiam -from ssl_models.dino import get_dino_model_without_loss, get_dino_model_with_loss - -def get_ssl_model(network, variant): - - if network == 'simclrv2': - if variant == '1x': - ssl_model = get_simclr2_model('r50_1x_sk0_ema.pth').eval() - else: - ssl_model = get_simclr2_model('r50_2x_sk0_ema.pth').eval() - elif network == 'barlow_twins': - ssl_model = get_barlow_twins_model().eval() - elif network == 'simsiam': - ssl_model = get_simsiam().eval() - elif network == 'dino': - ssl_model = get_dino_model_without_loss().eval() - elif network == 'dino+loss': - ssl_model, dino_score = get_dino_model_with_loss() - ssl_model = ssl_model.eval() - - return ssl_model - -def overlay_heatmap(img, heatmap, denormalize = False): - loaded_img = img.squeeze(0).cpu().numpy().transpose((1, 2, 0)) - - if denormalize: - mean = np.array([0.485, 0.456, 0.406]) - std = np.array([0.229, 0.224, 0.225]) - loaded_img = std * loaded_img + mean - - loaded_img = (loaded_img.clip(0, 1) * 255).astype(np.uint8) - cam = heatmap / heatmap.max() - cam = cv2.resize(cam, (224, 224)) - cam = np.uint8(255 * cam) - cam = cv2.applyColorMap(cam, cv2.COLORMAP_JET) # jet: blue --> red - cam = cv2.cvtColor(cam, cv2.COLOR_BGR2RGB) - added_image = cv2.addWeighted(cam, 0.5, loaded_img, 0.5, 0) - return added_image - -def 
viz_map(img_path, heatmap): - "For pixel invariance" - img = np.array(Image.open(img_path).resize((224,224))) if isinstance(img_path, str) else np.array(img_path.resize((224,224))) - width, height, _ = img.shape - cam = heatmap.detach().cpu().numpy() - cam = cam / cam.max() - cam = cv2.resize(cam, (height, width)) - heatmap = np.uint8(255 * cam) - heatmap = cv2.applyColorMap(heatmap, cv2.COLORMAP_JET) - heatmap = cv2.cvtColor(heatmap, cv2.COLOR_BGR2RGB) - added_image = cv2.addWeighted(heatmap, 0.5, img, 0.7, 0) - return added_image - -def show_image(x, squeeze = True, denormalize = False): - - if squeeze: - x = x.squeeze(0) - - x = x.cpu().numpy().transpose((1, 2, 0)) - - if denormalize: - mean = np.array([0.485, 0.456, 0.406]) - std = np.array([0.229, 0.224, 0.225]) - x = std * x + mean - - return x.clip(0, 1) - -def deprocess(inp, to_numpy = True, to_PIL = False, denormalize = False): - - if to_numpy: - inp = inp.detach().cpu().numpy() - - inp = inp.squeeze(0).transpose((1, 2, 0)) - - if denormalize: - mean = np.array([0.485, 0.456, 0.406]) - std = np.array([0.229, 0.224, 0.225]) - inp = std * inp + mean - - inp = (inp.clip(0, 1) * 255).astype(np.uint8) - - if to_PIL: - return Image.fromarray(inp) - return inp - -def fig2img(fig): - """Convert a Matplotlib figure to a PIL Image and return it""" - buf = io.BytesIO() - fig.savefig(buf, bbox_inches='tight', pad_inches=0) - buf.seek(0) - img = Image.open(buf) - return img diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/midas/midas/vit.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/midas/midas/vit.py deleted file mode 100644 index ea46b1be88b261b0dec04f3da0256f5f66f88a74..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/midas/midas/vit.py +++ /dev/null @@ -1,491 +0,0 @@ -import torch -import torch.nn as nn -import timm -import types -import math -import torch.nn.functional as F - - -class Slice(nn.Module): - def __init__(self, start_index=1): - super(Slice, self).__init__() - self.start_index = start_index - - def forward(self, x): - return x[:, self.start_index :] - - -class AddReadout(nn.Module): - def __init__(self, start_index=1): - super(AddReadout, self).__init__() - self.start_index = start_index - - def forward(self, x): - if self.start_index == 2: - readout = (x[:, 0] + x[:, 1]) / 2 - else: - readout = x[:, 0] - return x[:, self.start_index :] + readout.unsqueeze(1) - - -class ProjectReadout(nn.Module): - def __init__(self, in_features, start_index=1): - super(ProjectReadout, self).__init__() - self.start_index = start_index - - self.project = nn.Sequential(nn.Linear(2 * in_features, in_features), nn.GELU()) - - def forward(self, x): - readout = x[:, 0].unsqueeze(1).expand_as(x[:, self.start_index :]) - features = torch.cat((x[:, self.start_index :], readout), -1) - - return self.project(features) - - -class Transpose(nn.Module): - def __init__(self, dim0, dim1): - super(Transpose, self).__init__() - self.dim0 = dim0 - self.dim1 = dim1 - - def forward(self, x): - x = x.transpose(self.dim0, self.dim1) - return x - - -def forward_vit(pretrained, x): - b, c, h, w = x.shape - - glob = pretrained.model.forward_flex(x) - - layer_1 = pretrained.activations["1"] - layer_2 = pretrained.activations["2"] - layer_3 = pretrained.activations["3"] - layer_4 = pretrained.activations["4"] - - layer_1 = pretrained.act_postprocess1[0:2](layer_1) - layer_2 = pretrained.act_postprocess2[0:2](layer_2) - layer_3 = pretrained.act_postprocess3[0:2](layer_3) - layer_4 = 
pretrained.act_postprocess4[0:2](layer_4) - - unflatten = nn.Sequential( - nn.Unflatten( - 2, - torch.Size( - [ - h // pretrained.model.patch_size[1], - w // pretrained.model.patch_size[0], - ] - ), - ) - ) - - if layer_1.ndim == 3: - layer_1 = unflatten(layer_1) - if layer_2.ndim == 3: - layer_2 = unflatten(layer_2) - if layer_3.ndim == 3: - layer_3 = unflatten(layer_3) - if layer_4.ndim == 3: - layer_4 = unflatten(layer_4) - - layer_1 = pretrained.act_postprocess1[3 : len(pretrained.act_postprocess1)](layer_1) - layer_2 = pretrained.act_postprocess2[3 : len(pretrained.act_postprocess2)](layer_2) - layer_3 = pretrained.act_postprocess3[3 : len(pretrained.act_postprocess3)](layer_3) - layer_4 = pretrained.act_postprocess4[3 : len(pretrained.act_postprocess4)](layer_4) - - return layer_1, layer_2, layer_3, layer_4 - - -def _resize_pos_embed(self, posemb, gs_h, gs_w): - posemb_tok, posemb_grid = ( - posemb[:, : self.start_index], - posemb[0, self.start_index :], - ) - - gs_old = int(math.sqrt(len(posemb_grid))) - - posemb_grid = posemb_grid.reshape(1, gs_old, gs_old, -1).permute(0, 3, 1, 2) - posemb_grid = F.interpolate(posemb_grid, size=(gs_h, gs_w), mode="bilinear") - posemb_grid = posemb_grid.permute(0, 2, 3, 1).reshape(1, gs_h * gs_w, -1) - - posemb = torch.cat([posemb_tok, posemb_grid], dim=1) - - return posemb - - -def forward_flex(self, x): - b, c, h, w = x.shape - - pos_embed = self._resize_pos_embed( - self.pos_embed, h // self.patch_size[1], w // self.patch_size[0] - ) - - B = x.shape[0] - - if hasattr(self.patch_embed, "backbone"): - x = self.patch_embed.backbone(x) - if isinstance(x, (list, tuple)): - x = x[-1] # last feature if backbone outputs list/tuple of features - - x = self.patch_embed.proj(x).flatten(2).transpose(1, 2) - - if getattr(self, "dist_token", None) is not None: - cls_tokens = self.cls_token.expand( - B, -1, -1 - ) # stole cls_tokens impl from Phil Wang, thanks - dist_token = self.dist_token.expand(B, -1, -1) - x = torch.cat((cls_tokens, dist_token, x), dim=1) - else: - cls_tokens = self.cls_token.expand( - B, -1, -1 - ) # stole cls_tokens impl from Phil Wang, thanks - x = torch.cat((cls_tokens, x), dim=1) - - x = x + pos_embed - x = self.pos_drop(x) - - for blk in self.blocks: - x = blk(x) - - x = self.norm(x) - - return x - - -activations = {} - - -def get_activation(name): - def hook(model, input, output): - activations[name] = output - - return hook - - -def get_readout_oper(vit_features, features, use_readout, start_index=1): - if use_readout == "ignore": - readout_oper = [Slice(start_index)] * len(features) - elif use_readout == "add": - readout_oper = [AddReadout(start_index)] * len(features) - elif use_readout == "project": - readout_oper = [ - ProjectReadout(vit_features, start_index) for out_feat in features - ] - else: - assert ( - False - ), "wrong operation for readout token, use_readout can be 'ignore', 'add', or 'project'" - - return readout_oper - - -def _make_vit_b16_backbone( - model, - features=[96, 192, 384, 768], - size=[384, 384], - hooks=[2, 5, 8, 11], - vit_features=768, - use_readout="ignore", - start_index=1, -): - pretrained = nn.Module() - - pretrained.model = model - pretrained.model.blocks[hooks[0]].register_forward_hook(get_activation("1")) - pretrained.model.blocks[hooks[1]].register_forward_hook(get_activation("2")) - pretrained.model.blocks[hooks[2]].register_forward_hook(get_activation("3")) - pretrained.model.blocks[hooks[3]].register_forward_hook(get_activation("4")) - - pretrained.activations = activations - - readout_oper = 
get_readout_oper(vit_features, features, use_readout, start_index) - - # 32, 48, 136, 384 - pretrained.act_postprocess1 = nn.Sequential( - readout_oper[0], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[0], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[0], - out_channels=features[0], - kernel_size=4, - stride=4, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - - pretrained.act_postprocess2 = nn.Sequential( - readout_oper[1], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[1], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[1], - out_channels=features[1], - kernel_size=2, - stride=2, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - - pretrained.act_postprocess3 = nn.Sequential( - readout_oper[2], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[2], - kernel_size=1, - stride=1, - padding=0, - ), - ) - - pretrained.act_postprocess4 = nn.Sequential( - readout_oper[3], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[3], - kernel_size=1, - stride=1, - padding=0, - ), - nn.Conv2d( - in_channels=features[3], - out_channels=features[3], - kernel_size=3, - stride=2, - padding=1, - ), - ) - - pretrained.model.start_index = start_index - pretrained.model.patch_size = [16, 16] - - # We inject this function into the VisionTransformer instances so that - # we can use it with interpolated position embeddings without modifying the library source. 
- pretrained.model.forward_flex = types.MethodType(forward_flex, pretrained.model) - pretrained.model._resize_pos_embed = types.MethodType( - _resize_pos_embed, pretrained.model - ) - - return pretrained - - -def _make_pretrained_vitl16_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model("vit_large_patch16_384", pretrained=pretrained) - - hooks = [5, 11, 17, 23] if hooks == None else hooks - return _make_vit_b16_backbone( - model, - features=[256, 512, 1024, 1024], - hooks=hooks, - vit_features=1024, - use_readout=use_readout, - ) - - -def _make_pretrained_vitb16_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model("vit_base_patch16_384", pretrained=pretrained) - - hooks = [2, 5, 8, 11] if hooks == None else hooks - return _make_vit_b16_backbone( - model, features=[96, 192, 384, 768], hooks=hooks, use_readout=use_readout - ) - - -def _make_pretrained_deitb16_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model("vit_deit_base_patch16_384", pretrained=pretrained) - - hooks = [2, 5, 8, 11] if hooks == None else hooks - return _make_vit_b16_backbone( - model, features=[96, 192, 384, 768], hooks=hooks, use_readout=use_readout - ) - - -def _make_pretrained_deitb16_distil_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model( - "vit_deit_base_distilled_patch16_384", pretrained=pretrained - ) - - hooks = [2, 5, 8, 11] if hooks == None else hooks - return _make_vit_b16_backbone( - model, - features=[96, 192, 384, 768], - hooks=hooks, - use_readout=use_readout, - start_index=2, - ) - - -def _make_vit_b_rn50_backbone( - model, - features=[256, 512, 768, 768], - size=[384, 384], - hooks=[0, 1, 8, 11], - vit_features=768, - use_vit_only=False, - use_readout="ignore", - start_index=1, -): - pretrained = nn.Module() - - pretrained.model = model - - if use_vit_only == True: - pretrained.model.blocks[hooks[0]].register_forward_hook(get_activation("1")) - pretrained.model.blocks[hooks[1]].register_forward_hook(get_activation("2")) - else: - pretrained.model.patch_embed.backbone.stages[0].register_forward_hook( - get_activation("1") - ) - pretrained.model.patch_embed.backbone.stages[1].register_forward_hook( - get_activation("2") - ) - - pretrained.model.blocks[hooks[2]].register_forward_hook(get_activation("3")) - pretrained.model.blocks[hooks[3]].register_forward_hook(get_activation("4")) - - pretrained.activations = activations - - readout_oper = get_readout_oper(vit_features, features, use_readout, start_index) - - if use_vit_only == True: - pretrained.act_postprocess1 = nn.Sequential( - readout_oper[0], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[0], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[0], - out_channels=features[0], - kernel_size=4, - stride=4, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - - pretrained.act_postprocess2 = nn.Sequential( - readout_oper[1], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[1], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[1], - out_channels=features[1], - kernel_size=2, - stride=2, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - else: - pretrained.act_postprocess1 = nn.Sequential( - nn.Identity(), nn.Identity(), nn.Identity() - ) - 
pretrained.act_postprocess2 = nn.Sequential( - nn.Identity(), nn.Identity(), nn.Identity() - ) - - pretrained.act_postprocess3 = nn.Sequential( - readout_oper[2], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[2], - kernel_size=1, - stride=1, - padding=0, - ), - ) - - pretrained.act_postprocess4 = nn.Sequential( - readout_oper[3], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[3], - kernel_size=1, - stride=1, - padding=0, - ), - nn.Conv2d( - in_channels=features[3], - out_channels=features[3], - kernel_size=3, - stride=2, - padding=1, - ), - ) - - pretrained.model.start_index = start_index - pretrained.model.patch_size = [16, 16] - - # We inject this function into the VisionTransformer instances so that - # we can use it with interpolated position embeddings without modifying the library source. - pretrained.model.forward_flex = types.MethodType(forward_flex, pretrained.model) - - # We inject this function into the VisionTransformer instances so that - # we can use it with interpolated position embeddings without modifying the library source. - pretrained.model._resize_pos_embed = types.MethodType( - _resize_pos_embed, pretrained.model - ) - - return pretrained - - -def _make_pretrained_vitb_rn50_384( - pretrained, use_readout="ignore", hooks=None, use_vit_only=False -): - model = timm.create_model("vit_base_resnet50_384", pretrained=pretrained) - - hooks = [0, 1, 8, 11] if hooks == None else hooks - return _make_vit_b_rn50_backbone( - model, - features=[256, 512, 768, 768], - size=[384, 384], - hooks=hooks, - use_vit_only=use_vit_only, - use_readout=use_readout, - ) diff --git a/spaces/Apex-X/GODROOP/roop/predictor.py b/spaces/Apex-X/GODROOP/roop/predictor.py deleted file mode 100644 index 877fd725d21bddf5e788677eefbc917ddc79f52b..0000000000000000000000000000000000000000 --- a/spaces/Apex-X/GODROOP/roop/predictor.py +++ /dev/null @@ -1,22 +0,0 @@ -import threading -import numpy -from PIL import Image - -from roop.typing import Frame - -# Define any other necessary variables or constants here - -def predict_frame(target_frame: Frame) -> bool: - # Modify this function as needed for your specific use case, without NSFW prediction - # For example, you can implement custom image analysis or processing here - return False - -def predict_image(target_path: str) -> bool: - # Modify this function as needed for your specific use case, without NSFW prediction - # For example, you can check the image based on your application's requirements - return False - -def predict_video(target_path: str) -> bool: - # Modify this function as needed for your specific use case, without NSFW prediction - # For example, you can analyze video frames for other purposes - return False diff --git a/spaces/ArcanAlt/arcanDream/server.js b/spaces/ArcanAlt/arcanDream/server.js deleted file mode 100644 index 04a48b7a429c4d0ad0b772ba1edf503e349eda21..0000000000000000000000000000000000000000 --- a/spaces/ArcanAlt/arcanDream/server.js +++ /dev/null @@ -1,32 +0,0 @@ -const express = require('express'); -const proxy = require('express-http-proxy'); -const app = express(); -const targetUrl = 'https://api.openai.com'; -const openaiKey = process.env.OPENAI_KEY -const port = 7860; -const baseUrl = getExternalUrl(process.env.SPACE_ID); - -app.use('/api', proxy(targetUrl, { - proxyReqOptDecorator: (proxyReqOpts, srcReq) => { - // Modify 
the request headers if necessary - proxyReqOpts.headers['Authorization'] = 'Bearer '+openaiKey; - return proxyReqOpts; - }, -})); - -app.get("/", (req, res) => { - res.send(`This is your OpenAI Reverse Proxy URL: ${baseUrl}`); -}); - -function getExternalUrl(spaceId) { - try { - const [username, spacename] = spaceId.split("/"); - return `https://${username}-${spacename.replace(/_/g, "-")}.hf.space/api/v1`; - } catch (e) { - return ""; - } -} - -app.listen(port, () => { - console.log(`Reverse proxy server running on ${baseUrl}`); -}); \ No newline at end of file diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/scope.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/scope.py deleted file mode 100644 index c9d134cc3cedae929e5bef2b5547f7e33dc10a52..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/scope.py +++ /dev/null @@ -1,86 +0,0 @@ -from collections.abc import Mapping -from typing import TYPE_CHECKING, Any, Optional, Tuple - -from .highlighter import ReprHighlighter -from .panel import Panel -from .pretty import Pretty -from .table import Table -from .text import Text, TextType - -if TYPE_CHECKING: - from .console import ConsoleRenderable - - -def render_scope( - scope: "Mapping[str, Any]", - *, - title: Optional[TextType] = None, - sort_keys: bool = True, - indent_guides: bool = False, - max_length: Optional[int] = None, - max_string: Optional[int] = None, -) -> "ConsoleRenderable": - """Render python variables in a given scope. - - Args: - scope (Mapping): A mapping containing variable names and values. - title (str, optional): Optional title. Defaults to None. - sort_keys (bool, optional): Enable sorting of items. Defaults to True. - indent_guides (bool, optional): Enable indentation guides. Defaults to False. - max_length (int, optional): Maximum length of containers before abbreviating, or None for no abbreviation. - Defaults to None. - max_string (int, optional): Maximum length of string before truncating, or None to disable. Defaults to None. - - Returns: - ConsoleRenderable: A renderable object. 
- """ - highlighter = ReprHighlighter() - items_table = Table.grid(padding=(0, 1), expand=False) - items_table.add_column(justify="right") - - def sort_items(item: Tuple[str, Any]) -> Tuple[bool, str]: - """Sort special variables first, then alphabetically.""" - key, _ = item - return (not key.startswith("__"), key.lower()) - - items = sorted(scope.items(), key=sort_items) if sort_keys else scope.items() - for key, value in items: - key_text = Text.assemble( - (key, "scope.key.special" if key.startswith("__") else "scope.key"), - (" =", "scope.equals"), - ) - items_table.add_row( - key_text, - Pretty( - value, - highlighter=highlighter, - indent_guides=indent_guides, - max_length=max_length, - max_string=max_string, - ), - ) - return Panel.fit( - items_table, - title=title, - border_style="scope.border", - padding=(0, 1), - ) - - -if __name__ == "__main__": # pragma: no cover - from pip._vendor.rich import print - - print() - - def test(foo: float, bar: float) -> None: - list_of_things = [1, 2, 3, None, 4, True, False, "Hello World"] - dict_of_things = { - "version": "1.1", - "method": "confirmFruitPurchase", - "params": [["apple", "orange", "mangoes", "pomelo"], 1.123], - "id": "194521489", - } - print(render_scope(locals(), title="[i]locals", sort_keys=False)) - - test(20.3423, 3.1427) - print() diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/build.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/build.py deleted file mode 100644 index c0676d8e4b1a567969cf05c5825d49c3300284c9..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/build.py +++ /dev/null @@ -1,146 +0,0 @@ -import sys -import warnings -from typing import TYPE_CHECKING, List, Dict -from distutils.command.build import build as _build - -from setuptools import SetuptoolsDeprecationWarning - -if sys.version_info >= (3, 8): - from typing import Protocol -elif TYPE_CHECKING: - from typing_extensions import Protocol -else: - from abc import ABC as Protocol - - -_ORIGINAL_SUBCOMMANDS = {"build_py", "build_clib", "build_ext", "build_scripts"} - - -class build(_build): - # copy to avoid sharing the object with parent class - sub_commands = _build.sub_commands[:] - - def get_sub_commands(self): - subcommands = {cmd[0] for cmd in _build.sub_commands} - if subcommands - _ORIGINAL_SUBCOMMANDS: - msg = """ - It seems that you are using `distutils.command.build` to add - new subcommands. Using `distutils` directly is considered deprecated, - please use `setuptools.command.build`. - """ - warnings.warn(msg, SetuptoolsDeprecationWarning) - self.sub_commands = _build.sub_commands - return super().get_sub_commands() - - -class SubCommand(Protocol): - """In order to support editable installations (see :pep:`660`) all - build subcommands **SHOULD** implement this protocol. They also **MUST** inherit - from ``setuptools.Command``. - - When creating an :pep:`editable wheel <660>`, ``setuptools`` will try to evaluate - custom ``build`` subcommands using the following procedure: - - 1. ``setuptools`` will set the ``editable_mode`` attribute to ``True`` - 2. ``setuptools`` will execute the ``run()`` command. - - .. important:: - Subcommands **SHOULD** take advantage of ``editable_mode=True`` to adequate - its behaviour or perform optimisations. 
- - For example, if a subcommand don't need to generate any extra file and - everything it does is to copy a source file into the build directory, - ``run()`` **SHOULD** simply "early return". - - Similarly, if the subcommand creates files that would be placed alongside - Python files in the final distribution, during an editable install - the command **SHOULD** generate these files "in place" (i.e. write them to - the original source directory, instead of using the build directory). - Note that ``get_output_mapping()`` should reflect that and include mappings - for "in place" builds accordingly. - - 3. ``setuptools`` use any knowledge it can derive from the return values of - ``get_outputs()`` and ``get_output_mapping()`` to create an editable wheel. - When relevant ``setuptools`` **MAY** attempt to use file links based on the value - of ``get_output_mapping()``. Alternatively, ``setuptools`` **MAY** attempt to use - :doc:`import hooks ` to redirect any attempt to import - to the directory with the original source code and other files built in place. - - Please note that custom sub-commands **SHOULD NOT** rely on ``run()`` being - executed (or not) to provide correct return values for ``get_outputs()``, - ``get_output_mapping()`` or ``get_source_files()``. The ``get_*`` methods should - work independently of ``run()``. - """ - - editable_mode: bool = False - """Boolean flag that will be set to ``True`` when setuptools is used for an - editable installation (see :pep:`660`). - Implementations **SHOULD** explicitly set the default value of this attribute to - ``False``. - When subcommands run, they can use this flag to perform optimizations or change - their behaviour accordingly. - """ - - build_lib: str - """String representing the directory where the build artifacts should be stored, - e.g. ``build/lib``. - For example, if a distribution wants to provide a Python module named ``pkg.mod``, - then a corresponding file should be written to ``{build_lib}/package/module.py``. - A way of thinking about this is that the files saved under ``build_lib`` - would be eventually copied to one of the directories in :obj:`site.PREFIXES` - upon installation. - - A command that produces platform-independent files (e.g. compiling text templates - into Python functions), **CAN** initialize ``build_lib`` by copying its value from - the ``build_py`` command. On the other hand, a command that produces - platform-specific files **CAN** initialize ``build_lib`` by copying its value from - the ``build_ext`` command. In general this is done inside the ``finalize_options`` - method with the help of the ``set_undefined_options`` command:: - - def finalize_options(self): - self.set_undefined_options("build_py", ("build_lib", "build_lib")) - ... - """ - - def initialize_options(self): - """(Required by the original :class:`setuptools.Command` interface)""" - - def finalize_options(self): - """(Required by the original :class:`setuptools.Command` interface)""" - - def run(self): - """(Required by the original :class:`setuptools.Command` interface)""" - - def get_source_files(self) -> List[str]: - """ - Return a list of all files that are used by the command to create the expected - outputs. - For example, if your build command transpiles Java files into Python, you should - list here all the Java files. - The primary purpose of this function is to help populating the ``sdist`` - with all the files necessary to build the distribution. - All files should be strings relative to the project root directory. 
- """ - - def get_outputs(self) -> List[str]: - """ - Return a list of files intended for distribution as they would have been - produced by the build. - These files should be strings in the form of - ``"{build_lib}/destination/file/path"``. - - .. note:: - The return value of ``get_output()`` should include all files used as keys - in ``get_output_mapping()`` plus files that are generated during the build - and don't correspond to any source file already present in the project. - """ - - def get_output_mapping(self) -> Dict[str, str]: - """ - Return a mapping between destination files as they would be produced by the - build (dict keys) into the respective existing (source) files (dict values). - Existing (source) files should be represented as strings relative to the project - root directory. - Destination files should be strings in the form of - ``"{build_lib}/destination/file/path"``. - """ diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/evaluation/__init__.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/evaluation/__init__.py deleted file mode 100644 index d96609e8f2261a6800fe85fcf3e1eaeaa44455c6..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/evaluation/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from .cityscapes_evaluation import CityscapesInstanceEvaluator, CityscapesSemSegEvaluator -from .coco_evaluation import COCOEvaluator -from .rotated_coco_evaluation import RotatedCOCOEvaluator -from .evaluator import DatasetEvaluator, DatasetEvaluators, inference_context, inference_on_dataset -from .lvis_evaluation import LVISEvaluator -from .panoptic_evaluation import COCOPanopticEvaluator -from .pascal_voc_evaluation import PascalVOCDetectionEvaluator -from .sem_seg_evaluation import SemSegEvaluator -from .testing import print_csv_format, verify_results - -__all__ = [k for k in globals().keys() if not k.startswith("_")] diff --git a/spaces/Benson/text-generation/Examples/Apk Mod De Da Para Android 11.md b/spaces/Benson/text-generation/Examples/Apk Mod De Da Para Android 11.md deleted file mode 100644 index b2c2d1ad5c2ec28eab0a66b247b5a2ca234ef08b..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Apk Mod De Da Para Android 11.md +++ /dev/null @@ -1,53 +0,0 @@ - -

Totally Reliable Delivery Service Mod APK An1: A Fun and Chaotic Physics-Based Game

Do you like games that are fun, unpredictable and full of surprises? If so, you may want to check out Totally Reliable Delivery Service, a physics-based game in which you deliver packages in a crazy world. And if you want to make the game even more fun and exciting, you can download Totally Reliable Delivery Service Mod APK An1, a modified version of the game that gives you unlimited money and unlocked features. In this article, we will tell you what this game is about, what the mod apk offers, and how to download and install it on your Android device.

apk mod de día para android 11

Download: https://bltlly.com/2v6II7



        -

What is Totally Reliable Delivery Service?

A game where you deliver packages in a crazy world

Totally Reliable Delivery Service is a game where you play as a delivery person who has to deliver packages in a crazy, chaotic world. The game features ragdoll physics, which means your character and the objects in the game behave in realistic and hilarious ways. You can use various vehicles, such as cars, trucks, planes, helicopters, boats and even rockets, to transport your packages. But be careful, because anything can go wrong along the way. You can crash into buildings, fall off bridges, get chased by animals or blow up in mid-air. The game is full of surprises and challenges that will make you laugh out loud.

A game where you can customize your character and vehicles

Totally Reliable Delivery Service also lets you customize your character and vehicles to suit your style and preferences. You can choose from different outfits, accessories, hairstyles and colors for your character. You can also upgrade your vehicles with different parts, such as engines, wheels, wings, propellers and more. You can even build your own vehicles using the sandbox mode. The game gives you plenty of options to express your creativity and personality.

A game you can play alone or with your friends online

Totally Reliable Delivery Service is a game you can enjoy alone or with your friends online. You can play solo and complete various missions and challenges in the open world. Or you can join up to three other players online and cooperate or compete with them in delivering packages. You can also explore the world together and have fun with the physics-based gameplay. The game supports cross-platform multiplayer, which means you can play with people using different devices, such as PC, console or mobile.

        -

What is Totally Reliable Delivery Service Mod APK An1?

A modified version of the game that gives you unlimited money and unlocked features

Totally Reliable Delivery Service Mod APK An1 is a modified version of the game that gives you some advantages over the original release. With this mod apk, you get unlimited money that you can use to buy anything in the game. You also get all features unlocked, such as all outfits, accessories, vehicles, parts, maps, modes and more. You can enjoy the game without limitations or restrictions.

A version of the game that is compatible with Android devices

A version of the game that is free to download and install

Totally Reliable Delivery Service Mod APK An1 is a version of the game that is free to download and install on your Android device. You do not need to pay anything to get this mod apk. You also do not need to root your device or use any other tools to install it. You only have to follow a few simple steps, which we explain later in this article. You can play the game without problems or risks.

        -

How to download and install Totally Reliable Delivery Service Mod APK An1?

Step 1: Go to the website

[Screenshot of the website]

Step 2: Click on the download button and wait for the file to download

The next step is to click on the download button located at the bottom of the page. You will see a pop-up window asking you to confirm your download. Click OK and wait for the file to download. The file is about 50 MB, so it may take a few minutes depending on your Internet speed. You can check the progress of your download in the notification bar.

[Download confirmation]

Step 3: Enable unknown sources in your device settings

Once the file has downloaded, you need to enable unknown sources in your device settings. This is a security measure that prevents you from installing apps from sources other than the Google Play Store. To enable unknown sources, go to your device settings and look for the security or privacy options. Then find the option that says unknown sources, or allow installation from unknown sources, and toggle it on. You may see a warning message saying that installing from unknown sources could harm your device. Don't worry, this mod apk is safe and tested, so you can ignore the warning and proceed.

[Unknown sources option]

Step 4: Locate the downloaded file and tap on it to install it

The next step is to locate the downloaded file and tap on it to install it. You can find the file in your downloads folder or in your file manager app. The file name is totalmente fiable-delivery-service-mod_1.4.0.apk. Tap on it and you will see an installation screen asking you to confirm the installation. Tap install and wait for the process to finish.

[Installation screen]

Congratulations! You have successfully downloaded and installed Totally Reliable Delivery Service Mod APK An1 on your Android device. You can now enjoy the game with unlimited money and unlocked features. You can launch the game from the app drawer or the home screen. Have fun delivering packages in a crazy world!

[Game icon]

Conclusion

Totally Reliable Delivery Service is a fun and chaotic physics-based game in which you deliver packages in a crazy, unpredictable world. You can customize your character and vehicles, play alone or with friends online, and explore different maps and modes. If you want to make the game even more enjoyable, you can download Totally Reliable Delivery Service Mod APK An1, a modified version of the game that gives you unlimited money and unlocked features. You can download and install this mod apk for free and easily by following the guide above. We hope you found this article helpful and informative. If you have any questions or comments, feel free to leave a comment below.

Frequently asked questions

• Is Totally Reliable Delivery Service Mod APK An1 safe?

Yes, Totally Reliable Delivery Service Mod APK An1 is safe and tested by our team. It does not contain any virus, malware or spyware that could harm your device or compromise your privacy. You can download and install this mod apk without any worries.

• Is Totally Reliable Delivery Service Mod APK An1 legal?

• What are the requirements to run Totally Reliable Delivery Service Mod APK An1?

Totally Reliable Delivery Service Mod APK An1 requires an Android device running Android 4.1 or higher. You also need at least 1 GB of RAM and 200 MB of free storage space on your device, plus a stable Internet connection to play online.

• Can I play Totally Reliable Delivery Service Mod APK An1 on PC or iOS?

No, Totally Reliable Delivery Service Mod APK An1 is only compatible with Android devices. You cannot play this mod apk on PC or iOS. However, you can play the original version of the game on PC or iOS by downloading it from the official platforms, such as Steam, the Epic Games Store, the App Store or the Google Play Store.

• Can I update Totally Reliable Delivery Service Mod APK An1?

No, you cannot update Totally Reliable Delivery Service Mod APK An1 from the game itself. If you try to update the game from its settings, you may lose the mod features and revert to the original version of the game. To update the mod apk, you need to visit our website again and download the latest version, then uninstall the previous version and install the new one following the same steps as before.

            -

          \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Blacknoise Reste Toi Mp3 Download.md b/spaces/Benson/text-generation/Examples/Blacknoise Reste Toi Mp3 Download.md deleted file mode 100644 index 44e2912b0971f562ed86cfe44e4a0acfa7a6dc31..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Blacknoise Reste Toi Mp3 Download.md +++ /dev/null @@ -1,109 +0,0 @@ - -

Blacknoise Reste Toi Mp3 Download: A Review of the Amapiano Hit

If you are a fan of amapiano, the popular South African house music genre, you may have heard of Blacknoise, a hip-hop artist who recently collaborated with Kazeli and Mashaya to create a catchy and uplifting song called Reste Toi. In this article, we will review the song and tell you how to download it in Mp3 format.

blacknoise reste toi mp3 download

Download: https://bltlly.com/2v6JPu



          -

Who is Blacknoise?

A brief biography of the South African hip-hop artist

Blacknoise is the stage name of Emile Jansen, a rapper, producer and activist from Cape Town, South Africa. He is also the founder and leader of Black Noise, a hip-hop group that has been active since 1986. Blacknoise is one of the pioneers of Cape Town's 'conscious' hip-hop scene, using rap as a tool for social commentary and empowerment. He has also been involved in various youth development initiatives, such as workshops, magazines, books, plays and events. He has released 12 albums with Black Noise, six solo albums and several compilation albums.

His musical style and influences

Blacknoise's musical style is influenced by several genres, such as rap, reggae, jazz, funk, soul and amapiano. He combines traditional African sounds with modern beats and samples, creating a unique and diverse sound. He also incorporates elements of his culture and languages, such as Afrikaans, Xhosa and Khoisan. Some of his musical influences include Public Enemy, Bob Marley, Fela Kuti, Brenda Fassie and Kabza De Small.

          -

What is Reste Toi?

The meaning and origin of the song's title

The collaboration with Kazeli and Mashaya

Kazeli is a French singer and songwriter who moved to South Africa in 2019. She met Blacknoise through a mutual friend and they decided to work together on some music projects. They also invited Mashaya, a South African singer and producer known for his amapiano hits. The trio recorded Reste Toi at Blacknoise's studio in Cape Town. They wanted to create a song that showcased their different backgrounds and talents while delivering a positive message.

          -

          -

The lyrics and the message of the song

The lyrics of Reste Toi are about celebrating one's individuality and uniqueness. The chorus goes like this:

Reste toi
Ne change pas pour les autres
Reste toi
Tu es beau comme tu es

This translates to:

Stay yourself
Don't change for others
Stay yourself
You are beautiful as you are

The verses also contain words of encouragement and affirmation, such as "You are amazing", "You are a star" and "You are a blessing". The song also includes some Xhosa phrases, such as "Molo sisi" (Hello sister) and "Enkosi kakhulu" (Thank you very much). The message of the song is to inspire people to feel confident and happy with who they are, and to respect and appreciate others for their differences.

          -

How to download Reste Toi Mp3?

The streaming platforms that offer the song

Reste Toi is available on several streaming platforms, such as Spotify, Apple Music, YouTube Music, Deezer and SoundCloud. You can listen to the song online or offline, depending on your subscription and preferences. You can also watch the official music video for the song on YouTube, which shows the artists performing it in different locations around Cape Town.

The benefits of downloading the song in Mp3 format

• You can play the song on any device that supports Mp3 files, such as your phone, computer or Mp3 player.
• You can save storage space on your device, since Mp3 files are smaller than other audio formats.
• You can transfer the song to other devices or share it with your friends easily.
• You can edit the song or use it for other purposes, such as making a ringtone or a remix.
          • -
          -

The steps to download the song from different sources

There are different ways to download Reste Toi in Mp3 format, depending on the source you choose. Here are some of the most common methods:

Source: Spotify
1. Open the Spotify app on your device and search for Reste Toi by Blacknoise, Kazeli and Mashaya.
2. Select the song and tap the three-dot icon in the top right corner.
3. Select Share and then Copy Link.
4. Open a web browser and go to a Spotify-to-Mp3 conversion website, such as SpotiFlyer or SpotiApp.
5. Paste the link you copied and click Convert or Download.
6. Wait for the conversion process to finish, then download the Mp3 file to your device.

Source: YouTube
1. Open a web browser and go to YouTube.com. Search for Reste Toi by Blacknoise, Kazeli and Mashaya.
2. Select the song's video and copy its URL from the address bar.
3. Open another tab and go to a YouTube-to-Mp3 conversion website, such as YTMP3 or 4K Video Downloader.
4. Paste the URL you copied and click Convert or Download.
5. Select Mp3 as the output format and choose the quality you want.
6. Wait for the conversion process to finish, then download the Mp3 file to your device.

Source: SoundCloud
1. Open a web browser and go to SoundCloud.com. Search for Reste Toi by Blacknoise, Kazeli and Mashaya.
2. Open another tab and go to a SoundCloud-to-Mp3 conversion website, such as SCDL or SoundCloud Downloader.
3. Paste the URL you copied and click Download or Convert.
4. Wait for the conversion process to finish, then download the Mp3 file to your device.
    8. -
    Number of variables3
    Number of observations93
    Missing cells0
    Missing cells (%)0.0%
    Duplicate rows0
    Duplicate rows (%)0.0%
    Total size in memory2.3 KiB
    Average record size in memory25.4 B

    Variable types

    Categorical3

    Alerts

    time has a high cardinality: 93 distinct values High cardinality
    story is highly correlated with title and 1 other fieldsHigh correlation
    title is highly correlated with story and 1 other fieldsHigh correlation
    time is highly correlated with story and 1 other fieldsHigh correlation
    title is highly correlated with story and 1 other fieldsHigh correlation
    story is highly correlated with title and 1 other fieldsHigh correlation
    time is highly correlated with title and 1 other fieldsHigh correlation
    time is uniformly distributed Uniform
    time has unique values Unique

    Reproduction

    Analysis started2022-10-20 19:04:05.736377
    Analysis finished2022-10-20 19:04:09.219235
    Duration3.48 seconds
    Software versionpandas-profiling v3.2.0
    Download configurationconfig.json

    Variables

    title
    Categorical

    HIGH CORRELATION
    HIGH CORRELATION

    Distinct35
    Distinct (%)37.6%
    Missing0
    Missing (%)0.0%
    Memory size872.0 B
    International Sports Science Association (ISSA)
    cerebrovascular disease
     
    5
    the collective nature of the text
     
    4
    New York City is the largest
     
    4
    speeches
     
    4
    Other values (30)
    67 

    Length

    Max length5558
    Median length65
    Mean length258.2688172
    Min length2

    Characters and Unicode

    Total characters24019
    Distinct characters84
    Distinct categories12 ?
    Distinct scripts2 ?
    Distinct blocks5 ?
    The Unicode Standard assigns character properties to each code point, which can be used to analyse textual variables.

    Unique

    Unique15 ?
    Unique (%)16.1%

    Sample

    1st rowelectromagnetic energy is absorbed or emitted in discrete packets
    2nd rowspeeches
    3rd rowcerebrovascular disease
    4th rowantidepressant medications
    5th rowScalp cooling

    Common Values

    ValueCountFrequency (%)
    International Sports Science Association (ISSA)9
     
    9.7%
    cerebrovascular disease5
     
    5.4%
    the collective nature of the text4
     
    4.3%
    New York City is the largest4
     
    4.3%
    speeches4
     
    4.3%
    at any age and not just during or after menopause4
     
    4.3%
    provides the prescription drug4
     
    4.3%
    description, identification, nomenclature, and classification of organisms4
     
    4.3%
    Film4
     
    4.3%
    by4
     
    4.3%
    Other values (25)47
    50.5%

    Length

    Histogram of lengths of the category
    ValueCountFrequency (%)
    the285
     
    7.5%
    of127
     
    3.4%
    a105
     
    2.8%
    to91
     
    2.4%
    and75
     
    2.0%
    64
     
    1.7%
    star57
     
    1.5%
    for50
     
    1.3%
    time46
     
    1.2%
    in41
     
    1.1%
    Other values (1006)2850
    75.2%

    Most occurring characters

    ValueCountFrequency (%)
    3627
    15.1%
    e2303
     
    9.6%
    t1898
     
    7.9%
    a1698
     
    7.1%
    o1469
     
    6.1%
    r1425
     
    5.9%
    s1376
     
    5.7%
    i1331
     
    5.5%
    n1223
     
    5.1%
    l749
     
    3.1%
    Other values (74)6920
    28.8%

    Most occurring categories

    ValueCountFrequency (%)
    Lowercase Letter18705
    77.9%
    Space Separator3627
     
    15.1%
    Uppercase Letter640
     
    2.7%
    Other Punctuation433
     
    1.8%
    Decimal Number176
     
    0.7%
    Control156
     
    0.6%
    Math Symbol128
     
    0.5%
    Open Punctuation57
     
    0.2%
    Close Punctuation53
     
    0.2%
    Dash Punctuation41
     
    0.2%
    Other values (2)3
     
    < 0.1%

    Most frequent character per category

    Lowercase Letter
    ValueCountFrequency (%)
    e2303
    12.3%
    t1898
    10.1%
    a1698
     
    9.1%
    o1469
     
    7.9%
    r1425
     
    7.6%
    s1376
     
    7.4%
    i1331
     
    7.1%
    n1223
     
    6.5%
    l749
     
    4.0%
    c746
     
    4.0%
    Other values (19)4487
    24.0%
    Uppercase Letter
    ValueCountFrequency (%)
    S119
    18.6%
    I66
     
    10.3%
    T65
     
    10.2%
    A51
     
    8.0%
    C39
     
    6.1%
    N34
     
    5.3%
    P29
     
    4.5%
    B26
     
    4.1%
    R24
     
    3.8%
    M22
     
    3.4%
    Other values (14)165
    25.8%
    Decimal Number
    ValueCountFrequency (%)
    065
    36.9%
    136
    20.5%
    915
     
    8.5%
    212
     
    6.8%
    512
     
    6.8%
    410
     
    5.7%
    39
     
    5.1%
    69
     
    5.1%
    75
     
    2.8%
    83
     
    1.7%
    Other Punctuation
    ValueCountFrequency (%)
    ,201
    46.4%
    .167
    38.6%
    "18
     
    4.2%
    '16
     
    3.7%
    /14
     
    3.2%
    :9
     
    2.1%
    ;7
     
    1.6%
    *1
     
    0.2%
    Math Symbol
    ValueCountFrequency (%)
    =123
    96.1%
    +4
     
    3.1%
    1
     
    0.8%
    Control
    ValueCountFrequency (%)
    142
    91.0%
    14
     
    9.0%
    Open Punctuation
    ValueCountFrequency (%)
    (55
    96.5%
    [2
     
    3.5%
    Close Punctuation
    ValueCountFrequency (%)
    )51
    96.2%
    ]2
     
    3.8%
    Space Separator
    ValueCountFrequency (%)
    3627
    100.0%
    Dash Punctuation
    ValueCountFrequency (%)
    -41
    100.0%
    Modifier Letter
    ValueCountFrequency (%)
    ˈ2
    100.0%
    Other Symbol
    ValueCountFrequency (%)
    °1
    100.0%

    Most occurring scripts

    ValueCountFrequency (%)
    Latin19345
    80.5%
    Common4674
     
    19.5%

    Most frequent character per script

    Latin
    ValueCountFrequency (%)
    e2303
    11.9%
    t1898
     
    9.8%
    a1698
     
    8.8%
    o1469
     
    7.6%
    r1425
     
    7.4%
    s1376
     
    7.1%
    i1331
     
    6.9%
    n1223
     
    6.3%
    l749
     
    3.9%
    c746
     
    3.9%
    Other values (43)5127
    26.5%
    Common
    ValueCountFrequency (%)
    3627
    77.6%
    ,201
     
    4.3%
    .167
     
    3.6%
    142
     
    3.0%
    =123
     
    2.6%
    065
     
    1.4%
    (55
     
    1.2%
    )51
     
    1.1%
    -41
     
    0.9%
    136
     
    0.8%
    Other values (21)166
     
    3.6%

    Most occurring blocks

    ValueCountFrequency (%)
    ASCII24011
    > 99.9%
    IPA Ext4
     
    < 0.1%
    Modifier Letters2
     
    < 0.1%
    None1
     
    < 0.1%
    Math Operators1
     
    < 0.1%

    Most frequent character per block

    ASCII
    ValueCountFrequency (%)
    3627
    15.1%
    e2303
     
    9.6%
    t1898
     
    7.9%
    a1698
     
    7.1%
    o1469
     
    6.1%
    r1425
     
    5.9%
    s1376
     
    5.7%
    i1331
     
    5.5%
    n1223
     
    5.1%
    l749
     
    3.1%
    Other values (68)6912
    28.8%
    Modifier Letters
    ValueCountFrequency (%)
    ˈ2
    100.0%
    IPA Ext
    ValueCountFrequency (%)
    ɔ2
    50.0%
    ə1
    25.0%
    ʀ1
    25.0%
    None
    ValueCountFrequency (%)
    °1
    100.0%
    Math Operators
    ValueCountFrequency (%)
    1
    100.0%

    story
    Categorical

    HIGH CORRELATION
    HIGH CORRELATION

    Distinct26
    Distinct (%)28.0%
    Missing0
    Missing (%)0.0%
    Memory size872.0 B
    https://en.wikipedia.org/wiki/Physical_fitness
    22 
    https://en.wikipedia.org/wiki/Alzheimer%27s_disease
     
    5
    https://en.wikipedia.org/wiki/Star_tracker
     
    4
    https://en.wikipedia.org/wiki/Anthology
     
    4
    https://en.wikipedia.org/wiki/Cicero
     
    4
    Other values (21)
    54 

    Length

    Max length74
    Median length51
    Mean length44.84946237
    Min length35

    Characters and Unicode

    Total characters4171
    Distinct characters49
    Distinct categories5 ?
    Distinct scripts2 ?
    Distinct blocks1 ?
    The Unicode Standard assigns character properties to each code point, which can be used to analyse textual variables.

    Unique

    Unique7 ?
    Unique (%)7.5%

    Sample

    1st rowhttps://en.wikipedia.org/wiki/Quantum
    2nd rowhttps://en.wikipedia.org/wiki/Cicero
    3rd rowhttps://en.wikipedia.org/wiki/Alzheimer%27s_disease
    4th rowhttps://en.wikipedia.org/wiki/Peripheral_neuropathy
    5th rowhttps://en.wikipedia.org/wiki/Chemotherapy

    Common Values

    ValueCountFrequency (%)
    https://en.wikipedia.org/wiki/Physical_fitness22
    23.7%
    https://en.wikipedia.org/wiki/Alzheimer%27s_disease5
     
    5.4%
    https://en.wikipedia.org/wiki/Star_tracker4
     
    4.3%
    https://en.wikipedia.org/wiki/Anthology4
     
    4.3%
    https://en.wikipedia.org/wiki/Cicero4
     
    4.3%
    https://en.wikipedia.org/wiki/Pharmacy4
     
    4.3%
    https://en.wikipedia.org/wiki/Taxonomy4
     
    4.3%
    https://en.wikipedia.org/wiki/Financial_services4
     
    4.3%
    https://en.wikipedia.org/wiki/Securitization4
     
    4.3%
    https://en.wikipedia.org/wiki/Medicina4
     
    4.3%
    Other values (16)34
    36.6%

    Length

    Histogram of lengths of the category
    ValueCountFrequency (%)
    https://en.wikipedia.org/wiki/physical_fitness22
    23.7%
    https://en.wikipedia.org/wiki/alzheimer%27s_disease5
     
    5.4%
    https://en.wikipedia.org/wiki/securitization4
     
    4.3%
    https://en.wikipedia.org/wiki/peripheral_neuropathy4
     
    4.3%
    https://en.wikipedia.org/wiki/chemotherapy4
     
    4.3%
    https://en.wikipedia.org/wiki/death4
     
    4.3%
    https://en.wikipedia.org/wiki/medicina4
     
    4.3%
    https://en.wikipedia.org/wiki/national_commission_for_culture_and_the_arts4
     
    4.3%
    https://en.wikipedia.org/wiki/financial_services4
     
    4.3%
    https://en.wikipedia.org/wiki/taxonomy4
     
    4.3%
    Other values (16)34
    36.6%

    Most occurring characters

    ValueCountFrequency (%)
    i584
    14.0%
    /372
     
    8.9%
    e308
     
    7.4%
    t275
     
    6.6%
    a209
     
    5.0%
    p207
     
    5.0%
    s202
     
    4.8%
    k192
     
    4.6%
    .186
     
    4.5%
    w186
     
    4.5%
    Other values (39)1450
    34.8%

    Most occurring categories

    ValueCountFrequency (%)
    Lowercase Letter3305
    79.2%
    Other Punctuation658
     
    15.8%
    Uppercase Letter113
     
    2.7%
    Connector Punctuation85
     
    2.0%
    Decimal Number10
     
    0.2%

    Most frequent character per category

    Lowercase Letter
    ValueCountFrequency (%)
    i584
    17.7%
    e308
     
    9.3%
    t275
     
    8.3%
    a209
     
    6.3%
    p207
     
    6.3%
    s202
     
    6.1%
    k192
     
    5.8%
    w186
     
    5.6%
    r175
     
    5.3%
    n170
     
    5.1%
    Other values (14)797
    24.1%
    Uppercase Letter
    ValueCountFrequency (%)
    P30
    26.5%
    A18
    15.9%
    C18
    15.9%
    S13
    11.5%
    N6
     
    5.3%
    T5
     
    4.4%
    F4
     
    3.5%
    M4
     
    3.5%
    D4
     
    3.5%
    W2
     
    1.8%
    Other values (7)9
     
    8.0%
    Other Punctuation
    ValueCountFrequency (%)
    /372
    56.5%
    .186
    28.3%
    :93
     
    14.1%
    %5
     
    0.8%
    ,2
     
    0.3%
    Decimal Number
    ValueCountFrequency (%)
    75
    50.0%
    25
    50.0%
    Connector Punctuation
    ValueCountFrequency (%)
    _85
    100.0%

    Most occurring scripts

    ValueCountFrequency (%)
    Latin3418
    81.9%
    Common753
     
    18.1%

    Most frequent character per script

    Latin
    ValueCountFrequency (%)
    i584
    17.1%
    e308
     
    9.0%
    t275
     
    8.0%
    a209
     
    6.1%
    p207
     
    6.1%
    s202
     
    5.9%
    k192
     
    5.6%
    w186
     
    5.4%
    r175
     
    5.1%
    n170
     
    5.0%
    Other values (31)910
    26.6%
    Common
    ValueCountFrequency (%)
    /372
    49.4%
    .186
    24.7%
    :93
     
    12.4%
    _85
     
    11.3%
    75
     
    0.7%
    %5
     
    0.7%
    25
     
    0.7%
    ,2
     
    0.3%

    Most occurring blocks

    ValueCountFrequency (%)
    ASCII4171
    100.0%

    Most frequent character per block

    ASCII
    ValueCountFrequency (%)
    i584
    14.0%
    /372
     
    8.9%
    e308
     
    7.4%
    t275
     
    6.6%
    a209
     
    5.0%
    p207
     
    5.0%
    s202
     
    4.8%
    k192
     
    4.6%
    .186
     
    4.5%
    w186
     
    4.5%
    Other values (39)1450
    34.8%

    time
    Categorical

    HIGH CARDINALITY
    HIGH CORRELATION
    HIGH CORRELATION
    UNIFORM
    UNIQUE

    Distinct93
    Distinct (%)100.0%
    Missing0
    Missing (%)0.0%
    Memory size872.0 B
    2022-10-12 01:47:58.485120
     
    1
    2022-10-12 02:42:11.480184
     
    1
    2022-10-12 02:43:19.284149
     
    1
    2022-10-12 02:43:07.107545
     
    1
    2022-10-12 02:42:58.815490
     
    1
    Other values (88)
    88 

    Length

    Max length26
    Median length26
    Mean length26
    Min length26

    Characters and Unicode

    Total characters2418
    Distinct characters14
    Distinct categories4 ?
    Distinct scripts1 ?
    Distinct blocks1 ?
    The Unicode Standard assigns character properties to each code point, which can be used to analyse textual variables.

    Unique

    Unique93 ?
    Unique (%)100.0%

    Sample

    1st row2022-10-12 01:47:58.485120
    2nd row2022-10-12 01:48:09.186213
    3rd row2022-10-12 01:48:18.661961
    4th row2022-10-12 01:48:25.728941
    5th row2022-10-12 01:48:37.233833

    Common Values

    ValueCountFrequency (%)
    2022-10-12 01:47:58.4851201
     
    1.1%
    2022-10-12 02:42:11.4801841
     
    1.1%
    2022-10-12 02:43:19.2841491
     
    1.1%
    2022-10-12 02:43:07.1075451
     
    1.1%
    2022-10-12 02:42:58.8154901
     
    1.1%
    2022-10-12 02:42:49.0803671
     
    1.1%
    2022-10-12 02:42:38.8779971
     
    1.1%
    2022-10-12 02:42:33.6205961
     
    1.1%
    2022-10-12 02:42:28.0301691
     
    1.1%
    2022-10-12 02:42:22.8745691
     
    1.1%
    Other values (83)83
    89.2%

    Length

    Histogram of lengths of the category
    ValueCountFrequency (%)
    2022-10-1278
    41.9%
    2022-10-197
     
    3.8%
    2022-10-144
     
    2.2%
    2022-10-172
     
    1.1%
    01:49:17.0234431
     
    0.5%
    01:48:37.2338331
     
    0.5%
    01:48:45.1582201
     
    0.5%
    01:48:48.7521571
     
    0.5%
    01:48:56.0008131
     
    0.5%
    01:49:00.6372051
     
    0.5%
    Other values (89)89
    47.8%

    Most occurring characters

    ValueCountFrequency (%)
    2526
    21.8%
    0376
    15.6%
    1345
    14.3%
    -186
     
    7.7%
    :186
     
    7.7%
    4105
     
    4.3%
    5105
     
    4.3%
    93
     
    3.8%
    .93
     
    3.8%
    392
     
    3.8%
    Other values (4)311
    12.9%

    Most occurring categories

    ValueCountFrequency (%)
    Decimal Number1860
    76.9%
    Other Punctuation279
     
    11.5%
    Dash Punctuation186
     
    7.7%
    Space Separator93
     
    3.8%

    Most frequent character per category

    Decimal Number
    ValueCountFrequency (%)
    2526
    28.3%
    0376
    20.2%
    1345
    18.5%
    4105
     
    5.6%
    5105
     
    5.6%
    392
     
    4.9%
    985
     
    4.6%
    880
     
    4.3%
    676
     
    4.1%
    770
     
    3.8%
    Other Punctuation
    ValueCountFrequency (%)
    :186
    66.7%
    .93
    33.3%
    Dash Punctuation
    ValueCountFrequency (%)
    -186
    100.0%
    Space Separator
    ValueCountFrequency (%)
    93
    100.0%

    Most occurring scripts

    ValueCountFrequency (%)
    Common2418
    100.0%

    Most frequent character per script

    Common
    ValueCountFrequency (%)
    2526
    21.8%
    0376
    15.6%
    1345
    14.3%
    -186
     
    7.7%
    :186
     
    7.7%
    4105
     
    4.3%
    5105
     
    4.3%
    93
     
    3.8%
    .93
     
    3.8%
    392
     
    3.8%
    Other values (4)311
    12.9%

    Most occurring blocks

    ValueCountFrequency (%)
    ASCII2418
    100.0%

    Most frequent character per block

    ASCII
    ValueCountFrequency (%)
    2526
    21.8%
    0376
    15.6%
    1345
    14.3%
    -186
     
    7.7%
    :186
     
    7.7%
    4105
     
    4.3%
    5105
     
    4.3%
    93
     
    3.8%
    .93
     
    3.8%
    392
     
    3.8%
    Other values (4)311
    12.9%

    Correlations


    Cramér's V (φc)

    Cramér's V is an association measure for nominal random variables. The coefficient ranges from 0 to 1, with 0 indicating independence and 1 indicating perfect association. The empirical estimators used for Cramér's V have been proved to be biased, even for large samples. We use a bias-corrected measure that has been proposed by Bergsma in 2013 that can be found here.
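For reference, here is a minimal sketch (not generated by this report) of one common form of the bias-corrected estimator described above, following Bergsma (2013), applied to two categorical columns such as title and story; the helper name is ours, not part of pandas-profiling.

# Sketch: bias-corrected Cramér's V for two categorical pandas Series.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency


def cramers_v_corrected(x: pd.Series, y: pd.Series) -> float:
    table = pd.crosstab(x, y)              # contingency table of the two variables
    chi2 = chi2_contingency(table)[0]      # plain chi-squared statistic
    n = table.to_numpy().sum()
    r, k = table.shape
    phi2 = chi2 / n
    phi2_corr = max(0.0, phi2 - (k - 1) * (r - 1) / (n - 1))  # bias correction
    r_corr = r - (r - 1) ** 2 / (n - 1)
    k_corr = k - (k - 1) ** 2 / (n - 1)
    denom = min(r_corr - 1, k_corr - 1)
    return float(np.sqrt(phi2_corr / denom)) if denom > 0 else 0.0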

    Phik (φk)

    Phik (φk) is a new and practical correlation coefficient that works consistently between categorical, ordinal and interval variables, captures non-linear dependency and reverts to the Pearson correlation coefficient in case of a bivariate normal input distribution. There is extensive documentation available here.
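A hedged sketch of how this matrix could be recomputed outside the report with the phik package (which pandas-profiling relies on for φk). The two sample rows are copied from the dataset preview below; importing phik registers the phik_matrix accessor on pandas DataFrames, and in practice the full 93-row dataset would be loaded instead of this toy frame.

# Sketch: recomputing the phi-k correlation matrix for the three profiled columns.
import pandas as pd
import phik  # noqa: F401  -- importing phik registers DataFrame.phik_matrix

df = pd.DataFrame({
    "title": ["electromagnetic energy is absorbed or emitted in discrete packets", "speeches"],
    "story": ["https://en.wikipedia.org/wiki/Quantum", "https://en.wikipedia.org/wiki/Cicero"],
    "time": ["2022-10-12 01:47:58.485120", "2022-10-12 01:48:09.186213"],
})  # toy subset; load the full dataset here in practice

corr = df.phik_matrix(interval_cols=[])  # all three columns are treated as categorical
print(corr)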

    Missing values

    A simple visualization of nullity by column.
    Nullity matrix is a data-dense display which lets you quickly visually pick out patterns in data completion.

    Sample

    First rows

    titlestorytime
    0electromagnetic energy is absorbed or emitted in discrete packetshttps://en.wikipedia.org/wiki/Quantum2022-10-12 01:47:58.485120
    1speecheshttps://en.wikipedia.org/wiki/Cicero2022-10-12 01:48:09.186213
    2cerebrovascular diseasehttps://en.wikipedia.org/wiki/Alzheimer%27s_disease2022-10-12 01:48:18.661961
    3antidepressant medicationshttps://en.wikipedia.org/wiki/Peripheral_neuropathy2022-10-12 01:48:25.728941
    4Scalp coolinghttps://en.wikipedia.org/wiki/Chemotherapy2022-10-12 01:48:37.233833
    5pro-aging trancehttps://en.wikipedia.org/wiki/Death2022-10-12 01:48:45.158220
    6Radioastronomia di Bologna (Institute for Radio Astronomyhttps://en.wikipedia.org/wiki/Medicina2022-10-12 01:48:48.752157
    7byhttps://en.wikipedia.org/wiki/Securitization2022-10-12 01:48:56.000813
    8New York City is the largesthttps://en.wikipedia.org/wiki/Financial_services2022-10-12 01:49:00.637205
    9the collective nature of the texthttps://en.wikipedia.org/wiki/Anthology2022-10-12 01:49:04.268440

    Last rows

    titlestorytime
    83metadatahttps://en.wikipedia.org/wiki/Scalable_Vector_Graphics2022-10-17 15:42:07.764264
    84Organized efforts to search for extraterrestrial intelligencehttps://en.wikipedia.org/wiki/Technology2022-10-17 15:42:25.557130
    85== Function ===\nCalcium is an essential elementhttps://en.wikipedia.org/wiki/Calcium2022-10-18 15:55:02.907650
    86A star trail is a type of photograph that uses long exposure times to capture diurnal circles, the apparent motion of stars in the night sky due to Earth's rotation. A star-trail photograph shows individual stars as streaks across the image, with longer exposures yielding longer arcs. The term is used for similar photos captured elsewhere, such as on board the International Space Station and on Mars.Typical shutter speeds for a star trail range from 15 minutes to several hours, requiring a "Bulb" setting on the camera to open the shutter for a period longer than usual. However, a more practiced technique is to blend a number of frames together to create the final star trail image.Star trails have been used by professional astronomers to measure the quality of observing locations for major telescopes.\n\n\n== Capture ==\n\nStar trail photographs are captured by placing a camera on a tripod, pointing the lens toward the night sky, and allowing the shutter to stay open for a long period of time. Star trails are considered relatively easy for amateur astrophotographers to create. Photographers generally make these images by using a DSLR or Mirrorless camera with its lens focus set to infinity. A cable release or intervalometer allows the photographer to hold the shutter open for the desired amount of time. Typical exposure times range from 15 minutes to many hours long, depending on the desired length of the star trail arcs for the image. Even though star trail pictures are created under low-light conditions, long exposure times allow fast films, such as ISO 200 and ISO 400. Wide-apertures, such as f/5.6 and f/4, are recommended for star trails.\n\nBecause exposure times for star trail photographs can be several hours long, camera batteries can be easily depleted. Mechanical cameras that do not require a battery to open and close the shutter have an advantage over more modern film and digital cameras that rely on battery power. On these cameras, the Bulb, or B, exposure setting keeps the shutter open. Another problem that digital cameras encounter is an increase in electronic noise with increasing exposure time. However, this can be avoided through the use of shorter exposure times that are then stacked in post production software. This avoids possible heat build up or digital noise caused from a single long exposure. \nAmerican astronaut Don Pettit recorded star trails with a digital camera from the International Space Station in Earth orbit between April and June, 2012. Pettit described his technique as follows: "My star trail images are made by taking a time exposure of about 10 to 15 minutes. However, with modern digital cameras, 30 seconds is about the longest exposure possible, due to electronic detector noise effectively snowing out the image. To achieve the longer exposures I do what many amateur astronomers do. I take multiple 30-second exposures, then 'stack' them using imaging software, thus producing the longer exposure."Star trail images have also been taken on Mars. The Spirit rover produced them while looking for meteors. Since the camera was limited to 60 second exposures the trails appear as dashed lines.\n\n\n== Earth's rotation ==\n\nStar trail photographs are possible because of the rotation of Earth about its axis. The apparent motion of the stars is recorded as mostly curved streaks on the film or detector. 
For observers in the Northern Hemisphere, aiming the camera northward creates an image with concentric circular arcs centered on the north celestial pole (very near Polaris). For those in the Southern Hemisphere, this same effect is achieved by aiming the camera southward. In this case, the arc streaks are centered on the south celestial pole (near Sigma Octantis). Aiming the camera eastward or westward shows straight streaks on the celestial equator, which is tilted at angle with respect to the horizon. The angular measure of this tilt depends on the photographer's latitude (L), and is equal to 90° − L.\n\n\n== Astronomical site testing ==\nStar trail photographs can be used by astronomers to determine the quality of a location for telescope observations. Star trail observations of Polaris have been used to measure the quality of seeing in the atmosphere, and the vibrations in telescope mounting systems. The first recorded suggestion of this technique is from E.S. Skinner's 1931 book A Manual of Celestial Photography.\n\n\n== Gallery ==\n\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\n\n== References ==\n\n\n== External links ==\n\n4 Steps To Creating Star Trails Photos Using Stacking Software\nStar trail photography\nStarStaX free multi-platform star trail softwarehttps://en.wikipedia.org/wiki/Star_trail2022-10-19 00:58:13.680029
    87E.S. Skinnerhttps://en.wikipedia.org/wiki/Star_trail2022-10-19 00:58:16.667325
    88A star tracker is an optical device that measures the positions of stars using photocells or a camera.\nAs the positions of many stars have been measured by astronomers to a high degree of accuracy, a star tracker on a satellite or spacecraft may be used to determine the orientation (or attitude) of the spacecraft with respect to the stars. In order to do this, the star tracker must obtain an image of the stars, measure their apparent position in the reference frame of the spacecraft, and identify the stars so their position can be compared with their known absolute position from a star catalog. A star tracker may include a processor to identify stars by comparing the pattern of observed stars with the known pattern of stars in the sky.\n\n\n== History ==\nIn the 1950s and early 1960s, star trackers were an important part of early long-range ballistic missiles and cruise missiles, in the era when inertial navigation systems (INS) were not sufficiently accurate for intercontinental ranges.Consider a Cold War missile flying towards its target; it initially starts by flying northward, passes over the arctic, and then begins flying southward again. From the missile's perspective, stars behind it appear to move closer to the southern horizon while those in front are rising. Before flight, one can calculate the relative angle of a star based on where the missile should be at that instant if it is in the correct location. That can then be compared to the measured location to produce an "error off" signal that can be used to bring the missile back onto its correct trajectory.Due to the Earth's rotation, stars that are in a usable location change over the course of a day and the location of the target. Generally, a selection of several bright stars would be used and one would be selected at launch time. For guidance systems based solely on star tracking, some sort of recording mechanism, typically a magnetic tape, was pre-recorded with a signal that represented the angle of the star over the period of a day. At launch, the tape was forwarded to the appropriate time. During the flight, the signal on the tape was used to roughly position a telescope so it would point at the expected position of the star. At the telescope's focus was a photocell and some sort of signal-generator, typically a spinning disk known as a chopper. The chopper causes the image of the star to repeatedly appear and disappear on the photocell, producing a signal that was then smoothed to produce an alternating current output. The phase of that signal was compared to the one on the tape to produce a guidance signal.Star trackers were often combined with an INS. INS systems measure accelerations and integrate those over time to determine a velocity and, optionally, double-integrate to produce a location relative to its launch location. Even tiny measurement errors, when integrated, adds up to an appreciable error known as "drift". For instance, the N-1 navigation system developed for the SM-64 Navaho cruise missile drifted at a rate of 1 nautical mile per hour, meaning that after a two-hour flight the INS would be indicating a position 2 nautical miles (3.7 km; 2.3 mi) away from its actual location. This was outside the desired accuracy of about half a mile.\nIn the case of an INS, the magnetic tape can be removed and those signals instead provided by the INS. 
The rest of the system works as before; the signal from the INS roughly positions the star tracker, which then measures the actual location of the star and produces an error signal. This signal is then used to correct the position being generated from the INS, reducing the accumulated drift back to the limit of the accuracy of the tracker. These "stellar inertial" systems were especially common from the 1950s through the 1980s, although some systems use it to this day.\n\n\n== Current technology ==\nMany models are currently available. There also exist open projects designed to be used for the global CubeSat researchers and developers community.\nStar trackers, which require high sensitivity, may become confused by sunlight reflected from the spacecraft, or by exhaust gas plumes from the spacecraft thrusters (either sunlight reflection or contamination of the star tracker window). Star trackers are also susceptible to a variety of errors (low spatial frequency, high spatial frequency, temporal, ...) in addition to a variety of optical sources of error (spherical aberration, chromatic aberration, etc.). There are also many potential sources of confusion for the star identification algorithm (planets, comets, supernovae, the bimodal character of the point spread function for adjacent stars, other nearby satellites, point-source light pollution from large cities on Earth, ...). There are roughly 57 bright navigational stars in common use. However, for more complex missions, entire star field databases are used to determine spacecraft orientation. A typical star catalog for high-fidelity attitude determination is originated from a standard base catalog (for example from the United States Naval Observatory) and then filtered to remove problematic stars, for example due to apparent magnitude variability, color index uncertainty, or a location within the Hertzsprung-Russell diagram implying unreliability. These types of star catalogs can have thousands of stars stored in memory on board the spacecraft, or else processed using tools at the ground station and then uploaded.\n\n\n== See also ==\nCelestial navigation\nGoTo (telescopes)\nSun sensor\n\n\n== References ==https://en.wikipedia.org/wiki/Star_tracker2022-10-19 00:58:42.472490
    89Celestial navigation\nGoTo (telescopes)\nSun sensorhttps://en.wikipedia.org/wiki/Star_tracker2022-10-19 00:58:45.548139
    90A star tracker is an optical device that measures the positions of stars using photocells or a camera.\nAs the positions of many stars have been measured by astronomers to a high degree of accuracy, a star tracker on a satellite or spacecraft may be used to determine the orientation (or attitude) of the spacecraft with respect to the stars. In order to do this, the star tracker must obtain an image of the stars, measure their apparent position in the reference frame of the spacecraft, and identify the stars so their position can be compared with their known absolute position from a star catalog. A star tracker may include a processor to identify stars by comparing the pattern of observed stars with the known pattern of stars in the sky.\n\n\n== History ==\nIn the 1950s and early 1960s, star trackers were an important part of early long-range ballistic missiles and cruise missiles, in the era when inertial navigation systems (INS) were not sufficiently accurate for intercontinental ranges.Consider a Cold War missile flying towards its target; it initially starts by flying northward, passes over the arctic, and then begins flying southward again. From the missile's perspective, stars behind it appear to move closer to the southern horizon while those in front are rising. Before flight, one can calculate the relative angle of a star based on where the missile should be at that instant if it is in the correct location. That can then be compared to the measured location to produce an "error off" signal that can be used to bring the missile back onto its correct trajectory.Due to the Earth's rotation, stars that are in a usable location change over the course of a day and the location of the target. Generally, a selection of several bright stars would be used and one would be selected at launch time. For guidance systems based solely on star tracking, some sort of recording mechanism, typically a magnetic tape, was pre-recorded with a signal that represented the angle of the star over the period of a day. At launch, the tape was forwarded to the appropriate time. During the flight, the signal on the tape was used to roughly position a telescope so it would point at the expected position of the star. At the telescope's focus was a photocell and some sort of signal-generator, typically a spinning disk known as a chopper. The chopper causes the image of the star to repeatedly appear and disappear on the photocell, producing a signal that was then smoothed to produce an alternating current output. The phase of that signal was compared to the one on the tape to produce a guidance signal.Star trackers were often combined with an INS. INS systems measure accelerations and integrate those over time to determine a velocity and, optionally, double-integrate to produce a location relative to its launch location. Even tiny measurement errors, when integrated, adds up to an appreciable error known as "drift". For instance, the N-1 navigation system developed for the SM-64 Navaho cruise missile drifted at a rate of 1 nautical mile per hour, meaning that after a two-hour flight the INS would be indicating a position 2 nautical miles (3.7 km; 2.3 mi) away from its actual location. This was outside the desired accuracy of about half a mile.\nIn the case of an INS, the magnetic tape can be removed and those signals instead provided by the INS. 
The rest of the system works as before; the signal from the INS roughly positions the star tracker, which then measures the actual location of the star and produces an error signal. This signal is then used to correct the position being generated from the INS, reducing the accumulated drift back to the limit of the accuracy of the tracker. These "stellar inertial" systems were especially common from the 1950s through the 1980s, although some systems use it to this day.\n\n\n== Current technology ==\nMany models are currently available. There also exist open projects designed to be used for the global CubeSat researchers and developers community.\nStar trackers, which require high sensitivity, may become confused by sunlight reflected from the spacecraft, or by exhaust gas plumes from the spacecraft thrusters (either sunlight reflection or contamination of the star tracker window). Star trackers are also susceptible to a variety of errors (low spatial frequency, high spatial frequency, temporal, ...) in addition to a variety of optical sources of error (spherical aberration, chromatic aberration, etc.). There are also many potential sources of confusion for the star identification algorithm (planets, comets, supernovae, the bimodal character of the point spread function for adjacent stars, other nearby satellites, point-source light pollution from large cities on Earth, ...). There are roughly 57 bright navigational stars in common use. However, for more complex missions, entire star field databases are used to determine spacecraft orientation. A typical star catalog for high-fidelity attitude determination is originated from a standard base catalog (for example from the United States Naval Observatory) and then filtered to remove problematic stars, for example due to apparent magnitude variability, color index uncertainty, or a location within the Hertzsprung-Russell diagram implying unreliability. These types of star catalogs can have thousands of stars stored in memory on board the spacecraft, or else processed using tools at the ground station and then uploaded.\n\n\n== See also ==\nCelestial navigation\nGoTo (telescopes)\nSun sensor\n\n\n== References ==https://en.wikipedia.org/wiki/Star_tracker2022-10-19 03:03:35.059016
    91Celestial navigation\nGoTo (telescopes)\nSun sensorhttps://en.wikipedia.org/wiki/Star_tracker2022-10-19 03:03:39.599730
    92Filmhttps://en.wikipedia.org/wiki/National_Commission_for_Culture_and_the_Arts2022-10-19 03:05:22.704793
    \ No newline at end of file diff --git a/spaces/awen666/web-ui/index.html b/spaces/awen666/web-ui/index.html deleted file mode 100644 index 598d70a359bb59ba7f59afc4974219eda01dac2f..0000000000000000000000000000000000000000 --- a/spaces/awen666/web-ui/index.html +++ /dev/null @@ -1 +0,0 @@ -Gradiobot UI \ No newline at end of file diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/geometries/TeapotBufferGeometry.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/geometries/TeapotBufferGeometry.js deleted file mode 100644 index 3b8811fd6b413385db1ddc0767ef9ddbeb0826c7..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/geometries/TeapotBufferGeometry.js +++ /dev/null @@ -1,718 +0,0 @@ -/** - * @author Eric Haines / http://erichaines.com/ - * - * Tessellates the famous Utah teapot database by Martin Newell into triangles. - * - * THREE.TeapotBufferGeometry = function ( size, segments, bottom, lid, body, fitLid, blinn ) - * - * defaults: size = 50, segments = 10, bottom = true, lid = true, body = true, - * fitLid = false, blinn = true - * - * size is a relative scale: I've scaled the teapot to fit vertically between -1 and 1. - * Think of it as a "radius". - * segments - number of line segments to subdivide each patch edge; - * 1 is possible but gives degenerates, so two is the real minimum. - * bottom - boolean, if true (default) then the bottom patches are added. Some consider - * adding the bottom heresy, so set this to "false" to adhere to the One True Way. - * lid - to remove the lid and look inside, set to true. - * body - to remove the body and leave the lid, set this and "bottom" to false. - * fitLid - the lid is a tad small in the original. This stretches it a bit so you can't - * see the teapot's insides through the gap. - * blinn - Jim Blinn scaled the original data vertically by dividing by about 1.3 to look - * nicer. If you want to see the original teapot, similar to the real-world model, set - * this to false. True by default. - * See http://en.wikipedia.org/wiki/File:Original_Utah_Teapot.jpg for the original - * real-world teapot (from http://en.wikipedia.org/wiki/Utah_teapot). - * - * Note that the bottom (the last four patches) is not flat - blame Frank Crow, not me. - * - * The teapot should normally be rendered as a double sided object, since for some - * patches both sides can be seen, e.g., the gap around the lid and inside the spout. - * - * Segments 'n' determines the number of triangles output. 
- * Total triangles = 32*2*n*n - 8*n [degenerates at the top and bottom cusps are deleted] - * - * size_factor # triangles - * 1 56 - * 2 240 - * 3 552 - * 4 992 - * - * 10 6320 - * 20 25440 - * 30 57360 - * - * Code converted from my ancient SPD software, http://tog.acm.org/resources/SPD/ - * Created for the Udacity course "Interactive Rendering", http://bit.ly/ericity - * Lesson: https://www.udacity.com/course/viewer#!/c-cs291/l-68866048/m-106482448 - * YouTube video on teapot history: https://www.youtube.com/watch?v=DxMfblPzFNc - * - * See https://en.wikipedia.org/wiki/Utah_teapot for the history of the teapot - * - */ -/*global THREE */ - -THREE.TeapotBufferGeometry = function ( size, segments, bottom, lid, body, fitLid, blinn ) { - - // 32 * 4 * 4 Bezier spline patches - var teapotPatches = [ - /*rim*/ - 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, - 3, 16, 17, 18, 7, 19, 20, 21, 11, 22, 23, 24, 15, 25, 26, 27, - 18, 28, 29, 30, 21, 31, 32, 33, 24, 34, 35, 36, 27, 37, 38, 39, - 30, 40, 41, 0, 33, 42, 43, 4, 36, 44, 45, 8, 39, 46, 47, 12, - /*body*/ - 12, 13, 14, 15, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, - 15, 25, 26, 27, 51, 60, 61, 62, 55, 63, 64, 65, 59, 66, 67, 68, - 27, 37, 38, 39, 62, 69, 70, 71, 65, 72, 73, 74, 68, 75, 76, 77, - 39, 46, 47, 12, 71, 78, 79, 48, 74, 80, 81, 52, 77, 82, 83, 56, - 56, 57, 58, 59, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, - 59, 66, 67, 68, 87, 96, 97, 98, 91, 99, 100, 101, 95, 102, 103, 104, - 68, 75, 76, 77, 98, 105, 106, 107, 101, 108, 109, 110, 104, 111, 112, 113, - 77, 82, 83, 56, 107, 114, 115, 84, 110, 116, 117, 88, 113, 118, 119, 92, - /*handle*/ - 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, - 123, 136, 137, 120, 127, 138, 139, 124, 131, 140, 141, 128, 135, 142, 143, 132, - 132, 133, 134, 135, 144, 145, 146, 147, 148, 149, 150, 151, 68, 152, 153, 154, - 135, 142, 143, 132, 147, 155, 156, 144, 151, 157, 158, 148, 154, 159, 160, 68, - /*spout*/ - 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, - 164, 177, 178, 161, 168, 179, 180, 165, 172, 181, 182, 169, 176, 183, 184, 173, - 173, 174, 175, 176, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, - 176, 183, 184, 173, 188, 197, 198, 185, 192, 199, 200, 189, 196, 201, 202, 193, - /*lid*/ - 203, 203, 203, 203, 204, 205, 206, 207, 208, 208, 208, 208, 209, 210, 211, 212, - 203, 203, 203, 203, 207, 213, 214, 215, 208, 208, 208, 208, 212, 216, 217, 218, - 203, 203, 203, 203, 215, 219, 220, 221, 208, 208, 208, 208, 218, 222, 223, 224, - 203, 203, 203, 203, 221, 225, 226, 204, 208, 208, 208, 208, 224, 227, 228, 209, - 209, 210, 211, 212, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, - 212, 216, 217, 218, 232, 241, 242, 243, 236, 244, 245, 246, 240, 247, 248, 249, - 218, 222, 223, 224, 243, 250, 251, 252, 246, 253, 254, 255, 249, 256, 257, 258, - 224, 227, 228, 209, 252, 259, 260, 229, 255, 261, 262, 233, 258, 263, 264, 237, - /*bottom*/ - 265, 265, 265, 265, 266, 267, 268, 269, 270, 271, 272, 273, 92, 119, 118, 113, - 265, 265, 265, 265, 269, 274, 275, 276, 273, 277, 278, 279, 113, 112, 111, 104, - 265, 265, 265, 265, 276, 280, 281, 282, 279, 283, 284, 285, 104, 103, 102, 95, - 265, 265, 265, 265, 282, 286, 287, 266, 285, 288, 289, 270, 95, 94, 93, 92 - ]; - - var teapotVertices = [ - 1.4, 0, 2.4, - 1.4, - 0.784, 2.4, - 0.784, - 1.4, 2.4, - 0, - 1.4, 2.4, - 1.3375, 0, 2.53125, - 1.3375, - 0.749, 2.53125, - 0.749, - 1.3375, 2.53125, - 0, - 1.3375, 2.53125, - 1.4375, 0, 2.53125, - 1.4375, - 
0.805, 2.53125, - 0.805, - 1.4375, 2.53125, - 0, - 1.4375, 2.53125, - 1.5, 0, 2.4, - 1.5, - 0.84, 2.4, - 0.84, - 1.5, 2.4, - 0, - 1.5, 2.4, - - 0.784, - 1.4, 2.4, - - 1.4, - 0.784, 2.4, - - 1.4, 0, 2.4, - - 0.749, - 1.3375, 2.53125, - - 1.3375, - 0.749, 2.53125, - - 1.3375, 0, 2.53125, - - 0.805, - 1.4375, 2.53125, - - 1.4375, - 0.805, 2.53125, - - 1.4375, 0, 2.53125, - - 0.84, - 1.5, 2.4, - - 1.5, - 0.84, 2.4, - - 1.5, 0, 2.4, - - 1.4, 0.784, 2.4, - - 0.784, 1.4, 2.4, - 0, 1.4, 2.4, - - 1.3375, 0.749, 2.53125, - - 0.749, 1.3375, 2.53125, - 0, 1.3375, 2.53125, - - 1.4375, 0.805, 2.53125, - - 0.805, 1.4375, 2.53125, - 0, 1.4375, 2.53125, - - 1.5, 0.84, 2.4, - - 0.84, 1.5, 2.4, - 0, 1.5, 2.4, - 0.784, 1.4, 2.4, - 1.4, 0.784, 2.4, - 0.749, 1.3375, 2.53125, - 1.3375, 0.749, 2.53125, - 0.805, 1.4375, 2.53125, - 1.4375, 0.805, 2.53125, - 0.84, 1.5, 2.4, - 1.5, 0.84, 2.4, - 1.75, 0, 1.875, - 1.75, - 0.98, 1.875, - 0.98, - 1.75, 1.875, - 0, - 1.75, 1.875, - 2, 0, 1.35, - 2, - 1.12, 1.35, - 1.12, - 2, 1.35, - 0, - 2, 1.35, - 2, 0, 0.9, - 2, - 1.12, 0.9, - 1.12, - 2, 0.9, - 0, - 2, 0.9, - - 0.98, - 1.75, 1.875, - - 1.75, - 0.98, 1.875, - - 1.75, 0, 1.875, - - 1.12, - 2, 1.35, - - 2, - 1.12, 1.35, - - 2, 0, 1.35, - - 1.12, - 2, 0.9, - - 2, - 1.12, 0.9, - - 2, 0, 0.9, - - 1.75, 0.98, 1.875, - - 0.98, 1.75, 1.875, - 0, 1.75, 1.875, - - 2, 1.12, 1.35, - - 1.12, 2, 1.35, - 0, 2, 1.35, - - 2, 1.12, 0.9, - - 1.12, 2, 0.9, - 0, 2, 0.9, - 0.98, 1.75, 1.875, - 1.75, 0.98, 1.875, - 1.12, 2, 1.35, - 2, 1.12, 1.35, - 1.12, 2, 0.9, - 2, 1.12, 0.9, - 2, 0, 0.45, - 2, - 1.12, 0.45, - 1.12, - 2, 0.45, - 0, - 2, 0.45, - 1.5, 0, 0.225, - 1.5, - 0.84, 0.225, - 0.84, - 1.5, 0.225, - 0, - 1.5, 0.225, - 1.5, 0, 0.15, - 1.5, - 0.84, 0.15, - 0.84, - 1.5, 0.15, - 0, - 1.5, 0.15, - - 1.12, - 2, 0.45, - - 2, - 1.12, 0.45, - - 2, 0, 0.45, - - 0.84, - 1.5, 0.225, - - 1.5, - 0.84, 0.225, - - 1.5, 0, 0.225, - - 0.84, - 1.5, 0.15, - - 1.5, - 0.84, 0.15, - - 1.5, 0, 0.15, - - 2, 1.12, 0.45, - - 1.12, 2, 0.45, - 0, 2, 0.45, - - 1.5, 0.84, 0.225, - - 0.84, 1.5, 0.225, - 0, 1.5, 0.225, - - 1.5, 0.84, 0.15, - - 0.84, 1.5, 0.15, - 0, 1.5, 0.15, - 1.12, 2, 0.45, - 2, 1.12, 0.45, - 0.84, 1.5, 0.225, - 1.5, 0.84, 0.225, - 0.84, 1.5, 0.15, - 1.5, 0.84, 0.15, - - 1.6, 0, 2.025, - - 1.6, - 0.3, 2.025, - - 1.5, - 0.3, 2.25, - - 1.5, 0, 2.25, - - 2.3, 0, 2.025, - - 2.3, - 0.3, 2.025, - - 2.5, - 0.3, 2.25, - - 2.5, 0, 2.25, - - 2.7, 0, 2.025, - - 2.7, - 0.3, 2.025, - - 3, - 0.3, 2.25, - - 3, 0, 2.25, - - 2.7, 0, 1.8, - - 2.7, - 0.3, 1.8, - - 3, - 0.3, 1.8, - - 3, 0, 1.8, - - 1.5, 0.3, 2.25, - - 1.6, 0.3, 2.025, - - 2.5, 0.3, 2.25, - - 2.3, 0.3, 2.025, - - 3, 0.3, 2.25, - - 2.7, 0.3, 2.025, - - 3, 0.3, 1.8, - - 2.7, 0.3, 1.8, - - 2.7, 0, 1.575, - - 2.7, - 0.3, 1.575, - - 3, - 0.3, 1.35, - - 3, 0, 1.35, - - 2.5, 0, 1.125, - - 2.5, - 0.3, 1.125, - - 2.65, - 0.3, 0.9375, - - 2.65, 0, 0.9375, - - 2, - 0.3, 0.9, - - 1.9, - 0.3, 0.6, - - 1.9, 0, 0.6, - - 3, 0.3, 1.35, - - 2.7, 0.3, 1.575, - - 2.65, 0.3, 0.9375, - - 2.5, 0.3, 1.125, - - 1.9, 0.3, 0.6, - - 2, 0.3, 0.9, - 1.7, 0, 1.425, - 1.7, - 0.66, 1.425, - 1.7, - 0.66, 0.6, - 1.7, 0, 0.6, - 2.6, 0, 1.425, - 2.6, - 0.66, 1.425, - 3.1, - 0.66, 0.825, - 3.1, 0, 0.825, - 2.3, 0, 2.1, - 2.3, - 0.25, 2.1, - 2.4, - 0.25, 2.025, - 2.4, 0, 2.025, - 2.7, 0, 2.4, - 2.7, - 0.25, 2.4, - 3.3, - 0.25, 2.4, - 3.3, 0, 2.4, - 1.7, 0.66, 0.6, - 1.7, 0.66, 1.425, - 3.1, 0.66, 0.825, - 2.6, 0.66, 1.425, - 2.4, 0.25, 2.025, - 2.3, 0.25, 2.1, - 3.3, 0.25, 2.4, - 2.7, 0.25, 2.4, - 2.8, 0, 2.475, - 2.8, - 0.25, 2.475, - 3.525, 
- 0.25, 2.49375, - 3.525, 0, 2.49375, - 2.9, 0, 2.475, - 2.9, - 0.15, 2.475, - 3.45, - 0.15, 2.5125, - 3.45, 0, 2.5125, - 2.8, 0, 2.4, - 2.8, - 0.15, 2.4, - 3.2, - 0.15, 2.4, - 3.2, 0, 2.4, - 3.525, 0.25, 2.49375, - 2.8, 0.25, 2.475, - 3.45, 0.15, 2.5125, - 2.9, 0.15, 2.475, - 3.2, 0.15, 2.4, - 2.8, 0.15, 2.4, - 0, 0, 3.15, - 0.8, 0, 3.15, - 0.8, - 0.45, 3.15, - 0.45, - 0.8, 3.15, - 0, - 0.8, 3.15, - 0, 0, 2.85, - 0.2, 0, 2.7, - 0.2, - 0.112, 2.7, - 0.112, - 0.2, 2.7, - 0, - 0.2, 2.7, - - 0.45, - 0.8, 3.15, - - 0.8, - 0.45, 3.15, - - 0.8, 0, 3.15, - - 0.112, - 0.2, 2.7, - - 0.2, - 0.112, 2.7, - - 0.2, 0, 2.7, - - 0.8, 0.45, 3.15, - - 0.45, 0.8, 3.15, - 0, 0.8, 3.15, - - 0.2, 0.112, 2.7, - - 0.112, 0.2, 2.7, - 0, 0.2, 2.7, - 0.45, 0.8, 3.15, - 0.8, 0.45, 3.15, - 0.112, 0.2, 2.7, - 0.2, 0.112, 2.7, - 0.4, 0, 2.55, - 0.4, - 0.224, 2.55, - 0.224, - 0.4, 2.55, - 0, - 0.4, 2.55, - 1.3, 0, 2.55, - 1.3, - 0.728, 2.55, - 0.728, - 1.3, 2.55, - 0, - 1.3, 2.55, - 1.3, 0, 2.4, - 1.3, - 0.728, 2.4, - 0.728, - 1.3, 2.4, - 0, - 1.3, 2.4, - - 0.224, - 0.4, 2.55, - - 0.4, - 0.224, 2.55, - - 0.4, 0, 2.55, - - 0.728, - 1.3, 2.55, - - 1.3, - 0.728, 2.55, - - 1.3, 0, 2.55, - - 0.728, - 1.3, 2.4, - - 1.3, - 0.728, 2.4, - - 1.3, 0, 2.4, - - 0.4, 0.224, 2.55, - - 0.224, 0.4, 2.55, - 0, 0.4, 2.55, - - 1.3, 0.728, 2.55, - - 0.728, 1.3, 2.55, - 0, 1.3, 2.55, - - 1.3, 0.728, 2.4, - - 0.728, 1.3, 2.4, - 0, 1.3, 2.4, - 0.224, 0.4, 2.55, - 0.4, 0.224, 2.55, - 0.728, 1.3, 2.55, - 1.3, 0.728, 2.55, - 0.728, 1.3, 2.4, - 1.3, 0.728, 2.4, - 0, 0, 0, - 1.425, 0, 0, - 1.425, 0.798, 0, - 0.798, 1.425, 0, - 0, 1.425, 0, - 1.5, 0, 0.075, - 1.5, 0.84, 0.075, - 0.84, 1.5, 0.075, - 0, 1.5, 0.075, - - 0.798, 1.425, 0, - - 1.425, 0.798, 0, - - 1.425, 0, 0, - - 0.84, 1.5, 0.075, - - 1.5, 0.84, 0.075, - - 1.5, 0, 0.075, - - 1.425, - 0.798, 0, - - 0.798, - 1.425, 0, - 0, - 1.425, 0, - - 1.5, - 0.84, 0.075, - - 0.84, - 1.5, 0.075, - 0, - 1.5, 0.075, - 0.798, - 1.425, 0, - 1.425, - 0.798, 0, - 0.84, - 1.5, 0.075, - 1.5, - 0.84, 0.075 - ]; - - THREE.BufferGeometry.call( this ); - - size = size || 50; - - // number of segments per patch - segments = segments !== undefined ? Math.max( 2, Math.floor( segments ) || 10 ) : 10; - - // which parts should be visible - bottom = bottom === undefined ? true : bottom; - lid = lid === undefined ? true : lid; - body = body === undefined ? true : body; - - // Should the lid be snug? It's not traditional, but we make it snug by default - fitLid = fitLid === undefined ? true : fitLid; - - // Jim Blinn scaled the teapot down in size by about 1.3 for - // some rendering tests. He liked the new proportions that he kept - // the data in this form. The model was distributed with these new - // proportions and became the norm. Trivia: comparing images of the - // real teapot and the computer model, the ratio for the bowl of the - // real teapot is more like 1.25, but since 1.3 is the traditional - // value given, we use it here. - var blinnScale = 1.3; - blinn = blinn === undefined ? true : blinn; - - // scale the size to be the real scaling factor - var maxHeight = 3.15 * ( blinn ? 1 : blinnScale ); - - var maxHeight2 = maxHeight / 2; - var trueSize = size / maxHeight2; - - // Number of elements depends on what is needed. Subtract degenerate - // triangles at tip of bottom and lid out in advance. - var numTriangles = bottom ? ( 8 * segments - 4 ) * segments : 0; - numTriangles += lid ? ( 16 * segments - 4 ) * segments : 0; - numTriangles += body ? 
40 * segments * segments : 0; - - var indices = new Uint32Array( numTriangles * 3 ); - - var numVertices = bottom ? 4 : 0; - numVertices += lid ? 8 : 0; - numVertices += body ? 20 : 0; - numVertices *= ( segments + 1 ) * ( segments + 1 ); - - var vertices = new Float32Array( numVertices * 3 ); - var normals = new Float32Array( numVertices * 3 ); - var uvs = new Float32Array( numVertices * 2 ); - - // Bezier form - var ms = new THREE.Matrix4(); - ms.set( - - 1.0, 3.0, - 3.0, 1.0, - 3.0, - 6.0, 3.0, 0.0, - - 3.0, 3.0, 0.0, 0.0, - 1.0, 0.0, 0.0, 0.0 ); - - var g = []; - var i, r, c; - - var sp = []; - var tp = []; - var dsp = []; - var dtp = []; - - // M * G * M matrix, sort of see - // http://www.cs.helsinki.fi/group/goa/mallinnus/curves/surfaces.html - var mgm = []; - - var vert = []; - var sdir = []; - var tdir = []; - - var norm = new THREE.Vector3(); - - var tcoord; - - var sstep, tstep; - var vertPerRow; - - var s, t, sval, tval, p; - var dsval = 0; - var dtval = 0; - - var normOut = new THREE.Vector3(); - var v1, v2, v3, v4; - - var gmx = new THREE.Matrix4(); - var tmtx = new THREE.Matrix4(); - - var vsp = new THREE.Vector4(); - var vtp = new THREE.Vector4(); - var vdsp = new THREE.Vector4(); - var vdtp = new THREE.Vector4(); - - var vsdir = new THREE.Vector3(); - var vtdir = new THREE.Vector3(); - - var mst = ms.clone(); - mst.transpose(); - - // internal function: test if triangle has any matching vertices; - // if so, don't save triangle, since it won't display anything. - var notDegenerate = function ( vtx1, vtx2, vtx3 ) { - - // if any vertex matches, return false - return ! ( ( ( vertices[ vtx1 * 3 ] === vertices[ vtx2 * 3 ] ) && - ( vertices[ vtx1 * 3 + 1 ] === vertices[ vtx2 * 3 + 1 ] ) && - ( vertices[ vtx1 * 3 + 2 ] === vertices[ vtx2 * 3 + 2 ] ) ) || - ( ( vertices[ vtx1 * 3 ] === vertices[ vtx3 * 3 ] ) && - ( vertices[ vtx1 * 3 + 1 ] === vertices[ vtx3 * 3 + 1 ] ) && - ( vertices[ vtx1 * 3 + 2 ] === vertices[ vtx3 * 3 + 2 ] ) ) || - ( ( vertices[ vtx2 * 3 ] === vertices[ vtx3 * 3 ] ) && - ( vertices[ vtx2 * 3 + 1 ] === vertices[ vtx3 * 3 + 1 ] ) && - ( vertices[ vtx2 * 3 + 2 ] === vertices[ vtx3 * 3 + 2 ] ) ) ); - - }; - - - for ( i = 0; i < 3; i ++ ) { - - mgm[ i ] = new THREE.Matrix4(); - - } - - var minPatches = body ? 0 : 20; - var maxPatches = bottom ? 32 : 28; - - vertPerRow = segments + 1; - - var surfCount = 0; - - var vertCount = 0; - var normCount = 0; - var uvCount = 0; - - var indexCount = 0; - - for ( var surf = minPatches; surf < maxPatches; surf ++ ) { - - // lid is in the middle of the data, patches 20-27, - // so ignore it for this part of the loop if the lid is not desired - if ( lid || ( surf < 20 || surf >= 28 ) ) { - - // get M * G * M matrix for x,y,z - for ( i = 0; i < 3; i ++ ) { - - // get control patches - for ( r = 0; r < 4; r ++ ) { - - for ( c = 0; c < 4; c ++ ) { - - // transposed - g[ c * 4 + r ] = teapotVertices[ teapotPatches[ surf * 16 + r * 4 + c ] * 3 + i ]; - - // is the lid to be made larger, and is this a point on the lid - // that is X or Y? - if ( fitLid && ( surf >= 20 && surf < 28 ) && ( i !== 2 ) ) { - - // increase XY size by 7.7%, found empirically. I don't - // increase Z so that the teapot will continue to fit in the - // space -1 to 1 for Y (Y is up for the final model). - g[ c * 4 + r ] *= 1.077; - - } - - // Blinn "fixed" the teapot by dividing Z by blinnScale, and that's the - // data we now use. The original teapot is taller. Fix it: - if ( ! 
blinn && ( i === 2 ) ) { - - g[ c * 4 + r ] *= blinnScale; - - } - - } - - } - - gmx.set( g[ 0 ], g[ 1 ], g[ 2 ], g[ 3 ], g[ 4 ], g[ 5 ], g[ 6 ], g[ 7 ], g[ 8 ], g[ 9 ], g[ 10 ], g[ 11 ], g[ 12 ], g[ 13 ], g[ 14 ], g[ 15 ] ); - - tmtx.multiplyMatrices( gmx, ms ); - mgm[ i ].multiplyMatrices( mst, tmtx ); - - } - - // step along, get points, and output - for ( sstep = 0; sstep <= segments; sstep ++ ) { - - s = sstep / segments; - - for ( tstep = 0; tstep <= segments; tstep ++ ) { - - t = tstep / segments; - - // point from basis - // get power vectors and their derivatives - for ( p = 4, sval = tval = 1.0; p --; ) { - - sp[ p ] = sval; - tp[ p ] = tval; - sval *= s; - tval *= t; - - if ( p === 3 ) { - - dsp[ p ] = dtp[ p ] = 0.0; - dsval = dtval = 1.0; - - } else { - - dsp[ p ] = dsval * ( 3 - p ); - dtp[ p ] = dtval * ( 3 - p ); - dsval *= s; - dtval *= t; - - } - - } - - vsp.fromArray( sp ); - vtp.fromArray( tp ); - vdsp.fromArray( dsp ); - vdtp.fromArray( dtp ); - - // do for x,y,z - for ( i = 0; i < 3; i ++ ) { - - // multiply power vectors times matrix to get value - tcoord = vsp.clone(); - tcoord.applyMatrix4( mgm[ i ] ); - vert[ i ] = tcoord.dot( vtp ); - - // get s and t tangent vectors - tcoord = vdsp.clone(); - tcoord.applyMatrix4( mgm[ i ] ); - sdir[ i ] = tcoord.dot( vtp ); - - tcoord = vsp.clone(); - tcoord.applyMatrix4( mgm[ i ] ); - tdir[ i ] = tcoord.dot( vdtp ); - - } - - // find normal - vsdir.fromArray( sdir ); - vtdir.fromArray( tdir ); - norm.crossVectors( vtdir, vsdir ); - norm.normalize(); - - // if X and Z length is 0, at the cusp, so point the normal up or down, depending on patch number - if ( vert[ 0 ] === 0 && vert[ 1 ] === 0 ) { - - // if above the middle of the teapot, normal points up, else down - normOut.set( 0, vert[ 2 ] > maxHeight2 ? 1 : - 1, 0 ); - - } else { - - // standard output: rotate on X axis - normOut.set( norm.x, norm.z, - norm.y ); - - } - - // store it all - vertices[ vertCount ++ ] = trueSize * vert[ 0 ]; - vertices[ vertCount ++ ] = trueSize * ( vert[ 2 ] - maxHeight2 ); - vertices[ vertCount ++ ] = - trueSize * vert[ 1 ]; - - normals[ normCount ++ ] = normOut.x; - normals[ normCount ++ ] = normOut.y; - normals[ normCount ++ ] = normOut.z; - - uvs[ uvCount ++ ] = 1 - t; - uvs[ uvCount ++ ] = 1 - s; - - } - - } - - // save the faces - for ( sstep = 0; sstep < segments; sstep ++ ) { - - for ( tstep = 0; tstep < segments; tstep ++ ) { - - v1 = surfCount * vertPerRow * vertPerRow + sstep * vertPerRow + tstep; - v2 = v1 + 1; - v3 = v2 + vertPerRow; - v4 = v1 + vertPerRow; - - // Normals and UVs cannot be shared. Without clone(), you can see the consequences - // of sharing if you call geometry.applyMatrix( matrix ). 
- if ( notDegenerate( v1, v2, v3 ) ) { - - indices[ indexCount ++ ] = v1; - indices[ indexCount ++ ] = v2; - indices[ indexCount ++ ] = v3; - - } - if ( notDegenerate( v1, v3, v4 ) ) { - - indices[ indexCount ++ ] = v1; - indices[ indexCount ++ ] = v3; - indices[ indexCount ++ ] = v4; - - } - - } - - } - - // increment only if a surface was used - surfCount ++; - - } - - } - - this.setIndex( new THREE.BufferAttribute( indices, 1 ) ); - this.addAttribute( 'position', new THREE.BufferAttribute( vertices, 3 ) ); - this.addAttribute( 'normal', new THREE.BufferAttribute( normals, 3 ) ); - this.addAttribute( 'uv', new THREE.BufferAttribute( uvs, 2 ) ); - - this.computeBoundingSphere(); - -}; - - -THREE.TeapotBufferGeometry.prototype = Object.create( THREE.BufferGeometry.prototype ); -THREE.TeapotBufferGeometry.prototype.constructor = THREE.TeapotBufferGeometry; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/extras/core/ShapePath.js b/spaces/banana-projects/web3d/node_modules/three/src/extras/core/ShapePath.js deleted file mode 100644 index a5f734497a9686f334f7c98742b0a19206c68878..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/extras/core/ShapePath.js +++ /dev/null @@ -1,286 +0,0 @@ -/** - * @author zz85 / http://www.lab4games.net/zz85/blog - * minimal class for proxing functions to Path. Replaces old "extractSubpaths()" - **/ - -import { Color } from '../../math/Color.js'; -import { Path } from './Path.js'; -import { Shape } from './Shape.js'; -import { ShapeUtils } from '../ShapeUtils.js'; - -function ShapePath() { - - this.type = 'ShapePath'; - - this.color = new Color(); - - this.subPaths = []; - this.currentPath = null; - -} - -Object.assign( ShapePath.prototype, { - - moveTo: function ( x, y ) { - - this.currentPath = new Path(); - this.subPaths.push( this.currentPath ); - this.currentPath.moveTo( x, y ); - - }, - - lineTo: function ( x, y ) { - - this.currentPath.lineTo( x, y ); - - }, - - quadraticCurveTo: function ( aCPx, aCPy, aX, aY ) { - - this.currentPath.quadraticCurveTo( aCPx, aCPy, aX, aY ); - - }, - - bezierCurveTo: function ( aCP1x, aCP1y, aCP2x, aCP2y, aX, aY ) { - - this.currentPath.bezierCurveTo( aCP1x, aCP1y, aCP2x, aCP2y, aX, aY ); - - }, - - splineThru: function ( pts ) { - - this.currentPath.splineThru( pts ); - - }, - - toShapes: function ( isCCW, noHoles ) { - - function toShapesNoHoles( inSubpaths ) { - - var shapes = []; - - for ( var i = 0, l = inSubpaths.length; i < l; i ++ ) { - - var tmpPath = inSubpaths[ i ]; - - var tmpShape = new Shape(); - tmpShape.curves = tmpPath.curves; - - shapes.push( tmpShape ); - - } - - return shapes; - - } - - function isPointInsidePolygon( inPt, inPolygon ) { - - var polyLen = inPolygon.length; - - // inPt on polygon contour => immediate success or - // toggling of inside/outside at every single! 
intersection point of an edge - // with the horizontal line through inPt, left of inPt - // not counting lowerY endpoints of edges and whole edges on that line - var inside = false; - for ( var p = polyLen - 1, q = 0; q < polyLen; p = q ++ ) { - - var edgeLowPt = inPolygon[ p ]; - var edgeHighPt = inPolygon[ q ]; - - var edgeDx = edgeHighPt.x - edgeLowPt.x; - var edgeDy = edgeHighPt.y - edgeLowPt.y; - - if ( Math.abs( edgeDy ) > Number.EPSILON ) { - - // not parallel - if ( edgeDy < 0 ) { - - edgeLowPt = inPolygon[ q ]; edgeDx = - edgeDx; - edgeHighPt = inPolygon[ p ]; edgeDy = - edgeDy; - - } - if ( ( inPt.y < edgeLowPt.y ) || ( inPt.y > edgeHighPt.y ) ) continue; - - if ( inPt.y === edgeLowPt.y ) { - - if ( inPt.x === edgeLowPt.x ) return true; // inPt is on contour ? - // continue; // no intersection or edgeLowPt => doesn't count !!! - - } else { - - var perpEdge = edgeDy * ( inPt.x - edgeLowPt.x ) - edgeDx * ( inPt.y - edgeLowPt.y ); - if ( perpEdge === 0 ) return true; // inPt is on contour ? - if ( perpEdge < 0 ) continue; - inside = ! inside; // true intersection left of inPt - - } - - } else { - - // parallel or collinear - if ( inPt.y !== edgeLowPt.y ) continue; // parallel - // edge lies on the same horizontal line as inPt - if ( ( ( edgeHighPt.x <= inPt.x ) && ( inPt.x <= edgeLowPt.x ) ) || - ( ( edgeLowPt.x <= inPt.x ) && ( inPt.x <= edgeHighPt.x ) ) ) return true; // inPt: Point on contour ! - // continue; - - } - - } - - return inside; - - } - - var isClockWise = ShapeUtils.isClockWise; - - var subPaths = this.subPaths; - if ( subPaths.length === 0 ) return []; - - if ( noHoles === true ) return toShapesNoHoles( subPaths ); - - - var solid, tmpPath, tmpShape, shapes = []; - - if ( subPaths.length === 1 ) { - - tmpPath = subPaths[ 0 ]; - tmpShape = new Shape(); - tmpShape.curves = tmpPath.curves; - shapes.push( tmpShape ); - return shapes; - - } - - var holesFirst = ! isClockWise( subPaths[ 0 ].getPoints() ); - holesFirst = isCCW ? ! holesFirst : holesFirst; - - // console.log("Holes first", holesFirst); - - var betterShapeHoles = []; - var newShapes = []; - var newShapeHoles = []; - var mainIdx = 0; - var tmpPoints; - - newShapes[ mainIdx ] = undefined; - newShapeHoles[ mainIdx ] = []; - - for ( var i = 0, l = subPaths.length; i < l; i ++ ) { - - tmpPath = subPaths[ i ]; - tmpPoints = tmpPath.getPoints(); - solid = isClockWise( tmpPoints ); - solid = isCCW ? ! solid : solid; - - if ( solid ) { - - if ( ( ! holesFirst ) && ( newShapes[ mainIdx ] ) ) mainIdx ++; - - newShapes[ mainIdx ] = { s: new Shape(), p: tmpPoints }; - newShapes[ mainIdx ].s.curves = tmpPath.curves; - - if ( holesFirst ) mainIdx ++; - newShapeHoles[ mainIdx ] = []; - - //console.log('cw', i); - - } else { - - newShapeHoles[ mainIdx ].push( { h: tmpPath, p: tmpPoints[ 0 ] } ); - - //console.log('ccw', i); - - } - - } - - // only Holes? -> probably all Shapes with wrong orientation - if ( ! 
newShapes[ 0 ] ) return toShapesNoHoles( subPaths ); - - - if ( newShapes.length > 1 ) { - - var ambiguous = false; - var toChange = []; - - for ( var sIdx = 0, sLen = newShapes.length; sIdx < sLen; sIdx ++ ) { - - betterShapeHoles[ sIdx ] = []; - - } - - for ( var sIdx = 0, sLen = newShapes.length; sIdx < sLen; sIdx ++ ) { - - var sho = newShapeHoles[ sIdx ]; - - for ( var hIdx = 0; hIdx < sho.length; hIdx ++ ) { - - var ho = sho[ hIdx ]; - var hole_unassigned = true; - - for ( var s2Idx = 0; s2Idx < newShapes.length; s2Idx ++ ) { - - if ( isPointInsidePolygon( ho.p, newShapes[ s2Idx ].p ) ) { - - if ( sIdx !== s2Idx ) toChange.push( { froms: sIdx, tos: s2Idx, hole: hIdx } ); - if ( hole_unassigned ) { - - hole_unassigned = false; - betterShapeHoles[ s2Idx ].push( ho ); - - } else { - - ambiguous = true; - - } - - } - - } - if ( hole_unassigned ) { - - betterShapeHoles[ sIdx ].push( ho ); - - } - - } - - } - // console.log("ambiguous: ", ambiguous); - if ( toChange.length > 0 ) { - - // console.log("to change: ", toChange); - if ( ! ambiguous ) newShapeHoles = betterShapeHoles; - - } - - } - - var tmpHoles; - - for ( var i = 0, il = newShapes.length; i < il; i ++ ) { - - tmpShape = newShapes[ i ].s; - shapes.push( tmpShape ); - tmpHoles = newShapeHoles[ i ]; - - for ( var j = 0, jl = tmpHoles.length; j < jl; j ++ ) { - - tmpShape.holes.push( tmpHoles[ j ].h ); - - } - - } - - //console.log("shape", shapes); - - return shapes; - - } - -} ); - - -export { ShapePath }; diff --git a/spaces/bastiendechamps/geoguessr-bot/geoguessr_bot/retriever/abstract_embedder.py b/spaces/bastiendechamps/geoguessr-bot/geoguessr_bot/retriever/abstract_embedder.py deleted file mode 100644 index e075364aa904e17e946112a7240bccaa7e400077..0000000000000000000000000000000000000000 --- a/spaces/bastiendechamps/geoguessr-bot/geoguessr_bot/retriever/abstract_embedder.py +++ /dev/null @@ -1,29 +0,0 @@ -import os -from abc import abstractmethod - -from PIL import Image -import numpy as np -from tqdm import tqdm - - -class AbstractImageEmbedder: - def __init__(self, device: str = "cpu"): - self.device = device - - @abstractmethod - def embed(self, image: Image) -> np.ndarray: - """Embed an image - """ - raise NotImplementedError - - def embed_folder(self, folder_path: str, output_path: str) -> None: - """Embed all images in a folder and save them in a .npy file - """ - assert output_path.endswith(".npy"), "`output_path` must end with .npy" - embeddings = {} - for name in tqdm(os.listdir(folder_path)): - image_path = os.path.join(folder_path, name) - image = Image.open(image_path) - embedding = self.embed(image) - embeddings[name] = embedding - np.save(output_path, embeddings) diff --git a/spaces/bigjoker/stable-diffusion-webui/webui.bat b/spaces/bigjoker/stable-diffusion-webui/webui.bat deleted file mode 100644 index 5139b7eb020139c65fa6390a7078c761301229b0..0000000000000000000000000000000000000000 --- a/spaces/bigjoker/stable-diffusion-webui/webui.bat +++ /dev/null @@ -1,85 +0,0 @@ -@echo off - -if not defined PYTHON (set PYTHON=python) -if not defined VENV_DIR (set "VENV_DIR=%~dp0%venv") - - -set ERROR_REPORTING=FALSE - -mkdir tmp 2>NUL - -%PYTHON% -c "" >tmp/stdout.txt 2>tmp/stderr.txt -if %ERRORLEVEL% == 0 goto :check_pip -echo Couldn't launch python -goto :show_stdout_stderr - -:check_pip -%PYTHON% -mpip --help >tmp/stdout.txt 2>tmp/stderr.txt -if %ERRORLEVEL% == 0 goto :start_venv -if "%PIP_INSTALLER_LOCATION%" == "" goto :show_stdout_stderr -%PYTHON% "%PIP_INSTALLER_LOCATION%" >tmp/stdout.txt 2>tmp/stderr.txt -if 
%ERRORLEVEL% == 0 goto :start_venv -echo Couldn't install pip -goto :show_stdout_stderr - -:start_venv -if ["%VENV_DIR%"] == ["-"] goto :skip_venv -if ["%SKIP_VENV%"] == ["1"] goto :skip_venv - -dir "%VENV_DIR%\Scripts\Python.exe" >tmp/stdout.txt 2>tmp/stderr.txt -if %ERRORLEVEL% == 0 goto :activate_venv - -for /f "delims=" %%i in ('CALL %PYTHON% -c "import sys; print(sys.executable)"') do set PYTHON_FULLNAME="%%i" -echo Creating venv in directory %VENV_DIR% using python %PYTHON_FULLNAME% -%PYTHON_FULLNAME% -m venv "%VENV_DIR%" >tmp/stdout.txt 2>tmp/stderr.txt -if %ERRORLEVEL% == 0 goto :activate_venv -echo Unable to create venv in directory "%VENV_DIR%" -goto :show_stdout_stderr - -:activate_venv -set PYTHON="%VENV_DIR%\Scripts\Python.exe" -echo venv %PYTHON% - -:skip_venv -if [%ACCELERATE%] == ["True"] goto :accelerate -goto :launch - -:accelerate -echo Checking for accelerate -set ACCELERATE="%VENV_DIR%\Scripts\accelerate.exe" -if EXIST %ACCELERATE% goto :accelerate_launch - -:launch -%PYTHON% launch.py %* -pause -exit /b - -:accelerate_launch -echo Accelerating -%ACCELERATE% launch --num_cpu_threads_per_process=6 launch.py -pause -exit /b - -:show_stdout_stderr - -echo. -echo exit code: %errorlevel% - -for /f %%i in ("tmp\stdout.txt") do set size=%%~zi -if %size% equ 0 goto :show_stderr -echo. -echo stdout: -type tmp\stdout.txt - -:show_stderr -for /f %%i in ("tmp\stderr.txt") do set size=%%~zi -if %size% equ 0 goto :show_stderr -echo. -echo stderr: -type tmp\stderr.txt - -:endofscript - -echo. -echo Launch unsuccessful. Exiting. -pause diff --git a/spaces/bioriAsaeru/text-to-voice/CCleaner Pro 5.63 [2021] Crack Plus Serial Key Free Download 2019.md b/spaces/bioriAsaeru/text-to-voice/CCleaner Pro 5.63 [2021] Crack Plus Serial Key Free Download 2019.md deleted file mode 100644 index 2b44b8e09b90979b542cea358ddfe5ecba8a281d..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/CCleaner Pro 5.63 [2021] Crack Plus Serial Key Free Download 2019.md +++ /dev/null @@ -1,18 +0,0 @@ - -

    CCleaner Pro 5.63 Crack Plus Serial Key Free Download 2019

    -

    CCleaner Pro 5.63 Crack is a powerful and easy-to-use tool that cleans and optimizes your PC to ensure the best performance and security. It can remove unused and temporary files, cache and cookies, browsing history, and other junk that clogs up your operating system and slows down your computer. It can also fix registry errors, uninstall unwanted programs, manage startup items, and wipe free disk space to erase traces of deleted files.

    -

    CCleaner Pro 5.63 Crack Plus Serial Key Free Download 2019


    DOWNLOADhttps://urloso.com/2uyOFX



    -

    CCleaner Pro 5.63 Crack is the latest version of the popular CCleaner software, which has been updated with new features and improvements. CCleaner Pro 5.63 Crack offers a professional version of the software, which includes additional benefits such as real-time monitoring, automatic updates, premium support, and more. CCleaner Pro 5.63 Crack can help you boost your PC speed, protect your privacy, and recover disk space.

    -

    To activate CCleaner Pro 5.63 Crack, you need a valid serial key that can unlock all the premium features of the software. You can find many free CCleaner Pro keys online, but some of them may not work or may be expired. Here are some of the working CCleaner Pro keys that you can try:

    - -

    To use these keys, you need to download CCleaner Pro 5.63 Crack from a reliable source[^1^] [^2^] [^3^], install it on your PC, and enter one of the keys when prompted. You can also check for more keys online[^4^] [^5^] [^6^], but make sure they are valid and safe before using them.

    -

    CCleaner Pro 5.63 Crack is a great tool that can help you keep your PC clean and fast. However, you should always use it with caution and back up your important data before making any changes to your system. You should also avoid downloading cracked versions of software from unknown sources, as they may contain malware or viruses that can harm your PC.

    If you want to learn more about CCleaner Pro 5.63 Crack and how it works, you can visit the official website of the software, where you can find detailed information, tutorials, FAQs, and support. You can also download the free version of CCleaner from the website, which offers basic cleaning and optimization features. However, if you want to enjoy the full benefits of CCleaner Pro 5.63 Crack, you need to purchase a license key from the website or use one of the free keys provided above.

    -

    -

    CCleaner Pro 5.63 Crack is a useful and versatile tool that can help you improve your PC performance and security. By using CCleaner Pro 5.63 Crack regularly, you can keep your PC free of junk, errors, and threats, and make it run faster and smoother. CCleaner Pro 5.63 Crack is easy to use and compatible with Windows XP, Vista, 7, 8, 8.1, and 10. You can download CCleaner Pro 5.63 Crack today and give your PC a new life.

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Deadhunt English Patch.md b/spaces/bioriAsaeru/text-to-voice/Deadhunt English Patch.md deleted file mode 100644 index 632a4a74804cf10a8ac088e8eb44ffec90887de1..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Deadhunt English Patch.md +++ /dev/null @@ -1,9 +0,0 @@ -

    Deadhunt English Patch


    Download Zip ->>->>->> https://urloso.com/2uyPPY



    - -Deadhunt is an arcade first-person shooter (FPS), in which the best features of arcade and first-person shooters are combined with fresh ideas and new twists. The game is a mixture of first-person shooter, racing and multiplayer game genres. -The protagonist of the game is a hunter whose goal is to destroy all the monsters that are hiding in the forests, swamps, abandoned buildings and other places. -Deadhunt uses a cover system that allows you to quickly move from attack to cover and back again. -The weapons in the game have a lot of firepower (deals a lot of damage) and can be upgraded depending on the type (for example, the type of ammunition). 8a78ff9644
    -
    -
    -

    diff --git a/spaces/bioriAsaeru/text-to-voice/History Of Subcontinent From 712 To 1947 In Urdu Pdf The Cultural and Religious Diversity of the Region.md b/spaces/bioriAsaeru/text-to-voice/History Of Subcontinent From 712 To 1947 In Urdu Pdf The Cultural and Religious Diversity of the Region.md deleted file mode 100644 index 97f82c98dbce60d5f00dba88444d9b89e4e3cd4e..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/History Of Subcontinent From 712 To 1947 In Urdu Pdf The Cultural and Religious Diversity of the Region.md +++ /dev/null @@ -1,17 +0,0 @@ - -

    The history of the region preceding the country's independence in 1947[1] is shared with that of Afghanistan, India, and Iran. Spanning the western expanse of the Indian subcontinent and the eastern borderlands of the Iranian plateau, the region of present-day Pakistan served both as the fertile ground of a major civilization and as the gateway of South Asia to Central Asia and the Near East.[2][3]

    -

    The Kushan Empire expanded out of what is now Afghanistan into the northwest of the subcontinent under the leadership of their first emperor, Kujula Kadphises, about the middle of the 1st century CE. They were descended from an Indo-European, Central Asian people called the Yuezhi,[54][55] a branch of which was known as the Kushans. By the time of his grandson, Kanishka the Great, the empire spread to encompass much of Afghanistan[56] and the northern parts of the Indian subcontinent at least as far as Saketa and Sarnath near Varanasi (Benares).[57]

    -

    History Of Subcontinent From 712 To 1947 In Urdu Pdf


    Download File ››› https://urloso.com/2uyPMz



    -

    Download the CSS book Hindu Muslim Confrontation 712 to 1947 by Dr Sarfraz Ahmed Mirza for the CSS compulsory subject Pakistan Affairs. This booklet covers the period from 712 to the creation of Pakistan in 1947. Download this booklet for free from The CSS Point.

    -

    From the late 12th century onwards, Muslim empires dominated the subcontinent, most notably the Delhi Sultanate and the Mughal Empire.[2] Various other Muslim kingdoms ruled most of South Asia from the mid-14th to late 18th centuries, including the Bahmani, Bengal, Gujarat, Malwa, Mysore, Carnatic and Deccan Sultanates.[3][4] Though the Muslim dynasties in India were diverse in origin, they were linked together by Persianate culture and Islam.

    -

    The Mughal empire was the second major Islamic empire to assert dominance over most of the Indian subcontinent between 1526 and 1857. The empire was founded by the Turco-Mongol leader Babur in 1526, when he defeated Ibrahim Lodi, the last ruler of the Delhi Sultanate at the First Battle of Panipat. Babur, Humayun, Akbar, Jahangir, Shah Jahan, and Aurangzeb are known as the six great Mughal Emperors. Apart from the brief interruption by the Afghan Sur dynasty between 1540 and 1556, the Mughals continued to rule in one form or other till 1857.

    -

    Pakistan became a country on August 14th, 1947, forming the largest Muslim state in the world at that time. The creation of Pakistan was the catalyst for the largest demographic movement in recorded history. Nearly seventeen million people (Hindus, Muslims, and Sikhs) are reported to have moved in both directions between India and the two wings of Pakistan (the eastern wing is now Bangladesh). Sixty million of the ninety-five million Muslims on the Indian subcontinent became citizens of Pakistan at the time of its creation. Subsequently, thirty-five million Muslims remained inside India, making it the largest Muslim minority in a non-Muslim state.

    -

    After Ayub Khan, General Agha Muhammad Yahya Khan headed the second military regime from 1969 to 1971. By that time the country had been under military rule for thirteen of its twenty-five years of existence. This second military regime emphasized the extent to which the process of centralization under bureaucratic and military tutelage had fragmented Pakistani society and politics. The general elections of 1970, held on the basis of adult franchise, revealed for the first time in Pakistan's history how regionalism and social conflict had come to dominate politics despite the efforts at controlled development. The Awami League, led by Mujibur Rahman, campaigned on a six-point program of provincial autonomy, capturing all but one seat in East Pakistan and securing an absolute majority in the national assembly. In West Pakistan the Pakistan People's Party, led by Zulfiqar Ali Bhutto, had a populist platform that stole the thunder from the Islamic parties (the Muslim League, the oldest political party, captured no more than a few seats) and emerged as the largest single bloc. The prospect of an Awami League government was a threat to politicians in West Pakistan, who, in conspiracy with the military leadership, prevented Mujibur from taking the reins of power. This was the final straw for the east wing, which was already fed up with its under-representation in all sectors of the government, economic deprivation, and the suppression of the democratic process. All of these frustrations engendered an armed rebellion in East Pakistan, and the attempt to crush it drew Indian military intervention. Pakistan was now involved in its third war with India, thus clearing the way for the establishment of Bangladesh in 1971.

    -

    -

    History is one of the most interesting disciplines. We learn interesting facts from history. History tells about the origin. Urdu Point has many history books. History books help to understand the history and analyze the present. History books provide history definition as well. Instead of going for history Google, you can get history books at Urdu Point. Urdu Point books section has a specified section for history books. There are many history channels and history TV shows also. There is a list of history books. The list of history books contains many history books. Best history books about history of India, history of Pakistan, and history of sub-continent are also available. You can get the best history books of all time. You can also get the best world history books. If you are looking for best history books to read then go for Urdu Point. History books online, history books examples, and history books to read are available here. Search results about best history books, best ancient history books and Islamic history books are available. You can get the Islamic history books, Islamic history books in Urdu pdf free download, and history books in Urdu. If you are searching for the world history books, history books in Urdu and free pdf books are available. History books have many categories which include Islamic history books, Indian history books and Pakistan history books. People who are fond of reading books can read online books. Search results about online library books, online Urdu books, and online books are found. You can easily read online book here. People want to access digital library to read online books. History books in Urdu can also be searched. You can find the history books on Urdu Point. History books about Indian history timeline, brief history of India and free history books found here. If you want free online book download, free online novels, free books online pdf and free online books for kids then visit us. We provide you access to the free online novels, free books online pdf and free online history books. You can read full length online books here. Some people also search for the free online romance books and read entire books free. If you want to know who the first king of India was, how old India is, medieval Indian history and history of India pdf then read Indian history books. Searches about Pakistan history, Indian history, Islamic history, Indian history online and Indian history pdf also found. To get the history books visit Urdu Point. Tareekhi kitabain are available here. We provide you access to the Tareekhi kitabain. Get the history books, famous history books, Indian history books and online history books at Urdu Point. Come at Urdu Point and get an easy access to history books, history books in Urdu and Pakistan history books.

    -

    Muslim Rule in India 712-1857 - Free download as Powerpoint Presentation (.ppt), PDF File (.pdf), Text File (.txt).

    history of subcontinent from 712 to 1947 wikipedia


    Umayyad General Iraq Governor, Hijaj bin Yousaf Married his Daughter Zubaida Foundation of Islamic Rule in Subcontinent ... CSS Indo-Pak History Solved MCQs of Paper-II (1985 till Now).

    -

    In 1947, after 200 years of control, the British finally quit the Indian subcontinent. Before leaving, the colonizers drew a line in the sand that formed two new dominions: Muslim-majority Pakistan and Hindu-majority India. Some 15 million people migrated (the largest human migration in history) and one to two million perished in the communal violence that followed.

    -

    Secularism, as conceived in our subcontinent, is a matter of having different religious communities living together in tranquillity and harmony, whereas in Pakistan, especially west Pakistan, from where many minorities choose to move out to India, secularism takes on a different role of being a matter of tranquillity and harmony between different sects of Islam. And yet getting to that point is very hard when the sects are defined in different theological terms and each theology feels that its word is the true interpretation of the word of God.

    -

    From 1947 onwards, when migrants from India, known as the Mohājirs, came to Sindh, the repertoire that dominated the local religiosity was that of a vernacular Sufism, to which both Sindhi Muslims and Hindus of all faiths subscribed. Consequently, in the competition between Sindhis and Mohājirs for the domination of the city of Hyderabad, negotiations between different religious repertoires played a prominent role. My hypothesis is that the Mawlā jā Qadam represented a crucial stake in the showdown between the Sindhis and the Mohājirs, more than a vector for the integration of the Mohājirs in the urban landscape. In what follows, I will demonstrate that, although the onomastic change of the site unambiguously indicates its coming under the control of the Mohājirs, as explained below, the ritual practices show a resilience of the vernacular Sufi substratum of Sindh.

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/blaziant/ysda_nlp_ops/Dockerfile b/spaces/blaziant/ysda_nlp_ops/Dockerfile deleted file mode 100644 index 587c772a5722b45d5a3cada3294f1a8de98774b7..0000000000000000000000000000000000000000 --- a/spaces/blaziant/ysda_nlp_ops/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM python:3.9 - -WORKDIR /backend - -COPY ./requirements.txt /backend/requirements.txt - -RUN pip install --no-cache-dir --upgrade -r /backend/requirements.txt - -COPY ./app /backend/app -COPY ./templates /backend/templates - -RUN useradd -m -u 1000 user -USER user -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH - -WORKDIR $HOME/app - -COPY --chown=user . $HOME/app - -CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "7860"] \ No newline at end of file diff --git a/spaces/breadlicker45/Text-to-music-longer/utils.py b/spaces/breadlicker45/Text-to-music-longer/utils.py deleted file mode 100644 index d302528fd6fc9be8d782f78b6c44f4d894147d07..0000000000000000000000000000000000000000 --- a/spaces/breadlicker45/Text-to-music-longer/utils.py +++ /dev/null @@ -1,50 +0,0 @@ -import json -import numpy as np -import httpx - -from constants import MUBERT_TAGS, MUBERT_LICENSE, MUBERT_MODE, MUBERT_TOKEN - - -def get_mubert_tags_embeddings(w2v_model): - return w2v_model.encode(MUBERT_TAGS) - - -def get_pat(email: str): - r = httpx.post('https://api-b2b.mubert.com/v2/GetServiceAccess', - json={ - "method": "GetServiceAccess", - "params": { - "email": email, - "license": MUBERT_LICENSE, - "token": MUBERT_TOKEN, - "mode": MUBERT_MODE, - } - }) - - rdata = json.loads(r.text) - assert rdata['status'] == 1, "probably incorrect e-mail" - pat = rdata['data']['pat'] - return pat - - -def find_similar(em, embeddings, method='cosine'): - scores = [] - for ref in embeddings: - if method == 'cosine': - scores.append(1 - np.dot(ref, em) / (np.linalg.norm(ref) * np.linalg.norm(em))) - if method == 'norm': - scores.append(np.linalg.norm(ref - em)) - return np.array(scores), np.argsort(scores) - - -def get_tags_for_prompts(w2v_model, mubert_tags_embeddings, prompts, top_n=3, debug=False): - prompts_embeddings = w2v_model.encode(prompts) - ret = [] - for i, pe in enumerate(prompts_embeddings): - scores, idxs = find_similar(pe, mubert_tags_embeddings) - top_tags = MUBERT_TAGS[idxs[:top_n]] - top_prob = 1 - scores[idxs[:top_n]] - if debug: - print(f"Prompt: {prompts[i]}\nTags: {', '.join(top_tags)}\nScores: {top_prob}\n\n\n") - ret.append((prompts[i], list(top_tags))) - return ret diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/docs/tutorials/lazyconfigs.md b/spaces/brjathu/HMR2.0/vendor/detectron2/docs/tutorials/lazyconfigs.md deleted file mode 100644 index a01101ae40ec12d25d5a3d96892b60ef32dca21e..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/docs/tutorials/lazyconfigs.md +++ /dev/null @@ -1,170 +0,0 @@ -# Lazy Configs - -The traditional yacs-based config system provides basic, standard functionalities. -However, it does not offer enough flexibility for many new projects. -We develop an alternative, non-intrusive config system that can be used with -detectron2 or potentially any other complex projects. - -## Python Syntax - -Our config objects are still dictionaries. Instead of using Yaml to define dictionaries, -we create dictionaries in Python directly. This gives users the following power that -doesn't exist in Yaml: - -* Easily manipulate the dictionary (addition & deletion) using Python. 
-* Write simple arithmetics or call simple functions. -* Use more data types / objects. -* Import / compose other config files, using the familiar Python import syntax. - -A Python config file can be loaded like this: -```python -# config.py: -a = dict(x=1, y=2, z=dict(xx=1)) -b = dict(x=3, y=4) - -# my_code.py: -from detectron2.config import LazyConfig -cfg = LazyConfig.load("path/to/config.py") # an omegaconf dictionary -assert cfg.a.z.xx == 1 -``` - -After [LazyConfig.load](../modules/config.html#detectron2.config.LazyConfig.load), `cfg` will be a dictionary that contains all dictionaries -defined in the global scope of the config file. Note that: -* All dictionaries are turned to an [omegaconf](https://omegaconf.readthedocs.io/) - config object during loading. This enables access to omegaconf features, - such as its [access syntax](https://omegaconf.readthedocs.io/en/2.1_branch/usage.html#access-and-manipulation) - and [interpolation](https://omegaconf.readthedocs.io/en/2.1_branch/usage.html#variable-interpolation). -* Absolute imports in `config.py` works the same as in regular Python. -* Relative imports can only import dictionaries from config files. - They are simply a syntax sugar for [LazyConfig.load_rel](../modules/config.html#detectron2.config.LazyConfig.load_rel). - They can load Python files at relative path without requiring `__init__.py`. - -[LazyConfig.save](../modules/config.html#detectron2.config.LazyConfig.save) can save a config object to yaml. -Note that this is not always successful if non-serializable objects appear in the config file (e.g. lambdas). -It is up to users whether to sacrifice the ability to save in exchange for flexibility. - -## Recursive Instantiation - -The LazyConfig system heavily uses recursive instantiation, which is a pattern that -uses a dictionary to describe a -call to a function/class. The dictionary consists of: - -1. A "\_target\_" key which contains path to the callable, such as "module.submodule.class_name". -2. Other keys that represent arguments to pass to the callable. Arguments themselves can be defined - using recursive instantiation. - -We provide a helper function [LazyCall](../modules/config.html#detectron2.config.LazyCall) that helps create such dictionaries. -The following code using `LazyCall` -```python -from detectron2.config import LazyCall as L -from my_app import Trainer, Optimizer -cfg = L(Trainer)( - optimizer=L(Optimizer)( - lr=0.01, - algo="SGD" - ) -) -``` -creates a dictionary like this: -```python -cfg = { - "_target_": "my_app.Trainer", - "optimizer": { - "_target_": "my_app.Optimizer", - "lr": 0.01, "algo": "SGD" - } -} -``` - -By representing objects using such dictionaries, a general -[instantiate](../modules/config.html#detectron2.config.instantiate) -function can turn them into actual objects, i.e.: -```python -from detectron2.config import instantiate -trainer = instantiate(cfg) -# equivalent to: -# from my_app import Trainer, Optimizer -# trainer = Trainer(optimizer=Optimizer(lr=0.01, algo="SGD")) -``` - -This pattern is powerful enough to describe very complex objects, e.g.: - -
    - -A Full Mask R-CNN described in recursive instantiation (click to expand) - - -```eval_rst -.. literalinclude:: ../../configs/common/models/mask_rcnn_fpn.py - :language: python - :linenos: -``` - -
    - -There are also objects or logic that cannot be described simply by a dictionary, -such as reused objects or method calls. They may require some refactoring -to work with recursive instantiation. - -## Using Model Zoo LazyConfigs - -We provide some configs in the model zoo using the LazyConfig system, for example: - -* [common baselines](../../configs/common/). -* [new Mask R-CNN baselines](../../configs/new_baselines/) - -After installing detectron2, they can be loaded by the model zoo API -[model_zoo.get_config](../modules/model_zoo.html#detectron2.model_zoo.get_config). - -Using these as references, you're free to define custom config structure / fields for your own -project, as long as your training script can understand them. -Despite of this, our model zoo configs still follow some simple conventions for consistency, e.g. -`cfg.model` defines a model object, `cfg.dataloader.{train,test}` defines dataloader objects, -and `cfg.train` contains training options in key-value form. -In addition to `print()`, a better way to view the structure of a config is like this: -```python -from detectron2.model_zoo import get_config -from detectron2.config import LazyConfig -print(LazyConfig.to_py(get_config("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.py"))) -``` -From the output it's easier to find relevant options to change, e.g. -`dataloader.train.total_batch_size` for the batch size, or `optimizer.lr` for base learning rate. - -We provide a reference training script -[tools/lazyconfig_train_net.py](../../tools/lazyconfig_train_net.py), -that can train/eval our model zoo configs. -It also shows how to support command line value overrides. - -To demonstrate the power and flexibility of the new system, we show that -[a simple config file](../../configs/Misc/torchvision_imagenet_R_50.py) -can let detectron2 train an ImageNet classification model from torchvision, even though -detectron2 contains no features about ImageNet classification. -This can serve as a reference for using detectron2 in other deep learning tasks. - -## Summary - -By using recursive instantiation to create objects, -we avoid passing a giant config to many places, because `cfg` is only passed to `instantiate`. -This has the following benefits: - -* It's __non-intrusive__: objects to be constructed are config-agnostic, regular Python - functions/classes. - They can even live in other libraries. For example, - `{"_target_": "torch.nn.Conv2d", "in_channels": 10, "out_channels": 10, "kernel_size": 1}` - defines a conv layer. -* __Clarity__ of what function/classes will be called, and what arguments they use. -* `cfg` doesn't need pre-defined keys and structures. It's valid as long as it translates to valid - code. This gives a lot more __flexibility__. -* You can still pass huge dictionaries as arguments, just like the old way. - -Recursive instantiation and Python syntax are orthogonal: you can use one without the other. -But by putting them together, the config file looks a lot like the code that will be executed: - -![img](./lazyconfig.jpg) - -However, the config file just defines dictionaries, which can be easily manipulated further -by composition or overrides. -The corresponding code will only be executed -later when `instantiate` is called. In some way, -in config files we're writing "editable code" that will be "lazily executed" later when needed. -That's why we call this system "LazyConfig". 
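As a minimal sketch of that workflow (not part of the original doc; it assumes detectron2 is installed and, for the dataloader step, that the COCO datasets are registered as in the model zoo defaults), a config can be loaded, edited like a plain dictionary, and only then turned into objects:

```python
from detectron2 import model_zoo
from detectron2.config import instantiate

# Load a model zoo LazyConfig; the result is an omegaconf dictionary, not a model.
cfg = model_zoo.get_config("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.py")

# Edit it like any other dictionary -- nothing has been built at this point.
cfg.dataloader.train.total_batch_size = 8   # batch size
cfg.optimizer.lr = 0.005                    # base learning rate

# Objects are only constructed when instantiate() is called.
model = instantiate(cfg.model)
train_loader = instantiate(cfg.dataloader.train)  # requires the COCO dataset to be available
```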
diff --git a/spaces/cbr/swp/swapper.py b/spaces/cbr/swp/swapper.py deleted file mode 100644 index f7f359961e465004fed3311b8dee0bf51c56b649..0000000000000000000000000000000000000000 --- a/spaces/cbr/swp/swapper.py +++ /dev/null @@ -1,106 +0,0 @@ -import cv2 -import numpy as np -from insightface.utils import face_align -from face_parsing.swap import swap_regions -from utils import add_logo_to_image - -swap_options_list = [ - "All face", - "Age less than", - "Age greater than", - "All Male", - "All Female", - "Specific Face", -] - - -def swap_face(whole_img, target_face, source_face, models): - inswapper = models.get("swap") - face_enhancer = models.get("enhance", None) - face_parser = models.get("face_parser", None) - fe_enable = models.get("enhance_sett", False) - - bgr_fake, M = inswapper.get(whole_img, target_face, source_face, paste_back=False) - image_size = 128 if not fe_enable else 512 - aimg, _ = face_align.norm_crop2(whole_img, target_face.kps, image_size=image_size) - - if face_parser is not None: - fp_enable, includes, smooth_mask, blur_amount = models.get("face_parser_sett") - if fp_enable: - bgr_fake = swap_regions( - bgr_fake, aimg, face_parser, smooth_mask, includes=includes, blur=blur_amount - ) - - if fe_enable: - _, bgr_fake, _ = face_enhancer.enhance( - bgr_fake, paste_back=True, has_aligned=True - ) - bgr_fake = bgr_fake[0] - M /= 0.25 - - IM = cv2.invertAffineTransform(M) - - img_white = np.full((aimg.shape[0], aimg.shape[1]), 255, dtype=np.float32) - bgr_fake = cv2.warpAffine( - bgr_fake, IM, (whole_img.shape[1], whole_img.shape[0]), borderValue=0.0 - ) - img_white = cv2.warpAffine( - img_white, IM, (whole_img.shape[1], whole_img.shape[0]), borderValue=0.0 - ) - img_white[img_white > 20] = 255 - img_mask = img_white - mask_h_inds, mask_w_inds = np.where(img_mask == 255) - mask_h = np.max(mask_h_inds) - np.min(mask_h_inds) - mask_w = np.max(mask_w_inds) - np.min(mask_w_inds) - mask_size = int(np.sqrt(mask_h * mask_w)) - - k = max(mask_size // 10, 10) - img_mask = cv2.erode(img_mask, np.ones((k, k), np.uint8), iterations=1) - - k = max(mask_size // 20, 5) - kernel_size = (k, k) - blur_size = tuple(2 * i + 1 for i in kernel_size) - img_mask = cv2.GaussianBlur(img_mask, blur_size, 0) / 255 - - img_mask = np.reshape(img_mask, [img_mask.shape[0], img_mask.shape[1], 1]) - fake_merged = img_mask * bgr_fake + (1 - img_mask) * whole_img.astype(np.float32) - fake_merged = add_logo_to_image(fake_merged.astype("uint8")) - return fake_merged - - -def swap_face_with_condition( - whole_img, target_faces, source_face, condition, age, models -): - swapped = whole_img.copy() - - for target_face in target_faces: - if condition == "All face": - swapped = swap_face(swapped, target_face, source_face, models) - elif condition == "Age less than" and target_face["age"] < age: - swapped = swap_face(swapped, target_face, source_face, models) - elif condition == "Age greater than" and target_face["age"] > age: - swapped = swap_face(swapped, target_face, source_face, models) - elif condition == "All Male" and target_face["gender"] == 1: - swapped = swap_face(swapped, target_face, source_face, models) - elif condition == "All Female" and target_face["gender"] == 0: - swapped = swap_face(swapped, target_face, source_face, models) - - return swapped - - -def swap_specific(source_specifics, target_faces, whole_img, models, threshold=0.6): - swapped = whole_img.copy() - - for source_face, specific_face in source_specifics: - specific_embed = specific_face["embedding"] - specific_embed /= 
np.linalg.norm(specific_embed) - - for target_face in target_faces: - target_embed = target_face["embedding"] - target_embed /= np.linalg.norm(target_embed) - cosine_distance = 1 - np.dot(specific_embed, target_embed) - if cosine_distance > threshold: - continue - swapped = swap_face(swapped, target_face, source_face, models) - - return swapped diff --git a/spaces/ccolas/TastyPiano/src/cocktails/config.py b/spaces/ccolas/TastyPiano/src/cocktails/config.py deleted file mode 100644 index bce5b65a666caf9972ea64933a4a74eb4e2532c0..0000000000000000000000000000000000000000 --- a/spaces/ccolas/TastyPiano/src/cocktails/config.py +++ /dev/null @@ -1,21 +0,0 @@ -import os - -REPO_PATH = '/'.join(os.path.abspath(__file__).split('/')[:-3]) + '/' - -# QUADRUPLETS_PATH = REPO_PATH + 'checkpoints/cocktail_representation/quadruplets.pickle' -INGREDIENTS_LIST_PATH = REPO_PATH + 'checkpoints/cocktail_representation/ingredient_list.csv' -# ING_MATCH_SCORE_Q_PATH = REPO_PATH + 'checkpoints/cocktail_representation/ingredient_match_score_q.txt' -# ING_MATCH_SCORE_COUNT_PATH = REPO_PATH + 'checkpoints/cocktail_representation/ingredient_match_score_count.txt' -# COCKTAIL_DATA_FOLDER_PATH = REPO_PATH + 'checkpoints/cocktail_representation/' -COCKTAILS_CSV_DATA = REPO_PATH + 'checkpoints/cocktail_representation/cocktails_data.csv' -# COCKTAILS_PKL_DATA = REPO_PATH + 'checkpoints/cocktail_representation/cocktails_data.pkl' -# COCKTAILS_URL_DATA = REPO_PATH + 'checkpoints/cocktail_representation/cocktails_names_urls.pkl' -EXPERIMENT_PATH = REPO_PATH + 'experiments/cocktails/representation_learning/' -# ANALYSIS_PATH = REPO_PATH + 'experiments/cocktails/representation_analysis/' -# REPRESENTATIONS_PATH = REPO_PATH + 'experiments/cocktails/learned_representations/' - -FULL_COCKTAIL_REP_PATH = REPO_PATH + "/checkpoints/cocktail_representation/handcoded_reps/cocktail_handcoded_reps_minmax_norm-1_1_dim13_customkeys.txt" -RECIPE2FEATURES_PATH = REPO_PATH + "/checkpoints/cocktail_representation/" # get this by running run_without_vae -COCKTAIL_REP_CHKPT_PATH = REPO_PATH + "/checkpoints/cocktail_representation/handcoded_reps/" -# FULL_COCKTAIL_REP_PATH = REPO_PATH + "experiments/cocktails/representation_analysis/affective_mapping/clustered_representations/all_cocktail_reps_norm-1_1_custom_keys_dim13.txt' -COCKTAIL_NN_PATH = REPO_PATH + "/checkpoints/cocktail_representation/handcoded_reps/nn_model.pickle" \ No newline at end of file diff --git a/spaces/chaitanya9/emotion_recognizer/app.py b/spaces/chaitanya9/emotion_recognizer/app.py deleted file mode 100644 index cd1148960d8588fe8894245690e5c3ee1c671fea..0000000000000000000000000000000000000000 --- a/spaces/chaitanya9/emotion_recognizer/app.py +++ /dev/null @@ -1,19 +0,0 @@ -import gradio as gr -import pickle - -filename = "Our_Trained_knn_model.pickle" - -def predict(inp): - best_classifiers = pickle.load(open(filename, 'rb')) - emotion = best_classifiers.predict(inp) - return emotion - -if __name__ == "__main__": - audio = gr.inputs.Audio(source="upload", type="numpy", label=None, optional=False) - - #gr.Interface(fn=emotion_recognizer, inputs=audio, outputs="text", capture_session=True).launch() - - - iface = gr.Interface(fn=predict, inputs = "audio", outputs = "text") - iface.launch(share=True) - diff --git a/spaces/chansung/textual-inversion-pipeline/constants.py b/spaces/chansung/textual-inversion-pipeline/constants.py deleted file mode 100644 index e2662d9e3e5dadad9291e0741d4d7b88479a19b1..0000000000000000000000000000000000000000 --- 
a/spaces/chansung/textual-inversion-pipeline/constants.py +++ /dev/null @@ -1,135 +0,0 @@ -css = """ - .gradio-container { - font-family: 'IBM Plex Sans', sans-serif; - } - .gr-button { - color: white; - border-color: black; - background: black; - } - input[type='range'] { - accent-color: black; - } - .dark input[type='range'] { - accent-color: #dfdfdf; - } - .container { - max-width: 730px; - margin: auto; - padding-top: 1.5rem; - } - #gallery { - min-height: 22rem; - margin-bottom: 15px; - margin-left: auto; - margin-right: auto; - border-bottom-right-radius: .5rem !important; - border-bottom-left-radius: .5rem !important; - } - #gallery>div>.h-full { - min-height: 20rem; - } - .details:hover { - text-decoration: underline; - } - .gr-button { - white-space: nowrap; - } - .gr-button:focus { - border-color: rgb(147 197 253 / var(--tw-border-opacity)); - outline: none; - box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000); - --tw-border-opacity: 1; - --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color); - --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color); - --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity)); - --tw-ring-opacity: .5; - } - #advanced-btn { - font-size: .7rem !important; - line-height: 19px; - margin-top: 12px; - margin-bottom: 12px; - padding: 2px 8px; - border-radius: 14px !important; - } - #advanced-options { - display: none; - margin-bottom: 20px; - } - .footer { - margin-bottom: 45px; - margin-top: 35px; - text-align: center; - border-bottom: 1px solid #e5e5e5; - } - .footer>p { - font-size: .8rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; - } - .dark .footer { - border-color: #303030; - } - .dark .footer>p { - background: #0b0f19; - } - .acknowledgments h4{ - margin: 1.25em 0 .25em 0; - font-weight: bold; - font-size: 115%; - } - .animate-spin { - animation: spin 1s linear infinite; - } - @keyframes spin { - from { - transform: rotate(0deg); - } - to { - transform: rotate(360deg); - } - } - #share-btn-container { - display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem; - margin-top: 10px; - margin-left: auto; - } - #share-btn { - all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important;right:0; - } - #share-btn * { - all: unset; - } - #share-btn-container div:nth-child(-n+2){ - width: auto !important; - min-height: 0px !important; - } - #share-btn-container .wrap { - display: none !important; - } - - .gr-form{ - flex: 1 1 50%; border-top-right-radius: 0; border-bottom-right-radius: 0; - } - #prompt-container{ - gap: 0; - } - #prompt-text-input, #negative-prompt-text-input{padding: .45rem 0.625rem} - #component-16{border-top-width: 1px!important;margin-top: 1em} - .image_duplication{position: absolute; width: 100px; left: 50px} -""" - - -examples = [ - ["Yoda", "low quality", 40], - ["A red pokemon with green eyes", 40], - ["cute Sundar Pihcai creature", 40], - ["Hello kitty", 40], -] - -num_images_to_gen = 3 - -img_height = img_width = 512 \ No newline at end of file diff --git a/spaces/chendl/compositional_test/multimodal/YOLOX/exps/default/yolox_l.py 
b/spaces/chendl/compositional_test/multimodal/YOLOX/exps/default/yolox_l.py deleted file mode 100644 index 50833ca38c51fe9ac5e327d7c1c0561fb62249aa..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/multimodal/YOLOX/exps/default/yolox_l.py +++ /dev/null @@ -1,15 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding:utf-8 -*- -# Copyright (c) Megvii, Inc. and its affiliates. - -import os - -from yolox.exp import Exp as MyExp - - -class Exp(MyExp): - def __init__(self): - super(Exp, self).__init__() - self.depth = 1.0 - self.width = 1.0 - self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0] diff --git a/spaces/chongjie/co-tracker_MVP/app.py b/spaces/chongjie/co-tracker_MVP/app.py deleted file mode 100644 index fa1fc4e5283eddcd4cf9826cc0dc3fa0305bd3f3..0000000000000000000000000000000000000000 --- a/spaces/chongjie/co-tracker_MVP/app.py +++ /dev/null @@ -1,48 +0,0 @@ -import gradio as gr -import os -import torch -import numpy as np - -from PIL import Image -from cotracker.utils.visualizer import Visualizer, read_video_from_path -from cotracker.predictor import CoTrackerPredictor - -checkpoint='./checkpoints/cotracker_stride_4_wind_8.pth' -def cotracker(video_path: str, grid_size: int, grid_query_frame: int, backward_tracking: bool): - # load the input video frame by frame - video = read_video_from_path(video_path) - video = torch.from_numpy(video).permute(0, 3, 1, 2)[None].float() - model = CoTrackerPredictor(checkpoint=checkpoint) - if torch.cuda.is_available(): - model = model.cuda() - video = video.cuda() - else: - print("CUDA is not available!") - - pred_tracks, pred_visibility = model( - video, - grid_size=grid_size, - grid_query_frame=grid_query_frame, - backward_tracking=backward_tracking, - ) - print("computed") - - # save a video with predicted tracks - seq_name = video_path.split("/")[-1] - vis = Visualizer(save_dir="./saved_videos", pad_value=120, linewidth=3) - vis.visualize(video, pred_tracks, query_frame=grid_query_frame) - - return "./saved_videos/video_pred_track.mp4" - -iface = gr.Interface( - fn=cotracker, - inputs=[ - gr.inputs.Video(label='video', type='mp4'), - gr.inputs.Slider(minimum=0, maximum=20, step=1, default=10, label="Grid Size"), - gr.inputs.Slider(minimum=0, maximum=10, step=1, default=0, label="Grid Query Frame"), - gr.inputs.Checkbox(label="Backward Tracking"), - ], - outputs=gr.outputs.Video(label="Output") -) -iface.queue() -iface.launch() \ No newline at end of file diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/faiss/setup.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/faiss/setup.py deleted file mode 100644 index 6ea944e1887758f91965b080f2d7a8eb9a1cf915..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/faiss/setup.py +++ /dev/null @@ -1,71 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from __future__ import print_function -from setuptools import setup, find_packages -import os -import shutil -import platform - -# make the faiss python package dir -shutil.rmtree("faiss", ignore_errors=True) -os.mkdir("faiss") -shutil.copytree("contrib", "faiss/contrib") -shutil.copyfile("__init__.py", "faiss/__init__.py") -shutil.copyfile("loader.py", "faiss/loader.py") -shutil.copyfile("class_wrappers.py", "faiss/class_wrappers.py") -shutil.copyfile("gpu_wrappers.py", "faiss/gpu_wrappers.py") -shutil.copyfile("extra_wrappers.py", "faiss/extra_wrappers.py") -shutil.copyfile("array_conversions.py", "faiss/array_conversions.py") - -ext = ".pyd" if platform.system() == 'Windows' else ".so" -prefix = "Release/" * (platform.system() == 'Windows') - -swigfaiss_generic_lib = f"{prefix}_swigfaiss{ext}" -swigfaiss_avx2_lib = f"{prefix}_swigfaiss_avx2{ext}" - -found_swigfaiss_generic = os.path.exists(swigfaiss_generic_lib) -found_swigfaiss_avx2 = os.path.exists(swigfaiss_avx2_lib) - -assert (found_swigfaiss_generic or found_swigfaiss_avx2), \ - f"Could not find {swigfaiss_generic_lib} or " \ - f"{swigfaiss_avx2_lib}. Faiss may not be compiled yet." - -if found_swigfaiss_generic: - print(f"Copying {swigfaiss_generic_lib}") - shutil.copyfile("swigfaiss.py", "faiss/swigfaiss.py") - shutil.copyfile(swigfaiss_generic_lib, f"faiss/_swigfaiss{ext}") - -if found_swigfaiss_avx2: - print(f"Copying {swigfaiss_avx2_lib}") - shutil.copyfile("swigfaiss_avx2.py", "faiss/swigfaiss_avx2.py") - shutil.copyfile(swigfaiss_avx2_lib, f"faiss/_swigfaiss_avx2{ext}") - -long_description=""" -Faiss is a library for efficient similarity search and clustering of dense -vectors. It contains algorithms that search in sets of vectors of any size, - up to ones that possibly do not fit in RAM. It also contains supporting -code for evaluation and parameter tuning. Faiss is written in C++ with -complete wrappers for Python/numpy. Some of the most useful algorithms -are implemented on the GPU. It is developed by Facebook AI Research. 
-""" -setup( - name='faiss', - version='1.7.4', - description='A library for efficient similarity search and clustering of dense vectors', - long_description=long_description, - url='https://github.com/facebookresearch/faiss', - author='Matthijs Douze, Jeff Johnson, Herve Jegou, Lucas Hosseini', - author_email='matthijs@fb.com', - license='MIT', - keywords='search nearest neighbors', - - install_requires=['numpy'], - packages=['faiss', 'faiss.contrib'], - package_data={ - 'faiss': ['*.so', '*.pyd'], - }, - zip_safe=False, -) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I_B_.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I_B_.py deleted file mode 100644 index 8a6c14c444595508c35bdc6ebace60b4bbbbdaba..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I_B_.py +++ /dev/null @@ -1,5 +0,0 @@ -from .T_S_I_V_ import table_T_S_I_V_ - - -class table_T_S_I_B_(table_T_S_I_V_): - pass diff --git a/spaces/cihyFjudo/fairness-paper-search/American Conquest Divided Nation Patch Windows 10 Download and Install the Latest Version Here.md b/spaces/cihyFjudo/fairness-paper-search/American Conquest Divided Nation Patch Windows 10 Download and Install the Latest Version Here.md deleted file mode 100644 index 1d8c6942a48cfc90938f340e35dc93ca6357d789..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/American Conquest Divided Nation Patch Windows 10 Download and Install the Latest Version Here.md +++ /dev/null @@ -1,6 +0,0 @@ -
    -

American Conquest: Fight Back is a stand-alone expansion pack for American Conquest. It features five new nations: Germany, Russia, Haida, Portugal and the Netherlands, as well as 50 new units. In addition to new campaigns featuring the Mayas, the Germans, the Haida and the Russians, a new 'battlefield' game mode is available. The German campaign briefly chronicles the expedition of Ambrosius Ehinger and Georg Hohermuth, whereas the Russian campaign concerns the Alaskan expedition under Alexander Baranov. The new Haida campaign retells that Russian expedition from the Haida point of view, and the Mayas campaign covers the Spanish conquest of Yucatán.

    -

    American Conquest Divided Nation Patch Windows 10


Download Zip --->>> https://tinurli.com/2uwjDW



    -

A total conversion mod for the game, European Warfare: Napoleonica, was released in 2006, with patches and further versions following until 2009; it transferred the player back to war-torn 19th-century Europe during the Napoleonic Wars. The project was undertaken by Gexozoid (helped by the Hawks group and other associates) in 2007 and maintained a fairly active community on GameRanger and forums until 2015. The Hawks Group recreated a vast database of historical battles that can be played in multiplayer by up to 7 players at the same time, sharing armies or fighting in co-op, and the mod can still be downloaded from their original website or on ModDB. It features over 200 new units and around 20 new buildings, ranging from a faction's barracks to fortifications in the form of manned cannon towers and breastworks, much like in Cossacks. Its 12 fully playable nations are France, England, Poland, Austria, Prussia, Russia, Spain, Italy, the Ottoman Empire, Confederacy of Rhine, Sweden and the USA.

    -
    -
    \ No newline at end of file diff --git a/spaces/cllatMTK/TransformerAnalyzer/calc_util.py b/spaces/cllatMTK/TransformerAnalyzer/calc_util.py deleted file mode 100644 index 7dcbf6f19b864037aadcb43daadbfec305e39e38..0000000000000000000000000000000000000000 --- a/spaces/cllatMTK/TransformerAnalyzer/calc_util.py +++ /dev/null @@ -1,420 +0,0 @@ -import numpy as np -from collections import defaultdict -from functools import partial -from typing import List -from model_util import get_module_tensors_matched - -def calc_model_size_from_model(model_config, inference_config): - get_module_tensors_matched_partial = partial(get_module_tensors_matched, module_classes_dict = model_config['module_classes']) - - parameter_count = defaultdict(float) - parameter_count['word_embedding'] = sum([v.numel() for v in get_module_tensors_matched_partial(lambda x: 'embed' in x and 'pos' not in x)]) - parameter_count['positional_embedding'] = sum([v.numel() for v in get_module_tensors_matched_partial(lambda x: 'embed' in x and 'pos' in x)]) - - parameter_count['attention_Q'] = sum([v.numel() for v in get_module_tensors_matched_partial(lambda x: 'att' in x and 'q' in x)]) - parameter_count['attention_K'] = sum([v.numel() for v in get_module_tensors_matched_partial(lambda x: 'att' in x and 'k' in x)]) - parameter_count['attention_V'] = sum([v.numel() for v in get_module_tensors_matched_partial(lambda x: 'att' in x and 'v' in x)]) - parameter_count['attention_out'] = sum([v.numel() for v in get_module_tensors_matched_partial(lambda x: 'att' in x and ('out_' in x or 'o_' in x))]) - - parameter_count['layernorm'] = sum([v.numel() for v in get_module_tensors_matched_partial(lambda x: 'norm' in x)]) - parameter_count['mlp_weights'] = sum([v.numel() for v in get_module_tensors_matched_partial(lambda x: 'fc' in x or 'mlp' in x)]) - - parameter_count['embedding_weights'] = parameter_count['word_embedding'] + parameter_count['positional_embedding'] - parameter_count['attention_weights'] = parameter_count['attention_out'] + parameter_count['attention_Q'] + parameter_count['attention_K'] + parameter_count['attention_V'] - - return parameter_count - -def model_size_estimate(model_config, inference_config): - parameter_count = {} - parameter_count['word_embedding'] = model_config['vocab_size']*model_config['hidden_size'] - parameter_count['positional_embedding'] = model_config['max_position_embeddings']*model_config['hidden_size'] - - parameter_count['attention_Q'] = model_config['num_hidden_layers']*model_config['hidden_size']*model_config['hidden_size']/model_config['num_attention_heads']*model_config['num_attention_heads'] - parameter_count['attention_K'] = model_config['num_hidden_layers']*model_config['hidden_size']*model_config['hidden_size']/model_config['num_attention_heads']*model_config['num_attention_heads'] - parameter_count['attention_V'] = model_config['num_hidden_layers']*model_config['hidden_size']*model_config['hidden_size']/model_config['num_attention_heads']*model_config['num_attention_heads'] - parameter_count['attention_out'] = model_config['num_hidden_layers']*model_config['hidden_size']*model_config['hidden_size']/model_config['num_attention_heads']*model_config['num_attention_heads'] - - parameter_count['layernorm'] = 2*model_config['layernorm_operation']*model_config['num_hidden_layers']*model_config['hidden_size'] - parameter_count['mlp1'] = model_config['num_hidden_layers']*model_config['hidden_size']*model_config['intermediate_size'] - parameter_count['mlp2'] = 
model_config['num_hidden_layers']*model_config['hidden_size']*model_config['intermediate_size'] - parameter_count['embedding_weights'] = parameter_count['word_embedding'] + parameter_count['positional_embedding'] - parameter_count['attention_weights'] = parameter_count['attention_out'] + parameter_count['attention_Q'] + parameter_count['attention_K'] + parameter_count['attention_V'] - parameter_count['mlp_weights'] = parameter_count['mlp1'] + parameter_count['mlp2'] - - return parameter_count - -def multiplication_in_int64(array): - return np.cumprod(np.array(array, dtype=np.int64))[-1] - -def matrix_operation(shapeA, shapeB): - assert(shapeA[-1] == shapeB[0]) - op = np.cumprod(np.array(shapeA[:-1], np.float64)) - return multiplication_in_int64([2, op[-1], shapeA[-1], shapeB[-1]]) - -def word_embedding_operation(model_config, inference_config): - #Given: - #\begin{itemize} - # \item Matrix \( X \) of size \( B \times s \) (representing the batch size and sequence length respectively). - # \item Embedding matrix \( W_e \) of size \( n_{vocab} \times d_{model} \). - #\end{itemize} - - #The resultant matrix after the multiplication will be of size \( B \times s \times d_{model} \). - #For each element in this resultant matrix, the number of FLOPs required is \( 2 \times n_{vocab} \). This is because for a single element in the output matrix, we have \( 2N \) FLOPs (with \( N \) being the common dimension), leading to the matrix multiplication FLOP count as: - #\begin{equation} - #2 \times B \times s \times n_{v ocab} \times d_{model} - #\end{equation} - if model_config['module_classes']: - modules = get_module_tensors_matched(lambda x: 'embed' in x and 'pos' not in x, model_config['module_classes']) - if len(modules) > 0: - A = [inference_config['batchsize'], inference_config['input_seq_length'], modules[0][0]] - B = modules[0] - op_count = matrix_operation(A, B) - return op_count - - A = [inference_config['batchsize'], inference_config['input_seq_length'], model_config['vocab_size']] - B = [model_config['vocab_size'], model_config['hidden_size']] - op_count = matrix_operation(A, B) - return op_count - - -def positional_embedding_operation(model_config, inference_config): - if model_config['module_classes']: - modules = get_module_tensors_matched(lambda x: 'embed' in x and 'pos' in x, model_config['module_classes']) - if len(modules) > 0: - return multiplication_in_int64([inference_config['batchsize'], inference_config['input_seq_length'], modules[0][-1]]) - - return multiplication_in_int64([inference_config['batchsize'], inference_config['input_seq_length'], model_config['hidden_size']]) - -### Below three are the same -def attention_K_operation(model_config, inference_config, seq_length): - if model_config['module_classes']: - modules = get_module_tensors_matched(lambda x: 'att' in x and 'k' in x , model_config['module_classes']) - if len(modules) > 0: - total = 0 - for module in modules: - if len(module) > 1: - A = [inference_config['batchsize'], seq_length, model_config['hidden_size']] - B = [model_config['hidden_size'], model_config['hidden_size_per_head']] - total += model_config['num_attention_heads']*matrix_operation(A, B) - else: - total += model_config['hidden_size'] - return total - - A = [inference_config['batchsize'], seq_length, model_config['hidden_size']] - B = [model_config['hidden_size'], model_config['hidden_size_per_head']] - return model_config['num_hidden_layers'] * model_config['num_attention_heads'] * matrix_operation(A, B) - -def attention_Q_operation(model_config, 
inference_config, seq_length): - if model_config['module_classes']: - modules = get_module_tensors_matched(lambda x: 'att' in x and 'q' in x , model_config['module_classes']) - if len(modules) > 0: - total = 0 - for module in modules: - if len(module) > 1: - A = [inference_config['batchsize'], seq_length, model_config['hidden_size']] - B = [model_config['hidden_size'], model_config['hidden_size_per_head']] - total += model_config['num_attention_heads']*matrix_operation(A, B) - else: - total += model_config['hidden_size'] - return total - - A = [inference_config['batchsize'], seq_length, model_config['hidden_size']] - B = [model_config['hidden_size'], model_config['hidden_size_per_head']] - return model_config['num_hidden_layers'] * model_config['num_attention_heads'] * matrix_operation(A, B) - -def attention_V_operation(model_config, inference_config, seq_length): - if model_config['module_classes']: - modules = get_module_tensors_matched(lambda x: 'att' in x and 'v' in x , model_config['module_classes']) - if len(modules) > 0: - total = 0 - for module in modules: - if len(module) > 1: - A = [inference_config['batchsize'], seq_length, model_config['hidden_size']] - B = [model_config['hidden_size'], model_config['hidden_size_per_head']] - total += model_config['num_attention_heads']*matrix_operation(A, B) - else: - total += model_config['hidden_size'] - return total - - A = [inference_config['batchsize'], seq_length, model_config['hidden_size']] - B = [model_config['hidden_size'], model_config['hidden_size_per_head']] - return model_config['num_hidden_layers'] * model_config['num_attention_heads'] * matrix_operation(A, B) - -## -def attention_QK_operation(model_config, inference_config, seq_length_Q, seq_length_K): - A = [inference_config['batchsize'], seq_length_Q, model_config['hidden_size_per_head']] - B = [model_config['hidden_size_per_head'], seq_length_K] - return model_config['num_hidden_layers'] * model_config['num_attention_heads'] * matrix_operation(A, B) - -def attention_softmax_operation(model_config, inference_config,seq_length): - # Ref: Ouyang, A. (2023). Understanding the Performance of Transformer Inference (Doctoral dissertation, Massachusetts Institute of Technology). 
- # 3 is a modeled value - softmax_operation = (3*inference_config['batchsize']*seq_length*seq_length) - return model_config['num_hidden_layers'] * model_config['num_attention_heads'] * softmax_operation - -def attention_multV_operation(model_config, inference_config, seq_length_Q, seq_length_V): - A = [inference_config['batchsize'], seq_length_Q, seq_length_V] - B = [seq_length_V, model_config['hidden_size_per_head']] - return model_config['num_hidden_layers'] * model_config['num_attention_heads']* matrix_operation(A, B) - -def attention_out_operation(model_config, inference_config, seq_length): - if model_config['module_classes']: - modules = get_module_tensors_matched(lambda x: 'att' in x and 'k' in x , model_config['module_classes']) - if len(modules) > 0: - total = 0 - for module in modules: - if len(module) > 1: - A = [inference_config['batchsize'], seq_length, model_config['hidden_size']] - B = [model_config['hidden_size'], model_config['hidden_size']] - total += matrix_operation(A, B) - else: - total += model_config['hidden_size'] - return total - - A = [inference_config['batchsize'], seq_length, model_config['hidden_size']] - B = [model_config['hidden_size'], model_config['hidden_size']] - return model_config['num_hidden_layers'] * matrix_operation(A, B) - -def layernorm_operation(model_config, inference_config, seq_length): - # Ref: Ouyang, A. (2023). Understanding the Performance of Transformer Inference (Doctoral dissertation, Massachusetts Institute of Technology). - # 5 is a modeled value - if model_config['module_classes']: - modules = get_module_tensors_matched(lambda x: 'norm' in x, model_config['module_classes']) - if len(modules) > 0: - total = 0 - for module in modules: - total += model_config['hidden_size'] - return 5*total - - layernorm_operation = (5*inference_config['batchsize']*seq_length*model_config['hidden_size']) - return model_config['num_hidden_layers'] * model_config['layernorm_operation'] * layernorm_operation - - -def mlp_operation(model_config, inference_config, seq_length): - if model_config['module_classes']: - modules = get_module_tensors_matched(lambda x: 'fc' in x or 'mlp' in x, model_config['module_classes']) - if len(modules) > 0: - total = 0 - for module in modules: - if len(module) > 1: - A = [inference_config['batchsize'], seq_length, module[1]] - B = [module[1], module[0]] - total += matrix_operation(A, B) - else: - total += modules[-1][0] - return total - - A = [inference_config['batchsize'], seq_length, model_config['hidden_size']] - B = [model_config['hidden_size'], model_config['intermediate_size']] - return model_config['num_hidden_layers'] * (2*matrix_operation(A, B)) - - -def prefilling_operation(model_config, inference_config): - prefilling_operation_count = {} - prefilling_operation_count['word_embedding'] = word_embedding_operation(model_config, inference_config) - prefilling_operation_count['positional_embedding'] = positional_embedding_operation(model_config, inference_config) - - prefilling_operation_count['attention_Q'] = attention_Q_operation(model_config, inference_config, inference_config['input_seq_length']) - prefilling_operation_count['attention_K'] = attention_K_operation(model_config, inference_config, inference_config['input_seq_length']) - prefilling_operation_count['attention_V'] = attention_V_operation(model_config, inference_config, inference_config['input_seq_length']) - prefilling_operation_count['attention_QK'] = attention_QK_operation(model_config, inference_config, inference_config['input_seq_length'], 
inference_config['input_seq_length']) - prefilling_operation_count['attention_softmax'] = attention_softmax_operation(model_config, inference_config, inference_config['input_seq_length']) - prefilling_operation_count['attention_multV'] = attention_multV_operation(model_config, inference_config, inference_config['input_seq_length'], inference_config['input_seq_length']) - prefilling_operation_count['attention_out'] = attention_out_operation(model_config, inference_config, inference_config['input_seq_length']) - - prefilling_operation_count['layernorm'] =layernorm_operation(model_config, inference_config, inference_config['input_seq_length']) - - prefilling_operation_count['mlp'] = mlp_operation(model_config, inference_config, inference_config['input_seq_length']) - - prefilling_operation_count['embeddings'] = prefilling_operation_count['word_embedding'] + prefilling_operation_count['positional_embedding'] - prefilling_operation_count['attention'] = sum([v for k,v in prefilling_operation_count.items() if 'attention' in k]) - prefilling_operation_count['total'] = (prefilling_operation_count['embeddings'] + prefilling_operation_count['attention'] + prefilling_operation_count['mlp'] + prefilling_operation_count['layernorm']) - - return prefilling_operation_count - -def generation_operation(model_config, inference_config): - generation_operation_count = {} - generation_operation_count['word_embedding'] = 0 - generation_operation_count['positional_embedding'] = 0 - generation_operation_count['attention_K'] = 0 - generation_operation_count['attention_V'] = 0 - generation_operation_count['attention_Q'] = 0 - generation_operation_count['attention_QK'] = 0 - generation_operation_count['attention_softmax'] = 0 - generation_operation_count['attention_multV'] = 0 - generation_operation_count['attention_out'] = 0 - generation_operation_count['mlp'] = 0 - generation_operation_count['layernorm'] = 0 - - for t in range(inference_config['output_seq_length']): - if inference_config['KV_cache']: - generation_operation_count['attention_K'] += attention_K_operation(model_config, inference_config, 1) - generation_operation_count['attention_V'] += attention_V_operation(model_config, inference_config, 1) - generation_operation_count['attention_Q'] += attention_Q_operation(model_config, inference_config, 1) - generation_operation_count['attention_QK'] += attention_QK_operation(model_config, inference_config, seq_length_Q=1, seq_length_K=(t+1)+inference_config['input_seq_length']) - generation_operation_count['attention_softmax'] += attention_softmax_operation(model_config, inference_config, 1) - generation_operation_count['attention_multV'] += attention_multV_operation(model_config, inference_config, seq_length_Q=1, seq_length_V=(t+1)+inference_config['input_seq_length']) - generation_operation_count['attention_out'] += attention_out_operation(model_config, inference_config, 1) - generation_operation_count['mlp'] += mlp_operation(model_config, inference_config, 1) - else: - generation_operation_count['attention_K'] += attention_K_operation(model_config, inference_config, (t+1)+inference_config['input_seq_length']) - generation_operation_count['attention_V'] += attention_V_operation(model_config, inference_config, (t+1)+inference_config['input_seq_length']) - generation_operation_count['attention_Q'] += attention_Q_operation(model_config, inference_config, (t+1)+inference_config['input_seq_length']) - generation_operation_count['attention_QK'] += attention_QK_operation(model_config, inference_config, 
seq_length_Q=(t+1)+inference_config['input_seq_length'], seq_length_K=(t+1)+inference_config['input_seq_length']) - generation_operation_count['attention_softmax'] += attention_softmax_operation(model_config, inference_config, (t+1)+inference_config['input_seq_length']) - generation_operation_count['attention_multV'] += attention_multV_operation(model_config, inference_config, seq_length_Q=(t+1)+inference_config['input_seq_length'], seq_length_V=(t+1)+inference_config['input_seq_length']) - generation_operation_count['attention_out'] += attention_out_operation(model_config, inference_config, (t+1)+inference_config['input_seq_length']) - generation_operation_count['mlp'] += mlp_operation(model_config, inference_config, (t+1)+inference_config['input_seq_length']) - - generation_operation_count['layernorm'] += layernorm_operation(model_config, inference_config, (t+1)+inference_config['input_seq_length']) - - generation_operation_count['embeddings'] = generation_operation_count['word_embedding'] + generation_operation_count['positional_embedding'] - generation_operation_count['attention'] = sum([v for k,v in generation_operation_count.items() if 'attention' in k]) - generation_operation_count['total'] = (generation_operation_count['attention'] + generation_operation_count['mlp'] + generation_operation_count['layernorm']) - - return generation_operation_count - - -def word_embedding_activation_memory(model_config, inference_config, seq_length): - return inference_config['batchsize'] * seq_length * (model_config['vocab_size'] + model_config['hidden_size']) - -def positional_embedding_activation_memory(model_config, inference_config, seq_length): - return 2 * inference_config['batchsize'] * seq_length * model_config['hidden_size'] - -def attention_K_activation_memory(model_config, inference_config, seq_length): - per_head_per_layer = inference_config['batchsize'] * seq_length * (model_config['hidden_size'] + model_config['hidden_size_per_head']) - return model_config['num_hidden_layers'] * model_config['num_attention_heads'] * per_head_per_layer - -def attention_V_activation_memory(model_config, inference_config, seq_length): - per_head_per_layer = inference_config['batchsize'] * seq_length * (model_config['hidden_size'] + model_config['hidden_size_per_head']) - return model_config['num_hidden_layers'] * model_config['num_attention_heads'] * per_head_per_layer - -def attention_Q_activation_memory(model_config, inference_config, seq_length): - per_head_per_layer = inference_config['batchsize'] * seq_length * (model_config['hidden_size'] + model_config['hidden_size_per_head']) - return model_config['num_hidden_layers'] * model_config['num_attention_heads'] * per_head_per_layer - -def attention_QK_activation_memory(model_config, inference_config, seq_length_Q, seq_length_K): - inputs_Q = inference_config['batchsize'] * seq_length_Q * model_config['hidden_size_per_head'] - inputs_K = inference_config['batchsize'] * seq_length_K * model_config['hidden_size_per_head'] - outputs = inference_config['batchsize'] * seq_length_Q * seq_length_K - return model_config['num_hidden_layers'] * model_config['num_attention_heads'] * (inputs_Q + inputs_K + outputs) - -def attention_softmax_activation_memory(model_config, inference_config, seq_length): - per_head_per_layer = (2 * inference_config['batchsize'] * seq_length * seq_length) - return model_config['num_hidden_layers'] * model_config['num_attention_heads'] * per_head_per_layer - -def attention_multV_activation_memory(model_config, inference_config, 
seq_length_Q, seq_length_V): - per_head_per_layer = inference_config['batchsize'] * seq_length_Q * seq_length_V + inference_config['batchsize'] * seq_length_Q * model_config['hidden_size_per_head'] + inference_config['batchsize'] * seq_length_V * model_config['hidden_size_per_head'] - return model_config['num_hidden_layers'] * model_config['num_attention_heads'] * per_head_per_layer - -def attention_out_activation_memory(model_config, inference_config, seq_length): - per_head_per_layer = 2 * inference_config['batchsize'] * seq_length * model_config['hidden_size'] - return model_config['num_hidden_layers'] * model_config['num_attention_heads'] * per_head_per_layer - -def layernorm_activation_memory(model_config, inference_config, seq_length): - per_layernorm_per_layer = 2 * inference_config['batchsize'] * seq_length * model_config['hidden_size'] - return model_config['num_hidden_layers'] * model_config['layernorm_operation'] * per_layernorm_per_layer - -def mlp_activation_memory(model_config, inference_config, seq_length): - # two mlp layer - per_layer = 2 * inference_config['batchsize'] * seq_length * (model_config['hidden_size'] + model_config['intermediate_size']) - return model_config['num_hidden_layers'] * per_layer - -def prefilling_activation_memory(model_config, inference_config): - activation_memory = {} - - activation_memory['word_embedding'] = word_embedding_activation_memory(model_config, inference_config, inference_config['input_seq_length']) - activation_memory['positional_embedding'] = positional_embedding_activation_memory(model_config, inference_config, inference_config['input_seq_length']) - - activation_memory['attention_Q'] = attention_Q_activation_memory(model_config, inference_config, inference_config['input_seq_length']) - activation_memory['attention_K'] = attention_K_activation_memory(model_config, inference_config, inference_config['input_seq_length']) - activation_memory['attention_V'] = attention_V_activation_memory(model_config, inference_config, inference_config['input_seq_length']) - activation_memory['attention_QK'] = attention_QK_activation_memory(model_config, inference_config, inference_config['input_seq_length'], inference_config['input_seq_length']) - activation_memory['attention_softmax'] = attention_softmax_activation_memory(model_config, inference_config, inference_config['input_seq_length']) - activation_memory['attention_multV'] = attention_multV_activation_memory(model_config, inference_config, inference_config['input_seq_length'], inference_config['input_seq_length']) - activation_memory['attention_out'] = attention_out_activation_memory(model_config, inference_config, inference_config['input_seq_length']) - - activation_memory['layernorm'] = layernorm_activation_memory(model_config, inference_config, inference_config['input_seq_length']) - - activation_memory['mlp'] = mlp_activation_memory(model_config, inference_config, inference_config['input_seq_length']) - - activation_memory['embeddings'] = activation_memory['word_embedding'] + activation_memory['positional_embedding'] - activation_memory['attention'] = ( - activation_memory['attention_Q'] + activation_memory['attention_K'] + - activation_memory['attention_V'] + activation_memory['attention_QK'] + - activation_memory['attention_softmax'] + activation_memory['attention_multV'] + - activation_memory['attention_out'] - ) - activation_memory['total'] = ( - activation_memory['embeddings'] + activation_memory['attention'] + - activation_memory['mlp'] + activation_memory['layernorm'] - ) - - 
activation_memory['embeddings'] = activation_memory['word_embedding'] + activation_memory['positional_embedding'] - activation_memory['attention'] = sum([v for k,v in activation_memory.items() if 'attention' in k]) - activation_memory['total'] = (activation_memory['attention'] + activation_memory['mlp'] + activation_memory['layernorm']) - - return activation_memory - -def generation_activation_memory(model_config, inference_config): - activation_memory = {} - - activation_memory['word_embedding'] = 0 - activation_memory['positional_embedding'] = 0 - activation_memory['attention_K'] = 0 - activation_memory['attention_V'] = 0 - activation_memory['attention_Q'] = 0 - activation_memory['attention_QK'] = 0 - activation_memory['attention_softmax'] = 0 - activation_memory['attention_multV'] = 0 - activation_memory['attention_out'] = 0 - activation_memory['mlp'] = 0 - activation_memory['layernorm'] = 0 - - for t in range(inference_config['output_seq_length']): - if inference_config['KV_cache']: - activation_memory['attention_K'] += attention_K_activation_memory(model_config, inference_config, 1) - activation_memory['attention_V'] += attention_V_activation_memory(model_config, inference_config, 1) - activation_memory['attention_Q'] += attention_Q_activation_memory(model_config, inference_config, 1) - activation_memory['attention_QK'] += attention_QK_activation_memory(model_config, inference_config, seq_length_Q=1, seq_length_K=(t+1)+inference_config['input_seq_length']) - activation_memory['attention_softmax'] += attention_softmax_activation_memory(model_config, inference_config, 1) - activation_memory['attention_multV'] += attention_multV_activation_memory(model_config, inference_config, seq_length_Q=1, seq_length_V=(t+1)+inference_config['input_seq_length']) - activation_memory['attention_out'] += attention_out_activation_memory(model_config, inference_config, 1) - activation_memory['mlp'] += mlp_activation_memory(model_config, inference_config, 1) - else: - activation_memory['attention_K'] += attention_K_activation_memory(model_config, inference_config, (t+1)+inference_config['input_seq_length']) - activation_memory['attention_V'] += attention_V_activation_memory(model_config, inference_config, (t+1)+inference_config['input_seq_length']) - activation_memory['attention_Q'] += attention_Q_activation_memory(model_config, inference_config, (t+1)+inference_config['input_seq_length']) - activation_memory['attention_QK'] += attention_QK_activation_memory(model_config, inference_config, seq_length_Q=(t+1)+inference_config['input_seq_length'], seq_length_K=(t+1)+inference_config['input_seq_length']) - activation_memory['attention_softmax'] += attention_softmax_activation_memory(model_config, inference_config, (t+1)+inference_config['input_seq_length']) - activation_memory['attention_multV'] += attention_multV_activation_memory(model_config, inference_config, seq_length_Q=(t+1)+inference_config['input_seq_length'], seq_length_V=(t+1)+inference_config['input_seq_length']) - activation_memory['attention_out'] += attention_out_activation_memory(model_config, inference_config, (t+1)+inference_config['input_seq_length']) - activation_memory['mlp'] += mlp_activation_memory(model_config, inference_config, (t+1)+inference_config['input_seq_length']) - - activation_memory['layernorm'] += layernorm_activation_memory(model_config, inference_config, (t+1)+inference_config['input_seq_length']) - - activation_memory['embeddings'] = activation_memory['word_embedding'] + activation_memory['positional_embedding'] - 
activation_memory['attention'] = ( - activation_memory['attention_K'] + activation_memory['attention_V'] + - activation_memory['attention_Q'] + activation_memory['attention_QK'] + - activation_memory['attention_softmax'] + activation_memory['attention_multV'] + - activation_memory['attention_out'] - ) - activation_memory['total'] = ( - activation_memory['embeddings'] + activation_memory['attention'] + - activation_memory['mlp'] + activation_memory['layernorm'] - ) - - return activation_memory - - -def calc_prefilling_throughput(model_config, inference_config, inference_info): - inference_info['prefilling_throughput'] = inference_config['input_seq_length']*inference_config['batchsize'] / max([inference_info['inference_prefilling_time'], inference_info['prefilling_memory_latency']]) - inference_info['prefilling_bound_type'] = "memory" if inference_info['inference_prefilling_time'] < inference_info['prefilling_memory_latency'] else "arithmetic" - -def calc_generation_throughput(model_config, inference_config, inference_info): - inference_info['generation_throughput'] = inference_config['input_seq_length']*inference_config['batchsize'] / max([inference_info['inference_generation_time'], inference_info['generation_memory_latency']]) - inference_info['generation_bound_type'] = "memory" if inference_info['inference_generation_time'] < inference_info['generation_memory_latency'] else "arithmetic" - - total_time = max([inference_info['inference_prefilling_time'], inference_info['prefilling_memory_latency']]) + max([inference_info['inference_generation_time'], inference_info['generation_memory_latency']]) - inference_info['client_generation_throughput'] = inference_config['output_seq_length']*inference_config['batchsize'] / total_time \ No newline at end of file diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/types.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/types.py deleted file mode 100644 index 7adf565a7b6b7d4f1eed3adf6a96faab66fe517c..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/types.py +++ /dev/null @@ -1,11 +0,0 @@ -import types -from enum import Enum -from typing import Any, Callable, Dict, Set, Type, TypeVar, Union - -from pydantic import BaseModel - -DecoratedCallable = TypeVar("DecoratedCallable", bound=Callable[..., Any]) -UnionType = getattr(types, "UnionType", Union) -NoneType = getattr(types, "UnionType", None) -ModelNameMap = Dict[Union[Type[BaseModel], Type[Enum]], str] -IncEx = Union[Set[int], Set[str], Dict[int, Any], Dict[str, Any]] diff --git a/spaces/cncn102/bingo1/src/components/ui/textarea.tsx b/spaces/cncn102/bingo1/src/components/ui/textarea.tsx deleted file mode 100644 index e25af722c7a5dc1121a9ab58d6716952f9f76081..0000000000000000000000000000000000000000 --- a/spaces/cncn102/bingo1/src/components/ui/textarea.tsx +++ /dev/null @@ -1,24 +0,0 @@ -import * as React from 'react' - -import { cn } from '@/lib/utils' - -export interface TextareaProps - extends React.TextareaHTMLAttributes {} - -const Textarea = React.forwardRef( - ({ className, ...props }, ref) => { - return ( -