diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download PATCHED Facebook Messenger For Samsung Gt-e2252 Tools Sesiones Speed.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download PATCHED Facebook Messenger For Samsung Gt-e2252 Tools Sesiones Speed.md deleted file mode 100644 index b892e323370f788632f5b7a79bf5a0c4810797f7..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download PATCHED Facebook Messenger For Samsung Gt-e2252 Tools Sesiones Speed.md +++ /dev/null @@ -1,32 +0,0 @@ - -

How to Download Facebook Messenger for Samsung E2252

-

Facebook Messenger is a popular app that allows you to chat with your friends and family on the social media platform. You can also send text messages, images, videos, stickers, voice notes, and more in free group chats. But how can you download Facebook Messenger for Samsung E2252, a feature phone that runs on a proprietary operating system?

-

Download Facebook Messenger For Samsung Gt-e2252 tools sesiones speed


DOWNLOADhttps://byltly.com/2uKwkO



-

In this article, we will show you the steps to download and install Facebook Messenger for Samsung E2252 using a third-party website called Apkpure. Apkpure is a website that offers free APK files of various apps and games for Android devices. APK files are the installation packages that contain all the necessary files and data for an app to run on your device.

-

Before we begin, please note that downloading and installing APK files from unknown sources may pose risks to your device and personal data. You should only download APK files from trusted and verified websites, and scan them with antivirus software before opening them. We are not responsible for any damage or loss that may occur as a result of following this guide.

-

Steps to Download Facebook Messenger for Samsung E2252

-
    -
  1. On your Samsung E2252, open the browser app and go to https://apkpure.com/facebook-messenger/com.facebook.orca. This is the official page of Facebook Messenger on Apkpure.
  2. Scroll down and tap on the green "Download APK" button. This will start downloading the APK file of Facebook Messenger to your device.
  3. Once the download is complete, go to your file manager app and locate the downloaded APK file. It should be in the "Downloads" folder or a similar location.
  4. Tap on the APK file to open it. You may see a warning message that says "Install blocked" or "For security, your phone is set to block installation of apps obtained from unknown sources". If you see this message, tap on "Settings" and enable the option to allow installation of apps from unknown sources. You may need to enter your password or PIN to confirm this action.
  5. After enabling the option, go back to the APK file and tap on it again. You should see a screen that shows the app's permissions and information. Tap on "Install" to start installing Facebook Messenger on your device.
  6. Wait for the installation process to finish. You should see a message that says "App installed" when it is done.
  7. Tap on "Open" to launch Facebook Messenger on your device. You may need to sign in with your Facebook account or create a new one if you don't have one already.
-

Congratulations! You have successfully downloaded and installed Facebook Messenger for Samsung E2252. You can now enjoy chatting with your friends and family on Facebook using this app.

-

Tips and Tricks for Using Facebook Messenger on Samsung E2252

-

Now that you have Facebook Messenger on your device, you may want to know some tips and tricks to make the most out of it. Here are some of them:

- -

These are some of the tips and tricks for using Facebook Messenger on Samsung E2252. We hope you find them useful and enjoy using this app.

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Carte Maroc Format Fbl.md b/spaces/1gistliPinn/ChatGPT4/Examples/Carte Maroc Format Fbl.md deleted file mode 100644 index 8b0c953f5a640ef8b07a6bc3ea934027bd922f43..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Carte Maroc Format Fbl.md +++ /dev/null @@ -1,6 +0,0 @@ -

Carte maroc format fbl


Download Zip ✑ ✑ ✑ https://imgfil.com/2uxYoa



- -
-
-
-

diff --git a/spaces/1phancelerku/anime-remove-background/City of Angels - The Masterpiece by Thirty Seconds to Mars How to Download MP3.md b/spaces/1phancelerku/anime-remove-background/City of Angels - The Masterpiece by Thirty Seconds to Mars How to Download MP3.md deleted file mode 100644 index fd0f4a626257d7a2a3688aa1c6e67109b69be02f..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/City of Angels - The Masterpiece by Thirty Seconds to Mars How to Download MP3.md +++ /dev/null @@ -1,116 +0,0 @@ - -

City of Angels MP3 Download 30 Seconds to Mars

-

If you are looking for a song that will inspire you, touch your emotions, and make you feel alive, then you should check out City of Angels by 30 Seconds to Mars. This song is one of the most popular and acclaimed tracks by the American rock band, and it has a lot to offer to the listeners. In this article, we will tell you everything you need to know about City of Angels, and how you can download it as an MP3 file for your convenience.

-

What is City of Angels by 30 Seconds to Mars?

-

City of Angels is a song by 30 Seconds to Mars, released in 2013 as the fourth single from their fourth studio album, Love Lust Faith + Dreams. The song was written and produced by the lead vocalist and founder of the band, Jared Leto, who also directed the music video and the short film based on the song.

-

city of angels mp3 download 30 seconds to mars


DOWNLOADhttps://jinyurl.com/2uNNFd



-

The meaning and inspiration behind the song

-

City of Angels is a tribute to Los Angeles, the city where 30 Seconds to Mars was formed and where they achieved their success. The song explores the themes of dreams, hopes, struggles, and realities that people face in the city, as well as the beauty and diversity that it offers. Jared Leto said that he wanted to capture the spirit and essence of Los Angeles in the song, and that he was inspired by his own personal experiences and stories from other people who live or have lived in the city.

-

The music video and the short film

-

The music video for City of Angels was released in October 2013, and it features a series of interviews with various celebrities, artists, musicians, athletes, and ordinary people who share their thoughts and feelings about Los Angeles. Some of the famous faces that appear in the video include Kanye West, James Franco, Selena Gomez, Lindsay Lohan, Olivia Wilde, Steve Nash, Corey Feldman, Ashley Olsen, Alan Cumming, Juliette Lewis, Shaun White, Lily Collins, and many more. The video also shows scenes of the band performing the song in different locations around the city.

-

The short film for City of Angels was released in November 2013, and it is an extended version of the music video that runs for about 11 minutes. The short film includes more interviews and footage that were not shown in the music video, as well as some additional narration by Jared Leto. The short film was praised by critics and fans alike for its artistic vision and emotional impact.

-

How to download City of Angels MP3 by 30 Seconds to Mars?

-

If you want to enjoy City of Angels by 30 Seconds to Mars anytime and anywhere, you might want to download it as an MP3 file that you can store on your device or transfer to other devices. There are several ways you can do this, depending on your preferences and budget.

-

The official sources and platforms

-

The recommended way to download City of Angels MP3 by 30 Seconds to Mars is to use the official sources and platforms authorized by the band and their record label. This way, you support the band financially and legally, and you get the best quality and security for your download. Some of the official sources and platforms that you can use are:

- -

The alternative methods and tools

-

If you don't want to use the official sources and platforms, or if you want to save some money, you can also try some alternative methods and tools to download City of Angels MP3 by 30 Seconds to Mars. However, you should be aware that these methods and tools are not endorsed by the band or their record label, and they may violate some copyright laws or terms of service. You should also be careful about the quality and security of your download, as some of these methods and tools may contain viruses, malware, or spyware. Some of the alternative methods and tools that you can use are:

- -

Why you should listen to City of Angels MP3 by 30 Seconds to Mars?

-

Now that you know how to download City of Angels MP3 by 30 Seconds to Mars, you might be wondering why you should listen to it in the first place. Well, there are many reasons why listening to music in general, and City of Angels in particular, can be beneficial for you.

-

The benefits of listening to music

-

Listening to music can have many positive effects on your physical, mental, and emotional well-being. Some of the benefits of listening to music are:

-

- -

The reasons why City of Angels is a great song

-

Besides the general benefits of listening to music, City of Angels by 30 Seconds to Mars has some specific qualities that make it a great song to listen to. Some of the reasons why City of Angels is a great song are:

- -

Conclusion

-

In conclusion, City of Angels by 30 Seconds to Mars is a song that you should definitely listen to and download as an MP3 file. It is a song that celebrates Los Angeles, the city of dreams, and the people who live or have lived there. It is a song that has a beautiful and meaningful message, a catchy and melodic tune, a rich and diverse sound, a passionate and expressive performance, and a stunning and inspiring visual representation. It is a song that can make you feel inspired, touched, and alive.

-

If you want to download City of Angels MP3 by 30 Seconds to Mars, you can use the official sources and platforms that are authorized by the band and their record label, such as iTunes, Amazon, Spotify, YouTube, or YouTube Music. Alternatively, you can use some online converters, desktop software, or mobile apps that can convert YouTube videos or other online audio files into MP3 files. However, you should be careful about the quality and security of your download, and respect the rights of the band and their record label.

-

So what are you waiting for? Go ahead and download City of Angels MP3 by 30 Seconds to Mars today, and enjoy this amazing song that will make you feel like you are in the city of angels.

-

FAQs

-

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download dan Nonton Film Ashfall (2019) Sub Indo Full Movie HD.md b/spaces/1phancelerku/anime-remove-background/Download dan Nonton Film Ashfall (2019) Sub Indo Full Movie HD.md deleted file mode 100644 index 88078297a4c7c4298aefb2b31263338b28d00c78..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download dan Nonton Film Ashfall (2019) Sub Indo Full Movie HD.md +++ /dev/null @@ -1,133 +0,0 @@ -
-

Download Film Ashfall Full Movie Sub Indo: A Review of the Epic Disaster Film

-

If you are a fan of action-packed disaster movies, you might have heard of Ashfall, a 2019 South Korean film that depicts the aftermath of a volcanic eruption on the Korean peninsula. But how can you watch this film online with Indonesian subtitles? And is it worth your time and money? In this article, I will give you a brief overview of what Ashfall is about, how to download film ashfall full movie sub indo legally and safely, and my personal opinion on whether you should watch it or not.

-

What is Ashfall?

-

Ashfall (Korean: 백두산; Hanja: 白頭山; RR: Baekdusan), also known as Mount Paektu, is a 2019 South Korean disaster film directed by Lee Hae-jun and Kim Byung-seo, starring Lee Byung-hun, Ha Jung-woo, Ma Dong-seok, Jeon Hye-jin and Bae Suzy. The film was released in December 2019 in South Korea and became one of the highest-grossing films of the year.

-

download film ashfall full movie sub indo


Download Ziphttps://jinyurl.com/2uNM3g



-

The Plot of Ashfall

-

The film follows the events that unfold when Paektu Mountain, an active volcano straddling the China–North Korea border, suddenly erupts, causing severe earthquakes in both North and South Korea. To prevent another disaster, Jeon Yoo-kyung (Jeon Hye-jin), a government official, plans an operation based on a theory by Professor Kang Bong-rae (Ma Dong-seok), who had studied Mount Paektu and its possible future eruptions. Jo In-chang (Ha Jung-woo) is assigned to be the captain of a special forces team taking part in the operation. He contacts Lee Joon-pyeong (Lee Byung-hun), who is part of the Korean People's Army in North Korea as a spy. Joon-pyeong is the only one who knows where to find nuclear warheads that can be used to stop the volcano from erupting again. Meanwhile, Jo In-chang's pregnant wife Choi Ji-young (Bae Suzy) is alone in Seoul and struggling to survive amidst the chaos.

-

The Cast and Characters of Ashfall

-

The film features a star-studded cast of some of the most popular actors and actresses in South Korea. Here are some of the main characters and their roles:

- -

The Special Effects and Cinematography of Ashfall

-

One of the most impressive aspects of Ashfall is the realistic and spectacular depiction of the volcanic eruption and its consequences. The film used a combination of practical and computer-generated effects to create the scenes of destruction, chaos, and panic. The film also employed various techniques such as aerial shots, drone shots, handheld shots, and slow-motion shots to capture the scale and intensity of the disaster. The film's cinematography was praised by critics and audiences alike for its stunning visuals and immersive experience.

-

How to Download Film Ashfall Full Movie Sub Indo

-

If you are interested in watching Ashfall with Indonesian subtitles, you might be wondering how to download film ashfall full movie sub indo online. There are two main ways to do this: legal and safe ways, and illegal and risky ways. Let's take a look at each option and weigh their pros and cons.

-

Legal and Safe Ways to Watch Ashfall Online

-

The best way to watch Ashfall online is to use legal and safe streaming services that offer the film with Indonesian subtitles. This way, you can enjoy the film without worrying about breaking the law, harming your device, or compromising your personal information. Here are some of the streaming services that offer Ashfall:

-

Streaming Services that Offer Ashfall

- - - - - - - -
Streaming Service | Price | Availability
Netflix | $8.99-$17.99 per month | Worldwide (except China, Syria, North Korea, and Crimea)
iQIYI | $2.99-$19.99 per month | Asia-Pacific (except Japan)
Viu | $2.99-$6.99 per month | Asia-Pacific (except China)
Iflix | $0-$9.99 per month | Asia-Pacific (except China, Japan, Taiwan, Hong Kong, Macau)
HOOQ | $1.99-$7.99 per month | Asia-Pacific (except China, Japan)
-

Websites that Provide Ashfall Subtitles

-

If you already have access to Ashfall through a streaming service or a DVD, but you need Indonesian subtitles, you can also download them from websites that provide subtitles for various languages. However, you should be careful when downloading subtitles from unknown sources, as they might contain malware or viruses that can harm your device or steal your data. Here are some of the websites that provide Ashfall subtitles:

- -

Illegal and Risky Ways to Download Ashfall for Free

-

Another way to download film ashfall full movie sub indo online is to use illegal and risky methods such as torrent sites or file-sharing platforms. These methods allow you to download the film for free, but they also come with many drawbacks and dangers. Here are some of the reasons why you should avoid using these methods:

-


-

Torrent Sites that Host Ashfall Files

-

Torrent sites are websites that allow users to share files through peer-to-peer networks. Some of the torrent sites that host Ashfall files are:

-

Risks and Consequences of Using Torrent Sites

-

Using torrent sites to download film ashfall full movie sub indo online might seem tempting, but it also comes with many risks and consequences. Some of them are:

- -

Conclusion: Is Ashfall Worth Watching?

-

Now that you know what Ashfall is about and how to download film ashfall full movie sub indo online, you might be wondering if it is worth watching. To help you decide, here are some of the pros and cons of Ashfall:

-

The Pros and Cons of Ashfall

- - - - - - -
Pros | Cons
A thrilling and exciting disaster film with realistic and spectacular special effects and cinematography. | A clichéd and predictable plot with some logical flaws and inconsistencies.
A star-studded cast with impressive performances and chemistry. | A lack of character development and depth for some of the main characters.
A message of hope and unity in the face of adversity and conflict. | A simplistic and idealistic portrayal of the political and social situation on the Korean peninsula.
A cultural and commercial success that showcases the potential of the South Korean film industry. | Limited availability and accessibility for international audiences who might not be familiar with the context or language of the film.
-

My Personal Opinion on Ashfall

-

In my personal opinion, Ashfall is worth watching if you are a fan of disaster movies or Korean cinema. It delivers on its promise of an entertaining and thrilling experience with stunning visuals and sound, and it has a heart and a message that resonate with the current times. However, it is not perfect: its weaknesses and shortcomings may disappoint viewers who expect more from it, and it is not an easy film for everyone to follow, since it rewards some background knowledge of Korean culture and history. Therefore, I would recommend Ashfall to anyone looking for a fun and exciting movie night, but not to anyone looking for a deep and meaningful cinematic masterpiece.

-

Frequently Asked Questions

-

Here are some of the frequently asked questions about Ashfall:

-

Q: Is Ashfall based on a true story?

-

A: No, Ashfall is not based on a true story. It is a fictional story that imagines what would happen if Mount Paektu erupted in the present day. However, Mount Paektu is a real volcano that has erupted in the past and could erupt again in the future. The film was inspired by the historical and scientific research on Mount Paektu and its potential eruptions.

-

Q: How accurate is Ashfall?

-

A: Ashfall is not meant to be a realistic or accurate depiction of a volcanic eruption or its consequences. It is a fictional story that exaggerates and dramatizes some aspects of the disaster for entertainment purposes. The film does not follow the scientific facts or data on Mount Paektu or its eruptions. The film also does not reflect the actual political or social situation on the Korean peninsula or its relations with other countries.

-

Q: How did Ashfall perform at the box office?

-

A: Ashfall was a commercial success at the box office, both domestically and internationally. It grossed over $61 million in South Korea, becoming the fourth highest-grossing film of 2019 and the 11th highest-grossing film of all time in the country. It also grossed over $24 million in other countries, mainly in Asia, bringing its worldwide total to over $85 million. It was also nominated for several awards, including the Baeksang Arts Awards, the Blue Dragon Film Awards, and the Grand Bell Awards.

-

Q: Where can I find more information about Ashfall?

-

A: If you want to learn more about Ashfall, you can visit the following websites:

- -

Q: What are some other films like Ashfall?

-

A: If you enjoyed Ashfall and want to watch more films like it, you might like these films:

- -

-

Thank you for reading my article on how to download film ashfall full movie sub indo online. I hope you found it helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Have a great day!

-
-
\ No newline at end of file diff --git a/spaces/801artistry/RVC801/demucs/augment.py b/spaces/801artistry/RVC801/demucs/augment.py deleted file mode 100644 index bb36d3298d89470f306316322e7587187819c94b..0000000000000000000000000000000000000000 --- a/spaces/801artistry/RVC801/demucs/augment.py +++ /dev/null @@ -1,106 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import random -import torch as th -from torch import nn - - -class Shift(nn.Module): - """ - Randomly shift audio in time by up to `shift` samples. - """ - def __init__(self, shift=8192): - super().__init__() - self.shift = shift - - def forward(self, wav): - batch, sources, channels, time = wav.size() - length = time - self.shift - if self.shift > 0: - if not self.training: - wav = wav[..., :length] - else: - offsets = th.randint(self.shift, [batch, sources, 1, 1], device=wav.device) - offsets = offsets.expand(-1, -1, channels, -1) - indexes = th.arange(length, device=wav.device) - wav = wav.gather(3, indexes + offsets) - return wav - - -class FlipChannels(nn.Module): - """ - Flip left-right channels. - """ - def forward(self, wav): - batch, sources, channels, time = wav.size() - if self.training and wav.size(2) == 2: - left = th.randint(2, (batch, sources, 1, 1), device=wav.device) - left = left.expand(-1, -1, -1, time) - right = 1 - left - wav = th.cat([wav.gather(2, left), wav.gather(2, right)], dim=2) - return wav - - -class FlipSign(nn.Module): - """ - Random sign flip. - """ - def forward(self, wav): - batch, sources, channels, time = wav.size() - if self.training: - signs = th.randint(2, (batch, sources, 1, 1), device=wav.device, dtype=th.float32) - wav = wav * (2 * signs - 1) - return wav - - -class Remix(nn.Module): - """ - Shuffle sources to make new mixes. - """ - def __init__(self, group_size=4): - """ - Shuffle sources within one batch. - Each batch is divided into groups of size `group_size` and shuffling is done within - each group separatly. This allow to keep the same probability distribution no matter - the number of GPUs. Without this grouping, using more GPUs would lead to a higher - probability of keeping two sources from the same track together which can impact - performance. 
- """ - super().__init__() - self.group_size = group_size - - def forward(self, wav): - batch, streams, channels, time = wav.size() - device = wav.device - - if self.training: - group_size = self.group_size or batch - if batch % group_size != 0: - raise ValueError(f"Batch size {batch} must be divisible by group size {group_size}") - groups = batch // group_size - wav = wav.view(groups, group_size, streams, channels, time) - permutations = th.argsort(th.rand(groups, group_size, streams, 1, 1, device=device), - dim=1) - wav = wav.gather(1, permutations.expand(-1, -1, -1, channels, time)) - wav = wav.view(batch, streams, channels, time) - return wav - - -class Scale(nn.Module): - def __init__(self, proba=1., min=0.25, max=1.25): - super().__init__() - self.proba = proba - self.min = min - self.max = max - - def forward(self, wav): - batch, streams, channels, time = wav.size() - device = wav.device - if self.training and random.random() < self.proba: - scales = th.empty(batch, streams, 1, 1, device=device).uniform_(self.min, self.max) - wav *= scales - return wav diff --git a/spaces/A00001/bingothoo/tests/kblob.ts b/spaces/A00001/bingothoo/tests/kblob.ts deleted file mode 100644 index 9e15b41c1c94a690beb61b23cdb42fc78767ccd2..0000000000000000000000000000000000000000 --- a/spaces/A00001/bingothoo/tests/kblob.ts +++ /dev/null @@ -1,27 +0,0 @@ -import FormData from 'form-data' - -import { fetch } from '@/lib/isomorphic' - -const formData = new FormData() - -const knowledgeRequest = {"imageInfo":{"url":"https://www.baidu.com/img/PCfb_5bf082d29588c07f842ccde3f97243ea.png"},"knowledgeRequest":{"invokedSkills":["ImageById"],"subscriptionId":"Bing.Chat.Multimodal","invokedSkillsRequestData":{"enableFaceBlur":true},"convoData":{"convoid":"51D|BingProdUnAuthenticatedUsers|E3DCA904FF236C67C3450163BCEC64CFF3F618CC8A4AFD75FD518F5ED0ADA080","convotone":"Creative"}}} - -formData.append('knowledgeRequest', JSON.stringify(knowledgeRequest)) - - -fetch('https://bing.vcanbb.top/images/kblob', - { - method: 'POST', - body: formData.getBuffer(), - headers: { - "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"", - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-platform": "\"Windows\"", - "Referer": "https://bing.vcanbb.top/web/index.html", - "Referrer-Policy": "origin-when-cross-origin", - ...formData.getHeaders() - } - - } -).then(res => res.text()) -.then(res => console.log('res', res)) diff --git a/spaces/AAYUSH27/Neuro/app.py b/spaces/AAYUSH27/Neuro/app.py deleted file mode 100644 index d5e99d5ee95267d23a7307e70b2c04e76dc99e0a..0000000000000000000000000000000000000000 --- a/spaces/AAYUSH27/Neuro/app.py +++ /dev/null @@ -1,98 +0,0 @@ -import streamlit as st -from streamlit_chat import message -from langchain.chains import ConversationalRetrievalChain -from langchain.document_loaders import PyPDFLoader, DirectoryLoader -from langchain.embeddings import HuggingFaceEmbeddings -from langchain.llms import CTransformers -from langchain.text_splitter import RecursiveCharacterTextSplitter -from langchain.vectorstores import FAISS -from langchain.memory import ConversationBufferMemory - -# load the pdf files from the path -loader = DirectoryLoader("data/", glob="*.pdf", loader_cls=PyPDFLoader) -documents = loader.load() - -# split text into chunks -text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50) -text_chunks = text_splitter.split_documents(documents) - -# create embeddings -embeddings = HuggingFaceEmbeddings( - 
model_name="sentence-transformers/all-MiniLM-L6-v2", model_kwargs={"device": "cpu"} -) - -# vectorstore -vector_store = FAISS.from_documents(text_chunks, embeddings) - -# create llm -llm = CTransformers( - model="llama-2-7b-chat.ggmlv3.q4_0.bin", - model_type="llama", - config={"max_new_tokens": 128, "temperature": 0.01}, -) - -memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True) - -chain = ConversationalRetrievalChain.from_llm( - llm=llm, - chain_type="stuff", - retriever=vector_store.as_retriever(search_kwargs={"k": 2}), - memory=memory, -) - -st.title("Neuro Health-Care Chat Bot") - - -def conversation_chat(query): - result = chain({"question": query, "chat_history": st.session_state["history"]}) - st.session_state["history"].append((query, result["answer"])) - return result["answer"] - - -def initialize_session_state(): - if "history" not in st.session_state: - st.session_state["history"] = [] - - if "generated" not in st.session_state: - st.session_state["generated"] = ["Hello! Ask me anything about Neuro"] - - if "past" not in st.session_state: - st.session_state["past"] = ["Hello!"] - - -def display_chat_history(): - reply_container = st.container() - container = st.container() - - with container: - with st.form(key="my_form", clear_on_submit=True): - user_input = st.text_input( - "Question:", placeholder="Ask anything about Neuro", key="input" - ) - submit_button = st.form_submit_button(label="Send") - - if submit_button and user_input: - output = conversation_chat(user_input) - - st.session_state["past"].append(user_input) - st.session_state["generated"].append(output) - - if st.session_state["generated"]: - with reply_container: - for i in range(len(st.session_state["generated"])): - message( - st.session_state["past"][i], - is_user=True, - key=str(i) + "_user", - avatar_style="person", - ) - - message( - st.session_state["generated"][i], - key=str(i), - logo="https://img.icons8.com/?size=96&id=19625&format=png", - ) -# Initialize session state -initialize_session_state() -# Display chat history -display_chat_history() diff --git a/spaces/AFRAC/NCM_DEMO/README.md b/spaces/AFRAC/NCM_DEMO/README.md deleted file mode 100644 index 851142e2a11c5e1b4bdf3aaf9c1faf5a78e08e89..0000000000000000000000000000000000000000 --- a/spaces/AFRAC/NCM_DEMO/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: NCM DEMO -emoji: 🧾 -colorFrom: blue -colorTo: green -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AIConsultant/MusicGen/audiocraft/modules/transformer.py b/spaces/AIConsultant/MusicGen/audiocraft/modules/transformer.py deleted file mode 100644 index 048c06dfbb0ab4167afce95dffb73dcc343c2344..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/audiocraft/modules/transformer.py +++ /dev/null @@ -1,747 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Transformer model, with streaming support, xformer attention support -and easy causal attention with a potentially finite receptive field. - -See `StreamingTransformer` for more information. - -Unlike regular PyTorch Transformer, we make the hard choice that batches are first. 
-""" - -import typing as tp - -from einops import rearrange -import torch -import torch.nn as nn -from torch.nn import functional as F -from torch.utils.checkpoint import checkpoint as torch_checkpoint -from xformers import ops - -from .rope import RotaryEmbedding -from .streaming import StreamingModule - -_efficient_attention_backend: str = 'torch' - - -def set_efficient_attention_backend(backend: str = 'torch'): - # Using torch by default, it seems a bit faster on older P100 GPUs (~20% faster). - global _efficient_attention_backend - assert _efficient_attention_backend in ['xformers', 'torch'] - _efficient_attention_backend = backend - - -def _get_attention_time_dimension() -> int: - if _efficient_attention_backend == 'torch': - return 2 - else: - return 1 - - -def _is_profiled() -> bool: - # Return true if we are currently running with a xformers profiler activated. - try: - from xformers.profiler import profiler - except ImportError: - return False - return profiler._Profiler._CURRENT_PROFILER is not None - - -def create_norm_fn(norm_type: str, dim: int, **kwargs) -> nn.Module: - """Create normalization module for transformer encoder layer. - - Args: - norm_type (str): Normalization method. - dim (int): Dimension of the normalized layer. - **kwargs (dict): Additional parameters for normalization layer. - Returns: - nn.Module: Normalization module. - """ - if norm_type == 'layer_norm': - return nn.LayerNorm(dim, eps=1e-5, **kwargs) - else: - raise ValueError(f"Unknown norm type: {norm_type}") - - -def create_sin_embedding(positions: torch.Tensor, dim: int, max_period: float = 10000, - dtype: torch.dtype = torch.float32) -> torch.Tensor: - """Create sinusoidal positional embedding, with shape `[B, T, C]`. - - Args: - positions (torch.Tensor): LongTensor of positions. - dim (int): Dimension of the embedding. - max_period (float): Maximum period of the cosine/sine functions. - dtype (torch.dtype or str): dtype to use to generate the embedding. - Returns: - torch.Tensor: Sinusoidal positional embedding. - """ - # We aim for BTC format - assert dim % 2 == 0 - half_dim = dim // 2 - positions = positions.to(dtype) - adim = torch.arange(half_dim, device=positions.device, dtype=dtype).view(1, 1, -1) - max_period_tensor = torch.full([], max_period, device=positions.device, dtype=dtype) # avoid sync point - phase = positions / (max_period_tensor ** (adim / (half_dim - 1))) - return torch.cat([torch.cos(phase), torch.sin(phase)], dim=-1) - - -def expand_repeated_kv(x: torch.Tensor, n_rep: int) -> torch.Tensor: - """torch.repeat_interleave(x, dim=2, repeats=n_rep) from xlformers.""" - if n_rep == 1: - return x - if _efficient_attention_backend == 'torch': - bs, n_kv_heads, slen, head_dim = x.shape - return ( - x[:, :, None, :, :] - .expand(bs, n_kv_heads, n_rep, slen, head_dim) - .reshape(bs, n_kv_heads * n_rep, slen, head_dim) - ) - else: - bs, slen, n_kv_heads, head_dim = x.shape - return ( - x[:, :, :, None, :] - .expand(bs, slen, n_kv_heads, n_rep, head_dim) - .reshape(bs, slen, n_kv_heads * n_rep, head_dim) - ) - - -class LayerScale(nn.Module): - """Layer scale from [Touvron et al 2021] (https://arxiv.org/pdf/2103.17239.pdf). - This rescales diagonally the residual outputs close to 0, with a learnt scale. - - Args: - channels (int): Number of channels. - init (float): Initial scale. - channel_last (bool): If True, expect `[*, C]` shaped tensors, otherwise, `[*, C, T]`. - device (torch.device or str, optional): Device on which to initialize the module. 
- dtype (torch.dtype, optional): dtype to use to initialize the module. - """ - def __init__(self, channels: int, init: float = 1e-4, channel_last: bool = True, - device=None, dtype=None): - super().__init__() - self.channel_last = channel_last - self.scale = nn.Parameter( - torch.full((channels,), init, - requires_grad=True, device=device, dtype=dtype)) - - def forward(self, x: torch.Tensor): - if self.channel_last: - return self.scale * x - else: - return self.scale[:, None] * x - - -class StreamingMultiheadAttention(StreamingModule): - """Similar to `nn.MultiheadAttention` but with support for streaming, causal evaluation. - - Args: - embed_dim (int): Dimension to project to. - num_heads (int): Number of heads. - dropout (float): Dropout level. - bias (bool): Use bias in projections. - causal (bool): Causal mask applied automatically. - past_context (int, optional): Receptive field for the causal mask, infinite if None. - custom (bool): Use custom MHA implementation, for testing / benchmarking. - memory_efficient (bool): Use xformers based memory efficient attention. - attention_as_float32 (bool): Perform the attention as float32 - (especially important with memory_efficient as autocast won't do this automatically). - rope (`RotaryEmbedding`, optional): Rope embedding to use. - cross_attention: Should be true when used as a cross attention. - All keys and values must be available at once, streaming is only for the queries. - Cannot be used with `causal` or `rope` (as it wouldn't make sens to - interpret the time steps in the keys relative to those in the queries). - safe_streaming (bool): Bug fix, will go away with xformers update. - qk_layer_norm (bool): Layer normalization applied to queries and keys before dot product. - kv_repeat (int): If > 1, will repeat keys and queries multiple times (need to divide num_heads). - This will lead to faster decoding time on A100 or other GPUs with tensorcore. - device (torch.device, optional): Device on which to initialize. - dtype (torch.dtype, optional): dtype to use. - """ - def __init__(self, embed_dim: int, num_heads: int, dropout: float = 0.0, bias: bool = True, - causal: bool = False, past_context: tp.Optional[int] = None, custom: bool = False, - memory_efficient: bool = False, attention_as_float32: bool = False, - rope: tp.Optional[RotaryEmbedding] = None, cross_attention: bool = False, - safe_streaming: bool = True, qk_layer_norm: bool = False, kv_repeat: int = 1, - device=None, dtype=None): - super().__init__() - factory_kwargs = {'device': device, 'dtype': dtype} - if past_context is not None: - assert causal - - self.embed_dim = embed_dim - self.causal = causal - self.past_context = past_context - self.memory_efficient = memory_efficient - self.attention_as_float32 = attention_as_float32 - self.rope = rope - self.cross_attention = cross_attention - self.safe_streaming = safe_streaming - self.num_heads = num_heads - self.dropout = dropout - self.kv_repeat = kv_repeat - if cross_attention: - assert not causal, "Causal cannot work with cross attention." - assert rope is None, "Rope cannot work with cross attention." 
- - if memory_efficient: - _verify_xformers_memory_efficient_compat() - - self.custom = _is_custom(custom, memory_efficient) - if self.custom: - out_dim = embed_dim - assert num_heads % kv_repeat == 0 - assert not cross_attention or kv_repeat == 1 - num_kv = num_heads // kv_repeat - kv_dim = (embed_dim // num_heads) * num_kv - out_dim += 2 * kv_dim - in_proj = nn.Linear(embed_dim, out_dim, bias=bias, **factory_kwargs) - # We try to follow the default PyTorch MHA convention, to easily compare results. - self.in_proj_weight = in_proj.weight - self.in_proj_bias = in_proj.bias - if bias: - self.in_proj_bias.data.zero_() # Following Pytorch convention - self.out_proj = nn.Linear(embed_dim, embed_dim, bias=bias, **factory_kwargs) - if bias: - self.out_proj.bias.data.zero_() - else: - assert not qk_layer_norm - assert kv_repeat == 1 - self.mha = nn.MultiheadAttention( - embed_dim, num_heads, dropout=dropout, bias=bias, batch_first=True, - **factory_kwargs) - self.qk_layer_norm = qk_layer_norm - if qk_layer_norm: - assert self.custom - assert kv_repeat == 1 - ln_dim = embed_dim - self.q_layer_norm = nn.LayerNorm(ln_dim) - self.k_layer_norm = nn.LayerNorm(ln_dim) - - def _load_from_state_dict(self, state_dict, prefix, *args, **kwargs): - if not self.custom: - # Support compat with regular MHA - keys = [n for n, _ in self.mha.named_parameters()] - for key in keys: - if prefix + key in state_dict: - state_dict[prefix + "mha." + key] = state_dict.pop(prefix + key) - super()._load_from_state_dict(state_dict, prefix, *args, **kwargs) - - def _get_mask(self, current_steps: int, device: torch.device, dtype: torch.dtype): - # Return a causal mask, accounting for potentially stored past keys/values - # We actually return a bias for the attention score, as this has the same - # convention both in the builtin MHA in Pytorch, and Xformers functions. - time_dim = _get_attention_time_dimension() - if self.memory_efficient: - from xformers.ops import LowerTriangularMask - if current_steps == 1: - # If we only have one step, then we do not need a mask. - return None - elif 'past_keys' in self._streaming_state: - raise RuntimeError("Not supported at the moment") - else: - # Then we can safely use a lower triangular mask - return LowerTriangularMask() - if self._streaming_state: - past_keys = self._streaming_state['past_keys'] - past_steps = past_keys.shape[time_dim] - else: - past_steps = 0 - - queries_pos = torch.arange( - past_steps, current_steps + past_steps, device=device).view(-1, 1) - keys_pos = torch.arange(past_steps + current_steps, device=device).view(1, -1) - delta = queries_pos - keys_pos - valid = delta >= 0 - if self.past_context is not None: - valid &= (delta <= self.past_context) - return torch.where( - valid, - torch.zeros([], device=device, dtype=dtype), - torch.full([], float('-inf'), device=device, dtype=dtype)) - - def _complete_kv(self, k, v): - time_dim = _get_attention_time_dimension() - if self.cross_attention: - # With cross attention we assume all keys and values - # are already available, and streaming is with respect - # to the queries only. - return k, v - # Complete the key/value pair using the streaming state. 
- if self._streaming_state: - pk = self._streaming_state['past_keys'] - nk = torch.cat([pk, k], dim=time_dim) - if v is k: - nv = nk - else: - pv = self._streaming_state['past_values'] - nv = torch.cat([pv, v], dim=time_dim) - else: - nk = k - nv = v - - assert nk.shape[time_dim] == nv.shape[time_dim] - offset = 0 - if self.past_context is not None: - offset = max(0, nk.shape[time_dim] - self.past_context) - if self._is_streaming: - self._streaming_state['past_keys'] = nk[:, offset:] - if v is not k: - self._streaming_state['past_values'] = nv[:, offset:] - if 'offset' in self._streaming_state: - self._streaming_state['offset'] += offset - else: - self._streaming_state['offset'] = torch.tensor(0) - return nk, nv - - def _apply_rope(self, query: torch.Tensor, key: torch.Tensor): - # TODO: fix and verify layout. - assert _efficient_attention_backend == 'xformers', "Rope not supported with torch attn." - # Apply rope embeddings to query and key tensors. - assert self.rope is not None - if 'past_keys' in self._streaming_state: - past_keys_offset = self._streaming_state['past_keys'].shape[1] - else: - past_keys_offset = 0 - if 'offset' in self._streaming_state: - past_context_offset = int(self._streaming_state['offset'].item()) - else: - past_context_offset = 0 - streaming_offset = past_context_offset + past_keys_offset - return self.rope.rotate_qk(query, key, start=streaming_offset) - - def forward(self, query: torch.Tensor, key: torch.Tensor, value: torch.Tensor, - key_padding_mask=None, need_weights=False, attn_mask=None, - average_attn_weights=True, is_causal=False): - assert attn_mask is None - assert not is_causal, ("New param added in torch 2.0.1 not supported, " - "use the causal args in the constructor.") - - time_dim = _get_attention_time_dimension() - if time_dim == 2: - layout = "b h t d" - else: - layout = "b t h d" - dtype = query.dtype - if self._is_streaming: - assert self.causal or self.cross_attention, \ - "Streaming only available for causal or cross attention" - - if self.causal: - # At the moment we specialize only for the self-attention case. - assert query.shape[1] == key.shape[1], "Causal only for same length query / key / value" - assert value.shape[1] == key.shape[1], "Causal only for same length query / key / value" - attn_mask = self._get_mask(query.shape[1], query.device, query.dtype) - - if self.custom: - # custom implementation - assert need_weights is False - assert key_padding_mask is None - if self.cross_attention: - # Different queries, keys, values, we have to spit manually the weights - # before applying the linear. - dim = self.in_proj_weight.shape[0] // 3 - if self.in_proj_bias is None: - bias_q, bias_k, bias_v = None, None, None - else: - bias_q = self.in_proj_bias[:dim] - bias_k = self.in_proj_bias[dim: 2 * dim] - bias_v = self.in_proj_bias[2 * dim:] - q = nn.functional.linear(query, self.in_proj_weight[:dim], bias_q) - # todo: when streaming, we could actually save k, v and check the shape actually match. - k = nn.functional.linear(key, self.in_proj_weight[dim: 2 * dim], bias_k) - v = nn.functional.linear(value, self.in_proj_weight[2 * dim:], bias_v) - if self.qk_layer_norm is True: - q = self.q_layer_norm(q) - k = self.k_layer_norm(k) - q, k, v = [rearrange(x, f"b t (h d) -> {layout}", h=self.num_heads) for x in [q, k, v]] - else: - if not _is_profiled(): - # profiling breaks that propertysomehow. 
- assert query is key, "specialized implementation" - assert value is key, "specialized implementation" - projected = nn.functional.linear(query, self.in_proj_weight, self.in_proj_bias) - if self.kv_repeat == 1: - if time_dim == 2: - bound_layout = "b h p t d" - else: - bound_layout = "b t p h d" - packed = rearrange(projected, f"b t (p h d) -> {bound_layout}", p=3, h=self.num_heads) - q, k, v = ops.unbind(packed, dim=2) - else: - embed_dim = self.embed_dim - per_head_dim = (embed_dim // self.num_heads) - kv_heads = self.num_heads // self.kv_repeat - q = projected[:, :, :embed_dim] - start = embed_dim - end = start + per_head_dim * kv_heads - k = projected[:, :, start: end] - v = projected[:, :, end:] - q = rearrange(q, f"b t (h d) -> {layout}", h=self.num_heads) - k = rearrange(k, f"b t (h d) -> {layout}", h=kv_heads) - v = rearrange(v, f"b t (h d) -> {layout}", h=kv_heads) - - if self.qk_layer_norm is True: - assert self.kv_repeat == 1 - q, k = [rearrange(x, f"{layout} -> b t (h d)") for x in [q, k]] - q = self.q_layer_norm(q) - k = self.k_layer_norm(k) - q, k = [rearrange(x, f"b t (h d) -> {layout}", h=self.num_heads) for x in [q, k]] - if self.rope: - q, k = self._apply_rope(q, k) - k, v = self._complete_kv(k, v) - if self.kv_repeat > 1: - k = expand_repeated_kv(k, self.kv_repeat) - v = expand_repeated_kv(v, self.kv_repeat) - if self.attention_as_float32: - q, k, v = [x.float() for x in [q, k, v]] - if self.memory_efficient: - p = self.dropout if self.training else 0 - if _efficient_attention_backend == 'torch': - x = torch.nn.functional.scaled_dot_product_attention( - q, k, v, is_causal=attn_mask is not None, dropout_p=p) - else: - x = ops.memory_efficient_attention(q, k, v, attn_mask, p=p) - else: - # We include the dot product as float32, for consistency - # with the other implementations that include that step - # as part of the attention. Note that when using `autocast`, - # the einsums would be done as bfloat16, but the softmax - # would be done as bfloat16, so `attention_as_float32` will - # extend a bit the range of operations done in float32, - # although this should make no difference. - q = q / q.shape[-1] ** 0.5 - key_layout = layout.replace('t', 'k') - query_layout = layout - if self._is_streaming and self.safe_streaming and q.device.type == 'cuda': - with torch.autocast(device_type=q.device.type, dtype=torch.float32): - pre_w = torch.einsum(f"{query_layout},{key_layout}-> b h t k", q, k) - else: - pre_w = torch.einsum(f"{query_layout},{key_layout}-> b h t k", q, k) - if attn_mask is not None: - pre_w = pre_w + attn_mask - w = torch.softmax(pre_w, dim=-1) - w = F.dropout(w, self.dropout, training=self.training).to(v) - # Key and value have the same format. - x = torch.einsum(f"b h t k, {key_layout} -> {layout}", w, v) - x = x.to(dtype) - x = rearrange(x, f"{layout} -> b t (h d)", h=self.num_heads) - x = self.out_proj(x) - else: - key, value = self._complete_kv(key, value) - if self.attention_as_float32: - query, key, value = [x.float() for x in [query, key, value]] - x, _ = self.mha( - query, key, value, key_padding_mask, - need_weights, attn_mask, average_attn_weights) - x = x.to(dtype) - - return x, None - - -class StreamingTransformerLayer(nn.TransformerEncoderLayer): - """TransformerLayer with Streaming / Causal support. - This also integrates cross_attention, when passing `cross_attention=True`, - rather than having two separate classes like in PyTorch. - - Args: - d_model (int): Dimension of the data. - num_heads (int): Number of heads. 
- dim_feedforward (int): Intermediate dimension of FF module. - dropout (float): Dropout both for MHA and FF. - bias_ff (bool): Use bias for FF. - bias_attn (bool): Use bias for MHA. - causal (bool): Causal mask applied automatically. - past_context (int, optional): Receptive field for the causal mask, infinite if None. - custom (bool): Use custom MHA implementation, for testing / benchmarking. - memory_efficient (bool): Use xformers based memory efficient attention. - attention_as_float32 (bool): Perform the attention as float32 - (especially important with memory_efficient as autocast won't do this automatically). - qk_layer_norm (bool): Layer normalization applied to queries and keys before dot product in attention. - qk_layer_norm_cross (bool): Same for the cross attention. - cross_attention (bool): If True, expect to get secondary input for cross-attention. - Cross attention will use the default MHA, as it typically won't require - special treatment. - layer_scale (float, optional): If not None, LayerScale will be used with - the given value as initial scale. - rope (`RotaryEmbedding`, optional): Rope embedding to use. - attention_dropout (float, optional): If not None, separate the value of the dimension dropout - in FFN and of the attention dropout. - kv_repeat (int): If > 1, will repeat keys and queries multiple times (need to divide num_heads). - This will lead to faster decoding time on A100 or other GPUs with tensorcore. - device (torch.device, optional): Device on which to initialize. - dtype (torch.dtype, optional): dtype to use. - **kwargs: See `nn.TransformerEncoderLayer`. - """ - def __init__(self, d_model: int, num_heads: int, dim_feedforward: int = 2048, dropout: float = 0.1, - bias_ff: bool = True, bias_attn: bool = True, causal: bool = False, - past_context: tp.Optional[int] = None, custom: bool = False, - memory_efficient: bool = False, attention_as_float32: bool = False, - qk_layer_norm: bool = False, qk_layer_norm_cross: bool = False, - cross_attention: bool = False, layer_scale: tp.Optional[float] = None, - rope: tp.Optional[RotaryEmbedding] = None, attention_dropout: tp.Optional[float] = None, - kv_repeat: int = 1, norm: str = 'layer_norm', device=None, dtype=None, **kwargs): - super().__init__(d_model, num_heads, dim_feedforward, dropout, - device=device, dtype=dtype, batch_first=True, **kwargs) - factory_kwargs = {'device': device, 'dtype': dtype} - # Redefine self_attn to our streaming multi-head attention - attn_kwargs: tp.Dict[str, tp.Any] = { - 'embed_dim': d_model, - 'num_heads': num_heads, - 'dropout': dropout if attention_dropout is None else attention_dropout, - 'bias': bias_attn, - 'custom': custom, - 'memory_efficient': memory_efficient, - 'attention_as_float32': attention_as_float32, - } - self.self_attn: StreamingMultiheadAttention = StreamingMultiheadAttention( - causal=causal, past_context=past_context, rope=rope, qk_layer_norm=qk_layer_norm, - kv_repeat=kv_repeat, **attn_kwargs, **factory_kwargs) # type: ignore - # Redefine feedforward layers to expose bias parameter - self.linear1 = nn.Linear(d_model, dim_feedforward, bias=bias_ff, **factory_kwargs) - self.linear2 = nn.Linear(dim_feedforward, d_model, bias=bias_ff, **factory_kwargs) - - self.layer_scale_1: nn.Module - self.layer_scale_2: nn.Module - if layer_scale is None: - self.layer_scale_1 = nn.Identity() - self.layer_scale_2 = nn.Identity() - else: - self.layer_scale_1 = LayerScale(d_model, layer_scale, **factory_kwargs) - self.layer_scale_2 = LayerScale(d_model, layer_scale, 
**factory_kwargs) - - self.cross_attention: tp.Optional[nn.Module] = None - if cross_attention: - self.cross_attention = StreamingMultiheadAttention( - cross_attention=True, qk_layer_norm=qk_layer_norm_cross, - **attn_kwargs, **factory_kwargs) - # Norm and dropout - self.dropout_cross = nn.Dropout(dropout) - # eps value matching that used in PyTorch reference implementation. - self.norm_cross = nn.LayerNorm(d_model, eps=1e-5, **factory_kwargs) - self.layer_scale_cross: nn.Module - if layer_scale is None: - self.layer_scale_cross = nn.Identity() - else: - self.layer_scale_cross = LayerScale(d_model, layer_scale, **factory_kwargs) - self.norm1 = create_norm_fn(norm, d_model, **factory_kwargs) # type: ignore - self.norm2 = create_norm_fn(norm, d_model, **factory_kwargs) # type: ignore - - def _cross_attention_block(self, src: torch.Tensor, - cross_attention_src: torch.Tensor) -> torch.Tensor: - assert self.cross_attention is not None - # queries are from src, keys and values from cross_attention_src. - x = self.cross_attention( - src, cross_attention_src, cross_attention_src, need_weights=False)[0] - return self.dropout_cross(x) # type: ignore - - def forward(self, src: torch.Tensor, src_mask: tp.Optional[torch.Tensor] = None, # type: ignore - src_key_padding_mask: tp.Optional[torch.Tensor] = None, - cross_attention_src: tp.Optional[torch.Tensor] = None): - if self.cross_attention is None: - assert cross_attention_src is None - else: - assert cross_attention_src is not None - x = src - if self.norm_first: - x = x + self.layer_scale_1( - self._sa_block(self.norm1(x), src_mask, src_key_padding_mask)) - if cross_attention_src is not None: - x = x + self.layer_scale_cross( - self._cross_attention_block( - self.norm_cross(x), cross_attention_src)) - x = x + self.layer_scale_2(self._ff_block(self.norm2(x))) - else: - x = self.norm1(x + self.layer_scale_1( - self._sa_block(x, src_mask, src_key_padding_mask))) - if cross_attention_src is not None: - x = self.norm_cross( - x + self.layer_scale_cross( - self._cross_attention_block(src, cross_attention_src))) - x = self.norm2(x + self.layer_scale_2(self._ff_block(x))) - return x - - -class StreamingTransformer(StreamingModule): - """Transformer with Streaming / Causal support. - - Args: - d_model (int): Dimension of the data. - num_heads (int): Number of heads. - dim_feedforward (int): Intermediate dimension of FF module. - dropout (float): Dropout both for MHA and FF. - bias_ff (bool): Use bias for FF. - bias_attn (bool): Use bias for MHA. - causal (bool): Causal mask applied automatically. - past_context (int, optional): Receptive field for the causal mask, infinite if None. - custom (bool): Use custom MHA implementation, for testing / benchmarking. - memory_efficient (bool): Use xformers based memory efficient attention. - attention_as_float32 (bool): Perform the attention as float32 - (especially important with memory_efficient as autocast won't do this automatically). - cross_attention (bool): If True, expect to get secondary input for cross-attention. - layer_scale (float, optional): If not None, LayerScale will be used - with the given value as initial scale. - positional_embedding (str): Positional embedding strategy (sin, rope, or sin_rope). - max_period (float): Maximum period of the time embedding. - positional_scale (float): Scale of positional embedding, set to 0 to deactivate. - xpos (bool): Apply xpos exponential decay to positional embedding (rope only). - lr (float, optional): learning rate override through the `make_optim_group` API. 
- weight_decay (float, optional): Weight_decay override through the `make_optim_group` API. - layer_class: (subclass of `StreamingTransformerLayer): class to use - to initialize the layers, allowing further customization outside of AudioCraft. - checkpointing (str): Checkpointing strategy to reduce memory usage. - No checkpointing if set to 'none'. Per layer checkpointing using PyTorch - if set to 'torch' (entire layer checkpointed, i.e. linears are evaluated twice, - minimal memory usage, but maximal runtime). Finally, `xformers_default` provide - a policy for opting-out some operations of the checkpointing like - linear layers and attention, providing a middle ground between speed and memory. - device (torch.device, optional): Device on which to initialize. - dtype (torch.dtype, optional): dtype to use. - **kwargs: See `nn.TransformerEncoderLayer`. - """ - def __init__(self, d_model: int, num_heads: int, num_layers: int, dim_feedforward: int = 2048, - dropout: float = 0.1, bias_ff: bool = True, bias_attn: bool = True, - causal: bool = False, past_context: tp.Optional[int] = None, - custom: bool = False, memory_efficient: bool = False, attention_as_float32: bool = False, - cross_attention: bool = False, layer_scale: tp.Optional[float] = None, - positional_embedding: str = 'sin', max_period: float = 10_000, positional_scale: float = 1., - xpos: bool = False, lr: tp.Optional[float] = None, weight_decay: tp.Optional[float] = None, - layer_class: tp.Type[StreamingTransformerLayer] = StreamingTransformerLayer, - checkpointing: str = 'none', device=None, dtype=None, **kwargs): - super().__init__() - assert d_model % num_heads == 0 - - self.positional_embedding = positional_embedding - self.max_period = max_period - self.positional_scale = positional_scale - self.weight_decay = weight_decay - self.lr = lr - - assert positional_embedding in ['sin', 'rope', 'sin_rope'] - self.rope: tp.Optional[RotaryEmbedding] = None - if self.positional_embedding in ['rope', 'sin_rope']: - assert _is_custom(custom, memory_efficient) - self.rope = RotaryEmbedding(d_model // num_heads, max_period=max_period, - xpos=xpos, scale=positional_scale, device=device) - - self.checkpointing = checkpointing - - assert checkpointing in ['none', 'torch', 'xformers_default', 'xformers_mm'] - if self.checkpointing.startswith('xformers'): - _verify_xformers_internal_compat() - - self.layers = nn.ModuleList() - for idx in range(num_layers): - self.layers.append( - layer_class( - d_model=d_model, num_heads=num_heads, dim_feedforward=dim_feedforward, - dropout=dropout, bias_ff=bias_ff, bias_attn=bias_attn, - causal=causal, past_context=past_context, custom=custom, - memory_efficient=memory_efficient, attention_as_float32=attention_as_float32, - cross_attention=cross_attention, layer_scale=layer_scale, rope=self.rope, - device=device, dtype=dtype, **kwargs)) - - if self.checkpointing != 'none': - for layer in self.layers: - # see audiocraft/optim/fsdp.py, magic signal to indicate this requires fixing the - # backward hook inside of FSDP... 
- layer._magma_checkpointed = True # type: ignore - assert layer.layer_drop == 0., "Need further checking" # type: ignore - - def _apply_layer(self, layer, *args, **kwargs): - method = self.checkpointing - if method == 'none': - return layer(*args, **kwargs) - elif method == 'torch': - return torch_checkpoint(layer, *args, use_reentrant=False, **kwargs) - elif method.startswith('xformers'): - from xformers.checkpoint_fairinternal import checkpoint, _get_default_policy - if method == 'xformers_default': - # those operations will be saved, and not recomputed. - # According to Francisco we can get smarter policies but this is a good start. - allow_list = [ - "xformers.efficient_attention_forward_cutlass.default", - "xformers_flash.flash_fwd.default", - "aten.addmm.default", - "aten.mm.default", - ] - elif method == 'xformers_mm': - # those operations will be saved, and not recomputed. - # According to Francisco we can get smarter policies but this is a good start. - allow_list = [ - "aten.addmm.default", - "aten.mm.default", - ] - else: - raise ValueError(f"xformers checkpointing xformers policy {method} is not known.") - policy_fn = _get_default_policy(allow_list) - return checkpoint(layer, *args, policy_fn=policy_fn, **kwargs) - else: - raise ValueError(f"Checkpointing method {method} is unknown.") - - def forward(self, x: torch.Tensor, *args, **kwargs): - B, T, C = x.shape - - if 'offsets' in self._streaming_state: - offsets = self._streaming_state['offsets'] - else: - offsets = torch.zeros(B, dtype=torch.long, device=x.device) - - if self.positional_embedding in ['sin', 'sin_rope']: - positions = torch.arange(T, device=x.device).view(1, -1, 1) - positions = positions + offsets.view(-1, 1, 1) - pos_emb = create_sin_embedding(positions, C, max_period=self.max_period, dtype=x.dtype) - x = x + self.positional_scale * pos_emb - - for layer in self.layers: - x = self._apply_layer(layer, x, *args, **kwargs) - - if self._is_streaming: - self._streaming_state['offsets'] = offsets + T - - return x - - def make_optim_group(self): - group = {"params": list(self.parameters())} - if self.lr is not None: - group["lr"] = self.lr - if self.weight_decay is not None: - group["weight_decay"] = self.weight_decay - return group - - -# special attention related function - -def _verify_xformers_memory_efficient_compat(): - try: - from xformers.ops import memory_efficient_attention, LowerTriangularMask # noqa - except ImportError: - raise ImportError( - "xformers is not installed. Please install it and try again.\n" - "To install on AWS and Azure, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='8.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n" - "To install on FAIR Cluster, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='6.0;7.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n") - - -def _verify_xformers_internal_compat(): - try: - from xformers.checkpoint_fairinternal import checkpoint, _get_default_policy # noqa - except ImportError: - raise ImportError( - "Francisco's fairinternal xformers is not installed. 
Please install it and try again.\n" - "To install on AWS and Azure, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='8.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n" - "To install on FAIR Cluster, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='6.0;7.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n") - - -def _is_custom(custom: bool, memory_efficient: bool): - return custom or memory_efficient diff --git a/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/utils/export.py b/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/utils/export.py deleted file mode 100644 index b513b52267f7bf5aae09282c15b0a2e20c8a8fee..0000000000000000000000000000000000000000 --- a/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/utils/export.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Utility to export a training checkpoint to a lightweight release checkpoint. -""" - -from pathlib import Path -import typing as tp - -from omegaconf import OmegaConf, DictConfig -import torch - - -def _clean_lm_cfg(cfg: DictConfig): - OmegaConf.set_struct(cfg, False) - # This used to be set automatically in the LM solver, need a more robust solution - # for the future. - cfg['transformer_lm']['card'] = 2048 - cfg['transformer_lm']['n_q'] = 4 - # Experimental params no longer supported. - bad_params = ['spectral_norm_attn_iters', 'spectral_norm_ff_iters', - 'residual_balancer_attn', 'residual_balancer_ff', 'layer_drop'] - for name in bad_params: - del cfg['transformer_lm'][name] - OmegaConf.set_struct(cfg, True) - return cfg - - -def export_encodec(checkpoint_path: tp.Union[Path, str], out_folder: tp.Union[Path, str]): - sig = Path(checkpoint_path).parent.name - assert len(sig) == 8, "Not a valid Dora signature" - pkg = torch.load(checkpoint_path, 'cpu') - new_pkg = { - 'best_state': pkg['ema']['state']['model'], - 'xp.cfg': OmegaConf.to_yaml(pkg['xp.cfg']), - } - out_file = Path(out_folder) / f'{sig}.th' - torch.save(new_pkg, out_file) - return out_file - - -def export_lm(checkpoint_path: tp.Union[Path, str], out_folder: tp.Union[Path, str]): - sig = Path(checkpoint_path).parent.name - assert len(sig) == 8, "Not a valid Dora signature" - pkg = torch.load(checkpoint_path, 'cpu') - new_pkg = { - 'best_state': pkg['fsdp_best_state']['model'], - 'xp.cfg': OmegaConf.to_yaml(_clean_lm_cfg(pkg['xp.cfg'])) - } - out_file = Path(out_folder) / f'{sig}.th' - torch.save(new_pkg, out_file) - return out_file diff --git a/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/utils/extend.py b/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/utils/extend.py deleted file mode 100644 index 5c919a5cb740e14ca8751d68a0ab16d9400d35d6..0000000000000000000000000000000000000000 --- a/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/utils/extend.py +++ /dev/null @@ -1,332 +0,0 @@ -from tabnanny import verbose -import torch -import math -from audiocraft.models import MusicGen -import numpy as np -from PIL import Image, ImageDraw, ImageFont, ImageColor -import string -import tempfile -import os -import textwrap -import requests -from io import BytesIO -from huggingface_hub import hf_hub_download -import librosa - - -INTERRUPTING = False - -def separate_audio_segments(audio, segment_duration=30, overlap=1): - sr, audio_data = audio[0], audio[1] - - segment_samples = sr * 
segment_duration - total_samples = max(min((len(audio_data) // segment_samples), 25), 0) - overlap_samples = sr * overlap - - segments = [] - start_sample = 0 - # handle the case where the audio is shorter than the segment duration - if total_samples == 0: - total_samples = 1 - segment_samples = len(audio_data) - overlap_samples = 0 - while total_samples >= segment_samples: - # Collect the segment - # the end sample is the start sample plus the segment samples, - # the start sample, after 0, is minus the overlap samples to account for the overlap - end_sample = start_sample + segment_samples - segment = audio_data[start_sample:end_sample] - segments.append((sr, segment)) - - start_sample += segment_samples - overlap_samples - total_samples -= segment_samples - - # Collect the final segment - if total_samples > 0: - segment = audio_data[-segment_samples:] - segments.append((sr, segment)) - print(f"separate_audio_segments: {len(segments)} segments of length {segment_samples // sr} seconds") - return segments - -def generate_music_segments(text, melody, seed, MODEL, duration:int=10, overlap:int=1, segment_duration:int=30, prompt_index:int=0, harmony_only:bool= False): - # generate audio segments - melody_segments = separate_audio_segments(melody, segment_duration, 0) - - # Create lists to store the melody tensors for each segment - melodys = [] - output_segments = [] - last_chunk = [] - text += ", seed=" + str(seed) - prompt_segment = None - # prevent hacking - duration = min(duration, 720) - overlap = min(overlap, 15) - - # Calculate the total number of segments - total_segments = max(math.ceil(duration / segment_duration),1) - #calculate duration loss from segment overlap - duration_loss = max(total_segments - 1,0) * math.ceil(overlap / 2) - #calc excess duration - excess_duration = segment_duration - (total_segments * segment_duration - duration) - print(f"total Segments to Generate: {total_segments} for {duration} seconds. Each segment is {segment_duration} seconds. Excess {excess_duration} Overlap Loss {duration_loss}") - duration += duration_loss - while excess_duration + duration_loss > segment_duration: - total_segments += 1 - #calculate duration loss from segment overlap - duration_loss += math.ceil(overlap / 2) - #calc excess duration - excess_duration = segment_duration - (total_segments * segment_duration - duration) - print(f"total Segments to Generate: {total_segments} for {duration} seconds. Each segment is {segment_duration} seconds. 
Excess {excess_duration} Overlap Loss {duration_loss}") - if excess_duration + duration_loss > segment_duration: - duration += duration_loss - duration_loss = 0 - total_segments = min(total_segments, (720 // segment_duration)) - - # If melody_segments is shorter than total_segments, repeat the segments until the total_segments is reached - if len(melody_segments) < total_segments: - #fix melody_segments - for i in range(total_segments - len(melody_segments)): - segment = melody_segments[i] - melody_segments.append(segment) - print(f"melody_segments: {len(melody_segments)} fixed") - - # Iterate over the segments to create list of Meldoy tensors - for segment_idx in range(total_segments): - if INTERRUPTING: - return [], duration - print(f"segment {segment_idx + 1} of {total_segments} \r") - - if harmony_only: - # REMOVE PERCUSION FROM MELODY - # Apply HPSS using librosa - verse_harmonic, verse_percussive = librosa.effects.hpss(melody_segments[segment_idx][1]) - # Convert the separated components back to torch.Tensor - #harmonic_tensor = torch.from_numpy(verse_harmonic) - #percussive_tensor = torch.from_numpy(verse_percussive) - sr, verse = melody_segments[segment_idx][0], torch.from_numpy(verse_harmonic).to(MODEL.device).float().t().unsqueeze(0) - else: - sr, verse = melody_segments[segment_idx][0], torch.from_numpy(melody_segments[segment_idx][1]).to(MODEL.device).float().t().unsqueeze(0) - - print(f"shape:{verse.shape} dim:{verse.dim()}") - if verse.dim() == 2: - verse = verse[None] - verse = verse[..., :int(sr * MODEL.lm.cfg.dataset.segment_duration)] - - # Append the segment to the melodys list - melodys.append(verse) - - torch.manual_seed(seed) - - # If user selects a prompt segment, generate a new prompt segment to use on all segments - #default to the first segment for prompt conditioning - prompt_verse = melodys[0] - if prompt_index > 0: - # Get a prompt segment from the selected verse, normally the first verse - prompt_verse = melodys[prompt_index if prompt_index <= (total_segments - 1) else (total_segments -1)] - - # set the prompt segment MODEL generation params - MODEL.set_generation_params( - use_sampling=True, - top_k=MODEL.generation_params["top_k"], - top_p=MODEL.generation_params["top_p"], - temperature=MODEL.generation_params["temp"], - cfg_coef=MODEL.generation_params["cfg_coef"], - duration=segment_duration, - two_step_cfg=False, - rep_penalty=0.5 - ) - # Generate a new prompt segment. 
This will be applied to all segments for consistency - print(f"Generating New Prompt Segment: {text} from verse {prompt_index}\r") - prompt_segment = MODEL.generate_with_all( - descriptions=[text], - melody_wavs=prompt_verse, - sample_rate=sr, - progress=False, - prompt=None, - ) - - for idx, verse in enumerate(melodys): - if INTERRUPTING: - return output_segments, duration - - print(f'Segment duration: {segment_duration}, duration: {duration}, overlap: {overlap} Overlap Loss: {duration_loss}') - # Compensate for the length of final segment - if ((idx + 1) == len(melodys)) or (duration < segment_duration): - mod_duration = max(min(duration, segment_duration),1) - print(f'Modify verse length, duration: {duration}, overlap: {overlap} Overlap Loss: {duration_loss} to mod duration: {mod_duration}') - MODEL.set_generation_params( - use_sampling=True, - top_k=MODEL.generation_params["top_k"], - top_p=MODEL.generation_params["top_p"], - temperature=MODEL.generation_params["temp"], - cfg_coef=MODEL.generation_params["cfg_coef"], - duration=mod_duration, - two_step_cfg=False, - rep_penalty=0.5 - ) - try: - # get last chunk - verse = verse[:, :, -mod_duration*MODEL.sample_rate:] - prompt_segment = prompt_segment[:, :, -mod_duration*MODEL.sample_rate:] - except: - # get first chunk - verse = verse[:, :, :mod_duration*MODEL.sample_rate] - prompt_segment = prompt_segment[:, :, :mod_duration*MODEL.sample_rate] - - - print(f"Generating New Melody Segment {idx + 1}: {text}\r") - output = MODEL.generate_with_all( - descriptions=[text], - melody_wavs=verse, - sample_rate=sr, - progress=False, - prompt=prompt_segment, - ) - # If user selects a prompt segment, use the prompt segment for all segments - # Otherwise, use the previous segment as the prompt - if prompt_index < 0: - prompt_segment = output - - # Append the generated output to the list of segments - #output_segments.append(output[:, :segment_duration]) - output_segments.append(output) - print(f"output_segments: {len(output_segments)}: shape: {output.shape} dim {output.dim()}") - #track duration - if duration > segment_duration: - duration -= segment_duration - return output_segments, excess_duration - -def save_image(image): - """ - Saves a PIL image to a temporary file and returns the file path. - - Parameters: - - image: PIL.Image - The PIL image object to be saved. - - Returns: - - str or None: The file path where the image was saved, - or None if there was an error saving the image. - - """ - temp_dir = tempfile.gettempdir() - temp_file = tempfile.NamedTemporaryFile(suffix=".png", dir=temp_dir, delete=False) - temp_file.close() - file_path = temp_file.name - - try: - image.save(file_path) - - except Exception as e: - print("Unable to save image:", str(e)) - return None - finally: - return file_path - -def hex_to_rgba(hex_color): - try: - # Convert hex color to RGBA tuple - rgba = ImageColor.getcolor(hex_color, "RGBA") - except ValueError: - # If the hex color is invalid, default to yellow - rgba = (255,255,0,255) - return rgba - -def load_font(font_name, font_size=16): - """ - Load a font using the provided font name and font size. - - Parameters: - font_name (str): The name of the font to load. Can be a font name recognized by the system, a URL to download the font file, - a local file path, or a Hugging Face model hub identifier. - font_size (int, optional): The size of the font. Default is 16. - - Returns: - ImageFont.FreeTypeFont: The loaded font object. 
- - Notes: - This function attempts to load the font using various methods until a suitable font is found. If the provided font_name - cannot be loaded, it falls back to a default font. - - The font_name can be one of the following: - - A font name recognized by the system, which can be loaded using ImageFont.truetype. - - A URL pointing to the font file, which is downloaded using requests and then loaded using ImageFont.truetype. - - A local file path to the font file, which is loaded using ImageFont.truetype. - - A Hugging Face model hub identifier, which downloads the font file from the Hugging Face model hub using hf_hub_download - and then loads it using ImageFont.truetype. - - Example: - font = load_font("Arial.ttf", font_size=20) - """ - font = None - if not "http" in font_name: - try: - font = ImageFont.truetype(font_name, font_size) - except (FileNotFoundError, OSError): - print("Font not found. Using Hugging Face download..\n") - - if font is None: - try: - font_path = ImageFont.truetype(hf_hub_download(repo_id=os.environ.get('SPACE_ID', ''), filename="assets/" + font_name, repo_type="space"), encoding="UTF-8") - font = ImageFont.truetype(font_path, font_size) - except (FileNotFoundError, OSError): - print("Font not found. Trying to download from local assets folder...\n") - if font is None: - try: - font = ImageFont.truetype("assets/" + font_name, font_size) - except (FileNotFoundError, OSError): - print("Font not found. Trying to download from URL...\n") - - if font is None: - try: - req = requests.get(font_name) - font = ImageFont.truetype(BytesIO(req.content), font_size) - except (FileNotFoundError, OSError): - print(f"Font not found: {font_name} Using default font\n") - if font: - print(f"Font loaded {font.getname()}") - else: - font = ImageFont.load_default() - return font - - -def add_settings_to_image(title: str = "title", description: str = "", width: int = 768, height: int = 512, background_path: str = "", font: str = "arial.ttf", font_color: str = "#ffffff"): - # Create a new RGBA image with the specified dimensions - image = Image.new("RGBA", (width, height), (255, 255, 255, 0)) - # If a background image is specified, open it and paste it onto the image - if background_path == "": - background = Image.new("RGBA", (width, height), (255, 255, 255, 255)) - else: - background = Image.open(background_path).convert("RGBA") - - #Convert font color to RGBA tuple - font_color = hex_to_rgba(font_color) - - # Calculate the center coordinates for placing the text - text_x = width // 2 - text_y = height // 2 - # Draw the title text at the center top - title_font = load_font(font, 26) # Replace with your desired font and size - - title_text = '\n'.join(textwrap.wrap(title, width // 12)) - title_x, title_y, title_text_width, title_text_height = title_font.getbbox(title_text) - title_x = max(text_x - (title_text_width // 2), title_x, 0) - title_y = text_y - (height // 2) + 10 # 10 pixels padding from the top - title_draw = ImageDraw.Draw(image) - title_draw.multiline_text((title_x, title_y), title, fill=font_color, font=title_font, align="center") - # Draw the description text two lines below the title - description_font = load_font(font, 16) # Replace with your desired font and size - description_text = '\n'.join(textwrap.wrap(description, width // 12)) - description_x, description_y, description_text_width, description_text_height = description_font.getbbox(description_text) - description_x = max(text_x - (description_text_width // 2), description_x, 0) - description_y = title_y + 
title_text_height + 20 # 20 pixels spacing between title and description - description_draw = ImageDraw.Draw(image) - description_draw.multiline_text((description_x, description_y), description_text, fill=font_color, font=description_font, align="center") - # Calculate the offset to center the image on the background - bg_w, bg_h = background.size - offset = ((bg_w - width) // 2, (bg_h - height) // 2) - # Paste the image onto the background - background.paste(image, offset, mask=image) - - # Save the image and return the file path - return save_image(background) \ No newline at end of file diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/utils/template.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/utils/template.ts deleted file mode 100644 index 87360c88fe6c655fff39f7947da9c6b345402a60..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/utils/template.ts +++ /dev/null @@ -1,28 +0,0 @@ -import type { Message } from "$lib/types/Message"; -import type { LegacyParamatersTemplateInput } from "$lib/types/Template"; -import Handlebars from "handlebars"; - -Handlebars.registerHelper("ifUser", function (this: Pick, options) { - if (this.from == "user") return options.fn(this); -}); - -Handlebars.registerHelper( - "ifAssistant", - function (this: Pick, options) { - if (this.from == "assistant") return options.fn(this); - } -); - -export function compileTemplate(input: string, model: LegacyParamatersTemplateInput) { - const template = Handlebars.compile(input, { - knownHelpers: { ifUser: true, ifAssistant: true }, - knownHelpersOnly: true, - noEscape: true, - strict: true, - preventIndent: true, - }); - - return function render(inputs: T, options?: RuntimeOptions) { - return template({ ...model, ...inputs }, options); - }; -} diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/models.py b/spaces/AchyuthGamer/OpenGPT/g4f/models.py deleted file mode 100644 index b42477036ba65315d4e867e5f9df3c4b0ed901d3..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/models.py +++ /dev/null @@ -1,274 +0,0 @@ -from __future__ import annotations -from dataclasses import dataclass -from .typing import Union -from .Provider import BaseProvider, RetryProvider -from .Provider import ( - AItianhuSpace, - ChatgptLogin, - ChatgptDemo, - ChatgptDuo, - Vitalentum, - ChatgptAi, - ChatForAi, - AItianhu, - ChatBase, - Liaobots, - Yqcloud, - Myshell, - FreeGpt, - Vercel, - DeepAi, - Aichat, - GPTalk, - GptGod, - AiAsk, - GptGo, - Ylokh, - Bard, - Aibn, - Bing, - You, - H2o -) - -@dataclass(unsafe_hash=True) -class Model: - name: str - base_provider: str - best_provider: Union[type[BaseProvider], RetryProvider] = None - -default = Model( - name = "", - base_provider = "", - best_provider = RetryProvider([ - Bing, # Not fully GPT 3 or 4 - Yqcloud, # Answers short questions in chinese - ChatBase, # Don't want to answer creatively - ChatgptDuo, # Include search results - Aibn, Aichat, ChatForAi, ChatgptAi, ChatgptLogin, DeepAi, FreeGpt, GptGo, Myshell, Ylokh, - ]) -) - -# GPT-3.5 too, but all providers supports long responses and a custom timeouts -gpt_35_long = Model( - name = 'gpt-3.5-turbo', - base_provider = 'openai', - best_provider = RetryProvider([ - AiAsk, Aibn, Aichat, ChatForAi, ChatgptAi, ChatgptDemo, ChatgptDuo, - FreeGpt, GptGo, Liaobots, Myshell, Vitalentum, Ylokh, You, Yqcloud, - GPTalk, GptGod - ]) -) - -# GPT-3.5 / GPT-4 -gpt_35_turbo = Model( - name = 'gpt-3.5-turbo', - base_provider = 'openai', - best_provider = RetryProvider([ - DeepAi, 
ChatgptLogin, ChatgptAi, GptGo, AItianhu, Aichat, AItianhuSpace, Myshell, Aibn, ChatForAi, FreeGpt, Ylokh - ]) -) - -gpt_4 = Model( - name = 'gpt-4', - base_provider = 'openai', - best_provider = Bing -) - -# Bard -palm = Model( - name = 'palm', - base_provider = 'google', - best_provider = Bard) - -# H2o -falcon_7b = Model( - name = 'h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v3', - base_provider = 'huggingface', - best_provider = H2o) - -falcon_40b = Model( - name = 'h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v1', - base_provider = 'huggingface', - best_provider = H2o) - -llama_13b = Model( - name = 'h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-13b', - base_provider = 'huggingface', - best_provider = H2o) - -# Vercel -claude_instant_v1 = Model( - name = 'claude-instant-v1', - base_provider = 'anthropic', - best_provider = Vercel) - -claude_v1 = Model( - name = 'claude-v1', - base_provider = 'anthropic', - best_provider = Vercel) - -claude_v2 = Model( - name = 'claude-v2', - base_provider = 'anthropic', - best_provider = Vercel) - -command_light_nightly = Model( - name = 'command-light-nightly', - base_provider = 'cohere', - best_provider = Vercel) - -command_nightly = Model( - name = 'command-nightly', - base_provider = 'cohere', - best_provider = Vercel) - -gpt_neox_20b = Model( - name = 'EleutherAI/gpt-neox-20b', - base_provider = 'huggingface', - best_provider = Vercel) - -oasst_sft_1_pythia_12b = Model( - name = 'OpenAssistant/oasst-sft-1-pythia-12b', - base_provider = 'huggingface', - best_provider = Vercel) - -oasst_sft_4_pythia_12b_epoch_35 = Model( - name = 'OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5', - base_provider = 'huggingface', - best_provider = Vercel) - -santacoder = Model( - name = 'bigcode/santacoder', - base_provider = 'huggingface', - best_provider = Vercel) - -bloom = Model( - name = 'bigscience/bloom', - base_provider = 'huggingface', - best_provider = Vercel) - -flan_t5_xxl = Model( - name = 'google/flan-t5-xxl', - base_provider = 'huggingface', - best_provider = Vercel) - -code_davinci_002 = Model( - name = 'code-davinci-002', - base_provider = 'openai', - best_provider = Vercel) - -gpt_35_turbo_16k = Model( - name = 'gpt-3.5-turbo-16k', - base_provider = 'openai', - best_provider = Vercel) - -gpt_35_turbo_16k_0613 = Model( - name = 'gpt-3.5-turbo-16k-0613', - base_provider = 'openai') - -gpt_35_turbo_0613 = Model( - name = 'gpt-3.5-turbo-0613', - base_provider = 'openai' -) - -gpt_4_0613 = Model( - name = 'gpt-4-0613', - base_provider = 'openai' -) - -gpt_4_32k = Model( - name = 'gpt-4-32k', - base_provider = 'openai' -) - -gpt_4_32k_0613 = Model( - name = 'gpt-4-32k-0613', - base_provider = 'openai' -) - -text_ada_001 = Model( - name = 'text-ada-001', - base_provider = 'openai', - best_provider = Vercel) - -text_babbage_001 = Model( - name = 'text-babbage-001', - base_provider = 'openai', - best_provider = Vercel) - -text_curie_001 = Model( - name = 'text-curie-001', - base_provider = 'openai', - best_provider = Vercel) - -text_davinci_002 = Model( - name = 'text-davinci-002', - base_provider = 'openai', - best_provider = Vercel) - -text_davinci_003 = Model( - name = 'text-davinci-003', - base_provider = 'openai', - best_provider = Vercel) - -llama13b_v2_chat = Model( - name = 'replicate:a16z-infra/llama13b-v2-chat', - base_provider = 'replicate', - best_provider = Vercel) - -llama7b_v2_chat = Model( - name = 'replicate:a16z-infra/llama7b-v2-chat', - base_provider = 'replicate', - best_provider = Vercel) - - -class ModelUtils: - convert: dict[str, Model] = { - # 
gpt-3.5 - 'gpt-3.5-turbo' : gpt_35_turbo, - 'gpt-3.5-turbo-0613' : gpt_35_turbo_0613, - 'gpt-3.5-turbo-16k' : gpt_35_turbo_16k, - 'gpt-3.5-turbo-16k-0613' : gpt_35_turbo_16k_0613, - - # gpt-4 - 'gpt-4' : gpt_4, - 'gpt-4-0613' : gpt_4_0613, - 'gpt-4-32k' : gpt_4_32k, - 'gpt-4-32k-0613' : gpt_4_32k_0613, - - # Bard - 'palm2' : palm, - 'palm' : palm, - 'google' : palm, - 'google-bard' : palm, - 'google-palm' : palm, - 'bard' : palm, - - # H2o - 'falcon-40b' : falcon_40b, - 'falcon-7b' : falcon_7b, - 'llama-13b' : llama_13b, - - # Vercel - 'claude-instant-v1' : claude_instant_v1, - 'claude-v1' : claude_v1, - 'claude-v2' : claude_v2, - 'command-nightly' : command_nightly, - 'gpt-neox-20b' : gpt_neox_20b, - 'santacoder' : santacoder, - 'bloom' : bloom, - 'flan-t5-xxl' : flan_t5_xxl, - 'code-davinci-002' : code_davinci_002, - 'text-ada-001' : text_ada_001, - 'text-babbage-001' : text_babbage_001, - 'text-curie-001' : text_curie_001, - 'text-davinci-002' : text_davinci_002, - 'text-davinci-003' : text_davinci_003, - 'llama13b-v2-chat' : llama13b_v2_chat, - 'llama7b-v2-chat' : llama7b_v2_chat, - - 'oasst-sft-1-pythia-12b' : oasst_sft_1_pythia_12b, - 'oasst-sft-4-pythia-12b-epoch-3.5' : oasst_sft_4_pythia_12b_epoch_35, - 'command-light-nightly' : command_light_nightly, - } diff --git a/spaces/AfrodreamsAI/afrodreams/ex_app.py b/spaces/AfrodreamsAI/afrodreams/ex_app.py deleted file mode 100644 index 94fbdab3b3266feff108e8004028b9134cacb7af..0000000000000000000000000000000000000000 --- a/spaces/AfrodreamsAI/afrodreams/ex_app.py +++ /dev/null @@ -1,95 +0,0 @@ -import neural_style -import streamlit as st -import os -import random -import numpy as np -from PIL import Image, ImageEnhance -from io import BytesIO -#import streamlit_ext as ste #for download button not to rerun -from huggingface_hub import upload_file - -HF_TOKEN = os.environ.get("HF_TOKEN") - -st.set_page_config(layout="wide") -#Create two columns with different width -col1, col2 = st.columns( [0.8, 0.2]) -with col1: # To display the header text using css style - st.markdown(""" """, unsafe_allow_html=True) - st.markdown('

Upload your photo here...', unsafe_allow_html=True)
-    st.subheader("This app takes in your image and styles it with a unique african art.")
-
-#Add a header and expander in side bar
-st.sidebar.markdown('Afrodreams.AI', unsafe_allow_html=True)
-with st.sidebar.expander("About the App"):
-    st.write("""
-    This app takes in your image and styles it with a unique african art.""")
-
-
-#Add file uploader to allow users to upload photos
-uploaded_file = st.file_uploader("", type=['jpg','png','jpeg'])
-
-# add slider to side bar
-style_weight = st.slider("Select Style Weight", min_value=10, max_value=100, value=12)
-
-#Add 'before' and 'after' columns
-if uploaded_file is not None:
-    image = Image.open(uploaded_file)
-
-    col1, col2 = st.columns( [0.5, 0.5])
-    with col1:
-        st.markdown('Before',unsafe_allow_html=True)
-        st.image(image,width=300)
-
-    with col2:
-        st.markdown('After
',unsafe_allow_html=True) - - # add a button - run = st.button('Generate Art') - my_bar = st.progress(0) - params = neural_style.TransferParams() - params.gpu = 0#"c" - params.backend = "mkl" - params.image_size = 400 - params.content_image = uploaded_file - params.style_weight = style_weight - keep_style = False - if run==True: - # run image selection if keep style is false - if keep_style==False: - path = 'stylesv2' - styles = os.listdir(path) - params.style_image = path + '/' + random.choice(styles) - - st.session_state.submitted = True - with st.spinner('Wait for it...'): - neural_style.transfer(params) - - #display image when done. - with col2: - if 'submitted' in st.session_state: - result = Image.open('out.png') - st.image(result, width=300) - buf = BytesIO() - result.save(buf, format="png") - if len(os.listdir('generated_samples')) <= 10: - img_file_name = f"generated_samples/{str(len(os.listdir('generated_samples')))}.png" - - _ = upload_file(path_or_fileobj = 'out.png', - path_in_repo = img_file_name, - repo_id='AfrodreamsAI/afrodreams', - repo_type='space', - token=HF_TOKEN - ) - - byte_im = buf.getvalue() - #run =ste.download_button(button_text="Download Image", data=byte_im, download_filename='afrodreams.jpg', mime="image/png") - #keeping the current style by update the weight - keep_style = st.sidebar.checkbox("Keep current style") - - - - - - diff --git a/spaces/AlexWortega/food_calories/app.py b/spaces/AlexWortega/food_calories/app.py deleted file mode 100644 index 779d200948b5700d083debcc784294b6bc8b70ac..0000000000000000000000000000000000000000 --- a/spaces/AlexWortega/food_calories/app.py +++ /dev/null @@ -1,50 +0,0 @@ -import torch -from rudalle import get_tokenizer, get_vae -from rudalle.utils import seed_everything - -import sys -from rudolph.model.utils import get_i2t_attention_mask, get_t2t_attention_mask -from rudolph.model import get_rudolph_model, ruDolphModel, FP16Module -from rudolph.pipelines import generate_codebooks, self_reranking_by_image, self_reranking_by_text, show, generate_captions, generate_texts -from rudolph.pipelines import zs_clf - -import gradio as gr -from rudolph import utils -from PIL import Image - -device = 'cpu' -if device=='cuda': - half = True -else: - half = False -model = get_rudolph_model('350M', fp16=half, device=device) -model.load_state_dict(torch.load("awesomemodel__dalle_1500.pt",map_location=torch.device('cpu'))) -tokenizer = get_tokenizer() -vae = get_vae(dwt=False).to(device) - - -template = 'белков: ' - - - -# Download human-readable labels for ImageNet. 
- - - -def classify_image(inp): - print(type(inp)) - inp = Image.fromarray(inp) - texts = generate_captions(inp, tokenizer, model, vae, template=template, top_k=16, captions_num=1, bs=16, top_p=0.6, seed=43, temperature=0.8) - rp = texts[0].replace('белков','protein').replace('жиров','fat').replace('углеводов','carbs').replace('calories','ккал') - print(rp) - - - return rp - -image = gr.inputs.Image(shape=(128, 128)) -label = gr.outputs.Label(num_top_classes=3) - - -iface = gr.Interface(fn=classify_image, description="https://github.com/sberbank-ai/ru-dolph RuDoplh by SBER AI finetuned for a image2text task to predict food calories by https://t.me/lovedeathtransformers Alex Wortega", inputs=image, outputs="text",examples=[ - ['b9c277a3.jpeg']]) -iface.launch() diff --git a/spaces/AllAideas/SegmentacionVideo/utils/predict.py b/spaces/AllAideas/SegmentacionVideo/utils/predict.py deleted file mode 100644 index e4edb3d986d42fdd8dba8da9ab795350b3b2fa9f..0000000000000000000000000000000000000000 --- a/spaces/AllAideas/SegmentacionVideo/utils/predict.py +++ /dev/null @@ -1,104 +0,0 @@ -#from .custom_layers import TransformerEncoder, PositionalEmbedding -from .constants import MAX_SEQ_LENGTH, NUM_FEATURES, IMG_SIZE, CLASS_VOCAB -from huggingface_hub import from_pretrained_keras -from tensorflow import keras -from keras import layers -import numpy as np -import imageio -import cv2 - -#model = from_pretrained_keras("shivi/video-classification",custom_objects={"PositionalEmbedding":PositionalEmbedding,"TransformerEncoder": TransformerEncoder}) - -model = from_pretrained_keras("keras-io/video-transformers") - -""" -Below code is taken from the Video-Transformers example on keras-io by Sayak Paul -""" -def build_feature_extractor(): - feature_extractor = keras.applications.DenseNet121( - weights="imagenet", - include_top=False, - pooling="avg", - input_shape=(IMG_SIZE, IMG_SIZE, 3), - ) - preprocess_input = keras.applications.densenet.preprocess_input - - inputs = keras.Input((IMG_SIZE, IMG_SIZE, 3)) - preprocessed = preprocess_input(inputs) - - outputs = feature_extractor(preprocessed) - return keras.Model(inputs, outputs, name="feature_extractor") - - -feature_extractor = build_feature_extractor() - - - -def crop_center(frame): - center_crop_layer = layers.CenterCrop(IMG_SIZE, IMG_SIZE) - cropped = center_crop_layer(frame[None, ...]) - cropped = cropped.numpy().squeeze() - return cropped - -def load_video(path, max_frames=0): - cap = cv2.VideoCapture(path) - frames = [] - try: - while True: - ret, frame = cap.read() - if not ret: - break - frame = crop_center(frame) - frame = frame[:, :, [2, 1, 0]] - frames.append(frame) - - if len(frames) == max_frames: - break - finally: - cap.release() - return np.array(frames) - -def prepare_single_video(frames): - frame_features = np.zeros(shape=(1, MAX_SEQ_LENGTH, NUM_FEATURES), dtype="float32") - - # Pad shorter videos. - if len(frames) < MAX_SEQ_LENGTH: - diff = MAX_SEQ_LENGTH - len(frames) - padding = np.zeros((diff, IMG_SIZE, IMG_SIZE, 3)) - frames = np.concatenate(frames, padding) - - frames = frames[None, ...] - - # Extract features from the frames of the current video. 
- for i, batch in enumerate(frames): - video_length = batch.shape[0] - length = min(MAX_SEQ_LENGTH, video_length) - for j in range(length): - if np.mean(batch[j, :]) > 0.0: - frame_features[i, j, :] = feature_extractor.predict(batch[None, j, :]) - else: - frame_features[i, j, :] = 0.0 - - return frame_features - - -def predict_action(path): - frames = load_video(path) - frame_features = prepare_single_video(frames) - probabilities = model.predict(frame_features)[0] - confidences = {} - - for i in np.argsort(probabilities)[::-1]: - confidences[CLASS_VOCAB[i]] = float(probabilities[i]) - - gif_out = to_gif(frames[:MAX_SEQ_LENGTH]) - - print(confidences) - return confidences, gif_out - - -def to_gif(images): - converted_images = images.astype(np.uint8) - imageio.mimsave("animation.gif", converted_images, fps=10) - return "animation.gif" - diff --git a/spaces/Aloento/9Nine-PITS/app.py b/spaces/Aloento/9Nine-PITS/app.py deleted file mode 100644 index 90b8019a1d2a9af0ae4eef108a0e38193d898b7b..0000000000000000000000000000000000000000 --- a/spaces/Aloento/9Nine-PITS/app.py +++ /dev/null @@ -1,186 +0,0 @@ -import argparse - -import gradio as gr -import torch - -import commons -import utils -from models import SynthesizerTrn -from text import cleaned_text_to_sequence -from text.cleaners import clean_text -from text.symbols import symbols - - -# we use Kyubyong/g2p for demo instead of our internal g2p -# https://github.com/Kyubyong/g2p -def get_text(text, hps): - cleaned_text, lang = clean_text(text) - text_norm = cleaned_text_to_sequence(cleaned_text) - if hps.data.add_blank: - text_norm, lang = commons.intersperse_with_language_id(text_norm, lang, 0) - text_norm = torch.LongTensor(text_norm) - lang = torch.LongTensor(lang) - return text_norm, lang, cleaned_text - - -class GradioApp: - - def __init__(self, args): - self.hps = utils.get_hparams_from_file(args.config) - self.device = "cpu" - - self.net_g = SynthesizerTrn( - len(symbols), - self.hps.data.filter_length // 2 + 1, - self.hps.train.segment_size // - self.hps.data.hop_length, - midi_start=-5, - midi_end=75, - octave_range=24, - n_speakers=len(self.hps.data.speakers), - **self.hps.model - ).to(self.device) - - _ = self.net_g.eval() - _ = utils.load_checkpoint(args.checkpoint_path, model_g=self.net_g) - self.interface = self._gradio_interface() - - def get_phoneme(self, text): - cleaned_text, lang = clean_text(text) - text_norm = cleaned_text_to_sequence(cleaned_text) - - if self.hps.data.add_blank: - text_norm, lang = commons.intersperse_with_language_id(text_norm, lang, 0) - - text_norm = torch.LongTensor(text_norm) - lang = torch.LongTensor(lang) - - return text_norm, lang, cleaned_text - - def inference(self, text, speaker_id_val, seed, scope_shift, duration): - seed = int(seed) - scope_shift = int(scope_shift) - torch.manual_seed(seed) - text_norm, tone, phones = self.get_phoneme(text) - x_tst = text_norm.to(self.device).unsqueeze(0) - t_tst = tone.to(self.device).unsqueeze(0) - x_tst_lengths = torch.LongTensor([text_norm.size(0)]).to(self.device) - speaker_id = torch.LongTensor([speaker_id_val]).to(self.device) - - decoder_inputs, *_ = self.net_g.infer_pre_decoder( - x_tst, - t_tst, - x_tst_lengths, - sid=speaker_id, - noise_scale=0.667, - noise_scale_w=0.8, - length_scale=duration, - scope_shift=scope_shift - ) - - audio = self.net_g.infer_decode_chunk( - decoder_inputs, sid=speaker_id - )[0, 0].data.cpu().float().numpy() - - del decoder_inputs, - - return phones, (self.hps.data.sampling_rate, audio) - - def _gradio_interface(self): - 
title = "9Nine - PITS" - - self.inputs = [ - gr.Textbox( - label="Text (150 words limitation)", - value="[JA]そんなわけないじゃない。どうしてこうなるだろう。始めて好きな人ができた。一生ものの友达ができた。嬉しいことが二つ重なて。" - "その二つの嬉しさがまたたくさんの嬉しさをつれて来てくれて。梦のように幸せの时间を手に入れたはずなのに。なのにどうして、こうなちょうだろう。[JA]", - elem_id="tts-input" - ), - gr.Dropdown( - list(self.hps.data.speakers), - value=self.hps.data.speakers[1], - label="Speaker Identity", - type="index" - ), - gr.Slider( - 0, 65536, value=0, step=1, label="random seed" - ), - gr.Slider( - -15, 15, value=0, step=1, label="scope-shift" - ), - gr.Slider( - 0.5, 2., value=1., step=0.1, label="duration multiplier" - ), - ] - - self.outputs = [ - gr.Textbox(label="Phonemes"), - gr.Audio(type="numpy", label="Output audio") - ] - - description = "9Nine - PITS" - article = "Github: https://github.com/Aloento/VariTTS" - examples = [["[JA]こんにちは、私は綾地寧々です。[JA]"]] - - return gr.Interface( - fn=self.inference, - inputs=self.inputs, - outputs=self.outputs, - title=title, - description=description, - article=article, - cache_examples=False, - examples=examples, - ) - - def launch(self): - return self.interface.launch(share=False) - - -def parsearg(): - parser = argparse.ArgumentParser() - parser.add_argument( - '-c', - '--config', - type=str, - default="./configs/config_cje.yaml", - help='Path to configuration file' - ) - parser.add_argument( - '-m', - '--model', - type=str, - default='9Nine', - help='Model name' - ) - parser.add_argument( - '-r', - '--checkpoint_path', - type=str, - default='./9Nine_Eval_71200.pth', - help='Path to checkpoint for resume' - ) - parser.add_argument( - '-f', - '--force_resume', - type=str, - help='Path to checkpoint for force resume' - ) - parser.add_argument( - '-d', - '--dir', - type=str, - default='/DATA/audio/pits_samples', - help='root dir' - ) - args = parser.parse_args() - return args - - -if __name__ == "__main__": - import nltk - nltk.download('cmudict') - - args = parsearg() - app = GradioApp(args) - app.launch() diff --git a/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/cppipc/policy.h b/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/cppipc/policy.h deleted file mode 100644 index f88ab5d8cb343f97026966b402eaeed8831e356a..0000000000000000000000000000000000000000 --- a/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/cppipc/policy.h +++ /dev/null @@ -1,25 +0,0 @@ -#pragma once - -#include - -#include "libipc/def.h" -#include "libipc/prod_cons.h" - -#include "libipc/circ/elem_array.h" - -namespace ipc { -namespace policy { - -template