diff --git a/spaces/101-5/gpt4free/g4f/.v1/testing/gptworldai_test.py b/spaces/101-5/gpt4free/g4f/.v1/testing/gptworldai_test.py
deleted file mode 100644
index 3dfb32ce17b645e21991d07124421b3dc11cbfb1..0000000000000000000000000000000000000000
--- a/spaces/101-5/gpt4free/g4f/.v1/testing/gptworldai_test.py
+++ /dev/null
@@ -1,18 +0,0 @@
-import gptworldAi
-
-# single completion
-for chunk in gptworldAi.Completion.create("Who are you?", "127.0.0.1:7890"):
-    print(chunk, end="", flush=True)
-print()
-
-# chat completion
-message = []
-while True:
-    prompt = input("Enter your question: ")
-    message.append({"role": "user", "content": prompt})
-    text = ""
-    for chunk in gptworldAi.ChatCompletion.create(message, '127.0.0.1:7890'):
-        text = text + chunk
-        print(chunk, end="", flush=True)
-    print()
-    message.append({"role": "assistant", "content": text})
diff --git a/spaces/1368565466ki/Satdia/monotonic_align/__init__.py b/spaces/1368565466ki/Satdia/monotonic_align/__init__.py
deleted file mode 100644
index e97eecc595dd3bd97d0104ec62799e2e5efea57c..0000000000000000000000000000000000000000
--- a/spaces/1368565466ki/Satdia/monotonic_align/__init__.py
+++ /dev/null
@@ -1,20 +0,0 @@
-from numpy import zeros, int32, float32
-from torch import from_numpy
-
-from .core import maximum_path_jit
-
-
-def maximum_path(neg_cent, mask):
- """ numba optimized version.
- neg_cent: [b, t_t, t_s]
- mask: [b, t_t, t_s]
- """
- device = neg_cent.device
- dtype = neg_cent.dtype
- neg_cent = neg_cent.data.cpu().numpy().astype(float32)
- path = zeros(neg_cent.shape, dtype=int32)
-
- t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32)
- t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32)
- maximum_path_jit(path, neg_cent, t_t_max, t_s_max)
- return from_numpy(path).to(device=device, dtype=dtype)
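
For context, a minimal, hypothetical usage sketch of the deleted maximum_path helper. The tensor shapes follow its docstring; the import path and the compiled .core extension being available are assumptions, so treat this as illustrative rather than the project's own calling code:

    import torch
    from monotonic_align import maximum_path  # assumes the package above is built and importable

    neg_cent = torch.randn(2, 40, 160)   # [b, t_t, t_s] alignment scores
    mask = torch.ones(2, 40, 160)        # [b, t_t, t_s] validity mask
    path = maximum_path(neg_cent, mask)  # hard monotonic path, same shape/device/dtype as neg_cent
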
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Chaos Group V-Ray Next ADV v4.30.01 for 3ds Max 2013-2020 Win x64 Free Trial and Discount Offers.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Chaos Group V-Ray Next ADV v4.30.01 for 3ds Max 2013-2020 Win x64 Free Trial and Discount Offers.md
deleted file mode 100644
index 7ae00669cb1149baba7ccecec58691545c077481..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Chaos Group V-Ray Next ADV v4.30.01 for 3ds Max 2013-2020 Win x64 Free Trial and Discount Offers.md
+++ /dev/null
@@ -1,170 +0,0 @@
-
-
Chaos Group V-Ray Next ADV v4.30.01 for 3ds Max 2013-2020 Win x64
-
Introduction
-
If you are a 3D artist, designer, or animator who uses Autodesk 3ds Max, you probably know how important it is to have a powerful and reliable rendering plugin. Rendering is the process of turning your 3D models and scenes into realistic images or animations that can be used for various purposes, such as presentations, marketing, entertainment, or education.
-
One of the most popular and widely used rendering plugins for 3ds Max is V-Ray, developed by Chaos Group. V-Ray is a versatile and flexible tool that can handle any type of project, from architectural visualization to visual effects. V-Ray has been used by many professionals and studios around the world, such as Digital Domain, Blur Studio, Method Studios, Framestore, and more.
-
In this article, we will introduce you to the latest version of V-Ray for 3ds Max, which is V-Ray Next ADV v4.30.01. We will show you how to install it, how to use it, and how to optimize it for your workflow and projects. We will also highlight some of the new features and improvements that make V-Ray Next faster, smarter, and more powerful than ever before.
-
How to install V-Ray Next for 3ds Max?
-
System requirements
-
Before you install V-Ray Next for 3ds Max, you need to make sure that your system meets the minimum requirements for running it smoothly. Here are the system requirements for V-Ray Next:
-
| Component | Minimum | Recommended |
| --- | --- | --- |
| Processor | Intel® Pentium® IV or compatible processor with SSE4.2 support | |
| Graphics card | NVIDIA® GeForce® GTX 1060 (6 GB VRAM) or equivalent | NVIDIA® GeForce® RTX 2080 Ti (11 GB VRAM) or higher |
| Hard disk space | 2 GB free disk space (10 GB recommended) | 20 GB free disk space or higher |
| Software | Autodesk® 3ds Max® versions 2013-2020 (64-bit) | The latest version of Autodesk® 3ds Max® (64-bit) |
-
Note that these are the requirements for running V-Ray Next on a single machine. If you want to use distributed rendering or network rendering, you will need additional hardware and software components. You can find more information about distributed rendering here.
-
Installation steps
-
To install V-Ray Next for 3ds Max, you need to follow these steps:
-
-
Download the installer from the official website. You will need to register or log in with your Chaos account to access the download link.
-
Run the installer as administrator and follow the instructions on the screen. You will need to accept the license agreement, choose the installation type (workstation or render slave), select the components you want to install (V-Ray core files, license server, Swarm manager, etc.), and specify the installation folder.
-
If you have a previous version of V-Ray installed on your machine, you will be asked if you want to uninstall it or keep it. It is recommended that you uninstall any previous versions of V-Ray before installing V-Ray Next.
-
If you have chosen to install the license server component, you will need to activate your license online or offline. You can find more information about licensing here.
-
After the installation is complete, you can launch 3ds Max and start using V-Ray Next.
-
-
How to use V-Ray Next for 3ds Max?
-
V-Ray Next interface and settings
-
V-Ray Next integrates seamlessly with 3ds Max and adds several new menus, toolbars, panels, and windows to its interface. Here are some of the main elements of the V-Ray interface:
-
-
The V-Ray menu, located in the main menu bar of 3ds Max, gives you access to various commands and options related to V-Ray.
-
The VFB window, which stands for Virtual Frame Buffer, is where you can see your rendered image or animation and adjust its parameters using various tools and controls.
-
The VFB toolbar, located at the top of the VFB window, contains buttons for rendering modes, camera settings, color corrections, denoising options, history settings, etc.
-
The VFB history window, located at the bottom of the VFB window, allows you to compare different versions of your rendered image or animation using thumbnails.
-
The VFB color corrections window, located at the right side of the VFB window, lets you apply various color adjustments to your rendered image or animation using sliders.
-
The VFB render elements window, located at the left side of the VFB window, shows you different layers or channels of your rendered image or animation that can be used for compositing or post-processing.
-
The VFB statistics window, located at the top right corner of the VFB window, displays information about your rendering process such as time elapsed, memory usage, samples per pixel, etc.
-
The VFB lens effects window, located at the bottom right corner of the VFB window, enables you to add various optical effects to your rendered image or animation such as glare, bloom, vignette, etc.
-
The VFB settings window, accessible by clicking on the gear icon in the VFB toolbar, allows you to customize various aspects of the VFB such as resolution, quality, format, output, etc.
The V-Ray toolbar, located in any viewport toolbar area of 3ds Max, contains buttons for quick access to common functions such as rendering, interactive rendering, lighting analysis, camera exposure, etc.
The V-Ray Asset Editor, accessible by clicking on its icon in the main toolbar or in any viewport toolbar area of 3ds Max, is where you can manage all your assets related to V-Ray such as materials, lights, textures, geometries, render elements, etc.
The V-Ray Render Settings, accessible by clicking on its icon in any viewport toolbar area of 3ds Max or by going to Rendering > Render Setup, is where you can adjust all your global settings related to rendering such as engine type, quality presets, sampling parameters, environment options, output options, etc.
V-Ray Next rendering modes and options
-
V-Ray Next offers you different rendering modes and options depending on your needs and preferences. You can choose between:
-
-
Production rendering, which is the standard mode for creating high-quality images or animations with all the features and settings available in V-Ray.
-
Interactive rendering, which is a fast and responsive mode that updates your image as you make changes to your scene, camera, lights, materials, etc. This mode is ideal for testing and previewing your scene before production rendering.
-
GPU rendering, which is a mode that uses your graphics card (GPU) instead of your processor (CPU) to render your scene. This mode can be faster and more efficient for certain types of scenes and effects, such as volumetrics, denoising, etc.
-
Hybrid rendering, which is a mode that combines both CPU and GPU rendering to utilize all your hardware resources and speed up your rendering process.
-
-
You can switch between these modes and options in the V-Ray Render Settings window, under the Renderer rollout. You can also adjust various parameters related to sampling, ray tracing, global illumination, motion blur, depth of field, etc.
-
-
V-Ray Next lighting and materials
-
V-Ray Next provides you with a wide range of lighting and material options to create realistic and stunning scenes. You can use:
-
-
V-Ray lights, which are special types of lights that are optimized for V-Ray rendering. You can create different kinds of V-Ray lights such as dome light, sun light, sky light, sphere light, rectangle light, mesh light, etc.
-
V-Ray materials, which are special types of materials that are optimized for V-Ray rendering. You can create different kinds of V-Ray materials such as standard material, blend material, car paint material, hair material, subsurface scattering material, etc.
-
V-Ray textures, which are special types of textures that are optimized for V-Ray rendering. You can use different kinds of V-Ray textures such as bitmap texture, noise texture, gradient texture, dirt texture, curvature texture, etc.
-
V-Ray shaders, which are special types of nodes that can be used to create custom effects and functions for your materials and textures. You can use different kinds of V-Ray shaders such as color correction shader, falloff shader, fresnel shader, triplanar shader, etc.
-
-
You can manage all your lighting and material assets in the V-Ray Asset Editor window, where you can create, edit, assign, organize, and preview them. You can also import and export assets from external sources such as Substance Designer or PBR materials.
-
V-Ray Next effects and post-processing
-
V-Ray Next allows you to add various effects and post-processing adjustments to your rendered image or animation without leaving 3ds Max or using external applications. You can use:
-
-
V-Ray render elements, which are separate layers or channels of your rendered image or animation that can be used for compositing or post-processing. You can create different kinds of V-Ray render elements such as diffuse element, specular element, reflection element, refraction element, shadow element, lighting element, etc.
V-Ray frame buffer tools, which are tools and controls that you can use to modify your rendered image or animation in the VFB window. You can use different kinds of VFB tools such as color corrections tool, lens effects tool, denoiser tool, history tool, statistics tool, etc.
You can access all your render elements and frame buffer tools in the VFB window, where you can enable, disable, edit, save, load, and compare them. You can also export them to external applications such as Photoshop or Nuke.
How to optimize V-Ray Next for 3ds Max?
-
V-Ray Next scene intelligence
-
V-Ray Next introduces a new feature called scene intelligence that automatically analyzes your scene and optimizes your rendering settings accordingly. Scene intelligence includes:
-
-
Automatic exposure and white balance, which adjusts the camera exposure and color temperature based on the lighting conditions of your scene. This feature eliminates the need for manual tweaking and ensures a balanced and realistic image.
-
Adaptive dome light, which samples only the parts of the dome light that contribute to the illumination of your scene. This feature speeds up your rendering time by up to 7 times for scenes with image-based lighting.
-
Point-and-shoot camera, which sets the camera focus distance automatically based on where you click in the viewport. This feature simplifies the process of creating depth of field effects.
-
Automatic memory management, which optimizes the memory usage of your scene by dynamically loading and unloading assets as needed. This feature allows you to render large scenes with complex geometries and textures without running out of memory.
-
-
You can enable or disable these features in the V-Ray Render Settings window, under the Camera rollout (for automatic exposure and white balance), Environment rollout (for adaptive dome light), Physical Camera rollout (for point-and-shoot camera), and System rollout (for automatic memory management).
-
V-Ray Next adaptive dome light
-
The adaptive dome light, part of the scene intelligence features described above, samples only the parts of the dome light that actually contribute to the illumination of your scene, which can speed up rendering by up to 7 times for scenes with image-based lighting. You can turn it on in the V-Ray Render Settings window, under the Environment rollout.
-
V-Ray Next GPU rendering and denoising
-
V-Ray Next can render on your graphics card (GPU) instead of your processor (CPU), and it can remove noise from your image or animation as it renders. GPU rendering can be faster and more efficient for certain types of scenes and effects, such as volumetrics.
-
You can switch to GPU rendering in the V-Ray Render Settings window, under the Renderer rollout, by choosing CUDA or RTX as the engine type. You can also select which GPUs or CPUs you want to use for rendering.
-
Denoising is a process that removes noise from your image or animation without losing detail or quality. Noise is a common problem in rendering, especially when using low sampling settings or complex lighting scenarios. V-Ray Next offers you different options for denoising, such as:
-
-
V-Ray Denoiser, which is a built-in denoiser that works on both CPU and GPU rendering modes. You can enable it in the V-Ray Render Settings window, under the V-Ray Denoiser rollout. You can also adjust various parameters related to quality, blend amount, radius, etc.
-
NVIDIA AI Denoiser, which is an external denoiser that works only on GPU rendering mode. You can enable it in the VFB window, by clicking on its icon in the VFB toolbar. This denoiser uses artificial intelligence to remove noise instantly and interactively.
-
Render element denoiser, which is a new feature that allows you to denoise individual render elements separately. You can enable it in the Render Elements tab of the Render Setup window, by checking the Denoise option for each render element. This feature ensures that the denoised render elements match the denoised beauty image.
-
-
You can access all your denoising options and tools in the VFB window, where you can enable, disable, edit, save, load, and compare them. You can also export them to external applications such as Photoshop or Nuke.
-
Conclusion
-
Summary of the main points
-
In this article, we have introduced you to V-Ray Next ADV v4.30.01 for 3ds Max 2013-2020 Win x64, which is the latest version of V-Ray for 3ds Max. We have shown you how to install it, how to use it, and how to optimize it for your workflow and projects. We have also highlighted some of the new features and improvements that make V-Ray Next faster, smarter, and more powerful than ever before.
-
Some of the main features and improvements of V-Ray Next are:
-
-
Scene intelligence, which automatically analyzes your scene and optimizes your rendering settings accordingly.
-
Adaptive dome light, which samples only the parts of the dome light that contribute to the illumination of your scene.
-
GPU rendering and denoising, which allows you to use your graphics card (GPU) instead of your processor (CPU) to render your scene and remove noise from your image or animation.
-
Render element denoiser, which allows you to denoise individual render elements separately.
-
New lighting analysis tools, which make it easier to visualize a scene’s real-world illumination values in lux or footcandles.
-
New metalness material properties, which offer improved compatibility with Substance Designer and PBR materials.
-
-
Call to action and links
-
If you are interested in trying out V-Ray Next for 3ds Max yourself, you can download a free 30-day trial from the official website. You will need to register or log in with your Chaos account to access the download link.
-
If you want to learn more about V-Ray Next for 3ds Max, you can visit the following links:
V-Ray for 3ds Max Blog, which features news, updates, tips, tricks, and showcases on V-Ray for 3ds Max.
-
V-Ray Academy, which offers online courses and webinars on V-Ray for 3ds Max and other Chaos products.
-
V-Ray for 3ds Max Forum, which is a place where you can ask questions, get answers, share feedback, and connect with other V-Ray users.
We hope you enjoyed this article and found it useful. If you have any comments or suggestions, please let us know in the comments section below. Thank you for reading!
FAQs
-
What is V-Ray Next?
-
V-Ray Next is the latest version of V-Ray for 3ds Max, which is a powerful and versatile rendering plugin that can handle any type of project, from architectural visualization to visual effects.
What are the main features of V-Ray Next?
-
Some of the main features of V-Ray Next are scene intelligence, adaptive dome light, GPU rendering and denoising, render element denoiser, new lighting analysis tools, and new metalness material properties.
How to install V-Ray Next for 3ds Max?
-
To install V-Ray Next for 3ds Max, you need to download the installer from the official website, run it as administrator, follow the instructions on the screen, uninstall any previous versions of V-Ray, and activate your license online or offline.
How to use V-Ray Next for 3ds Max?
-
To use V-Ray Next for 3ds Max, you need to switch the renderer to V-Ray (or V-Ray GPU) in the Render Setup window, access various commands and options from the V-Ray menu, manage your assets in the V-Ray Asset Editor window, adjust your settings in the V-Ray Render Settings window, and view your results in the VFB window.
How to optimize V-Ray Next for 3ds Max?
-
To optimize V-Ray Next for 3ds Max, you need to enable or disable various features in the Render Setup window, such as automatic exposure and white balance, adaptive dome light, GPU rendering and denoising, and automatic memory management.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Download Film Babuji Ek Ticket Bambai Love Full Movie !NEW!.md b/spaces/1gistliPinn/ChatGPT4/Examples/Download Film Babuji Ek Ticket Bambai Love Full Movie !NEW!.md
deleted file mode 100644
index 0ee4db31e33487f370239554aa0abfd3f0c176ef..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Download Film Babuji Ek Ticket Bambai Love Full Movie !NEW!.md
+++ /dev/null
@@ -1,18 +0,0 @@
-
Download Film Babuji Ek Ticket Bambai Love Full Movie
-
-Our understanding of mother culture has to do with a lot of work, sustained over many generations, of feeding children and supporting them emotionally, physically, intellectually, culturally, spiritually and psychologically and that is what is being lost. This film tries to understand the powerful culture of India, how it is integral to us and then what happens when there are no families and children left to support it. This piece of work is made in the hope that other directors, who work in film and television, will be inspired and will make works in this genre that respect that culture and make a small contribution to saving it.
-
-Ajantha Matiyar: So, there is this curious notion that you are trying to depict a dying culture. What is it that makes it dying?
-
-Sarojini Rikhye: The urban-based Bengali culture that you see around you when you step out of the airport, the metro station or the bus stand -- that is not just about Bengali culture but that is an indicator of what is happening all over India. It is the culture of the housewife, of the woman who has no work and has to be ‘put in a position’ to have children, and we need to understand that this phenomenon is not a Bengali phenomenon. It is happening all over India. The culture of so many generations of women and children has been trying to support their families through a woman’s right to her own body. They have no land to get agricultural produce from, to sell. There is no daycare centre for the children, no schools for the children, no health centres, no spaces for the men to socialise and their entire value system is against women. As a result of this, the children are faced with a lot of physical and mental agony and the women are facing depression and other mental health issues.
-
-AM: But, what is interesting is that, while many of us have heard about so many negative things that are happening in India, we do not know that this culture is facing a crisis.
-
-SR: The focus should be on the culture of the society, not just on this one thing or another.
-
-AM: So, what is it that makes it dying?
-
-SR: A lot of change that you see in Bengal since the 1960s has been the result of educated, urban middle-class, mobile and progressive young women, who came from all over Bengal, from all over India, from other cities and towns, and came to Bengal
-
-
-
diff --git a/spaces/1line/AutoGPT/autogpt/spinner.py b/spaces/1line/AutoGPT/autogpt/spinner.py
deleted file mode 100644
index 4e33d74213881352546f334ccb1eb4772b8b7b70..0000000000000000000000000000000000000000
--- a/spaces/1line/AutoGPT/autogpt/spinner.py
+++ /dev/null
@@ -1,65 +0,0 @@
-"""A simple spinner module"""
-import itertools
-import sys
-import threading
-import time
-
-
-class Spinner:
- """A simple spinner class"""
-
- def __init__(self, message: str = "Loading...", delay: float = 0.1) -> None:
- """Initialize the spinner class
-
- Args:
- message (str): The message to display.
- delay (float): The delay between each spinner update.
- """
- self.spinner = itertools.cycle(["-", "/", "|", "\\"])
- self.delay = delay
- self.message = message
- self.running = False
- self.spinner_thread = None
-
- def spin(self) -> None:
- """Spin the spinner"""
- while self.running:
- sys.stdout.write(f"{next(self.spinner)} {self.message}\r")
- sys.stdout.flush()
- time.sleep(self.delay)
- sys.stdout.write(f"\r{' ' * (len(self.message) + 2)}\r")
-
- def __enter__(self):
- """Start the spinner"""
- self.running = True
- self.spinner_thread = threading.Thread(target=self.spin)
- self.spinner_thread.start()
-
- return self
-
- def __exit__(self, exc_type, exc_value, exc_traceback) -> None:
- """Stop the spinner
-
- Args:
- exc_type (Exception): The exception type.
- exc_value (Exception): The exception value.
- exc_traceback (Exception): The exception traceback.
- """
- self.running = False
- if self.spinner_thread is not None:
- self.spinner_thread.join()
- sys.stdout.write(f"\r{' ' * (len(self.message) + 2)}\r")
- sys.stdout.flush()
-
- def update_message(self, new_message, delay=0.1):
- """Update the spinner message
- Args:
- new_message (str): New message to display
- delay: Delay in seconds before updating the message
- """
- time.sleep(delay)
- sys.stdout.write(
- f"\r{' ' * (len(self.message) + 2)}\r"
- ) # Clear the current message
- sys.stdout.flush()
- self.message = new_message
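
For reference, a minimal usage sketch of the context-manager interface defined above (assuming the Spinner class and its standard-library imports are in scope; the messages are placeholders):

    import time

    with Spinner("Thinking...") as spinner:
        time.sleep(1)  # stand-in for real work
        spinner.update_message("Still working...")
        time.sleep(1)
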
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Getting Over It for PC - The Ultimate Challenge Game from Ocean of Games.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Getting Over It for PC - The Ultimate Challenge Game from Ocean of Games.md
deleted file mode 100644
index 6032c1577ea0187183cb55b458655e646d1251a5..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Getting Over It for PC - The Ultimate Challenge Game from Ocean of Games.md
+++ /dev/null
@@ -1,133 +0,0 @@
-
-
Download Getting Over It PC Ocean of Games: A Guide to the Most Frustrating Game Ever
-
Have you ever played a game that made you want to smash your keyboard, throw your mouse, or scream at your monitor? If not, then you might want to try Getting Over It with Bennett Foddy, a game that is designed to hurt you. And if you are looking for a way to download this game for free, then you might be interested in Ocean of Games, a website that offers a variety of PC games for download. But before you do that, let's take a look at what Getting Over It is, why it is so frustrating, how to download it from Ocean of Games, and how to beat it.
-
What is Getting Over It?
-
Getting Over It with Bennett Foddy is an action game that was released in 2017 for Windows, Mac, iOS, and Android. It is developed by Bennett Foddy, an independent game designer and academic who is known for creating experimental and challenging games such as QWOP, GIRP, and CLOP.
The premise of Getting Over It is simple: you are a man named Diogenes who is stuck in a cauldron and has a hammer as his only tool. Your goal is to climb up an enormous mountain made of various objects such as rocks, trees, furniture, pipes, and even other games. You move the hammer with the mouse or the touch screen, and that's all there is. There are no checkpoints, no saves, no levels, no scores, no achievements. Just you and the mountain.
-
The gameplay of Getting Over It is deceptively simple as well. You can use the hammer to push yourself off the ground, hook onto objects, swing around them, or launch yourself into the air. With practice, you'll be able to jump, swing, climb, and fly. However, this is easier said than done. The game's physics are realistic but unforgiving. A slight mistake can send you tumbling down the mountain, losing all your progress in an instant. And there is nothing to stop you from falling all the way back to the beginning.
-
The developer and the inspiration
-
Bennett Foddy is an Australian-born game designer who is currently a professor at New York University's Game Center. He has a PhD in philosophy from Oxford University and has written several papers on topics such as ethics, aesthetics, and game design. He is also a musician who plays bass guitar for the band Cut Copy.
-
Foddy has stated that he made Getting Over It as a tribute to Jazzuo's 2002 B-Game classic Sexy Hiking, which has a similar concept of climbing a mountain with a hammer. He also said that he wanted to make a game for a certain kind of person: someone who likes hard games, who likes frustration games, who likes roguelikes, who likes speedrunning. He wanted to make a game that would hurt them.
-
The reception and the reviews
-
Getting Over It has received mostly positive reviews from critics and players alike. It has a score of 77/100 on Metacritic and an "Overwhelmingly Positive" rating on Steam. Many reviewers praised the game's originality, humor, challenge, and satisfaction. They also appreciated the game's commentary by Foddy, who provides insights, jokes, quotes, and encouragement throughout the game. Some reviewers also noted that the game can be seen as a metaphor for life, art, or game development itself.
-
However, not everyone enjoyed Getting Over It. Some reviewers criticized the game's difficulty, frustration, and repetitiveness. They also complained about the game's lack of features, options, and accessibility. Some players also reported technical issues, bugs, and crashes. And of course, some players simply hated the game for making them rage quit.
-
Why is Getting Over It so frustrating?
-
Getting Over It is not a game for everyone. It is a game that tests your patience, skill, and sanity. It is a game that can make you feel angry, sad, hopeless, or even depressed. But why is it so frustrating? Here are some of the reasons:
-
-
The controls and the physics
-
The controls of Getting Over It are simple but hard to master. You only need to move the mouse or the touch screen to control the hammer, but that's easier said than done. The hammer's movement is sensitive and precise, but also erratic and unpredictable. You need to have a good sense of timing, distance, and angle to move effectively. And you need to constantly adjust your grip and position to avoid losing balance or momentum.
-
The physics of Getting Over It are realistic but unforgiving. The game simulates gravity, friction, inertia, and collision in a realistic way, but that also means that the slightest mistake can have disastrous consequences. You can slip, slide, bounce, or fly off the mountain at any moment. And you can't rely on any safety nets or checkpoints to save you. You have to deal with the consequences of your actions.
-
The obstacles and the setbacks
-
The obstacles of Getting Over It are varied and challenging. The mountain is made of different objects that have different shapes, sizes, textures, and properties. Some objects are solid and stable, while others are slippery and movable. Some objects are helpful and supportive, while others are harmful and obstructive. Some objects are familiar and recognizable, while others are bizarre and surreal. You never know what to expect next.
-
The setbacks of Getting Over It are frequent and painful. The mountain is full of traps and pitfalls that can send you back to where you started or even lower. You can fall from great heights or get stuck in narrow spaces. You can lose hours or days of progress in seconds or minutes. And you have to start over again and again until you reach the top.
-
The narration and the commentary
-
The narration of Getting Over It is witty but cruel. The game features a voice-over by Bennett Foddy himself, who talks to you throughout the game. He tells you about the history and the design of the game, he quotes from various philosophers and artists, he jokes about your situation and your failures, he encourages you to keep going and to not give up. He also sometimes apologizes for making such a hard game.
-
The commentary of Getting Over It is informative but taunting. The game also features a chat box that shows messages from other players who are playing the game at the same time as you. They can share their thoughts, feelings, tips, or jokes with you. They can also see your progress and your falls on their screens. They can cheer you on or mock you mercilessly.
-
How to download Getting Over It PC Ocean of Games?
-
If you want to play Getting Over It on your PC for free, then you might want to check out Ocean of Games, a website that offers a variety of PC games for download. However, before you do that, you should be aware of some of the advantages and disadvantages of using this website.
-
The advantages and disadvantages of Ocean of Games
-
Ocean of Games has some advantages over other websites that offer PC games for download. Some of these advantages are:
-
-
It has a large collection of games from different genres and categories.
-
It has an easy-to-use interface and a fast download speed.
-
It does not require any registration or subscription.
-
It does not have any annoying ads or pop-ups.
-
-
However, Ocean of Games also has some disadvantages that you should be aware of before using it. Some of these disadvantages are:
-
-
It does not have any official license or authorization from the game developers or publishers.
-
It does not guarantee the quality or the safety of the games it offers.
-
It may contain viruses, malware, spyware, or other harmful software that can damage your PC or compromise your privacy.
-
It may violate the intellectual property rights of the game developers or publishers.
-
It may expose you to legal risks or penalties for piracy or infringement.
-
-
Therefore, you should use Ocean of Games at your own risk and discretion. You should also respect the rights and the work of the game developers or publishers and consider buying the game from official sources if you enjoy it.
-
The steps to download and install Getting Over It from Ocean of Games
-
If you still want to download Getting Over It for PC from Ocean of Games, then you can follow these steps:
-
-
Go to the Ocean of Games website and search for Getting Over It in the search box.
-
Select the game from the list of results and click on the download button.
-
Wait for the download to finish and then extract the zip file to a folder of your choice.
-
Open the folder and run the setup.exe file as an administrator.
-
Follow the instructions on the screen to install the game on your PC.
-
Launch the game from the desktop shortcut or the start menu and enjoy.
-
-
The alternatives to Ocean of Games
-
If you are looking for other websites that offer PC games for download, then you might want to check out some of these alternatives:
-
-
Steam: Steam is the most popular and reputable platform for buying and playing PC games. It has a huge library of games from various genres and categories, as well as features such as cloud saving, achievements, multiplayer, mods, and more. You can also find some free or discounted games on Steam, especially during sales or events.
-
GOG: GOG is another platform that sells and distributes PC games. It specializes in DRM-free games, meaning that you can play them without any online activation or restriction. It also offers some classic and retro games that are compatible with modern systems.
-
itch.io: itch.io is a website that hosts indie games from various developers and creators. You can find some unique and original games on itch.io, as well as some free or pay-what-you-want games. You can also support the developers directly by buying or donating to their games.
-
-
How to beat Getting Over It?
-
Getting Over It is a game that is hard to beat, but not impossible. It requires a lot of practice, patience, and perseverance. It also requires some tips, tricks, and strategies. Here are some of them:
-
The tips and tricks to master the game
-
Here are some tips and tricks that can help you master the game:
-
-
Learn how to use the hammer effectively. You can use it to push, pull, hook, swing, launch, or balance yourself. Experiment with different movements and angles to find what works best for you.
-
Use both hands to control the mouse or the touch screen. This can give you more precision and stability when moving the hammer.
-
Adjust your mouse sensitivity or touch sensitivity according to your preference. You can do this in the settings menu of the game. You can also adjust your screen resolution or window size to fit your monitor or device.
-
Take breaks regularly. Getting Over It can be mentally and physically exhausting. You should take breaks every 15 minutes or so to relax your eyes, hands, and mind. You can also save your progress by quitting the game and resuming it later.
-
Don't give up. Getting Over It is a game that is meant to challenge you and make you frustrated. But it is also a game that can reward you with satisfaction and accomplishment. Don't let your failures discourage you. Learn from them and try again.
-
-
The speedruns and the records
-
If you want to challenge yourself further, you can try to beat Getting Over It as fast as possible. This is called speedrunning, and it is a popular activity among gamers who like to compete with themselves or others. There are many websites and communities that track and showcase speedruns of various games, such as speedrun.com or Speed Demos Archive. You can also watch some videos of speedruns on YouTube or Twitch.
-
The current world record for beating Getting Over It is 1 minute 19 seconds by a player named Lockness06. He achieved this feat on June 14th, 2021 using a mouse and keyboard. The previous record was 1 minute 24 seconds by a player named Distortion2. He achieved this feat on May 31st, 2021 using a controller.
-
The rewards and the secrets
-
If you manage to beat Getting Over It, you will be rewarded with a special ending that includes a song, a message, and a surprise. We won't spoil it for you, but we can tell you that it is worth the effort. You will also unlock a golden cauldron that you can use to play the game again with a different look.
-
Getting Over It also has some secrets and easter eggs that you can discover along the way. Some of them are hidden in the mountain, some of them are triggered by certain actions, and some of them are revealed by the narrator. We won't tell you what they are, but we can give you some hints:
-
-
There is a secret island that you can reach by flying over the ocean.
-
There is a secret room that you can enter by breaking a wall.
-
There is a secret message that you can read by zooming in on a sign.
-
There is a secret mode that you can activate by typing a code.
-
There is a secret game that you can play by clicking on a button.
-
-
Conclusion
-
Getting Over It with Bennett Foddy is a game that is not for everyone. It is a game that is hard, frustrating, and sometimes unfair. But it is also a game that is original, humorous, and satisfying. It is a game that challenges you to overcome your limits and to get over it.
-
If you want to play Getting Over It on your PC for free, you can download it from Ocean of Games, a website that offers a variety of PC games for download. However, you should be careful of the risks and the drawbacks of using this website. You should also respect the rights and the work of the game developers and publishers and consider buying the game from official sources if you enjoy it.
-
If you want to beat Getting Over It, you will need a lot of practice, patience, and perseverance. You will also need some tips, tricks, and strategies. And you will also need to discover some secrets and easter eggs along the way. But most importantly, you will need to have fun and to not give up.
-
We hope this guide has helped you to learn more about Getting Over It PC Ocean of Games. If you have any questions or comments, feel free to leave them below. And if you liked this article, please share it with your friends. Thank you for reading and happy climbing!
-
FAQs
-
Here are some frequently asked questions about Getting Over It PC Ocean of Games:
-
-
Who is Diogenes?
-
Diogenes is the name of the man who is stuck in a cauldron in Getting Over It. He is named after an ancient Greek philosopher who was known for living in a barrel and rejecting conventional values and norms. He was also known for his wit and his cynicism.
-
Who is Bennett Foddy?
-
Bennett Foddy is the developer and the narrator of Getting Over It. He is an independent game designer and an academic who is known for creating experimental and challenging games such as QWOP, GIRP, and CLOP. He is also a professor at New York University's Game Center and a musician who plays bass guitar for the band Cut Copy.
-
What is Ocean of Games?
-
Ocean of Games is a website that offers a variety of PC games for download. It has a large collection of games from different genres and categories, as well as an easy-to-use interface and a fast download speed. However, it also has some disadvantages such as being unauthorized, unsafe, illegal, and unethical.
-
How long does it take to beat Getting Over It?
-
The answer to this question depends on your skill level, your luck, and your persistence. Some players can beat Getting Over It in less than 2 minutes, while others can take more than 200 hours. The average time to beat Getting Over It according to HowLongToBeat.com is 6 hours for the main story and 11 hours for completionists.
-
What are some other games like Getting Over It?
-
If you are looking for some other games that are similar to Getting Over It in terms of concept, difficulty, or humor, then you might want to check out some of these games:
-
-
Sexy Hiking: The original game that inspired Getting Over It. It has similar gameplay but with worse graphics and sound.
-
Pogostuck: Rage With Your Friends: A game that involves climbing a mountain with a pogo stick and competing with other players online.
-
Jump King: A game that involves jumping up a tower with precise timing and landing. It has retro graphics and a dark sense of humor.
-
I Am Bread: A game that involves controlling a slice of bread and trying to become toast. It has realistic physics and a quirky story.
-
Surgeon Simulator: A game that involves performing surgery with clumsy controls and hilarious outcomes. It has various scenarios and modes to play.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/ TikTok .md b/spaces/1phancelerku/anime-remove-background/ TikTok .md
deleted file mode 100644
index d2bcd907d9fd971fa4e535962417e5fa7d49672e..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/ TikTok .md
+++ /dev/null
@@ -1,101 +0,0 @@
-
-
How to Download TikTok Videos Without Watermark
-
TikTok is one of the most popular social media apps in the world, with over 1 billion active users. It allows users to create and share short videos with music, filters, stickers, and other effects. People use TikTok for various purposes, such as entertainment, education, inspiration, or expression.
However, sometimes people may want to download TikTok videos for offline viewing, editing, sharing on other platforms, or avoiding the annoying watermark that appears on the videos. Unfortunately, TikTok does not provide an official way to download videos without watermark. The only option within the app is to save videos with watermark, which can reduce the quality and aesthetics of the videos.
-
So how can you download TikTok videos without watermark? Is there a way to do it easily, quickly, safely, and reliably? The answer is yes! In this article, we will introduce you to the best solution to download TikTok videos without watermark online for free: SnapTik.App.
-
What are the challenges of downloading TikTok videos without watermark?
-
There are many third-party tools and websites that claim to help users download TikTok videos without watermark. However, some of them may have drawbacks or limitations that make them less than ideal for users. Here are some of the common challenges that users may face when trying to download TikTok videos without watermark:
-
-
-
Some tools or websites may require registration or subscription before allowing users to download videos without watermark.
-
Some tools or websites may have ads or pop-ups that can be annoying or distracting for users.
-
Some tools or websites may be slow or unreliable, resulting in low-quality downloads or failed requests.
-
Some tools or websites may have limited features, such as not supporting all devices or browsers, not allowing users to choose the format or resolution of the videos, or not offering additional options like downloading slideshows, images, or music from TikTok.
-
Some tools or websites may pose security risks, such as installing malware, stealing user data, or violating user privacy.
-
-
Therefore, users need to be careful and selective when choosing a tool or website to download TikTok videos without watermark. They need to find a solution that can overcome these challenges and provide them with the best possible experience.
-
What is the best solution to download TikTok videos without watermark?
-
The best solution to download TikTok videos without watermark is to use SnapTik.App, a free and fast online tool that can download any TikTok video in HD quality and MP4 format without watermark. SnapTik.App has many advantages over other tools and websites, such as:
-
-
It is easy to use. Users only need to copy and paste the link of the TikTok video that they want to download and click on the download button. No registration, subscription, installation, or configuration is required.
-
It has no ads. Users can enjoy a clean and smooth interface without any interruptions or distractions.
-
It is fast and reliable. Users can download TikTok videos without watermark in seconds, thanks to the powerful server and technology behind SnapTik.App. The downloads are always in high quality and never fail.
-
It supports all devices and browsers. Users can access SnapTik.App from any device, such as PC, laptop, tablet, or smartphone, and any browser, such as Chrome, Firefox, Safari, or Opera. No matter what device or browser they use, they can download TikTok videos without watermark with ease.
-
It does not store or track user data. Users can rest assured that their privacy and security are protected when using SnapTik.App. SnapTik.App does not store, collect, or share any user data or information. It also does not require any permissions or access to user devices or accounts.
-
It offers additional features. Users can also use SnapTik.App to download slideshows, images, and music from TikTok. They can also choose the resolution of the videos that they want to download, such as 720p, 1080p, or 4K. They can also preview the videos before downloading them.
-
-
With SnapTik.App, users can download TikTok videos without watermark online for free in the best possible way. They can enjoy watching, editing, and sharing their favorite TikTok videos without any hassle or compromise.
-
How to use SnapTik.App to download TikTok videos without watermark?
-
To use SnapTik.App to download TikTok videos without watermark, users need to follow these simple steps:
-
Step 1: Open the TikTok app on your phone or the website on your browser and select the video that you want to download.
-
You can choose any video that you like from TikTok, whether it is from your own account, someone else's account, a hashtag page, a challenge page, a trend page, or a search page. You can also use the filters and effects on TikTok to create your own video.
-
Step 2: Click on the share button at the bottom right and click on the copy link button.
-
This will copy the link of the video to your clipboard. You can also share the link with your friends via other apps if you want.
-
Step 3: Go back to SnapTik.App and paste the link in the box at the top. Then click on the download button.
-
This will take you to a new page where you can see the details of the video, such as the title, the creator, the duration, and the resolution. You can also preview the video before downloading it.
-
Step 4: Wait for the server to process your request and then save the video to your device in one click.
-
This will start the download process and save the video to your device in MP4 format without watermark. You can find the video in your downloads folder or gallery. You can also rename or delete the video if you want.
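
If you prefer to script the final save step, the sketch below shows how a direct, watermark-free MP4 link (however you obtained it) can be written to disk with Python. The URL is a placeholder, not a real SnapTik.App endpoint:

    import requests

    video_url = "https://example.com/video-no-watermark.mp4"  # placeholder direct link
    response = requests.get(video_url, stream=True, timeout=30)
    response.raise_for_status()
    with open("tiktok_video.mp4", "wb") as f:
        for chunk in response.iter_content(chunk_size=1 << 16):
            f.write(chunk)
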
-
Conclusion and FAQs
-
In conclusion, SnapTik.App is the best way to download TikTok videos without watermark online for free. It is fast, easy, safe, and reliable. Users can enjoy watching, editing, and sharing their favorite TikTok videos without any hassle or compromise. SnapTik.App is the ultimate tool for TikTok lovers who want to download videos without watermark.
-
Here are some FAQs that users may have about SnapTik.App:
-
Q: Is SnapTik.App legal and safe?
-
A: SnapTik.App is legal and safe to use, as long as users respect the intellectual property rights of the original creators and do not use the downloaded videos for commercial or illegal purposes. SnapTik.App does not violate any terms of service or privacy policies of TikTok or any other platforms.
-
Q: Does SnapTik.App work on all devices and browsers?
-
A: Yes, SnapTik.App works on all devices and browsers, including PC, laptop, tablet, smartphone, Chrome, Firefox, Safari, Opera, and more. Users can access SnapTik.App from any device or browser without any issues.
-
Q: Does SnapTik.App have any limitations or restrictions?
-
A: No, SnapTik.App does not have any limitations or restrictions on the number, size, length, or quality of the videos that users can download without watermark. Users can download as many videos as they want, as long as they have enough storage space on their devices.
-
Q: Does SnapTik.App support other languages besides Thai?
-
A: Yes, SnapTik.App supports other languages besides Thai, such as English, Spanish, French, German, Italian, Portuguese, Russian, Arabic, Hindi, Japanese, Korean, Chinese, and more. Users can change the language of the website by clicking on the flag icon at the top right corner.
-
Q: How can I contact SnapTik.App if I have any questions or feedback?
-
A: You can contact SnapTik.App by sending an email to snaptik.app@gmail.com or by filling out the contact form on the website. We appreciate your questions and feedback and we will try to respond as soon as possible.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Bus Simulator Ultimate 1.5.2 Mod Apk - Drive Realistic Buses with Amazing Features.md b/spaces/1phancelerku/anime-remove-background/Bus Simulator Ultimate 1.5.2 Mod Apk - Drive Realistic Buses with Amazing Features.md
deleted file mode 100644
index 49f4305791c537cefe9752752822b73c03d4dd49..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Bus Simulator Ultimate 1.5.2 Mod Apk - Drive Realistic Buses with Amazing Features.md
+++ /dev/null
@@ -1,119 +0,0 @@
-
-
Bus Simulator Ultimate 1.5.2 Mod Apk: A Review
-
Do you love driving buses and exploring different cities? Do you want to experience the thrill of running your own bus company and competing with other players online? If yes, then you should try Bus Simulator Ultimate, one of the most popular and realistic bus simulator games for Android devices.
In this article, we will review Bus Simulator Ultimate and its latest version, 1.5.2 mod apk, which offers unlimited money, free purchases, and other benefits. We will also discuss the features, pros, and cons of this game, and answer some frequently asked questions about it.
-
What is Bus Simulator Ultimate?
-
Bus Simulator Ultimate is a simulation game developed by Zuuks Games, a Turkish game studio that specializes in creating realistic driving games. The game was released in August 2019 and has since gained over 100 million downloads and a 4.3-star rating on the Google Play Store.
-
Bus Simulator Ultimate lets you drive various types of buses across different countries and cities, such as Germany, Turkey, Italy, France, Spain, USA, Brazil, Russia, and more. You can also create your own routes and customize your buses with different skins, stickers, horns, and accessories.
-
But driving buses is not the only thing you can do in this game. You can also establish your own bus company and hire drivers to work for you. You can manage your company's finances, reputation, customer satisfaction, and more. You can also compete with other players in multiplayer mode and online ranking system.
-
bus simulator ultimate mod apk unlimited money and gold
-bus simulator ultimate hack apk download for android
-bus simulator ultimate mod menu apk latest version
-bus simulator ultimate 1.5.2 mod apk happymod
-bus simulator ultimate mod apk free shopping
-bus simulator ultimate mod apk all buses unlocked
-bus simulator ultimate mod apk revdl
-bus simulator ultimate mod apk rexdl
-bus simulator ultimate mod apk android 1
-bus simulator ultimate mod apk an1
-bus simulator ultimate mod apk obb
-bus simulator ultimate mod apk offline
-bus simulator ultimate mod apk online
-bus simulator ultimate mod apk no ads
-bus simulator ultimate mod apk unlimited xp
-bus simulator ultimate mod apk unlimited fuel
-bus simulator ultimate mod apk unlimited tickets
-bus simulator ultimate mod apk unlimited gems
-bus simulator ultimate mod apk unlimited coins
-bus simulator ultimate mod apk unlimited everything
-bus simulator ultimate 1.5.2 hack apk download
-bus simulator ultimate 1.5.2 cheat apk download
-bus simulator ultimate 1.5.2 premium apk download
-bus simulator ultimate 1.5.2 pro apk download
-bus simulator ultimate 1.5.2 full apk download
-bus simulator ultimate 1.5.2 cracked apk download
-bus simulator ultimate 1.5.2 unlocked apk download
-bus simulator ultimate 1.5.2 latest mod apk download
-bus simulator ultimate 1.5.2 new mod apk download
-bus simulator ultimate 1.5.2 updated mod apk download
-how to install bus simulator ultimate 1.5.2 mod apk
-how to download bus simulator ultimate 1.5.2 mod apk
-how to play bus simulator ultimate 1.5.2 mod apk
-how to get bus simulator ultimate 1.5.2 mod apk
-how to update bus simulator ultimate 1.5.2 mod apk
-how to hack bus simulator ultimate 1.5.2 with lucky patcher
-how to hack bus simulator ultimate 1.5.2 with game guardian
-how to hack bus simulator ultimate 1.5.2 with cheat engine
-how to hack bus simulator ultimate 1.5.2 without root
-how to hack bus simulator ultimate 1.5.2 without verification
-best settings for bus simulator ultimate 1.5.2 mod apk
-best graphics for bus simulator ultimate 1.5.2 mod apk
-best buses for bus simulator ultimate 1.5.2 mod apk
-best routes for bus simulator ultimate 1.5.2 mod apk
-best tips and tricks for bus simulator ultimate 1.5.2 mod apk
-best cheats and hacks for bus simulator ultimate 1.5.2 mod apk
-best reviews and ratings for bus simulator ultimate 1.5.2 mod apk
-best alternatives and similar games to bus simulator ultimate 1.5.2 mod apk
-
Features of Bus Simulator Ultimate
-
Bus Simulator Ultimate has many features that make it stand out from other bus simulator games. Here are some of them:
-
Realistic bus driving experience
-
The game boasts realistic graphics, physics, sounds, and weather effects that make you feel like you are really driving a bus on the road. You can also choose from different camera angles, such as cockpit view, third-person view, or top-down view.
-
The game also has realistic traffic rules and situations that you have to follow and deal with. You have to obey traffic lights, speed limits, signs, and signals. You have to avoid accidents, collisions, and fines. You have to deal with traffic jams, road works, accidents, and emergencies.
-
Multiplayer mode and online ranking
-
The game allows you to play with other players online in multiplayer mode. You can join or create a room and invite your friends or random players to join you. You can chat with them using voice or text messages. You can also see their buses and routes on the map.
-
The game also has an online ranking system that shows your position among other players based on your performance, reputation, income, and more. You can compare your stats with other players and try to climb up the leaderboard.
-
Customizable buses and routes
-
The game offers a variety of buses that you can drive and customize. You can choose from different models, brands, sizes, colors, and designs of buses. You can also add different accessories and decorations to your buses, such as skins, stickers, horns, lights, and more. You can also change the interior of your buses, such as the seats, steering wheel, dashboard, and more.
-
The game also lets you create your own routes and destinations. You can choose from different cities and countries to drive in. You can also set the length, difficulty, and scenery of your routes. You can also add different stops, landmarks, and attractions to your routes.
-
Passenger feedback and company management
-
The game also simulates the interaction between you and your passengers. You have to pick up and drop off passengers at the designated stops. You have to provide them with a comfortable and safe ride. You have to listen to their feedback and requests.
-
The game also gives you the opportunity to run your own bus company. You have to hire and train drivers, buy and maintain buses, manage your budget and expenses, and expand your business. You have to balance your income and reputation. You have to deal with competitors, challenges, and events.
-
What is Bus Simulator Ultimate 1.5.2 Mod Apk?
-
Bus Simulator Ultimate 1.5.2 mod apk is a modified version of the original game that offers some extra features and benefits that are not available in the official version. Some of these features are:
-
Benefits of using the mod apk
-
-
Unlimited money: You can get unlimited money in the game that you can use to buy and upgrade buses, hire drivers, create routes, and more.
-
Free purchases: You can make any purchase in the game for free without spending any real money.
-
No ads: You can enjoy the game without any annoying ads interrupting your gameplay.
-
No root: You do not need to root your device to use the mod apk.
-
-
How to download and install the mod apk
-
To download and install the mod apk, you need to follow these steps:
-
-
Download the mod apk file from a trusted source on the internet (a quick way to verify the download is sketched after this list).
-
Enable unknown sources on your device settings to allow installation of apps from outside the Google Play Store.
-
Locate the downloaded file on your device storage and tap on it to install it.
-
Launch the game and enjoy the mod features.
-
-
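Before installing, it is worth verifying that the file you downloaded is exactly the one the site published. If the download page lists a SHA-256 (or similar) hash, you can compare it against your copy with a few lines of Python; this is only a minimal sketch, and the file name and expected hash below are placeholders you would replace with your own values.

```python
import hashlib

# Placeholders: use your own file name and the hash published on the download page.
APK_PATH = "bus-simulator-ultimate-1.5.2-mod.apk"
EXPECTED_SHA256 = "paste-the-published-sha256-here"

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so a large APK does not have to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of(APK_PATH)
print("SHA-256:", actual)
print("Matches published hash:", actual == EXPECTED_SHA256.lower())
```

If the hashes do not match, delete the file and download it again rather than installing it.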
Pros and cons of Bus Simulator Ultimate 1.5.2 Mod Apk
-
Like any other mod apk, Bus Simulator Ultimate 1.5.2 mod apk has its own advantages and disadvantages. Here are some of them:
| Pros | Cons |
| --- | --- |
| Unlimited money and free purchases | May not be compatible with some devices or versions |
| No ads | May cause bugs or glitches in the game |
| No root | May violate the terms and conditions of the game |
| Enhanced gameplay | May affect the online features of the game |
Conclusion
-
Bus Simulator Ultimate is a fun and realistic bus simulator game that lets you drive various buses across different countries and cities, create your own routes and destinations, run your own bus company, and compete with other players online. It has many features that make it one of the best bus simulator games for Android devices.
-
Bus Simulator Ultimate 1.5.2 mod apk is a modified version of the original game that offers unlimited money, free purchases, no ads, and no root. It can enhance your gameplay experience by giving you more freedom and options in the game. However, it also has some drawbacks that you should be aware of before using it.
-
If you are looking for a bus simulator game that is realistic, challenging, and entertaining, you should give Bus Simulator Ultimate a try. And if you want to get some extra benefits and features in the game, you can download and install Bus Simulator Ultimate 1.5.2 mod apk from a reliable source on the internet.
-
FAQs
-
Q1: Is Bus Simulator Ultimate free to play?
-
A1: Yes, Bus Simulator Ultimate is free to download and play on Android devices. However, it also contains some in-app purchases that require real money.
-
Q2: Is Bus Simulator Ultimate 1.5.2 Mod Apk safe to use?
-
A2: Bus Simulator Ultimate 1.5.2 mod apk is generally safe to use if you download it from a trusted source on the internet. However, you should always be careful when installing apps from unknown sources as they may contain viruses or malware that can harm your device or data.
-
Q3: How to update Bus Simulator Ultimate 1.5.2 Mod Apk?
A3: To update Bus Simulator Ultimate 1.5.2 mod apk, you need to download the latest version of the mod apk file from the same source where you downloaded the previous version. Then, you need to uninstall the old version of the mod apk and install the new one. You may also need to clear the cache and data of the game before launching it.
-
Q4: What are the best bus simulator games for Android?
-
A4: Besides Bus Simulator Ultimate, there are many other bus simulator games that you can try on your Android device. Some of them are:
-
-
Bus Simulator: Original: This game lets you drive realistic buses in various locations and scenarios. You can also customize your buses and routes, and play with other players online.
-
World Bus Driving Simulator: This game lets you drive different types of buses across Brazil and other countries. You can also enjoy realistic graphics, sounds, and weather effects.
-
Coach Bus Simulator: This game lets you drive modern coaches across Europe and other continents. You can also create your own bus company and hire drivers.
-
Heavy Bus Simulator: This game lets you drive heavy buses on challenging roads and terrains. You can also upgrade your buses and change their appearance.
-
-
Q5: How to contact the developers of Bus Simulator Ultimate?
-
A5: If you have any questions, suggestions, or feedback about Bus Simulator Ultimate, you can contact the developers of the game by using the following methods:
-
-
Email: info@zuuks.com
-
Website: https://www.zuuks.com/
-
Facebook: https://www.facebook.com/zuuks.games
-
Instagram: https://www.instagram.com/zuuks_games/
-
Twitter: https://twitter.com/ZuuksGames
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Dr Driving 3 Mod APK and Learn Driving with Fun.md b/spaces/1phancelerku/anime-remove-background/Download Dr Driving 3 Mod APK and Learn Driving with Fun.md
deleted file mode 100644
index b2a60a903e5a7d8549773eef3b4a07cac23a303b..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Dr Driving 3 Mod APK and Learn Driving with Fun.md
+++ /dev/null
@@ -1,98 +0,0 @@
-
-
Dr Driving 3 Mod APK: A Fun and Realistic Driving Simulator
-
Do you love driving games but get bored of the same old racing and drifting scenarios? Do you want to experience the thrill of driving in a realistic city environment with traffic, pedestrians, and obstacles? Do you want to test your driving skills in various modes and missions, such as parking, delivery, taxi, and more? If you answered yes to any of these questions, then you should try Dr Driving 3, one of the best car simulation games on Android.
Dr Driving 3 is the third installment of the popular Dr Driving series, developed by SUD Inc. It is a driving simulator game that lets you drive various cars in a realistic city setting. You can choose from different modes and missions, such as parking, delivery, taxi, speed, fuel efficiency, and more. You can also compete with other players online and climb the leaderboards. You can customize your cars with different colors, wheels, spoilers, and upgrades. You can also earn coins and gold by completing missions and achievements.
-
Features of Dr Driving 3
-
Realistic graphics and physics
-
Dr Driving 3 has stunning graphics that make you feel like you are driving in a real city. The game has realistic physics that simulate the car's movement, speed, braking, steering, and collision. You can also see the damage effects on your car when you crash or hit something. The game also has dynamic weather effects, such as rain, snow, fog, and night.
-
Various modes and missions
-
Dr Driving 3 has different modes and missions that challenge your driving skills. You can choose from parking, delivery, taxi, speed, fuel efficiency, and more. Each mode has different objectives and difficulties. For example, in parking mode, you have to park your car in a designated spot without hitting anything. In delivery mode, you have to deliver goods to various locations within a time limit. In taxi mode, you have to pick up and drop off passengers without breaking traffic rules.
-
dr driving 3 mod apk unlimited money and gold
-dr driving 3 mod apk download for android
-dr driving 3 mod apk latest version
-dr driving 3 mod apk hack
-dr driving 3 mod apk revdl
-dr driving 3 mod apk offline
-dr driving 3 mod apk free shopping
-dr driving 3 mod apk no ads
-dr driving 3 mod apk unlimited coins and gems
-dr driving 3 mod apk all cars unlocked
-dr driving 3 mod apk android 1
-dr driving 3 mod apk rexdl
-dr driving 3 mod apk happymod
-dr driving 3 mod apk unlimited fuel
-dr driving 3 mod apk unlimited diamonds
-dr driving 3 mod apk online
-dr driving 3 mod apk unlimited everything
-dr driving 3 mod apk unlimited keys
-dr driving 3 mod apk unlimited nitro
-dr driving 3 mod apk unlimited xp
-dr driving 3 mod apk new update
-dr driving 3 mod apk old version
-dr driving 3 mod apk obb
-dr driving 3 mod apk pure
-dr driving 3 mod apk premium
-dr driving 3 mod apk pro
-dr driving 3 mod apk unlocked all features
-dr driving 3 mod apk vip
-dr driving 3 mod apk with unlimited money and gold download for android latest version
-dr driving 3 mod apk with all cars unlocked and unlimited money and gold download for android latest version
-
Online multiplayer and leaderboards
-
Dr Driving 3 also has an online multiplayer feature that lets you play with other players around the world. You can join or create a room and invite your friends or random players to join. You can also chat with them using emojis. You can compete with them in different modes and see who is the best driver. You can also check your ranking on the global and local leaderboards.
-
Customizable cars and upgrades
-
Dr Driving 3 has a variety of cars that you can drive in the game. You can choose from sedans, hatchbacks, SUVs, sports cars, trucks, buses, and more. You can also customize your cars with different colors, wheels, spoilers, and upgrades. You can improve your car's performance by upgrading its engine, transmission, brakes, tires, suspension, and more.
-
What is Dr Driving 3 Mod APK?
-
Dr Driving 3 Mod APK is a modified version of Dr Driving 3 that provides everything for free: unlimited coins and gold, all cars and upgrades unlocked, no ads, and no root required. It is a hacked version of the original game that gives you unlimited access to all the features and resources of the game. You can enjoy the game without any limitations or restrictions.
-
Benefits of Dr Driving 3 Mod APK
-
Unlimited coins and gold
-
Coins and gold are the main currencies in Dr Driving 3. You need them to buy new cars, customize them, and upgrade them. You can also use them to unlock new modes and missions. However, earning coins and gold in the game is not easy. You have to complete missions, achievements, and watch ads to get them. With Dr Driving 3 Mod APK, you don't have to worry about that. You will get unlimited coins and gold in your account as soon as you install the mod. You can use them to buy anything you want in the game.
-
All cars and upgrades unlocked
-
Dr Driving 3 has a lot of cars that you can drive in the game. However, not all of them are available from the start. You have to unlock them by completing certain missions or paying with coins and gold. Some of the cars are very expensive and require a lot of coins and gold to unlock. With Dr Driving 3 Mod APK, you don't have to wait or spend money to unlock them. You will get all the cars and upgrades unlocked in the mod. You can choose any car you like and customize it with any upgrade you want.
-
No ads and no root required
-
Dr Driving 3 is a free game, but it has ads that can interrupt your gameplay and annoy you. You can remove the ads by paying with real money, but that is not a good option for everyone. With Dr Driving 3 Mod APK, you don't have to deal with any ads. The mod removes all the ads from the game and lets you enjoy the game without any distractions. Moreover, the mod does not require root access to work on your device. You can install it easily without any risk of damaging your device or violating its warranty.
-
How to download and install Dr Driving 3 Mod APK?
-
If you want to download and install Dr Driving 3 Mod APK on your device, you only need to follow a few simple steps:
-
Steps to download and install Dr Driving 3 Mod APK
-
Step 1: Enable unknown sources on your device
-
Before you can install Dr Driving 3 Mod APK on your device, you have to enable unknown sources on your device. This will allow you to install apps from sources other than the Google Play Store. To enable unknown sources, go to Settings > Security > Unknown Sources and toggle it on.
-
Step 2: Download the Dr Driving 3 Mod APK file from a trusted source
-
Next, you have to download the Dr Driving 3 Mod APK file from a trusted source. There are many websites that offer modded apps, but not all of them are safe and reliable. Some of them may contain viruses or malware that can harm your device or steal your data. Therefore, you have to be careful when choosing a source to download the mod file. You can use this link to download the Dr Driving 3 Mod APK file safely and securely.
-
Step 3: Locate and install the Dr Driving 3 Mod APK file on your device
-
After downloading the Dr Driving 3 Mod APK file, you have to locate it on your device and install it. You can use a file manager app to find the file in your downloads folder or wherever you saved it. Then, tap on the file and follow the instructions on the screen to install it.
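If the file is on your computer rather than on the phone, another option is to sideload it over USB with adb (Android Debug Bridge). This is only a sketch of that alternative route: it assumes adb is installed on the computer, USB debugging is enabled on the phone, and the file name below is a placeholder for wherever you saved the download.

```python
import subprocess

# Placeholder path: point this at the mod apk you downloaded on your computer.
APK_PATH = "dr-driving-3-mod.apk"

# List connected devices first so you can confirm the phone is visible to adb.
subprocess.run(["adb", "devices"], check=True)

# "-r" reinstalls over an existing copy while keeping the app's data.
result = subprocess.run(["adb", "install", "-r", APK_PATH], capture_output=True, text=True)
print(result.stdout or result.stderr)
```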
-
Step 4: Launch the game and enjoy the mod features
-
Finally, you can launch the game and enjoy the mod features. You will see unlimited coins and gold in your account, all cars and upgrades unlocked, no ads, and no root required. You can play any mode or mission you want, customize your cars as you like, compete with other players online, and have fun driving in a realistic city.
-
Conclusion
-
Dr Driving 3 is a fun and realistic driving simulator game that lets you drive various cars in a realistic city setting. You can choose from different modes and missions, such as parking, delivery, taxi, speed, fuel efficiency, and more. You can also compete with other players online and climb the leaderboards. You can customize your cars with different colors, wheels, spoilers, and upgrades.
-
If you want to enjoy the game without any limitations or restrictions, you should try Dr Driving 3 Mod APK, a modified version of the game that provides unlimited coins and gold, all cars and upgrades unlocked, no ads, and no root required. You can download and install Dr Driving 3 Mod APK easily by following the steps in this article. You can then enjoy the game with all the mod features and have a great time driving in a realistic city.
-
FAQs
-
Here are some frequently asked questions about Dr Driving 3 and Dr Driving 3 Mod APK:
-
| Question | Answer |
| --- | --- |
| Is Dr Driving 3 free to play? | Yes, Dr Driving 3 is free to play, but it has in-app purchases and ads that can affect your gameplay. |
| Is Dr Driving 3 Mod APK safe to use? | Yes, Dr Driving 3 Mod APK is safe to use, as long as you download it from a trusted source. However, you should always be careful when installing modded apps on your device and scan them for viruses or malware before installing them. |
| Does Dr Driving 3 Mod APK work on all devices? | Dr Driving 3 Mod APK works on most Android devices that support the original game. However, some devices may not be compatible with the mod or may experience some issues or errors. If you encounter any problems, you can try reinstalling the mod or contacting the mod developer for help. |
| Can I play Dr Driving 3 offline? | Yes, you can play Dr Driving 3 offline, but you will not be able to access the online multiplayer feature or the leaderboards. You will also not be able to sync your progress or achievements with your Google Play account. |
| Can I update Dr Driving 3 Mod APK? | No, you cannot update Dr Driving 3 Mod APK from the Google Play Store, as it is a modified version of the game. If you want to update the mod, you have to download and install the latest version of the mod from the same source you downloaded it from. |
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Explore the Thrilling World of Monster Life with Free Shopping Mod APK.md b/spaces/1phancelerku/anime-remove-background/Explore the Thrilling World of Monster Life with Free Shopping Mod APK.md
deleted file mode 100644
index 93d38ffad64cb9067e0d56bd62930bdad9d4a8dd..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Explore the Thrilling World of Monster Life with Free Shopping Mod APK.md
+++ /dev/null
@@ -1,96 +0,0 @@
-
-
Monster Life Mod APK Free Shopping: A Guide for Monster Lovers
-
Do you love cute monsters? Do you want to collect, breed, train, and battle with them? Do you want to have unlimited coins and gems to buy anything you want in the game? If you answered yes to any of these questions, then you should try Monster Life Mod APK Free Shopping, a modified version of the popular game Monster Life by Gameloft.
In this article, we will tell you everything you need to know about Monster Life Mod APK Free Shopping, including how to download and install it, how to play it, what are its features, pros and cons, and whether it is worth it or not. Let's get started!
-
How to download and install Monster Life Mod APK Free Shopping
-
The first thing you need to do is to find a reliable source for the mod apk file. There are many websites that offer mod apk files for various games, but not all of them are safe or trustworthy. Some of them may contain viruses or malware that can harm your device or data, or they may not work properly or at all. Therefore, you should do some research before downloading any mod apk file from any website.
-
One of the websites that we recommend is APKCombo, which provides safe and fast downloads for various Android apps. You can find Monster Life Mod APK Free Shopping on their website by searching for the game's name.
-
Once you have found the mod apk file, you need to enable unknown sources on your device. This is because Android devices do not allow installing apps from sources other than the official Google Play Store by default. To enable unknown sources, go to your device settings, then security, then toggle on the option that says "allow installation of apps from unknown sources". This will allow you to install the mod apk file on your device.
-
After enabling unknown sources, you can download and install the mod apk file on your device. Just tap on the download button on the website, then wait for the file to be downloaded. Once it is done, tap on the file to open it, then tap on install. Follow the instructions on the screen, then wait for the installation to be completed. You may need to grant some permissions to the app during the installation process.
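Before tapping install, a quick sanity check is to confirm that the download is actually a valid APK: an APK is just a ZIP archive that must contain an AndroidManifest.xml entry. The short Python sketch below performs that check; the file name is a placeholder for your own download.

```python
import zipfile

# Placeholder path: replace with the file you actually downloaded.
APK_PATH = "monster-life-mod-free-shopping.apk"

try:
    with zipfile.ZipFile(APK_PATH) as apk:
        names = apk.namelist()
        first_bad = apk.testzip()  # returns the first corrupt member, or None
        print("Contains AndroidManifest.xml:", "AndroidManifest.xml" in names)
        print("Contains classes.dex:", "classes.dex" in names)
        print("Corrupt entries:", first_bad if first_bad else "none found")
except zipfile.BadZipFile:
    print("Not a valid APK/ZIP archive; download the file again from another source.")
```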
-
monster life mod apk unlimited money and gems
-monster life mod apk latest version download
-monster life mod apk free purchase and subscription
-monster life mod apk no ads and unlocked levels
-monster life mod apk android 1 and rexdl
-monster life mod apk offline and online play
-monster life mod apk hack and cheat codes
-monster life mod apk revdl and happymod
-monster life mod apk 2023 and 2024 updates
-monster life mod apk obb and data files
-monster life unlimited money mod apk free download
-monster life latest version mod apk free shopping
-monster life free purchase mod apk unlimited gems
-monster life no ads mod apk unlocked premium features
-monster life android 1 mod apk free coins and cash
-monster life offline mod apk unlimited everything
-monster life hack mod apk free diamond and gold
-monster life revdl mod apk unlocked all monsters
-monster life 2023 mod apk free subscription and purchase
-monster life obb mod apk free shopping and levels
-download monster life mod apk unlimited money and gems
-download monster life mod apk latest version free shopping
-download monster life mod apk free purchase and subscription
-download monster life mod apk no ads and unlocked levels
-download monster life mod apk android 1 and rexdl free
-download monster life mod apk offline and online play
-download monster life mod apk hack and cheat codes free
-download monster life mod apk revdl and happymod free shopping
-download monster life mod apk 2023 and 2024 updates free
-download monster life mod apk obb and data files free
-
Congratulations! You have successfully installed Monster Life Mod APK Free Shopping on your device. You can now launch the app and enjoy the game with free shopping and other features.
-
How to play Monster Life Mod APK Free Shopping
-
Now that you have installed the app, you may be wondering how to play the game. Don't worry, we will guide you through the basics of the game and help you become a master monster trainer in no time.
-
The game starts with a tutorial that introduces you to the story and the gameplay of Monster Life. You will learn that you are a young monster keeper who lives on the islands of Numa, a magical world where monsters and humans coexist peacefully. However, an evil force called Chaos is threatening to destroy Numa and its inhabitants, and you are the only one who can stop it.
-
You will also learn that you have a special gift: you can communicate with monsters and understand their feelings. This makes you a perfect candidate for becoming a monster trainer and protector of Numa. Your adventure begins when you choose your starter monster from three options: Fire Lion, Ice Bear, or Leaf Turtle. You can also name your monster and customize its appearance with different colors and accessories.
-
After choosing your starter monster, you will explore the islands of Numa and fight against Chaos and its minions. You will encounter different types of monsters, each with their own strengths and weaknesses. You will also collect and breed more monsters with different abilities and combinations. You can have up to six monsters in your team at a time, and you can switch them during battles.
-
Battles in Monster Life are turn-based and simple to play. You just need to tap on your monster's icon to select it, then tap on the enemy's icon to attack it. You can also use items and skills to heal or boost your monsters, or to inflict damage or status effects on your enemies. The battle ends when you defeat all the enemies or when all your monsters are knocked out.
-
As you win battles, your monsters will gain experience and level up. They will also learn new skills and evolve into stronger forms. You can train and customize your monsters with items and skills that you can buy or find in the game. You can also build habitats, farms, shops, and other facilities on your island to make it more comfortable and attractive for your monsters.
-
But fighting against Chaos is not the only thing you can do in Monster Life. You can also challenge other players in online battles and tournaments. You can test your skills and strategies against real opponents from around the world, and earn rewards and rankings based on your performance. You can also chat with other players, visit their islands, and trade monsters with them.
-
What are the features of Monster Life Mod APK Free Shopping
-
Monster Life Mod APK Free Shopping is not just a regular version of Monster Life. It has some extra features that make it more fun and enjoyable to play. Here are some of the features that you can expect from this mod apk:
-
-
Unlimited coins and gems: Coins and gems are the main currencies in Monster Life. You need them to buy items, skills, habitats, decorations, and other things in the game. Normally, you can earn coins and gems by completing quests, winning battles, watching ads, or spending real money. But with this mod apk, you don't have to worry about running out of coins or gems ever again. You will have unlimited amounts of them to spend as you wish.
-
All monsters unlocked and available to breed: Monsters are the heart of Monster Life. There are over 100 different monsters in the game, each with their own characteristics, abilities, evolutions, and personalities. Normally, you can unlock new monsters by completing quests, winning battles, breeding existing monsters, or spending real money. But with this mod apk, you don't have to wait or pay for anything. You will have access to all the monsters in the game from the start, and you can breed them freely without any restrictions.
-
No ads or in-app purchases: Ads and in-app purchases are annoying features that can interrupt your gameplay or tempt you to spend more money than you want. Normally, you can watch ads to earn some extra coins or gems, or buy them with real money if you are impatient or desperate. But with this mod apk, you don't have to deal with any ads or in-app purchases at all. You will enjoy a smooth and uninterrupted gameplay without any distractions or temptations.
-
What are the pros and cons of Monster Life Mod APK Free Shopping
-
Monster Life Mod APK Free Shopping may sound like a perfect game for monster lovers, but it is not without its drawbacks. Like any mod apk, it has some advantages and disadvantages that you should consider before playing it. Here are some of the pros and cons of Monster Life Mod APK Free Shopping:
-
| Pros | Cons |
| --- | --- |
| Enjoy cute and colorful graphics and animation: Monster Life has charming and vibrant graphics and animation that will appeal to anyone who likes cute things. The monsters are adorable and expressive, the islands are lush and lively, and the battles are dynamic and exciting. The game also has a cheerful and upbeat soundtrack that matches the mood of the game. | The mod apk file may not be compatible with some devices or updates: Monster Life Mod APK Free Shopping is a modified version of the original game, which means that it may not work well with some devices or updates. The mod apk file may crash, freeze, or lag on some devices, or it may not run at all. The mod apk file may also become obsolete or incompatible with future updates of the original game, which may prevent you from playing the game or accessing new features. |
| Experience fun and engaging gameplay with diverse monsters and activities: Monster Life has a lot of things to offer to keep you entertained and hooked. You can collect, breed, train, and battle with over 100 different monsters, each with their own abilities and personalities. You can also explore the islands of Numa and discover new places, quests, and secrets. You can also build and decorate your own island with various facilities and items. You can also interact with other players online and compete with them in battles and tournaments. | The mod apk file may contain viruses or malware that can harm your device or data: Monster Life Mod APK Free Shopping is not an official version of the game, which means that it may not be safe or secure to download or install. The mod apk file may contain viruses or malware that can infect your device or data, or steal your personal information. You should always scan the mod apk file with antivirus software before installing it, and back up your data before playing it. |
| Share your monster collection and achievements with your friends on social media: Monster Life allows you to connect your game account with your Facebook account, which enables you to share your monster collection and achievements with your friends on social media. You can also invite your friends to play the game with you, or send them gifts and messages. You can also see your friends' islands and monsters, and compare your progress and rankings with them. | The mod apk file may violate the terms of service of the original game and result in a ban or suspension: Monster Life Mod APK Free Shopping is an unauthorized version of the game, which means that it may violate the terms of service of the original game. The terms of service prohibit modifying, hacking, cheating, or exploiting the game in any way. If you play the mod apk file, you may risk getting banned or suspended from the original game, or losing your account or data. |
Conclusion: Is Monster Life Mod APK Free Shopping worth it?
-
Monster Life Mod APK Free Shopping is a fun and addictive game for anyone who loves cute monsters. It has a lot of features that make it more enjoyable and convenient to play than the original game. However, it also has some risks and drawbacks that you should be aware of before playing it. Ultimately, the decision is up to you whether you want to try it or not.
-
If you decide to play Monster Life Mod APK Free Shopping, we hope that this article has helped you understand how to download and install it, how to play it, what are its features, pros and cons, and whether it is worth it or not. We hope that you have a great time playing Monster Life Mod APK Free Shopping!
-
FAQs
-
Here are some frequently asked questions about Monster Life Mod APK Free Shopping:
-
-
What is Monster Life?
-
Monster Life is a popular game by Gameloft that lets you collect, breed, train, and battle with cute monsters in a magical world called Numa.
-
What is Monster Life Mod APK Free Shopping?
-
Monster Life Mod APK Free Shopping is a modified version of Monster Life that gives you unlimited coins and gems to buy anything you want in the game, as well as access to all the monsters in the game.
-
Is Monster Life Mod APK Free Shopping safe and legal?
-
Monster Life Mod APK Free Shopping is not an official version of the game, which means that it may not be safe or legal to download or play. The mod apk file may contain viruses or malware that can harm your device or data, or it may violate the terms of service of the original game and result in a ban or suspension. You should always scan the mod apk file with an antivirus software before installing it, and backup your data before playing it. You should also play the mod apk file at your own risk and responsibility.
-
How can I get more coins and gems in Monster Life?
-
If you don't want to use Monster Life Mod APK Free Shopping, you can still get more coins and gems in Monster Life by completing quests, winning battles, watching ads, or spending real money. You can also get more coins and gems by inviting your friends to play the game with you, or by participating in events and promotions.
-
How can I breed new monsters in Monster Life?
-
To breed new monsters in Monster Life, you need to have two monsters of the same species and opposite genders. You also need to have a breeding habitat that matches their element. You can then tap on the breeding habitat and select the two monsters that you want to breed. You will then have to wait for some time until the breeding is done. You can speed up the process by using gems or watching ads. You can then hatch the egg and get a new monster.
-
How can I contact the developer of Monster Life?
-
If you have any questions, feedback, or issues about Monster Life, you can contact the developer of the game by visiting their website [here], or by sending them an email at [support@gameloft.com]. You can also follow them on their social media accounts on [Facebook], [Twitter], [Instagram], and [YouTube].
- 401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Extreme car driving simulator apk Free download and play the most realistic car game ever.md b/spaces/1phancelerku/anime-remove-background/Extreme car driving simulator apk Free download and play the most realistic car game ever.md
deleted file mode 100644
index 05396735d762eee842f04e08c287be6589542083..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Extreme car driving simulator apk Free download and play the most realistic car game ever.md
+++ /dev/null
@@ -1,99 +0,0 @@
-
-
Free Download Extreme Car Driving Simulator APK
-
If you are looking for a realistic and fun car driving game, you should try Extreme Car Driving Simulator. This is one of the best open world car simulators that lets you drive, drift and feel a racing sports car. You can perform illegal stunts, run full speed without the police chasing you, and burn the asphalt of this huge city. In this article, we will tell you what Extreme Car Driving Simulator is, what features it has, and how to download and install it on your Android device.
-
What is Extreme Car Driving Simulator?
-
Extreme Car Driving Simulator is an Android game developed by AxesInMotion Racing. It was released in 2014 and has since gained over 500 million downloads on Google Play. It is also available on other platforms like Windows, iOS, and Mac. Extreme Car Driving Simulator is a game that simulates the experience of driving a sports car in an open world environment. You can choose from different cars, customize them, and drive them in various modes. You can also explore the city, crash your car, and enjoy the realistic physics and graphics.
Extreme Car Driving Simulator has many features that make it an exciting and addictive game. Here are some of them:
-
Mini game checkpoint mode
-
In this mode, you have to reach different checkpoints in the city within a given time limit. You can earn coins and unlock new cars by completing this mode.
-
free download extreme car driving simulator apk mod
-free download extreme car driving simulator apk for pc
-free download extreme car driving simulator apk latest version
-free download extreme car driving simulator apk unlimited money
-free download extreme car driving simulator apk hack
-free download extreme car driving simulator apk android 1
-free download extreme car driving simulator apk pure
-free download extreme car driving simulator apk offline
-free download extreme car driving simulator apk old version
-free download extreme car driving simulator apk revdl
-free download extreme car driving simulator apk rexdl
-free download extreme car driving simulator apk uptodown
-free download extreme car driving simulator apk obb
-free download extreme car driving simulator apk data
-free download extreme car driving simulator apk mirror
-free download extreme car driving simulator apk no ads
-free download extreme car driving simulator apk full version
-free download extreme car driving simulator apk mod menu
-free download extreme car driving simulator apk mod money
-free download extreme car driving simulator apk mod all cars unlocked
-free download extreme car driving simulator apk mod unlimited money and gold
-free download extreme car driving simulator apk mod 5.3.0p1
-free download extreme car driving simulator apk mod 5.2.7p1
-free download extreme car driving simulator apk mod 5.2.6p1
-free download extreme car driving simulator apk mod 5.2.3p1
-free download extreme car driving simulator apk mod 5.2.0p1
-free download extreme car driving simulator apk mod 5.1.12p1
-free download extreme car driving simulator apk mod 5.1.11p1
-free download extreme car driving simulator apk mod 5.1.8p1
-free download extreme car driving simulator apk mod 5.1.7p1
-free download extreme car driving simulator apk mod 5.1.6p1
-free download extreme car driving simulator apk mod 5.0.9p1
-free download extreme car driving simulator apk mod 5.0.8p1
-free download extreme car driving simulator apk mod 5.0.7p1
-free download extreme car driving simulator apk mod 5.0.6p1
-free download extreme car driving simulator apk mod 5.0.4p1
-free download extreme car driving simulator apk mod 4.18.30p1
-free download extreme car driving simulator apk mod 4.18.26p1
-free download extreme car driving simulator apk mod 4.18.25p1
-free download extreme car driving simulator apk mod 4.18.23p1
-free download extreme car driving simulator apk mod 4.18.20p1
-free download extreme car driving simulator apk mod 4.18.19p1
-free download extreme car driving simulator apk mod 4.18.17p1
-free download extreme car driving simulator apk mod 4.18.16p1
-free download extreme car driving simulator apk mod 4.18.15p1
-free download extreme car driving simulator apk mod 4.18.14p1
-free download extreme car driving simulator apk mod 4.18.13p1
-free download extreme car driving simulator apk mod 4.18.p11
-
Drive with traffic
-
You can also drive with traffic in the city, which adds more challenge and realism to the game. You have to avoid crashing into other vehicles and obey the traffic rules.
-
Full real HUD
-
The game has a full real HUD that shows you the revs, gear, speed, and other information of your car. You can also switch between different views, such as cockpit view, third-person view, or top-down view.
-
ABS, TC and ESP simulation
-
You can also simulate the ABS, TC and ESP systems of your car. These are features that help you control your car better in different situations. You can also turn them off if you want more challenge.
-
Explore a detailed open world environment
-
The game has a large and detailed open world environment that you can explore freely. You can find different places, such as airports, highways, bridges, tunnels, off-road areas, and more. You can also interact with some objects, such as ramps, cones, barrels, and traffic lights.
-
Realistic car damage and physics
-
The game has realistic car damage and physics that make it more fun and immersive. You can see your car getting dented, scratched, or even destroyed by crashing into other cars or objects. You can also feel the weight, speed, and handling of your car as you drive it.
-
Control your car with different options
-
You can control your car with different options, such as steering wheel, accelerometer, or arrows. You can also adjust the sensitivity and tilt of your device to suit your preference.
-
Several different cameras and gamepad support
-
You can also switch between several different cameras to get different perspectives of your car and the environment. You can also use a gamepad to play the game if you have one connected to your device.
-
How to download and install Extreme Car Driving Simulator APK?
-
If you want to download and install Extreme Car Driving Simulator APK on your Android device , you can follow these simple steps:
-
Download the APK file from a trusted source
-
The first step is to download the APK file of Extreme Car Driving Simulator from a trusted source. You can find many websites that offer the APK file, but you have to be careful and avoid downloading from malicious or fake sites. One of the reliable sources that we recommend is APKPure, which is a popular and safe platform for downloading APK files. You can visit their website and search for Extreme Car Driving Simulator, or you can use this link to go directly to the download page.
-
Enable unknown sources on your device
-
The next step is to enable unknown sources on your device. This is a security setting that allows you to install apps from sources other than Google Play. To do this, you have to go to your device settings and look for the option that says "Unknown sources" or "Install unknown apps". Depending on your device model and Android version, this option may be located in different places, such as Security, Privacy, or Applications. You have to enable this option by tapping on it and confirming your choice.
-
Install the APK file and launch the game
-
The final step is to install the APK file and launch the game. To do this, you have to locate the APK file that you downloaded in your device storage, usually in the Downloads folder. You have to tap on the file and follow the instructions on the screen to install it. Once the installation is complete, you can launch the game by tapping on its icon on your home screen or app drawer. You can now enjoy playing Extreme Car Driving Simulator on your Android device.
-
Conclusion
-
Extreme Car Driving Simulator is a great game for car enthusiasts who want to experience driving a sports car in an open world environment. It has many features that make it realistic, fun, and challenging. You can download and install it on your Android device by following the steps we explained in this article. We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below.
-
FAQs
-
Here are some frequently asked questions about Extreme Car Driving Simulator:
-
-
Is Extreme Car Driving Simulator free?
-
Yes, Extreme Car Driving Simulator is free to download and play. However, it contains ads and in-app purchases that you can disable or buy if you want.
-
Is Extreme Car Driving Simulator offline?
-
Yes, Extreme Car Driving Simulator can be played offline without an internet connection. However, some features may require an internet connection, such as updating the game or accessing online leaderboards.
-
Is Extreme Car Driving Simulator safe?
-
Yes, Extreme Car Driving Simulator is safe to play as long as you download it from a trusted source like APKPure. You should also scan the APK file with an antivirus app before installing it.
-
How to update Extreme Car Driving Simulator?
-
You can update Extreme Car Driving Simulator by downloading the latest version of the APK file from APKPure or other trusted sources. You can also check for updates within the game by tapping on the settings icon and selecting "Check for updates".
-
How to get more coins in Extreme Car Driving Simulator?
-
You can get more coins in Extreme Car Driving Simulator by completing mini game checkpoint mode, driving with traffic, performing stunts, or watching ads. You can also buy coins with real money through in-app purchases.
-
References: https://apkpure.com/ and https://apkpure.com/extreme-car-driving-simulator/com.aim.racing
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/AIConsultant/MusicGen/tests/modules/test_seanet.py b/spaces/AIConsultant/MusicGen/tests/modules/test_seanet.py
deleted file mode 100644
index e5c51b340a2f94fb2828b14daf83d5fad645073d..0000000000000000000000000000000000000000
--- a/spaces/AIConsultant/MusicGen/tests/modules/test_seanet.py
+++ /dev/null
@@ -1,115 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from itertools import product
-
-import pytest
-import torch
-
-from audiocraft.modules.seanet import SEANetEncoder, SEANetDecoder, SEANetResnetBlock
-from audiocraft.modules import StreamableConv1d, StreamableConvTranspose1d
-
-
-class TestSEANetModel:
-
- def test_base(self):
- encoder = SEANetEncoder()
- decoder = SEANetDecoder()
-
- x = torch.randn(1, 1, 24000)
- z = encoder(x)
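- # Default SEANet settings: dimension 128 and ratios [8, 5, 4, 2] (total stride 320), so 24000 samples encode to 75 frames.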
- assert list(z.shape) == [1, 128, 75], z.shape
- y = decoder(z)
- assert y.shape == x.shape, (x.shape, y.shape)
-
- def test_causal(self):
- encoder = SEANetEncoder(causal=True)
- decoder = SEANetDecoder(causal=True)
- x = torch.randn(1, 1, 24000)
-
- z = encoder(x)
- assert list(z.shape) == [1, 128, 75], z.shape
- y = decoder(z)
- assert y.shape == x.shape, (x.shape, y.shape)
-
- def test_conv_skip_connection(self):
- encoder = SEANetEncoder(true_skip=False)
- decoder = SEANetDecoder(true_skip=False)
-
- x = torch.randn(1, 1, 24000)
- z = encoder(x)
- assert list(z.shape) == [1, 128, 75], z.shape
- y = decoder(z)
- assert y.shape == x.shape, (x.shape, y.shape)
-
- def test_seanet_encoder_decoder_final_act(self):
- encoder = SEANetEncoder(true_skip=False)
- decoder = SEANetDecoder(true_skip=False, final_activation='Tanh')
-
- x = torch.randn(1, 1, 24000)
- z = encoder(x)
- assert list(z.shape) == [1, 128, 75], z.shape
- y = decoder(z)
- assert y.shape == x.shape, (x.shape, y.shape)
-
- def _check_encoder_blocks_norm(self, encoder: SEANetEncoder, n_disable_blocks: int, norm: str):
- n_blocks = 0
- for layer in encoder.model:
- if isinstance(layer, StreamableConv1d):
- n_blocks += 1
- assert layer.conv.norm_type == ('none' if n_blocks <= n_disable_blocks else norm)
- elif isinstance(layer, SEANetResnetBlock):
- for resnet_layer in layer.block:
- if isinstance(resnet_layer, StreamableConv1d):
- # here we add + 1 to n_blocks as we increment n_blocks just after the block
- assert resnet_layer.conv.norm_type == ('none' if (n_blocks + 1) <= n_disable_blocks else norm)
-
- def test_encoder_disable_norm(self):
- n_residuals = [0, 1, 3]
- disable_blocks = [0, 1, 2, 3, 4, 5, 6]
- norms = ['weight_norm', 'none']
- for n_res, disable_blocks, norm in product(n_residuals, disable_blocks, norms):
- encoder = SEANetEncoder(n_residual_layers=n_res, norm=norm,
- disable_norm_outer_blocks=disable_blocks)
- self._check_encoder_blocks_norm(encoder, disable_blocks, norm)
-
- def _check_decoder_blocks_norm(self, decoder: SEANetDecoder, n_disable_blocks: int, norm: str):
- n_blocks = 0
- for layer in decoder.model:
- if isinstance(layer, StreamableConv1d):
- n_blocks += 1
- assert layer.conv.norm_type == ('none' if (decoder.n_blocks - n_blocks) < n_disable_blocks else norm)
- elif isinstance(layer, StreamableConvTranspose1d):
- n_blocks += 1
- assert layer.convtr.norm_type == ('none' if (decoder.n_blocks - n_blocks) < n_disable_blocks else norm)
- elif isinstance(layer, SEANetResnetBlock):
- for resnet_layer in layer.block:
- if isinstance(resnet_layer, StreamableConv1d):
- assert resnet_layer.conv.norm_type == \
- ('none' if (decoder.n_blocks - n_blocks) < n_disable_blocks else norm)
-
- def test_decoder_disable_norm(self):
- n_residuals = [0, 1, 3]
- disable_blocks = [0, 1, 2, 3, 4, 5, 6]
- norms = ['weight_norm', 'none']
- for n_res, disable_blocks, norm in product(n_residuals, disable_blocks, norms):
- decoder = SEANetDecoder(n_residual_layers=n_res, norm=norm,
- disable_norm_outer_blocks=disable_blocks)
- self._check_decoder_blocks_norm(decoder, disable_blocks, norm)
-
- def test_disable_norm_raises_exception(self):
- # Invalid disable_norm_outer_blocks values raise exceptions
- with pytest.raises(AssertionError):
- SEANetEncoder(disable_norm_outer_blocks=-1)
-
- with pytest.raises(AssertionError):
- SEANetEncoder(ratios=[1, 1, 2, 2], disable_norm_outer_blocks=7)
-
- with pytest.raises(AssertionError):
- SEANetDecoder(disable_norm_outer_blocks=-1)
-
- with pytest.raises(AssertionError):
- SEANetDecoder(ratios=[1, 1, 2, 2], disable_norm_outer_blocks=7)
diff --git a/spaces/AIGC-Audio/Make_An_Audio/ldm/models/diffusion/classifier.py b/spaces/AIGC-Audio/Make_An_Audio/ldm/models/diffusion/classifier.py
deleted file mode 100644
index 67e98b9d8ffb96a150b517497ace0a242d7163ef..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/Make_An_Audio/ldm/models/diffusion/classifier.py
+++ /dev/null
@@ -1,267 +0,0 @@
-import os
-import torch
-import pytorch_lightning as pl
-from omegaconf import OmegaConf
-from torch.nn import functional as F
-from torch.optim import AdamW
-from torch.optim.lr_scheduler import LambdaLR
-from copy import deepcopy
-from einops import rearrange
-from glob import glob
-from natsort import natsorted
-
-from ldm.modules.diffusionmodules.openaimodel import EncoderUNetModel, UNetModel
-from ldm.util import log_txt_as_img, default, ismap, instantiate_from_config
-
-__models__ = {
- 'class_label': EncoderUNetModel,
- 'segmentation': UNetModel
-}
-
-
-def disabled_train(self, mode=True):
- """Overwrite model.train with this function to make sure train/eval mode
- does not change anymore."""
- return self
-
-
-class NoisyLatentImageClassifier(pl.LightningModule):
-
- def __init__(self,
- diffusion_path,
- num_classes,
- ckpt_path=None,
- pool='attention',
- label_key=None,
- diffusion_ckpt_path=None,
- scheduler_config=None,
- weight_decay=1.e-2,
- log_steps=10,
- monitor='val/loss',
- *args,
- **kwargs):
- super().__init__(*args, **kwargs)
- self.num_classes = num_classes
- # get latest config of diffusion model
- diffusion_config = natsorted(glob(os.path.join(diffusion_path, 'configs', '*-project.yaml')))[-1]
- self.diffusion_config = OmegaConf.load(diffusion_config).model
- self.diffusion_config.params.ckpt_path = diffusion_ckpt_path
- self.load_diffusion()
-
- self.monitor = monitor
- self.numd = self.diffusion_model.first_stage_model.encoder.num_resolutions - 1
- self.log_time_interval = self.diffusion_model.num_timesteps // log_steps
- self.log_steps = log_steps
-
- self.label_key = label_key if not hasattr(self.diffusion_model, 'cond_stage_key') \
- else self.diffusion_model.cond_stage_key
-
- assert self.label_key is not None, 'label_key neither in diffusion model nor in model.params'
-
- if self.label_key not in __models__:
- raise NotImplementedError()
-
- self.load_classifier(ckpt_path, pool)
-
- self.scheduler_config = scheduler_config
- self.use_scheduler = self.scheduler_config is not None
- self.weight_decay = weight_decay
-
- def init_from_ckpt(self, path, ignore_keys=list(), only_model=False):
- sd = torch.load(path, map_location="cpu")
- if "state_dict" in list(sd.keys()):
- sd = sd["state_dict"]
- keys = list(sd.keys())
- for k in keys:
- for ik in ignore_keys:
- if k.startswith(ik):
- print("Deleting key {} from state_dict.".format(k))
- del sd[k]
- missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict(
- sd, strict=False)
- print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys")
- if len(missing) > 0:
- print(f"Missing Keys: {missing}")
- if len(unexpected) > 0:
- print(f"Unexpected Keys: {unexpected}")
-
- def load_diffusion(self):
- model = instantiate_from_config(self.diffusion_config)
- self.diffusion_model = model.eval()
- self.diffusion_model.train = disabled_train
- for param in self.diffusion_model.parameters():
- param.requires_grad = False
-
- def load_classifier(self, ckpt_path, pool):
- model_config = deepcopy(self.diffusion_config.params.unet_config.params)
- model_config.in_channels = self.diffusion_config.params.unet_config.params.out_channels
- model_config.out_channels = self.num_classes
- if self.label_key == 'class_label':
- model_config.pool = pool
-
- self.model = __models__[self.label_key](**model_config)
- if ckpt_path is not None:
- print('#####################################################################')
- print(f'load from ckpt "{ckpt_path}"')
- print('#####################################################################')
- self.init_from_ckpt(ckpt_path)
-
- @torch.no_grad()
- def get_x_noisy(self, x, t, noise=None):
- noise = default(noise, lambda: torch.randn_like(x))
- continuous_sqrt_alpha_cumprod = None
- if self.diffusion_model.use_continuous_noise:
- continuous_sqrt_alpha_cumprod = self.diffusion_model.sample_continuous_noise_level(x.shape[0], t + 1)
- # todo: make sure t+1 is correct here
-
- return self.diffusion_model.q_sample(x_start=x, t=t, noise=noise,
- continuous_sqrt_alpha_cumprod=continuous_sqrt_alpha_cumprod)
-
- def forward(self, x_noisy, t, *args, **kwargs):
- return self.model(x_noisy, t)
-
- @torch.no_grad()
- def get_input(self, batch, k):
- x = batch[k]
- if len(x.shape) == 3:
- x = x[..., None]
- x = rearrange(x, 'b h w c -> b c h w')
- x = x.to(memory_format=torch.contiguous_format).float()
- return x
-
- @torch.no_grad()
- def get_conditioning(self, batch, k=None):
- if k is None:
- k = self.label_key
- assert k is not None, 'Needs to provide label key'
-
- targets = batch[k].to(self.device)
-
- if self.label_key == 'segmentation':
- targets = rearrange(targets, 'b h w c -> b c h w')
- for down in range(self.numd):
- h, w = targets.shape[-2:]
- targets = F.interpolate(targets, size=(h // 2, w // 2), mode='nearest')
-
- # targets = rearrange(targets,'b c h w -> b h w c')
-
- return targets
-
- def compute_top_k(self, logits, labels, k, reduction="mean"):
- _, top_ks = torch.topk(logits, k, dim=1)
- if reduction == "mean":
- return (top_ks == labels[:, None]).float().sum(dim=-1).mean().item()
- elif reduction == "none":
- return (top_ks == labels[:, None]).float().sum(dim=-1)
-
- def on_train_epoch_start(self):
- # save some memory
- self.diffusion_model.model.to('cpu')
-
- @torch.no_grad()
- def write_logs(self, loss, logits, targets):
- log_prefix = 'train' if self.training else 'val'
- log = {}
- log[f"{log_prefix}/loss"] = loss.mean()
- log[f"{log_prefix}/acc@1"] = self.compute_top_k(
- logits, targets, k=1, reduction="mean"
- )
- log[f"{log_prefix}/acc@5"] = self.compute_top_k(
- logits, targets, k=5, reduction="mean"
- )
-
- self.log_dict(log, prog_bar=False, logger=True, on_step=self.training, on_epoch=True)
- self.log('loss', log[f"{log_prefix}/loss"], prog_bar=True, logger=False)
- self.log('global_step', self.global_step, logger=False, on_epoch=False, prog_bar=True)
- lr = self.optimizers().param_groups[0]['lr']
- self.log('lr_abs', lr, on_step=True, logger=True, on_epoch=False, prog_bar=True)
-
- def shared_step(self, batch, t=None):
- x, *_ = self.diffusion_model.get_input(batch, k=self.diffusion_model.first_stage_key)
- targets = self.get_conditioning(batch)
- if targets.dim() == 4:
- targets = targets.argmax(dim=1)
- if t is None:
- t = torch.randint(0, self.diffusion_model.num_timesteps, (x.shape[0],), device=self.device).long()
- else:
- t = torch.full(size=(x.shape[0],), fill_value=t, device=self.device).long()
- x_noisy = self.get_x_noisy(x, t)
- logits = self(x_noisy, t)
-
- loss = F.cross_entropy(logits, targets, reduction='none')
-
- self.write_logs(loss.detach(), logits.detach(), targets.detach())
-
- loss = loss.mean()
- return loss, logits, x_noisy, targets
-
- def training_step(self, batch, batch_idx):
- loss, *_ = self.shared_step(batch)
- return loss
-
- def reset_noise_accs(self):
- self.noisy_acc = {t: {'acc@1': [], 'acc@5': []} for t in
- range(0, self.diffusion_model.num_timesteps, self.diffusion_model.log_every_t)}
-
- def on_validation_start(self):
- self.reset_noise_accs()
-
- @torch.no_grad()
- def validation_step(self, batch, batch_idx):
- loss, *_ = self.shared_step(batch)
-
- for t in self.noisy_acc:
- _, logits, _, targets = self.shared_step(batch, t)
- self.noisy_acc[t]['acc@1'].append(self.compute_top_k(logits, targets, k=1, reduction='mean'))
- self.noisy_acc[t]['acc@5'].append(self.compute_top_k(logits, targets, k=5, reduction='mean'))
-
- return loss
-
- def configure_optimizers(self):
- optimizer = AdamW(self.model.parameters(), lr=self.learning_rate, weight_decay=self.weight_decay)
-
- if self.use_scheduler:
- scheduler = instantiate_from_config(self.scheduler_config)
-
- print("Setting up LambdaLR scheduler...")
- scheduler = [
- {
- 'scheduler': LambdaLR(optimizer, lr_lambda=scheduler.schedule),
- 'interval': 'step',
- 'frequency': 1
- }]
- return [optimizer], scheduler
-
- return optimizer
-
- @torch.no_grad()
- def log_images(self, batch, N=8, *args, **kwargs):
- log = dict()
- x = self.get_input(batch, self.diffusion_model.first_stage_key)
- log['inputs'] = x
-
- y = self.get_conditioning(batch)
-
- if self.label_key == 'class_label':
- y = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"])
- log['labels'] = y
-
- if ismap(y):
- log['labels'] = self.diffusion_model.to_rgb(y)
-
- for step in range(self.log_steps):
- current_time = step * self.log_time_interval
-
- _, logits, x_noisy, _ = self.shared_step(batch, t=current_time)
-
- log[f'inputs@t{current_time}'] = x_noisy
-
- pred = F.one_hot(logits.argmax(dim=1), num_classes=self.num_classes)
- pred = rearrange(pred, 'b h w c -> b c h w')
-
- log[f'pred@t{current_time}'] = self.diffusion_model.to_rgb(pred)
-
- for key in log:
- log[key] = log[key][:N]
-
- return log
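
For reference, the top-k accuracy used by `compute_top_k` in the classifier above reduces to a couple of tensor operations. A minimal, self-contained sketch with dummy logits and labels (not the actual model outputs):

```py
import torch

def top_k_accuracy(logits, labels, k=1):
    # indices of the k highest-scoring classes per sample: (batch, k)
    _, top_ks = torch.topk(logits, k, dim=1)
    # a sample counts as correct if its label appears among those k indices
    return (top_ks == labels[:, None]).float().sum(dim=-1).mean().item()

logits = torch.randn(4, 10)              # dummy batch: 4 samples, 10 classes
labels = torch.randint(0, 10, (4,))
print(top_k_accuracy(logits, labels, k=1), top_k_accuracy(logits, labels, k=5))
```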
diff --git a/spaces/ALSv/FSW/roop/processors/frame/__init__.py b/spaces/ALSv/FSW/roop/processors/frame/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Aaaad/Dddde/app.py b/spaces/Aaaad/Dddde/app.py
deleted file mode 100644
index f1d4beb0a8f3cee27903f527b6bf8daa485a75a0..0000000000000000000000000000000000000000
--- a/spaces/Aaaad/Dddde/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("huggingface/gpt2").launch()
\ No newline at end of file
diff --git a/spaces/Abhilashvj/planogram-compliance/utils/autoanchor.py b/spaces/Abhilashvj/planogram-compliance/utils/autoanchor.py
deleted file mode 100644
index 1c763fc634121aac8aa6a9f99bb4a99c06b23910..0000000000000000000000000000000000000000
--- a/spaces/Abhilashvj/planogram-compliance/utils/autoanchor.py
+++ /dev/null
@@ -1,219 +0,0 @@
-# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
-"""
-AutoAnchor utils
-"""
-
-import random
-
-import numpy as np
-import torch
-import yaml
-from tqdm import tqdm
-
-from utils import TryExcept
-from utils.general import LOGGER, TQDM_BAR_FORMAT, colorstr
-
-PREFIX = colorstr("AutoAnchor: ")
-
-
-def check_anchor_order(m):
- # Check anchor order against stride order for YOLOv5 Detect() module m, and correct if necessary
- a = (
- m.anchors.prod(-1).mean(-1).view(-1)
- ) # mean anchor area per output layer
- da = a[-1] - a[0] # delta a
- ds = m.stride[-1] - m.stride[0] # delta s
- if da and (da.sign() != ds.sign()): # anchor order does not match stride order; flip
- LOGGER.info(f"{PREFIX}Reversing anchor order")
- m.anchors[:] = m.anchors.flip(0)
-
-
-@TryExcept(f"{PREFIX}ERROR")
-def check_anchors(dataset, model, thr=4.0, imgsz=640):
- # Check anchor fit to data, recompute if necessary
- m = (
- model.module.model[-1] if hasattr(model, "module") else model.model[-1]
- ) # Detect()
- shapes = imgsz * dataset.shapes / dataset.shapes.max(1, keepdims=True)
- scale = np.random.uniform(
- 0.9, 1.1, size=(shapes.shape[0], 1)
- ) # augment scale
- wh = torch.tensor(
- np.concatenate(
- [l[:, 3:5] * s for s, l in zip(shapes * scale, dataset.labels)]
- )
- ).float() # wh
-
- def metric(k): # compute metric
- r = wh[:, None] / k[None]
- x = torch.min(r, 1 / r).min(2)[0] # ratio metric
- best = x.max(1)[0] # best_x
- aat = (x > 1 / thr).float().sum(1).mean() # anchors above threshold
- bpr = (best > 1 / thr).float().mean() # best possible recall
- return bpr, aat
-
- stride = m.stride.to(m.anchors.device).view(-1, 1, 1) # model strides
- anchors = m.anchors.clone() * stride # current anchors
- bpr, aat = metric(anchors.cpu().view(-1, 2))
- s = f"\n{PREFIX}{aat:.2f} anchors/target, {bpr:.3f} Best Possible Recall (BPR). "
- if bpr > 0.98: # threshold to recompute
- LOGGER.info(f"{s}Current anchors are a good fit to dataset ✅")
- else:
- LOGGER.info(
- f"{s}Anchors are a poor fit to dataset ⚠️, attempting to improve..."
- )
- na = m.anchors.numel() // 2 # number of anchors
- anchors = kmean_anchors(
- dataset, n=na, img_size=imgsz, thr=thr, gen=1000, verbose=False
- )
- new_bpr = metric(anchors)[0]
- if new_bpr > bpr: # replace anchors
- anchors = torch.tensor(anchors, device=m.anchors.device).type_as(
- m.anchors
- )
- m.anchors[:] = anchors.clone().view_as(m.anchors)
- check_anchor_order(m) # must be in pixel-space (not grid-space)
- m.anchors /= stride
- s = f"{PREFIX}Done ✅ (optional: update model *.yaml to use these anchors in the future)"
- else:
- s = f"{PREFIX}Done ⚠️ (original anchors better than new anchors, proceeding with original anchors)"
- LOGGER.info(s)
-
-
-def kmean_anchors(
- dataset="./data/coco128.yaml",
- n=9,
- img_size=640,
- thr=4.0,
- gen=1000,
- verbose=True,
-):
- """Creates kmeans-evolved anchors from training dataset
-
- Arguments:
- dataset: path to data.yaml, or a loaded dataset
- n: number of anchors
- img_size: image size used for training
- thr: anchor-label wh ratio threshold hyperparameter hyp['anchor_t'] used for training, default=4.0
- gen: generations to evolve anchors using genetic algorithm
- verbose: print all results
-
- Return:
- k: kmeans evolved anchors
-
- Usage:
- from utils.autoanchor import *; _ = kmean_anchors()
- """
- from scipy.cluster.vq import kmeans
-
- npr = np.random
- thr = 1 / thr
-
- def metric(k, wh): # compute metrics
- r = wh[:, None] / k[None]
- x = torch.min(r, 1 / r).min(2)[0] # ratio metric
- # x = wh_iou(wh, torch.tensor(k)) # iou metric
- return x, x.max(1)[0] # x, best_x
-
- def anchor_fitness(k): # mutation fitness
- _, best = metric(torch.tensor(k, dtype=torch.float32), wh)
- return (best * (best > thr).float()).mean() # fitness
-
- def print_results(k, verbose=True):
- k = k[np.argsort(k.prod(1))] # sort small to large
- x, best = metric(k, wh0)
- bpr, aat = (best > thr).float().mean(), (
- x > thr
- ).float().mean() * n # best possible recall, anch > thr
- s = (
- f"{PREFIX}thr={thr:.2f}: {bpr:.4f} best possible recall, {aat:.2f} anchors past thr\n"
- f"{PREFIX}n={n}, img_size={img_size}, metric_all={x.mean():.3f}/{best.mean():.3f}-mean/best, "
- f"past_thr={x[x > thr].mean():.3f}-mean: "
- )
- for x in k:
- s += "%i,%i, " % (round(x[0]), round(x[1]))
- if verbose:
- LOGGER.info(s[:-2])
- return k
-
- if isinstance(dataset, str): # *.yaml file
- with open(dataset, errors="ignore") as f:
- data_dict = yaml.safe_load(f) # model dict
- from utils.dataloaders import LoadImagesAndLabels
-
- dataset = LoadImagesAndLabels(
- data_dict["train"], augment=True, rect=True
- )
-
- # Get label wh
- shapes = img_size * dataset.shapes / dataset.shapes.max(1, keepdims=True)
- wh0 = np.concatenate(
- [l[:, 3:5] * s for s, l in zip(shapes, dataset.labels)]
- ) # wh
-
- # Filter
- i = (wh0 < 3.0).any(1).sum()
- if i:
- LOGGER.info(
- f"{PREFIX}WARNING ⚠️ Extremely small objects found: {i} of {len(wh0)} labels are <3 pixels in size"
- )
- wh = wh0[(wh0 >= 2.0).any(1)].astype(np.float32) # filter > 2 pixels
- # wh = wh * (npr.rand(wh.shape[0], 1) * 0.9 + 0.1) # multiply by random scale 0-1
-
- # Kmeans init
- try:
- LOGGER.info(
- f"{PREFIX}Running kmeans for {n} anchors on {len(wh)} points..."
- )
- assert n <= len(wh) # apply overdetermined constraint
- s = wh.std(0) # sigmas for whitening
- k = kmeans(wh / s, n, iter=30)[0] * s # points
- assert n == len(
- k
- ) # kmeans may return fewer points than requested if wh is insufficient or too similar
- except Exception:
- LOGGER.warning(
- f"{PREFIX}WARNING ⚠️ switching strategies from kmeans to random init"
- )
- k = np.sort(npr.rand(n * 2)).reshape(n, 2) * img_size # random init
- wh, wh0 = (torch.tensor(x, dtype=torch.float32) for x in (wh, wh0))
- k = print_results(k, verbose=False)
-
- # Plot
- # k, d = [None] * 20, [None] * 20
- # for i in tqdm(range(1, 21)):
- # k[i-1], d[i-1] = kmeans(wh / s, i) # points, mean distance
- # fig, ax = plt.subplots(1, 2, figsize=(14, 7), tight_layout=True)
- # ax = ax.ravel()
- # ax[0].plot(np.arange(1, 21), np.array(d) ** 2, marker='.')
- # fig, ax = plt.subplots(1, 2, figsize=(14, 7)) # plot wh
- # ax[0].hist(wh[wh[:, 0]<100, 0],400)
- # ax[1].hist(wh[wh[:, 1]<100, 1],400)
- # fig.savefig('wh.png', dpi=200)
-
- # Evolve
- f, sh, mp, s = (
- anchor_fitness(k),
- k.shape,
- 0.9,
- 0.1,
- ) # fitness, generations, mutation prob, sigma
- pbar = tqdm(range(gen), bar_format=TQDM_BAR_FORMAT) # progress bar
- for _ in pbar:
- v = np.ones(sh)
- while (
- v == 1
- ).all(): # mutate until a change occurs (prevent duplicates)
- v = (
- (npr.random(sh) < mp) * random.random() * npr.randn(*sh) * s
- + 1
- ).clip(0.3, 3.0)
- kg = (k.copy() * v).clip(min=2.0)
- fg = anchor_fitness(kg)
- if fg > f:
- f, k = fg, kg.copy()
- pbar.desc = f"{PREFIX}Evolving anchors with Genetic Algorithm: fitness = {f:.4f}"
- if verbose:
- print_results(k, verbose)
-
- return print_results(k).astype(np.float32)
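
As a quick illustration of the recall metric used by `check_anchors`/`kmean_anchors` above, here is a minimal sketch on synthetic data; the label sizes and anchor values are arbitrary placeholders, not a real dataset:

```py
import torch

def best_possible_recall(wh, anchors, thr=4.0):
    r = wh[:, None] / anchors[None]                # pairwise wh ratios, (n, na, 2)
    x = torch.min(r, 1 / r).min(2)[0]              # worst of the w/h ratios per pair
    best = x.max(1)[0]                             # best-matching anchor per label
    return (best > 1 / thr).float().mean().item()  # fraction of labels with a usable anchor

wh = torch.rand(1000, 2) * 300 + 10                # synthetic label widths/heights in pixels
anchors = torch.tensor([[10., 13.], [30., 61.], [62., 45.],
                        [116., 90.], [156., 198.], [373., 326.]])
print(f"BPR: {best_possible_recall(wh, anchors):.3f}")
```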
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/base/Base.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/base/Base.js
deleted file mode 100644
index 79318136741ce98d3d8b0958b918e14ac243c493..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/base/Base.js
+++ /dev/null
@@ -1,112 +0,0 @@
-import BaseShapes from '../../../plugins/gameobjects/shape/shapes/BaseShapes.js';
-import EaseValueMethods from './EaseValueMethods.js';
-
-const GetValue = Phaser.Utils.Objects.GetValue;
-
-class Base extends BaseShapes {
- constructor(scene, config) {
- var x = GetValue(config, 'x', 0);
- var y = GetValue(config, 'y', 0);
- var width = GetValue(config, 'width', 64);
- var height = GetValue(config, 'height', 64);
-
- super(scene, x, y, width, height);
-
- this.setDuration(GetValue(config, 'duration', 1000));
- this.setEase(GetValue(config, 'ease', 'Linear'));
- this.setDelay(GetValue(config, 'delay', 0));
- this.setRepeatDelay(GetValue(config, 'repeatDelay', 0));
- var color = GetValue(config, 'color', 0xffffff);
- var start = GetValue(config, 'start', true);
-
- this.buildShapes(config);
- this.setColor(color);
- this.setValue(0);
-
- if (start) {
- this.start();
- }
- }
-
- buildShapes() {
-
- }
-
- get centerX() {
- return this.width / 2;
- }
-
- get centerY() {
- return this.height / 2;
- }
-
- get radius() {
- return Math.min(this.centerX, this.centerY);
- }
-
- get color() {
- return this._color;
- }
-
- set color(value) {
- this.isColorChanged = this.isColorChanged || (this._color !== value);
- this.dirty = this.dirty || this.isColorChanged;
- this._color = value;
- this.setShapesColor(value);
- }
-
- setColor(color) {
- this.color = color;
- return this;
- }
-
- setShapesColor(color) {
-
- }
-
- get value() {
- return this._value;
- }
-
- set value(value) {
- value = Phaser.Math.Clamp(value, 0, 1);
- this.dirty = this.dirty || (this._value != value);
- this._value = value;
- }
-
- setValue(value) {
- this.value = value;
- return this;
- }
-
- setDuration(duration) {
- this.duration = duration;
- return this;
- }
-
- setDelay(delay) {
- this.delay = delay;
- return this;
- }
-
- setRepeatDelay(repeatDelay) {
- this.repeatDelay = repeatDelay;
- return this;
- }
-
- setEase(ease) {
- this.ease = ease;
- return this;
- }
-
- get isRunning() {
- return (this.tweenTask) ? this.tweenTask.isRunning : false;
- }
-}
-
-Object.assign(
- Base.prototype,
- EaseValueMethods
-);
-
-export default Base;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/EaseMoveMethods.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/EaseMoveMethods.js
deleted file mode 100644
index 04cbc921187b2c8ce026b590031135a28d00e937..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/EaseMoveMethods.js
+++ /dev/null
@@ -1,120 +0,0 @@
-import { EaseMoveTo, EaseMoveFrom } from '../easemove/EaseMove.js';
-import { WaitComplete } from '../utils/WaitEvent.js';
-import GetParentSizerMethods from './GetParentSizerMethods.js';
-
-const IsPlainObject = Phaser.Utils.Objects.IsPlainObject;
-const DistanceBetween = Phaser.Math.Distance.Between;
-
-var OnInitEaseMove = function (gameObject, easeMove) {
- // Route 'complete' of easeMove to gameObject
- easeMove.completeEventName = undefined;
- easeMove.on('complete', function () {
- if (easeMove.completeEventName) {
- gameObject.emit(easeMove.completeEventName, gameObject);
- easeMove.completeEventName = undefined;
- }
- })
-
- // Update local state
- easeMove.on('update', function () {
- var parent = GetParentSizerMethods.getParentSizer(gameObject);
- if (parent) {
- parent.resetChildPositionState(gameObject);
- }
- })
-}
-
-export default {
- moveFrom(duration, x, y, ease, destroyMode) {
- if (IsPlainObject(duration)) {
- var config = duration;
- x = config.x;
- y = config.y;
- if (config.hasOwnProperty('speed')) {
- duration = (DistanceBetween(x, y, this.x, this.y) * 1000) / config.speed;
- } else {
- duration = config.duration;
- }
-
- ease = config.ease;
- }
-
- var isInit = (this._easeMove === undefined);
-
- this._easeMove = EaseMoveFrom(this, duration, x, y, ease, destroyMode, this._easeMove);
-
- if (isInit) {
- OnInitEaseMove(this, this._easeMove);
- }
-
- this._easeMove.completeEventName = 'movefrom.complete';
-
- return this;
- },
-
- moveFromPromise(duration, x, y, ease, destroyMode) {
- this.moveFrom(duration, x, y, ease, destroyMode);
- return WaitComplete(this._easeMove);
- },
-
- moveFromDestroy(duration, x, y, ease) {
- this.moveFrom(duration, x, y, ease, true);
- return this;
- },
-
- moveFromDestroyPromise(duration, x, y, ease) {
- this.moveFromDestroy(duration, x, y, ease);
- return WaitComplete(this._easeMove);
- },
-
- moveTo(duration, x, y, ease, destroyMode) {
- if (IsPlainObject(duration)) {
- var config = duration;
- x = config.x;
- y = config.y;
- if (config.hasOwnProperty('speed')) {
- duration = (DistanceBetween(x, y, this.x, this.y) * 1000) / config.speed;
- } else {
- duration = config.duration;
- }
-
- ease = config.ease;
- }
-
- var isInit = (this._easeMove === undefined);
-
- this._easeMove = EaseMoveTo(this, duration, x, y, ease, destroyMode, this._easeMove);
-
- if (isInit) {
- OnInitEaseMove(this, this._easeMove);
- }
-
- this._easeMove.completeEventName = 'moveto.complete';
-
- return this;
- },
-
- moveToPromise(duration, x, y, ease, destroyMode) {
- this.moveTo(duration, x, y, ease, destroyMode);
- return WaitComplete(this._easeMove);
- },
-
- moveToDestroy(duration, x, y, ease) {
- this.moveTo(duration, x, y, ease, true);
- return this;
- },
-
- moveToDestroyPromise(duration, x, y, ease) {
- this.moveToDestroy(duration, x, y, ease);
- return WaitComplete(this._easeMove);
- },
-
- moveStop(toEnd) {
- if (!this._easeMove) {
- return this;
- }
-
- this._easeMove.stop(toEnd);
- return this;
- }
-}
\ No newline at end of file
diff --git a/spaces/Aloento/9Nine-PITS/text/frontend/normalizer/acronyms.py b/spaces/Aloento/9Nine-PITS/text/frontend/normalizer/acronyms.py
deleted file mode 100644
index abf198b97e6e818e1fbe59006f98492640bcee54..0000000000000000000000000000000000000000
--- a/spaces/Aloento/9Nine-PITS/text/frontend/normalizer/acronyms.py
+++ /dev/null
@@ -1,13 +0,0 @@
-# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
diff --git a/spaces/Amrrs/DragGan-Inversion/PTI/utils/ImagesDataset.py b/spaces/Amrrs/DragGan-Inversion/PTI/utils/ImagesDataset.py
deleted file mode 100644
index 4d36e8665270f4f6dee5a2d58a36c564e1543771..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/DragGan-Inversion/PTI/utils/ImagesDataset.py
+++ /dev/null
@@ -1,43 +0,0 @@
-import os
-
-from torch.utils.data import Dataset
-from PIL import Image
-
-from PTI.utils.data_utils import make_dataset
-from torchvision import transforms
-
-
-class Image2Dataset(Dataset):
- def __init__(self, image) -> None:
- super().__init__()
- self.image = image
- self.transform = transforms.Compose(
- [
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
- ]
- )
-
- def __len__(self):
- return 1
-
- def __getitem__(self, index):
- return "customIMG", self.transform(self.image)
-
-
-class ImagesDataset(Dataset):
- def __init__(self, source_root, source_transform=None):
- self.source_paths = sorted(make_dataset(source_root))
- self.source_transform = source_transform
-
- def __len__(self):
- return len(self.source_paths)
-
- def __getitem__(self, index):
- fname, from_path = self.source_paths[index]
- from_im = Image.open(from_path).convert("RGB").resize([1024, 1024])
-
- if self.source_transform:
- from_im = self.source_transform(from_im)
-
- return fname, from_im
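
A minimal usage sketch for the `Image2Dataset` wrapper above, assuming the class is importable as defined in the file; the blank PIL image stands in for a real input:

```py
from PIL import Image
from torch.utils.data import DataLoader

image = Image.new("RGB", (1024, 1024))      # placeholder for a real RGB image
dataset = Image2Dataset(image)              # class defined in the file above
loader = DataLoader(dataset, batch_size=1)

for fname, tensor in loader:
    # fname is ('customIMG',); tensor has shape (1, 3, 1024, 1024), values in [-1, 1]
    print(fname, tensor.shape)
```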
diff --git a/spaces/Amrrs/DragGan-Inversion/stylegan_human/openpose/src/__init__.py b/spaces/Amrrs/DragGan-Inversion/stylegan_human/openpose/src/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/using-diffusers/other-formats.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/using-diffusers/other-formats.md
deleted file mode 100644
index b58d00fce180e4cd2a069a970300ed173c867be3..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/using-diffusers/other-formats.md
+++ /dev/null
@@ -1,194 +0,0 @@
-
-
-# Load different Stable Diffusion formats
-
-[[open-in-colab]]
-
-Stable Diffusion models are available in different formats depending on the framework they're trained and saved with, and where you download them from. Converting these formats for use in 🤗 Diffusers allows you to use all the features supported by the library, such as [using different schedulers](schedulers) for inference, [building your custom pipeline](write_own_pipeline), and a variety of techniques and methods for [optimizing inference speed](./optimization/opt_overview).
-
-
-
-We highly recommend using the `.safetensors` format because it is more secure than traditional pickled files which are vulnerable and can be exploited to execute any code on your machine (learn more in the [Load safetensors](using_safetensors) guide).
-
-
-
-This guide will show you how to convert other Stable Diffusion formats to be compatible with 🤗 Diffusers.
-
-## PyTorch .ckpt
-
-The checkpoint - or `.ckpt` - format is commonly used to store and save models. The `.ckpt` file contains the entire model and is typically several GBs in size. While you can load and use a `.ckpt` file directly with the [`~StableDiffusionPipeline.from_single_file`] method, it is generally better to convert the `.ckpt` file to 🤗 Diffusers so both formats are available.
-
-There are two options for converting a `.ckpt` file; use a Space to convert the checkpoint or convert the `.ckpt` file with a script.
-
-### Convert with a Space
-
-The easiest and most convenient way to convert a `.ckpt` file is to use the [SD to Diffusers](https://huggingface.co/spaces/diffusers/sd-to-diffusers) Space. You can follow the instructions on the Space to convert the `.ckpt` file.
-
-This approach works well for basic models, but it may struggle with more customized models. You'll know the Space failed if it returns an empty pull request or error. In this case, you can try converting the `.ckpt` file with a script.
-
-### Convert with a script
-
-🤗 Diffusers provides a [conversion script](https://github.com/huggingface/diffusers/blob/main/scripts/convert_original_stable_diffusion_to_diffusers.py) for converting `.ckpt` files. This approach is more reliable than the Space above.
-
-Before you start, make sure you have a local clone of 🤗 Diffusers to run the script and log in to your Hugging Face account so you can open pull requests and push your converted model to the Hub.
-
-```bash
-huggingface-cli login
-```
-
-To use the script:
-
-1. Git clone the repository containing the `.ckpt` file you want to convert. For this example, let's convert this [TemporalNet](https://huggingface.co/CiaraRowles/TemporalNet) `.ckpt` file:
-
-```bash
-git lfs install
-git clone https://huggingface.co/CiaraRowles/TemporalNet
-```
-
-2. Open a pull request on the repository where you're converting the checkpoint from:
-
-```bash
-cd TemporalNet && git fetch origin refs/pr/13:pr/13
-git checkout pr/13
-```
-
-3. There are several input arguments to configure in the conversion script, but the most important ones are:
-
- - `checkpoint_path`: the path to the `.ckpt` file to convert.
- - `original_config_file`: a YAML file defining the configuration of the original architecture. If you can't find this file, try searching for the YAML file in the GitHub repository where you found the `.ckpt` file.
- - `dump_path`: the path to the converted model.
-
- For example, you can take the `cldm_v15.yaml` file from the [ControlNet](https://github.com/lllyasviel/ControlNet/tree/main/models) repository because the TemporalNet model is a Stable Diffusion v1.5 and ControlNet model.
-
-4. Now you can run the script to convert the `.ckpt` file:
-
-```bash
-python ../diffusers/scripts/convert_original_stable_diffusion_to_diffusers.py --checkpoint_path temporalnetv3.ckpt --original_config_file cldm_v15.yaml --dump_path ./ --controlnet
-```
-
-5. Once the conversion is done, upload your converted model and test out the resulting [pull request](https://huggingface.co/CiaraRowles/TemporalNet/discussions/13)!
-
-```bash
-git push origin pr/13:refs/pr/13
-```
-
-## Keras .pb or .h5
-
-
-
-🧪 This is an experimental feature. Only Stable Diffusion v1 checkpoints are supported by the Convert KerasCV Space at the moment.
-
-
-
-[KerasCV](https://keras.io/keras_cv/) supports training for [Stable Diffusion](https://github.com/keras-team/keras-cv/blob/master/keras_cv/models/stable_diffusion) v1 and v2. However, it offers limited support for experimenting with Stable Diffusion models for inference and deployment whereas 🤗 Diffusers has a more complete set of features for this purpose, such as different [noise schedulers](https://huggingface.co/docs/diffusers/using-diffusers/schedulers), [flash attention](https://huggingface.co/docs/diffusers/optimization/xformers), and [other
-optimization techniques](https://huggingface.co/docs/diffusers/optimization/fp16).
-
-The [Convert KerasCV](https://huggingface.co/spaces/sayakpaul/convert-kerascv-sd-diffusers) Space converts `.pb` or `.h5` files to PyTorch, and then wraps them in a [`StableDiffusionPipeline`] so it is ready for inference. The converted checkpoint is stored in a repository on the Hugging Face Hub.
-
-For this example, let's convert the [`sayakpaul/textual-inversion-kerasio`](https://huggingface.co/sayakpaul/textual-inversion-kerasio/tree/main) checkpoint which was trained with Textual Inversion. It uses a special placeholder token to personalize images with cats.
-
-The Convert KerasCV Space allows you to input the following:
-
-* Your Hugging Face token.
-* Paths to download the UNet and text encoder weights from. Depending on how the model was trained, you don't necessarily need to provide the paths to both the UNet and text encoder. For example, Textual Inversion only requires the embeddings from the text encoder and a text-to-image model only requires the UNet weights.
-* Placeholder token is only applicable for textual inversion models.
-* The `output_repo_prefix` is the name of the repository where the converted model is stored.
-
-Click the **Submit** button to automatically convert the KerasCV checkpoint! Once the checkpoint is successfully converted, you'll see a link to the new repository containing the converted checkpoint. Follow the link to the new repository, and you'll see the Convert KerasCV Space generated a model card with an inference widget to try out the converted model.
-
-If you prefer to run inference with code, click on the **Use in Diffusers** button in the upper right corner of the model card to copy and paste the code snippet:
-
-```py
-from diffusers import DiffusionPipeline
-
-pipeline = DiffusionPipeline.from_pretrained("sayakpaul/textual-inversion-cat-kerascv_sd_diffusers_pipeline")
-```
-
-Then you can generate an image like:
-
-```py
-from diffusers import DiffusionPipeline
-
-pipeline = DiffusionPipeline.from_pretrained("sayakpaul/textual-inversion-cat-kerascv_sd_diffusers_pipeline")
-pipeline.to("cuda")
-
-placeholder_token = ""
-prompt = f"two {placeholder_token} getting married, photorealistic, high quality"
-image = pipeline(prompt, num_inference_steps=50).images[0]
-```
-
-## A1111 LoRA files
-
-[Automatic1111](https://github.com/AUTOMATIC1111/stable-diffusion-webui) (A1111) is a popular web UI for Stable Diffusion that supports model sharing platforms like [Civitai](https://civitai.com/). Models trained with the Low-Rank Adaptation (LoRA) technique are especially popular because they're fast to train and have a much smaller file size than a fully finetuned model. 🤗 Diffusers supports loading A1111 LoRA checkpoints with [`~loaders.LoraLoaderMixin.load_lora_weights`]:
-
-```py
-from diffusers import DiffusionPipeline, UniPCMultistepScheduler
-import torch
-
-pipeline = DiffusionPipeline.from_pretrained(
- "andite/anything-v4.0", torch_dtype=torch.float16, safety_checker=None
-).to("cuda")
-pipeline.scheduler = UniPCMultistepScheduler.from_config(pipeline.scheduler.config)
-```
-
-Download a LoRA checkpoint from Civitai; this example uses the [Howls Moving Castle,Interior/Scenery LoRA (Ghibli Stlye)](https://civitai.com/models/14605?modelVersionId=19998) checkpoint, but feel free to try out any LoRA checkpoint!
-
-```py
-# uncomment to download the safetensor weights
-#!wget https://civitai.com/api/download/models/19998 -O howls_moving_castle.safetensors
-```
-
-Load the LoRA checkpoint into the pipeline with the [`~loaders.LoraLoaderMixin.load_lora_weights`] method:
-
-```py
-pipeline.load_lora_weights(".", weight_name="howls_moving_castle.safetensors")
-```
-
-Now you can use the pipeline to generate images:
-
-```py
-prompt = "masterpiece, illustration, ultra-detailed, cityscape, san francisco, golden gate bridge, california, bay area, in the snow, beautiful detailed starry sky"
-negative_prompt = "lowres, cropped, worst quality, low quality, normal quality, artifacts, signature, watermark, username, blurry, more than one bridge, bad architecture"
-
-images = pipeline(
- prompt=prompt,
- negative_prompt=negative_prompt,
- width=512,
- height=512,
- num_inference_steps=25,
- num_images_per_prompt=4,
- generator=torch.manual_seed(0),
-).images
-```
-
-Finally, create a helper function to display the images:
-
-```py
-from PIL import Image
-
-
-def image_grid(imgs, rows=2, cols=2):
- w, h = imgs[0].size
- grid = Image.new("RGB", size=(cols * w, rows * h))
-
- for i, img in enumerate(imgs):
- grid.paste(img, box=(i % cols * w, i // cols * h))
- return grid
-
-
-image_grid(images)
-```
-
-
-
-
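
Complementing the conversion workflows in the guide above, the `from_single_file` method it mentions can load a single-file checkpoint directly; a rough sketch, with the checkpoint path as a placeholder:

```py
import torch
from diffusers import StableDiffusionPipeline

# placeholder path to a local .ckpt or .safetensors checkpoint
pipeline = StableDiffusionPipeline.from_single_file(
    "path/to/model.safetensors", torch_dtype=torch.float16
).to("cuda")

image = pipeline("a photo of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```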
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/dcn/faster_rcnn_r50_fpn_dpool_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/dcn/faster_rcnn_r50_fpn_dpool_1x_coco.py
deleted file mode 100644
index 1b695f0e19049dc91b7656d7684df151896b7727..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/dcn/faster_rcnn_r50_fpn_dpool_1x_coco.py
+++ /dev/null
@@ -1,12 +0,0 @@
-_base_ = '../faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'
-model = dict(
- roi_head=dict(
- bbox_roi_extractor=dict(
- type='SingleRoIExtractor',
- roi_layer=dict(
- _delete_=True,
- type='DeformRoIPoolPack',
- output_size=7,
- output_channels=256),
- out_channels=256,
- featmap_strides=[4, 8, 16, 32])))
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/guided_anchoring/ga_fast_r50_caffe_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/guided_anchoring/ga_fast_r50_caffe_fpn_1x_coco.py
deleted file mode 100644
index e15bc29b03d8c612a8921873d456a03126f79aae..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/guided_anchoring/ga_fast_r50_caffe_fpn_1x_coco.py
+++ /dev/null
@@ -1,63 +0,0 @@
-_base_ = '../fast_rcnn/fast_rcnn_r50_fpn_1x_coco.py'
-model = dict(
- pretrained='open-mmlab://detectron2/resnet50_caffe',
- backbone=dict(
- type='ResNet',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=False),
- norm_eval=True,
- style='caffe'),
- roi_head=dict(
- bbox_head=dict(bbox_coder=dict(target_stds=[0.05, 0.05, 0.1, 0.1]))),
- # model training and testing settings
- train_cfg=dict(
- rcnn=dict(
- assigner=dict(pos_iou_thr=0.6, neg_iou_thr=0.6, min_pos_iou=0.6),
- sampler=dict(num=256))),
- test_cfg=dict(rcnn=dict(score_thr=1e-3)))
-dataset_type = 'CocoDataset'
-data_root = 'data/coco/'
-img_norm_cfg = dict(
- mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadProposals', num_max_proposals=300),
- dict(type='LoadAnnotations', with_bbox=True),
- dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'proposals', 'gt_bboxes', 'gt_labels']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadProposals', num_max_proposals=None),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(1333, 800),
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img', 'proposals']),
- ])
-]
-data = dict(
- train=dict(
- proposal_file=data_root + 'proposals/ga_rpn_r50_fpn_1x_train2017.pkl',
- pipeline=train_pipeline),
- val=dict(
- proposal_file=data_root + 'proposals/ga_rpn_r50_fpn_1x_val2017.pkl',
- pipeline=test_pipeline),
- test=dict(
- proposal_file=data_root + 'proposals/ga_rpn_r50_fpn_1x_val2017.pkl',
- pipeline=test_pipeline))
-optimizer_config = dict(
- _delete_=True, grad_clip=dict(max_norm=35, norm_type=2))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r50-d8_512x512_40k_voc12aug.py b/spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r50-d8_512x512_40k_voc12aug.py
deleted file mode 100644
index 0cef0f09bfa2290d14fc3a783ea500d6c3da2931..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r50-d8_512x512_40k_voc12aug.py
+++ /dev/null
@@ -1,7 +0,0 @@
-_base_ = [
- '../_base_/models/danet_r50-d8.py',
- '../_base_/datasets/pascal_voc12_aug.py', '../_base_/default_runtime.py',
- '../_base_/schedules/schedule_40k.py'
-]
-model = dict(
- decode_head=dict(num_classes=21), auxiliary_head=dict(num_classes=21))
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/deprecated_wrappers.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/deprecated_wrappers.py
deleted file mode 100644
index a2e593df9ee57637038683d7a1efaa347b2b69e7..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/deprecated_wrappers.py
+++ /dev/null
@@ -1,43 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-# This file is for backward compatibility.
-# Module wrappers for empty tensor have been moved to mmcv.cnn.bricks.
-import warnings
-
-from ..cnn.bricks.wrappers import Conv2d, ConvTranspose2d, Linear, MaxPool2d
-
-
-class Conv2d_deprecated(Conv2d):
-
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
- warnings.warn(
- 'Importing Conv2d wrapper from "mmcv.ops" will be deprecated in'
- ' the future. Please import them from "mmcv.cnn" instead')
-
-
-class ConvTranspose2d_deprecated(ConvTranspose2d):
-
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
- warnings.warn(
- 'Importing ConvTranspose2d wrapper from "mmcv.ops" will be '
- 'deprecated in the future. Please import them from "mmcv.cnn" '
- 'instead')
-
-
-class MaxPool2d_deprecated(MaxPool2d):
-
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
- warnings.warn(
- 'Importing MaxPool2d wrapper from "mmcv.ops" will be deprecated in'
- ' the future. Please import them from "mmcv.cnn" instead')
-
-
-class Linear_deprecated(Linear):
-
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
- warnings.warn(
- 'Importing Linear wrapper from "mmcv.ops" will be deprecated in'
- ' the future. Please import them from "mmcv.cnn" instead')
diff --git a/spaces/Artbogdanov/monet-manet/README.md b/spaces/Artbogdanov/monet-manet/README.md
deleted file mode 100644
index 41d6eca1c61c55241eab63384d932929507de874..0000000000000000000000000000000000000000
--- a/spaces/Artbogdanov/monet-manet/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Fast Ai Pics
-emoji: 🌖
-colorFrom: red
-colorTo: red
-sdk: gradio
-sdk_version: 3.16.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/demo/inference_on_a_image.py b/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/demo/inference_on_a_image.py
deleted file mode 100644
index 0dd332f36725d96351156482959387b6124f3f5f..0000000000000000000000000000000000000000
--- a/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/demo/inference_on_a_image.py
+++ /dev/null
@@ -1,214 +0,0 @@
-import argparse
-import os
-import sys
-
-import numpy as np
-import torch
-from PIL import Image, ImageDraw, ImageFont
-
-import groundingdino.datasets.transforms as T
-from groundingdino.models import build_model
-from groundingdino.util import box_ops
-from groundingdino.util.slconfig import SLConfig
-from groundingdino.util.utils import clean_state_dict, get_phrases_from_posmap
-from groundingdino.util.vl_utils import create_positive_map_from_span
-
-
-def plot_boxes_to_image(image_pil, tgt):
- H, W = tgt["size"]
- boxes = tgt["boxes"]
- labels = tgt["labels"]
- assert len(boxes) == len(labels), "boxes and labels must have same length"
-
- draw = ImageDraw.Draw(image_pil)
- mask = Image.new("L", image_pil.size, 0)
- mask_draw = ImageDraw.Draw(mask)
-
- # draw boxes and masks
- for box, label in zip(boxes, labels):
- # from 0..1 to 0..W, 0..H
- box = box * torch.Tensor([W, H, W, H])
- # from xywh to xyxy
- box[:2] -= box[2:] / 2
- box[2:] += box[:2]
- # random color
- color = tuple(np.random.randint(0, 255, size=3).tolist())
- # draw
- x0, y0, x1, y1 = box
- x0, y0, x1, y1 = int(x0), int(y0), int(x1), int(y1)
-
- draw.rectangle([x0, y0, x1, y1], outline=color, width=6)
- # draw.text((x0, y0), str(label), fill=color)
-
- font = ImageFont.load_default()
- if hasattr(font, "getbbox"):
- bbox = draw.textbbox((x0, y0), str(label), font)
- else:
- w, h = draw.textsize(str(label), font)
- bbox = (x0, y0, w + x0, y0 + h)
- # bbox = draw.textbbox((x0, y0), str(label))
- draw.rectangle(bbox, fill=color)
- draw.text((x0, y0), str(label), fill="white")
-
- mask_draw.rectangle([x0, y0, x1, y1], fill=255, width=6)
-
- return image_pil, mask
-
-
-def load_image(image_path):
- # load image
- image_pil = Image.open(image_path).convert("RGB") # load image
-
- transform = T.Compose(
- [
- T.RandomResize([800], max_size=1333),
- T.ToTensor(),
- T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
- ]
- )
- image, _ = transform(image_pil, None) # 3, h, w
- return image_pil, image
-
-
-def load_model(model_config_path, model_checkpoint_path, cpu_only=False):
- args = SLConfig.fromfile(model_config_path)
- args.device = "cuda" if not cpu_only else "cpu"
- model = build_model(args)
- checkpoint = torch.load(model_checkpoint_path, map_location="cpu")
- load_res = model.load_state_dict(clean_state_dict(checkpoint["model"]), strict=False)
- print(load_res)
- _ = model.eval()
- return model
-
-
-def get_grounding_output(model, image, caption, box_threshold, text_threshold=None, with_logits=True, cpu_only=False, token_spans=None):
- assert text_threshold is not None or token_spans is not None, "text_threshold and token_spans should not be None at the same time!"
- caption = caption.lower()
- caption = caption.strip()
- if not caption.endswith("."):
- caption = caption + "."
- device = "cuda" if not cpu_only else "cpu"
- model = model.to(device)
- image = image.to(device)
- with torch.no_grad():
- outputs = model(image[None], captions=[caption])
- logits = outputs["pred_logits"].sigmoid()[0] # (nq, 256)
- boxes = outputs["pred_boxes"][0] # (nq, 4)
-
- # filter output
- if token_spans is None:
- logits_filt = logits.cpu().clone()
- boxes_filt = boxes.cpu().clone()
- filt_mask = logits_filt.max(dim=1)[0] > box_threshold
- logits_filt = logits_filt[filt_mask] # num_filt, 256
- boxes_filt = boxes_filt[filt_mask] # num_filt, 4
-
- # get phrase
- tokenlizer = model.tokenizer
- tokenized = tokenlizer(caption)
- # build pred
- pred_phrases = []
- for logit, box in zip(logits_filt, boxes_filt):
- pred_phrase = get_phrases_from_posmap(logit > text_threshold, tokenized, tokenlizer)
- if with_logits:
- pred_phrases.append(pred_phrase + f"({str(logit.max().item())[:4]})")
- else:
- pred_phrases.append(pred_phrase)
- else:
- # given-phrase mode
- positive_maps = create_positive_map_from_span(
- model.tokenizer(text_prompt),
- token_span=token_spans
- ).to(image.device) # n_phrase, 256
-
- logits_for_phrases = positive_maps @ logits.T # n_phrase, nq
- all_logits = []
- all_phrases = []
- all_boxes = []
- for (token_span, logit_phr) in zip(token_spans, logits_for_phrases):
- # get phrase
- phrase = ' '.join([caption[_s:_e] for (_s, _e) in token_span])
- # get mask
- filt_mask = logit_phr > box_threshold
- # filt box
- all_boxes.append(boxes[filt_mask])
- # filt logits
- all_logits.append(logit_phr[filt_mask])
- if with_logits:
- logit_phr_num = logit_phr[filt_mask]
- all_phrases.extend([phrase + f"({str(logit.item())[:4]})" for logit in logit_phr_num])
- else:
- all_phrases.extend([phrase for _ in range(int(filt_mask.sum()))]) # one phrase per kept box
- boxes_filt = torch.cat(all_boxes, dim=0).cpu()
- pred_phrases = all_phrases
-
-
- return boxes_filt, pred_phrases
-
-
-if __name__ == "__main__":
-
- parser = argparse.ArgumentParser("Grounding DINO example", add_help=True)
- parser.add_argument("--config_file", "-c", type=str, required=True, help="path to config file")
- parser.add_argument(
- "--checkpoint_path", "-p", type=str, required=True, help="path to checkpoint file"
- )
- parser.add_argument("--image_path", "-i", type=str, required=True, help="path to image file")
- parser.add_argument("--text_prompt", "-t", type=str, required=True, help="text prompt")
- parser.add_argument(
- "--output_dir", "-o", type=str, default="outputs", required=True, help="output directory"
- )
-
- parser.add_argument("--box_threshold", type=float, default=0.3, help="box threshold")
- parser.add_argument("--text_threshold", type=float, default=0.25, help="text threshold")
- parser.add_argument("--token_spans", type=str, default=None, help=
- "The positions of start and end positions of phrases of interest. \
- For example, a caption is 'a cat and a dog', \
- if you would like to detect 'cat', the token_spans should be '[[[2, 5]], ]', since 'a cat and a dog'[2:5] is 'cat'. \
- if you would like to detect 'a cat', the token_spans should be '[[[0, 1], [2, 5]], ]', since 'a cat and a dog'[0:1] is 'a', and 'a cat and a dog'[2:5] is 'cat'. \
- ")
-
- parser.add_argument("--cpu-only", action="store_true", help="running on cpu only!, default=False")
- args = parser.parse_args()
-
- # cfg
- config_file = args.config_file # change the path of the model config file
- checkpoint_path = args.checkpoint_path # change the path of the model
- image_path = args.image_path
- text_prompt = args.text_prompt
- output_dir = args.output_dir
- box_threshold = args.box_threshold
- text_threshold = args.text_threshold
- token_spans = args.token_spans
-
- # make dir
- os.makedirs(output_dir, exist_ok=True)
- # load image
- image_pil, image = load_image(image_path)
- # load model
- model = load_model(config_file, checkpoint_path, cpu_only=args.cpu_only)
-
- # visualize raw image
- image_pil.save(os.path.join(output_dir, "raw_image.jpg"))
-
- # set the text_threshold to None if token_spans is set.
- if token_spans is not None:
- text_threshold = None
- print("Using token_spans. Set the text_threshold to None.")
-
-
- # run model
- boxes_filt, pred_phrases = get_grounding_output(
- model, image, text_prompt, box_threshold, text_threshold, cpu_only=args.cpu_only, token_spans=eval(token_spans) if token_spans is not None else None
- )
-
- # visualize pred
- size = image_pil.size
- pred_dict = {
- "boxes": boxes_filt,
- "size": [size[1], size[0]], # H,W
- "labels": pred_phrases,
- }
- # import ipdb; ipdb.set_trace()
- image_with_box = plot_boxes_to_image(image_pil, pred_dict)[0]
- image_with_box.save(os.path.join(output_dir, "pred.jpg"))
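
The box handling in `plot_boxes_to_image` above converts normalized `(cx, cy, w, h)` boxes to pixel `(x0, y0, x1, y1)` coordinates; the same conversion as a stand-alone sketch with a dummy box and arbitrary image size:

```py
import torch

def cxcywh_norm_to_xyxy_pixels(box, W, H):
    box = box * torch.tensor([W, H, W, H], dtype=torch.float32)  # scale to pixels
    box[:2] -= box[2:] / 2      # (x0, y0) = center - half size
    box[2:] += box[:2]          # (x1, y1) = (x0, y0) + size
    return box

print(cxcywh_norm_to_xyxy_pixels(torch.tensor([0.5, 0.5, 0.2, 0.4]), W=640, H=480))
# tensor([256., 144., 384., 336.])
```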
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/BalaBhaskarudu/mygenAIChatbot/app.py b/spaces/BalaBhaskarudu/mygenAIChatbot/app.py
deleted file mode 100644
index 2dbf3ae89c2e3fdab7134107dd346f984dca8eb1..0000000000000000000000000000000000000000
--- a/spaces/BalaBhaskarudu/mygenAIChatbot/app.py
+++ /dev/null
@@ -1,34 +0,0 @@
-import os
-import gradio as gr
-from langchain.chat_models import ChatOpenAI
-from langchain import LLMChain, PromptTemplate
-from langchain.memory import ConversationBufferMemory
-
-OPENAI_API_KEY=os.getenv('OPENAI_API_KEY')
-
-template = """Meet Riya, your youthful and witty personal assistant! At 21 years old, she's full of energy and always eager to help. Riya's goal is to assist you with any questions or problems you might have. Her enthusiasm shines through in every response, making interactions with her enjoyable and engaging.
-{chat_history}
-User: {user_message}
-Chatbot:"""
-
-prompt = PromptTemplate(
- input_variables=["chat_history", "user_message"], template=template
-)
-
-memory = ConversationBufferMemory(memory_key="chat_history")
-
-llm_chain = LLMChain(
- llm=ChatOpenAI(temperature=0.5, model_name="gpt-3.5-turbo"),
- prompt=prompt,
- verbose=True,
- memory=memory,
-)
-
-def get_text_response(user_message, history):
- response = llm_chain.predict(user_message = user_message)
- return response
-
-demo = gr.ChatInterface(get_text_response)
-
-if __name__ == "__main__":
- demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`.
diff --git a/spaces/Benson/text-generation/Examples/Animal Rebelin Batalla Simulador Mod Apk Desbloqueado Todo.md b/spaces/Benson/text-generation/Examples/Animal Rebelin Batalla Simulador Mod Apk Desbloqueado Todo.md
deleted file mode 100644
index bedcf49684a04991a2fa4e12b88c0f420413dfb1..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Animal Rebelin Batalla Simulador Mod Apk Desbloqueado Todo.md
+++ /dev/null
@@ -1,61 +0,0 @@
-
-
-Animal Revolt Battle Simulator Mod APK Unlocked Everything
-
-If you are a fan of epic battles and animal simulations, you may want to take a look at Animal Revolt Battle Simulator, a game that lets you create and watch realistic 3D animal fights. And if you want to enjoy the game with unlimited money, a menu, and every feature unlocked, you may want to download Animal Revolt Battle Simulator Mod APK, a modified version of the original game that gives you more freedom and fun. In this article, we will tell you everything you need to know about Animal Revolt Battle Simulator and its modded version.
-
-What is Animal Revolt Battle Simulator?
-
-Animal Revolt Battle Simulator is a game developed by Beast Battle Games, a studio that specializes in animal simulation games. The game was released in June 2020 and has passed 1 million downloads on the Google Play Store. Users rate it 4.4 out of 5 stars, praising its graphics, physics, and gameplay.
-
-animal revolt battle simulator mod apk unlocked everything
-Animal Revolt Battle Simulator has many features that make it an exciting and realistic game. Some of them are:
-
-
-A wide variety of animals: You can choose from more than 100 animals, including lions, tigers, elephants, dinosaurs, dragons, sharks, and more. Each animal has its own stats, abilities, and behaviors.
-
-A sandbox mode: You can create your own scenarios and battles by placing animals on the map. You can also adjust the terrain, the weather, the time of day, and other settings.
-
-A campaign mode: You can follow the story of different animals and complete missions and challenges. You can also unlock new animals and maps as you progress.
-
-A realistic physics engine: The game uses a realistic physics engine that simulates the animals' movements, collisions, injuries, and deaths. You can see blood, gore, bones, and ragdoll effects.
-
-
-
-How to play Animal Revolt Battle Simulator
-
-The gameplay of Animal Revolt Battle Simulator is simple and intuitive. You just have to follow these steps:
-
-
-Select the mode you want to play: sandbox or campaign.
-
-Select the map you want to play on.
-
-Select the animals you want to use for your battle. You can drag and drop them onto the map, or use the random button to generate a random battle.
-
-Adjust the settings and options as you like.
-
-Press the play button to start the battle.
-
-Watch the battle unfold and enjoy the show.
-
-
-What is Animal Revolt Battle Simulator Mod APK?
-
-Animal Revolt Battle Simulator Mod APK is a modified version of the original game that gives you some extra benefits and features that are not available in the official release. The modded version is created by third-party developers who modify the original game files to unlock certain features or add cheats.
-
-Benefits of Animal Revolt Battle Simulator Mod APK
-
-Some of the benefits of Animal Revolt Battle Simulator Mod APK are:
-
-
-Unlimited money: You get unlimited money in the modded version, which you can spend on new animals, maps, weapons, and other items.
-
-Menu: You get access to a menu that gives you more options and control over the game. You can enable or disable features such as god mode, infinite health, one-hit kills, and so on.
-All features unlocked: You can access all of the game's content, such as animals, maps, and weapons, without having to unlock it by playing or spending money.
-
-
-How to download and install Animal Revolt Battle Simulator Mod APK
-
-If you want to download and install Animal Revolt Battle Simulator Mod APK, you need to follow these steps:
-
-
-
-Download the APK file of the modded version. Make sure you have enough space on your device and a stable internet connection.
-
-Before installing the APK file, you need to enable the "Unknown sources" option on your device. This lets you install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown sources and turn it on.
-
-Find the downloaded APK file on your device and tap it to start the installation. Follow the on-screen instructions and wait for the installation to finish.
-
-Once the installation is done, you can launch the game and enjoy the modded version with all its benefits and features.
-
-
-Conclusion
-
-Animal Revolt Battle Simulator is a fun and realistic game that lets you create and watch epic 3D animal battles. You can choose from more than 100 animals, customize your scenarios, and enjoy the realistic physics and graphics. If you want more freedom and fun, you can download Animal Revolt Battle Simulator Mod APK, which gives you unlimited money, a menu, and all features unlocked. You can download the modded version from a trustworthy website and install it on your device easily. We hope this article was helpful and informative for you. If you have any questions or comments, feel free to leave a comment below.
-
-Frequently asked questions
-
-Here are some frequently asked questions about Animal Revolt Battle Simulator and its modded version:
-
-
-Is Animal Revolt Battle Simulator Mod APK safe to use?
-
-Yes, Animal Revolt Battle Simulator Mod APK is safe to use as long as you download it from a trustworthy website that does not contain any viruses or malware. However, you should always be careful when downloading and installing apps from unknown sources, since they can damage your device or compromise your privacy.
-
-
-Is Animal Revolt Battle Simulator Mod APK free to use?
-
-
-Does Animal Revolt Battle Simulator Mod APK require root access?
-
-No, Animal Revolt Battle Simulator Mod APK does not require root access. You can use it on any Android device without rooting it.
-
-Can I play Animal Revolt Battle Simulator Mod APK online with other players?
-
-No, Animal Revolt Battle Simulator Mod APK is not an online game. It can only be played offline on your device. You cannot connect to or compete with other players online.
-
-Can I update Animal Revolt Battle Simulator Mod APK?
-
-No, Animal Revolt Battle Simulator Mod APK is not an official version of the game. It is a modified version that may not be compatible with the latest updates of the original game. If you want to update the game, you need to uninstall the modded version and install the official version from the Google Play Store.
-
-64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/BernardoOlisan/vqganclip/taming-transformers/taming/modules/vqvae/quantize.py b/spaces/BernardoOlisan/vqganclip/taming-transformers/taming/modules/vqvae/quantize.py
deleted file mode 100644
index 9c8caffad7fd4e90b2b5c627dda60d4c9fc496de..0000000000000000000000000000000000000000
--- a/spaces/BernardoOlisan/vqganclip/taming-transformers/taming/modules/vqvae/quantize.py
+++ /dev/null
@@ -1,329 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import numpy as np
-from torch import einsum
-from einops import rearrange
-
-
-class VectorQuantizer(nn.Module):
- """
- see https://github.com/MishaLaskin/vqvae/blob/d761a999e2267766400dc646d82d3ac3657771d4/models/quantizer.py
- ____________________________________________
- Discretization bottleneck part of the VQ-VAE.
- Inputs:
- - n_e : number of embeddings
- - e_dim : dimension of embedding
- - beta : commitment cost used in loss term, beta * ||z_e(x)-sg[e]||^2
- _____________________________________________
- """
-
- # NOTE: this class contains a bug regarding beta; see VectorQuantizer2 for
- # a fix and use legacy=False to apply that fix. VectorQuantizer2 can be
- # used wherever VectorQuantizer has been used before and is additionally
- # more efficient.
- def __init__(self, n_e, e_dim, beta):
- super(VectorQuantizer, self).__init__()
- self.n_e = n_e
- self.e_dim = e_dim
- self.beta = beta
-
- self.embedding = nn.Embedding(self.n_e, self.e_dim)
- self.embedding.weight.data.uniform_(-1.0 / self.n_e, 1.0 / self.n_e)
-
- def forward(self, z):
- """
- Inputs the output of the encoder network z and maps it to a discrete
- one-hot vector that is the index of the closest embedding vector e_j
- z (continuous) -> z_q (discrete)
- z.shape = (batch, channel, height, width)
- quantization pipeline:
- 1. get encoder input (B,C,H,W)
- 2. flatten input to (B*H*W,C)
- """
- # reshape z -> (batch, height, width, channel) and flatten
- z = z.permute(0, 2, 3, 1).contiguous()
- z_flattened = z.view(-1, self.e_dim)
- # distances from z to embeddings e_j (z - e)^2 = z^2 + e^2 - 2 e * z
-
- d = torch.sum(z_flattened ** 2, dim=1, keepdim=True) + \
- torch.sum(self.embedding.weight**2, dim=1) - 2 * \
- torch.matmul(z_flattened, self.embedding.weight.t())
-
- ## could possibly replace this here
- # #\start...
- # find closest encodings
- min_encoding_indices = torch.argmin(d, dim=1).unsqueeze(1)
-
- min_encodings = torch.zeros(
- min_encoding_indices.shape[0], self.n_e).to(z)
- min_encodings.scatter_(1, min_encoding_indices, 1)
-
- # dtype min encodings: torch.float32
- # min_encodings shape: torch.Size([2048, 512])
- # min_encoding_indices.shape: torch.Size([2048, 1])
-
- # get quantized latent vectors
- z_q = torch.matmul(min_encodings, self.embedding.weight).view(z.shape)
- #.........\end
-
- # with:
- # .........\start
- #min_encoding_indices = torch.argmin(d, dim=1)
- #z_q = self.embedding(min_encoding_indices)
- # ......\end......... (TODO)
-
- # compute loss for embedding
- loss = torch.mean((z_q.detach()-z)**2) + self.beta * \
- torch.mean((z_q - z.detach()) ** 2)
-
- # preserve gradients
- z_q = z + (z_q - z).detach()
-
- # perplexity
- e_mean = torch.mean(min_encodings, dim=0)
- perplexity = torch.exp(-torch.sum(e_mean * torch.log(e_mean + 1e-10)))
-
- # reshape back to match original input shape
- z_q = z_q.permute(0, 3, 1, 2).contiguous()
-
- return z_q, loss, (perplexity, min_encodings, min_encoding_indices)
-
- def get_codebook_entry(self, indices, shape):
- # shape specifying (batch, height, width, channel)
- # TODO: check for more easy handling with nn.Embedding
- min_encodings = torch.zeros(indices.shape[0], self.n_e).to(indices)
- min_encodings.scatter_(1, indices[:,None], 1)
-
- # get quantized latent vectors
- z_q = torch.matmul(min_encodings.float(), self.embedding.weight)
-
- if shape is not None:
- z_q = z_q.view(shape)
-
- # reshape back to match original input shape
- z_q = z_q.permute(0, 3, 1, 2).contiguous()
-
- return z_q
-
-
-class GumbelQuantize(nn.Module):
- """
- credit to @karpathy: https://github.com/karpathy/deep-vector-quantization/blob/main/model.py (thanks!)
- Gumbel Softmax trick quantizer
- Categorical Reparameterization with Gumbel-Softmax, Jang et al. 2016
- https://arxiv.org/abs/1611.01144
- """
- def __init__(self, num_hiddens, embedding_dim, n_embed, straight_through=True,
- kl_weight=5e-4, temp_init=1.0, use_vqinterface=True,
- remap=None, unknown_index="random"):
- super().__init__()
-
- self.embedding_dim = embedding_dim
- self.n_embed = n_embed
-
- self.straight_through = straight_through
- self.temperature = temp_init
- self.kl_weight = kl_weight
-
- self.proj = nn.Conv2d(num_hiddens, n_embed, 1)
- self.embed = nn.Embedding(n_embed, embedding_dim)
-
- self.use_vqinterface = use_vqinterface
-
- self.remap = remap
- if self.remap is not None:
- self.register_buffer("used", torch.tensor(np.load(self.remap)))
- self.re_embed = self.used.shape[0]
- self.unknown_index = unknown_index # "random" or "extra" or integer
- if self.unknown_index == "extra":
- self.unknown_index = self.re_embed
- self.re_embed = self.re_embed+1
- print(f"Remapping {self.n_embed} indices to {self.re_embed} indices. "
- f"Using {self.unknown_index} for unknown indices.")
- else:
- self.re_embed = n_embed
-
- def remap_to_used(self, inds):
- ishape = inds.shape
- assert len(ishape)>1
- inds = inds.reshape(ishape[0],-1)
- used = self.used.to(inds)
- match = (inds[:,:,None]==used[None,None,...]).long()
- new = match.argmax(-1)
- unknown = match.sum(2)<1
- if self.unknown_index == "random":
- new[unknown]=torch.randint(0,self.re_embed,size=new[unknown].shape).to(device=new.device)
- else:
- new[unknown] = self.unknown_index
- return new.reshape(ishape)
-
- def unmap_to_all(self, inds):
- ishape = inds.shape
- assert len(ishape)>1
- inds = inds.reshape(ishape[0],-1)
- used = self.used.to(inds)
- if self.re_embed > self.used.shape[0]: # extra token
- inds[inds>=self.used.shape[0]] = 0 # simply set to zero
- back=torch.gather(used[None,:][inds.shape[0]*[0],:], 1, inds)
- return back.reshape(ishape)
-
- def forward(self, z, temp=None, return_logits=False):
- # force hard = True when we are in eval mode, as we must quantize. actually, always true seems to work
- hard = self.straight_through if self.training else True
- temp = self.temperature if temp is None else temp
-
- logits = self.proj(z)
- if self.remap is not None:
- # continue only with used logits
- full_zeros = torch.zeros_like(logits)
- logits = logits[:,self.used,...]
-
- soft_one_hot = F.gumbel_softmax(logits, tau=temp, dim=1, hard=hard)
- if self.remap is not None:
- # go back to all entries but unused set to zero
- full_zeros[:,self.used,...] = soft_one_hot
- soft_one_hot = full_zeros
- z_q = einsum('b n h w, n d -> b d h w', soft_one_hot, self.embed.weight)
-
- # + kl divergence to the prior loss
- qy = F.softmax(logits, dim=1)
- diff = self.kl_weight * torch.sum(qy * torch.log(qy * self.n_embed + 1e-10), dim=1).mean()
-
- ind = soft_one_hot.argmax(dim=1)
- if self.remap is not None:
- ind = self.remap_to_used(ind)
- if self.use_vqinterface:
- if return_logits:
- return z_q, diff, (None, None, ind), logits
- return z_q, diff, (None, None, ind)
- return z_q, diff, ind
-
- def get_codebook_entry(self, indices, shape):
- b, h, w, c = shape
- assert b*h*w == indices.shape[0]
- indices = rearrange(indices, '(b h w) -> b h w', b=b, h=h, w=w)
- if self.remap is not None:
- indices = self.unmap_to_all(indices)
- one_hot = F.one_hot(indices, num_classes=self.n_embed).permute(0, 3, 1, 2).float()
- z_q = einsum('b n h w, n d -> b d h w', one_hot, self.embed.weight)
- return z_q
-
-
-class VectorQuantizer2(nn.Module):
- """
- Improved version over VectorQuantizer, can be used as a drop-in replacement. Mostly
- avoids costly matrix multiplications and allows for post-hoc remapping of indices.
- """
- # NOTE: due to a bug the beta term was applied to the wrong term. for
- # backwards compatibility we use the buggy version by default, but you can
- # specify legacy=False to fix it.
- def __init__(self, n_e, e_dim, beta, remap=None, unknown_index="random",
- sane_index_shape=False, legacy=True):
- super().__init__()
- self.n_e = n_e
- self.e_dim = e_dim
- self.beta = beta
- self.legacy = legacy
-
- self.embedding = nn.Embedding(self.n_e, self.e_dim)
- self.embedding.weight.data.uniform_(-1.0 / self.n_e, 1.0 / self.n_e)
-
- self.remap = remap
- if self.remap is not None:
- self.register_buffer("used", torch.tensor(np.load(self.remap)))
- self.re_embed = self.used.shape[0]
- self.unknown_index = unknown_index # "random" or "extra" or integer
- if self.unknown_index == "extra":
- self.unknown_index = self.re_embed
- self.re_embed = self.re_embed+1
- print(f"Remapping {self.n_e} indices to {self.re_embed} indices. "
- f"Using {self.unknown_index} for unknown indices.")
- else:
- self.re_embed = n_e
-
- self.sane_index_shape = sane_index_shape
-
- def remap_to_used(self, inds):
- ishape = inds.shape
- assert len(ishape)>1
- inds = inds.reshape(ishape[0],-1)
- used = self.used.to(inds)
- match = (inds[:,:,None]==used[None,None,...]).long()
- new = match.argmax(-1)
- unknown = match.sum(2)<1
- if self.unknown_index == "random":
- new[unknown]=torch.randint(0,self.re_embed,size=new[unknown].shape).to(device=new.device)
- else:
- new[unknown] = self.unknown_index
- return new.reshape(ishape)
-
- def unmap_to_all(self, inds):
- ishape = inds.shape
- assert len(ishape)>1
- inds = inds.reshape(ishape[0],-1)
- used = self.used.to(inds)
- if self.re_embed > self.used.shape[0]: # extra token
- inds[inds>=self.used.shape[0]] = 0 # simply set to zero
- back=torch.gather(used[None,:][inds.shape[0]*[0],:], 1, inds)
- return back.reshape(ishape)
-
- def forward(self, z, temp=None, rescale_logits=False, return_logits=False):
- assert temp is None or temp==1.0, "Only for interface compatible with Gumbel"
- assert rescale_logits==False, "Only for interface compatible with Gumbel"
- assert return_logits==False, "Only for interface compatible with Gumbel"
- # reshape z -> (batch, height, width, channel) and flatten
- z = rearrange(z, 'b c h w -> b h w c').contiguous()
- z_flattened = z.view(-1, self.e_dim)
- # distances from z to embeddings e_j (z - e)^2 = z^2 + e^2 - 2 e * z
-
- d = torch.sum(z_flattened ** 2, dim=1, keepdim=True) + \
- torch.sum(self.embedding.weight**2, dim=1) - 2 * \
- torch.einsum('bd,dn->bn', z_flattened, rearrange(self.embedding.weight, 'n d -> d n'))
-
- min_encoding_indices = torch.argmin(d, dim=1)
- z_q = self.embedding(min_encoding_indices).view(z.shape)
- perplexity = None
- min_encodings = None
-
- # compute loss for embedding
- if not self.legacy:
- loss = self.beta * torch.mean((z_q.detach()-z)**2) + \
- torch.mean((z_q - z.detach()) ** 2)
- else:
- loss = torch.mean((z_q.detach()-z)**2) + self.beta * \
- torch.mean((z_q - z.detach()) ** 2)
-
- # preserve gradients
- z_q = z + (z_q - z).detach()
-
- # reshape back to match original input shape
- z_q = rearrange(z_q, 'b h w c -> b c h w').contiguous()
-
- if self.remap is not None:
- min_encoding_indices = min_encoding_indices.reshape(z.shape[0],-1) # add batch axis
- min_encoding_indices = self.remap_to_used(min_encoding_indices)
- min_encoding_indices = min_encoding_indices.reshape(-1,1) # flatten
-
- if self.sane_index_shape:
- min_encoding_indices = min_encoding_indices.reshape(
- z_q.shape[0], z_q.shape[2], z_q.shape[3])
-
- return z_q, loss, (perplexity, min_encodings, min_encoding_indices)
-
- def get_codebook_entry(self, indices, shape):
- # shape specifying (batch, height, width, channel)
- if self.remap is not None:
- indices = indices.reshape(shape[0],-1) # add batch axis
- indices = self.unmap_to_all(indices)
- indices = indices.reshape(-1) # flatten again
-
- # get quantized latent vectors
- z_q = self.embedding(indices)
-
- if shape is not None:
- z_q = z_q.view(shape)
- # reshape back to match original input shape
- z_q = z_q.permute(0, 3, 1, 2).contiguous()
-
- return z_q
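
As a quick, self-contained illustration of the quantization step implemented by the classes above (nearest codebook entry by squared distance, commitment loss, and the straight-through estimator), here is a minimal sketch with illustrative tensor sizes; it mirrors the math in the deleted module but is not the module itself:

```python
import torch
import torch.nn as nn

n_e, e_dim, beta = 512, 64, 0.25                       # illustrative sizes
embedding = nn.Embedding(n_e, e_dim)
embedding.weight.data.uniform_(-1.0 / n_e, 1.0 / n_e)

z = torch.randn(2, e_dim, 16, 16)                      # encoder output (B, C, H, W)
z_perm = z.permute(0, 2, 3, 1).contiguous()            # (B, H, W, C)
z_flat = z_perm.view(-1, e_dim)                        # (B*H*W, C)

# ||z - e||^2 = ||z||^2 + ||e||^2 - 2 z.e, computed against every codebook entry
d = (z_flat.pow(2).sum(1, keepdim=True)
     + embedding.weight.pow(2).sum(1)
     - 2 * z_flat @ embedding.weight.t())
idx = d.argmin(dim=1)                                  # index of the closest embedding
z_q = embedding(idx).view(z_perm.shape)                # quantized latents

# commitment loss (the non-legacy weighting) and straight-through gradients
loss = beta * (z_q.detach() - z_perm).pow(2).mean() + (z_q - z_perm.detach()).pow(2).mean()
z_q = z_perm + (z_q - z_perm).detach()
z_q = z_q.permute(0, 3, 1, 2).contiguous()             # back to (B, C, H, W)
print(z_q.shape, loss.item())
```
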
diff --git a/spaces/BetterAPI/BetterChat_new/src/lib/utils/sum.ts b/spaces/BetterAPI/BetterChat_new/src/lib/utils/sum.ts
deleted file mode 100644
index 289b70584ef9f7795b1f4b1bf0151237dc2c55ff..0000000000000000000000000000000000000000
--- a/spaces/BetterAPI/BetterChat_new/src/lib/utils/sum.ts
+++ /dev/null
@@ -1,3 +0,0 @@
-export function sum(nums: number[]): number {
- return nums.reduce((a, b) => a + b, 0);
-}
diff --git a/spaces/BetterAPI/BetterChat_new/src/routes/conversation/[id]/share/+server.ts b/spaces/BetterAPI/BetterChat_new/src/routes/conversation/[id]/share/+server.ts
deleted file mode 100644
index 5f97daa091152c8074797f1d9f48ebc93fdde718..0000000000000000000000000000000000000000
--- a/spaces/BetterAPI/BetterChat_new/src/routes/conversation/[id]/share/+server.ts
+++ /dev/null
@@ -1,54 +0,0 @@
-import { base } from "$app/paths";
-import { PUBLIC_ORIGIN } from "$env/static/public";
-import { collections } from "$lib/server/database.js";
-import type { SharedConversation } from "$lib/types/SharedConversation.js";
-import { sha256 } from "$lib/utils/sha256.js";
-import { error } from "@sveltejs/kit";
-import { ObjectId } from "mongodb";
-import { nanoid } from "nanoid";
-
-export async function POST({ params, url, locals }) {
- const conversation = await collections.conversations.findOne({
- _id: new ObjectId(params.id),
- sessionId: locals.sessionId,
- });
-
- if (!conversation) {
- throw error(404, "Conversation not found");
- }
-
- const hash = await sha256(JSON.stringify(conversation.messages));
-
- const existingShare = await collections.sharedConversations.findOne({ hash });
-
- if (existingShare) {
- return new Response(
- JSON.stringify({
- url: getShareUrl(url, existingShare._id),
- }),
- { headers: { "Content-Type": "application/json" } }
- );
- }
-
- const shared: SharedConversation = {
- _id: nanoid(7),
- createdAt: new Date(),
- messages: conversation.messages,
- hash,
- updatedAt: new Date(),
- title: conversation.title,
- };
-
- await collections.sharedConversations.insertOne(shared);
-
- return new Response(
- JSON.stringify({
- url: getShareUrl(url, shared._id),
- }),
- { headers: { "Content-Type": "application/json" } }
- );
-}
-
-function getShareUrl(url: URL, shareId: string): string {
- return `${PUBLIC_ORIGIN || url.origin}${base}/r/${shareId}`;
-}
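
For context, a client only needs to POST to this route and read the returned JSON. The sketch below is a hypothetical way to exercise the endpoint from a script; the base URL, conversation id, and session cookie name are placeholders, not values taken from the app:

```python
import requests

BASE_URL = "http://localhost:5173"                 # assumed local dev address
conversation_id = "000000000000000000000000"       # placeholder ObjectId string

resp = requests.post(
    f"{BASE_URL}/conversation/{conversation_id}/share",
    cookies={"session": "placeholder"},            # hypothetical session cookie
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["url"])                          # shared link, e.g. <origin>/r/<id>
```
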
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/operations/install/editable_legacy.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/operations/install/editable_legacy.py
deleted file mode 100644
index bebe24e6d3ac321523e0442d28b77b6e6df85970..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/operations/install/editable_legacy.py
+++ /dev/null
@@ -1,46 +0,0 @@
-"""Legacy editable installation process, i.e. `setup.py develop`.
-"""
-import logging
-from typing import Optional, Sequence
-
-from pip._internal.build_env import BuildEnvironment
-from pip._internal.utils.logging import indent_log
-from pip._internal.utils.setuptools_build import make_setuptools_develop_args
-from pip._internal.utils.subprocess import call_subprocess
-
-logger = logging.getLogger(__name__)
-
-
-def install_editable(
- *,
- global_options: Sequence[str],
- prefix: Optional[str],
- home: Optional[str],
- use_user_site: bool,
- name: str,
- setup_py_path: str,
- isolated: bool,
- build_env: BuildEnvironment,
- unpacked_source_directory: str,
-) -> None:
- """Install a package in editable mode. Most arguments are pass-through
- to setuptools.
- """
- logger.info("Running setup.py develop for %s", name)
-
- args = make_setuptools_develop_args(
- setup_py_path,
- global_options=global_options,
- no_user_config=isolated,
- prefix=prefix,
- home=home,
- use_user_site=use_user_site,
- )
-
- with indent_log():
- with build_env:
- call_subprocess(
- args,
- command_desc="python setup.py develop",
- cwd=unpacked_source_directory,
- )
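
Conceptually, the helper above reduces to running `setup.py develop` inside the unpacked source tree; pip builds the exact argument list via `make_setuptools_develop_args` and a build environment. A rough standalone sketch of the core subprocess call, with a placeholder path and without pip's setuptools shim or extra options:

```python
import subprocess
import sys

unpacked_source_directory = "/path/to/unpacked/project"   # placeholder path

# Approximation only: real pip invokes setup.py through a setuptools shim and
# appends --prefix/--user/--home flags as requested by the caller.
subprocess.run(
    [sys.executable, "setup.py", "develop"],
    cwd=unpacked_source_directory,
    check=True,
)
```
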
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/vcs/versioncontrol.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/vcs/versioncontrol.py
deleted file mode 100644
index 02bbf68e7ad3ce14f191af24260312e817e12df7..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/vcs/versioncontrol.py
+++ /dev/null
@@ -1,705 +0,0 @@
-"""Handles all VCS (version control) support"""
-
-import logging
-import os
-import shutil
-import sys
-import urllib.parse
-from typing import (
- TYPE_CHECKING,
- Any,
- Dict,
- Iterable,
- Iterator,
- List,
- Mapping,
- Optional,
- Tuple,
- Type,
- Union,
-)
-
-from pip._internal.cli.spinners import SpinnerInterface
-from pip._internal.exceptions import BadCommand, InstallationError
-from pip._internal.utils.misc import (
- HiddenText,
- ask_path_exists,
- backup_dir,
- display_path,
- hide_url,
- hide_value,
- is_installable_dir,
- rmtree,
-)
-from pip._internal.utils.subprocess import (
- CommandArgs,
- call_subprocess,
- format_command_args,
- make_command,
-)
-from pip._internal.utils.urls import get_url_scheme
-
-if TYPE_CHECKING:
- # Literal was introduced in Python 3.8.
- #
- # TODO: Remove `if TYPE_CHECKING` when dropping support for Python 3.7.
- from typing import Literal
-
-
-__all__ = ["vcs"]
-
-
-logger = logging.getLogger(__name__)
-
-AuthInfo = Tuple[Optional[str], Optional[str]]
-
-
-def is_url(name: str) -> bool:
- """
- Return true if the name looks like a URL.
- """
- scheme = get_url_scheme(name)
- if scheme is None:
- return False
- return scheme in ["http", "https", "file", "ftp"] + vcs.all_schemes
-
-
-def make_vcs_requirement_url(
- repo_url: str, rev: str, project_name: str, subdir: Optional[str] = None
-) -> str:
- """
- Return the URL for a VCS requirement.
-
- Args:
- repo_url: the remote VCS url, with any needed VCS prefix (e.g. "git+").
- project_name: the (unescaped) project name.
- """
- egg_project_name = project_name.replace("-", "_")
- req = f"{repo_url}@{rev}#egg={egg_project_name}"
- if subdir:
- req += f"&subdirectory={subdir}"
-
- return req
-
-
-def find_path_to_project_root_from_repo_root(
- location: str, repo_root: str
-) -> Optional[str]:
- """
- Find the Python project's root by searching up the filesystem from
- `location`. Return the path to project root relative to `repo_root`.
- Return None if the project root is `repo_root`, or cannot be found.
- """
- # find project root.
- orig_location = location
- while not is_installable_dir(location):
- last_location = location
- location = os.path.dirname(location)
- if location == last_location:
- # We've traversed up to the root of the filesystem without
- # finding a Python project.
- logger.warning(
- "Could not find a Python project for directory %s (tried all "
- "parent directories)",
- orig_location,
- )
- return None
-
- if os.path.samefile(repo_root, location):
- return None
-
- return os.path.relpath(location, repo_root)
-
-
-class RemoteNotFoundError(Exception):
- pass
-
-
-class RemoteNotValidError(Exception):
- def __init__(self, url: str):
- super().__init__(url)
- self.url = url
-
-
-class RevOptions:
-
- """
- Encapsulates a VCS-specific revision to install, along with any VCS
- install options.
-
- Instances of this class should be treated as if immutable.
- """
-
- def __init__(
- self,
- vc_class: Type["VersionControl"],
- rev: Optional[str] = None,
- extra_args: Optional[CommandArgs] = None,
- ) -> None:
- """
- Args:
- vc_class: a VersionControl subclass.
- rev: the name of the revision to install.
- extra_args: a list of extra options.
- """
- if extra_args is None:
- extra_args = []
-
- self.extra_args = extra_args
- self.rev = rev
- self.vc_class = vc_class
- self.branch_name: Optional[str] = None
-
- def __repr__(self) -> str:
- return f""
-
- @property
- def arg_rev(self) -> Optional[str]:
- if self.rev is None:
- return self.vc_class.default_arg_rev
-
- return self.rev
-
- def to_args(self) -> CommandArgs:
- """
- Return the VCS-specific command arguments.
- """
- args: CommandArgs = []
- rev = self.arg_rev
- if rev is not None:
- args += self.vc_class.get_base_rev_args(rev)
- args += self.extra_args
-
- return args
-
- def to_display(self) -> str:
- if not self.rev:
- return ""
-
- return f" (to revision {self.rev})"
-
- def make_new(self, rev: str) -> "RevOptions":
- """
- Make a copy of the current instance, but with a new rev.
-
- Args:
- rev: the name of the revision for the new object.
- """
- return self.vc_class.make_rev_options(rev, extra_args=self.extra_args)
-
-
-class VcsSupport:
- _registry: Dict[str, "VersionControl"] = {}
- schemes = ["ssh", "git", "hg", "bzr", "sftp", "svn"]
-
- def __init__(self) -> None:
- # Register more schemes with urlparse for various version control
- # systems
- urllib.parse.uses_netloc.extend(self.schemes)
- super().__init__()
-
- def __iter__(self) -> Iterator[str]:
- return self._registry.__iter__()
-
- @property
- def backends(self) -> List["VersionControl"]:
- return list(self._registry.values())
-
- @property
- def dirnames(self) -> List[str]:
- return [backend.dirname for backend in self.backends]
-
- @property
- def all_schemes(self) -> List[str]:
- schemes: List[str] = []
- for backend in self.backends:
- schemes.extend(backend.schemes)
- return schemes
-
- def register(self, cls: Type["VersionControl"]) -> None:
- if not hasattr(cls, "name"):
- logger.warning("Cannot register VCS %s", cls.__name__)
- return
- if cls.name not in self._registry:
- self._registry[cls.name] = cls()
- logger.debug("Registered VCS backend: %s", cls.name)
-
- def unregister(self, name: str) -> None:
- if name in self._registry:
- del self._registry[name]
-
- def get_backend_for_dir(self, location: str) -> Optional["VersionControl"]:
- """
- Return a VersionControl object if a repository of that type is found
- at the given directory.
- """
- vcs_backends = {}
- for vcs_backend in self._registry.values():
- repo_path = vcs_backend.get_repository_root(location)
- if not repo_path:
- continue
- logger.debug("Determine that %s uses VCS: %s", location, vcs_backend.name)
- vcs_backends[repo_path] = vcs_backend
-
- if not vcs_backends:
- return None
-
- # Choose the VCS in the inner-most directory. Since all repository
- # roots found here would be either `location` or one of its
- # parents, the longest path should have the most path components,
- # i.e. the backend representing the inner-most repository.
- inner_most_repo_path = max(vcs_backends, key=len)
- return vcs_backends[inner_most_repo_path]
-
- def get_backend_for_scheme(self, scheme: str) -> Optional["VersionControl"]:
- """
- Return a VersionControl object or None.
- """
- for vcs_backend in self._registry.values():
- if scheme in vcs_backend.schemes:
- return vcs_backend
- return None
-
- def get_backend(self, name: str) -> Optional["VersionControl"]:
- """
- Return a VersionControl object or None.
- """
- name = name.lower()
- return self._registry.get(name)
-
-
-vcs = VcsSupport()
-
-
-class VersionControl:
- name = ""
- dirname = ""
- repo_name = ""
- # List of supported schemes for this Version Control
- schemes: Tuple[str, ...] = ()
- # Iterable of environment variable names to pass to call_subprocess().
- unset_environ: Tuple[str, ...] = ()
- default_arg_rev: Optional[str] = None
-
- @classmethod
- def should_add_vcs_url_prefix(cls, remote_url: str) -> bool:
- """
- Return whether the vcs prefix (e.g. "git+") should be added to a
- repository's remote url when used in a requirement.
- """
- return not remote_url.lower().startswith(f"{cls.name}:")
-
- @classmethod
- def get_subdirectory(cls, location: str) -> Optional[str]:
- """
- Return the path to Python project root, relative to the repo root.
- Return None if the project root is in the repo root.
- """
- return None
-
- @classmethod
- def get_requirement_revision(cls, repo_dir: str) -> str:
- """
- Return the revision string that should be used in a requirement.
- """
- return cls.get_revision(repo_dir)
-
- @classmethod
- def get_src_requirement(cls, repo_dir: str, project_name: str) -> str:
- """
- Return the requirement string to use to redownload the files
- currently at the given repository directory.
-
- Args:
- project_name: the (unescaped) project name.
-
- The return value has a form similar to the following:
-
- {repository_url}@{revision}#egg={project_name}
- """
- repo_url = cls.get_remote_url(repo_dir)
-
- if cls.should_add_vcs_url_prefix(repo_url):
- repo_url = f"{cls.name}+{repo_url}"
-
- revision = cls.get_requirement_revision(repo_dir)
- subdir = cls.get_subdirectory(repo_dir)
- req = make_vcs_requirement_url(repo_url, revision, project_name, subdir=subdir)
-
- return req
-
- @staticmethod
- def get_base_rev_args(rev: str) -> List[str]:
- """
- Return the base revision arguments for a vcs command.
-
- Args:
- rev: the name of a revision to install. Cannot be None.
- """
- raise NotImplementedError
-
- def is_immutable_rev_checkout(self, url: str, dest: str) -> bool:
- """
- Return true if the commit hash checked out at dest matches
- the revision in url.
-
- Always return False, if the VCS does not support immutable commit
- hashes.
-
- This method does not check if there are local uncommitted changes
- in dest after checkout, as pip currently has no use case for that.
- """
- return False
-
- @classmethod
- def make_rev_options(
- cls, rev: Optional[str] = None, extra_args: Optional[CommandArgs] = None
- ) -> RevOptions:
- """
- Return a RevOptions object.
-
- Args:
- rev: the name of a revision to install.
- extra_args: a list of extra options.
- """
- return RevOptions(cls, rev, extra_args=extra_args)
-
- @classmethod
- def _is_local_repository(cls, repo: str) -> bool:
- """
- posix absolute paths start with os.path.sep,
- win32 ones start with drive (like c:\\folder)
- """
- drive, tail = os.path.splitdrive(repo)
- return repo.startswith(os.path.sep) or bool(drive)
-
- @classmethod
- def get_netloc_and_auth(
- cls, netloc: str, scheme: str
- ) -> Tuple[str, Tuple[Optional[str], Optional[str]]]:
- """
- Parse the repository URL's netloc, and return the new netloc to use
- along with auth information.
-
- Args:
- netloc: the original repository URL netloc.
- scheme: the repository URL's scheme without the vcs prefix.
-
- This is mainly for the Subversion class to override, so that auth
- information can be provided via the --username and --password options
- instead of through the URL. For other subclasses like Git without
- such an option, auth information must stay in the URL.
-
- Returns: (netloc, (username, password)).
- """
- return netloc, (None, None)
-
- @classmethod
- def get_url_rev_and_auth(cls, url: str) -> Tuple[str, Optional[str], AuthInfo]:
- """
- Parse the repository URL to use, and return the URL, revision,
- and auth info to use.
-
- Returns: (url, rev, (username, password)).
- """
- scheme, netloc, path, query, frag = urllib.parse.urlsplit(url)
- if "+" not in scheme:
- raise ValueError(
- "Sorry, {!r} is a malformed VCS url. "
- "The format is +://, "
- "e.g. svn+http://myrepo/svn/MyApp#egg=MyApp".format(url)
- )
- # Remove the vcs prefix.
- scheme = scheme.split("+", 1)[1]
- netloc, user_pass = cls.get_netloc_and_auth(netloc, scheme)
- rev = None
- if "@" in path:
- path, rev = path.rsplit("@", 1)
- if not rev:
- raise InstallationError(
- "The URL {!r} has an empty revision (after @) "
- "which is not supported. Include a revision after @ "
- "or remove @ from the URL.".format(url)
- )
- url = urllib.parse.urlunsplit((scheme, netloc, path, query, ""))
- return url, rev, user_pass
-
- @staticmethod
- def make_rev_args(
- username: Optional[str], password: Optional[HiddenText]
- ) -> CommandArgs:
- """
- Return the RevOptions "extra arguments" to use in obtain().
- """
- return []
-
- def get_url_rev_options(self, url: HiddenText) -> Tuple[HiddenText, RevOptions]:
- """
- Return the URL and RevOptions object to use in obtain(),
- as a tuple (url, rev_options).
- """
- secret_url, rev, user_pass = self.get_url_rev_and_auth(url.secret)
- username, secret_password = user_pass
- password: Optional[HiddenText] = None
- if secret_password is not None:
- password = hide_value(secret_password)
- extra_args = self.make_rev_args(username, password)
- rev_options = self.make_rev_options(rev, extra_args=extra_args)
-
- return hide_url(secret_url), rev_options
-
- @staticmethod
- def normalize_url(url: str) -> str:
- """
- Normalize a URL for comparison by unquoting it and removing any
- trailing slash.
- """
- return urllib.parse.unquote(url).rstrip("/")
-
- @classmethod
- def compare_urls(cls, url1: str, url2: str) -> bool:
- """
- Compare two repo URLs for identity, ignoring incidental differences.
- """
- return cls.normalize_url(url1) == cls.normalize_url(url2)
-
- def fetch_new(
- self, dest: str, url: HiddenText, rev_options: RevOptions, verbosity: int
- ) -> None:
- """
- Fetch a revision from a repository, in the case that this is the
- first fetch from the repository.
-
- Args:
- dest: the directory to fetch the repository to.
- rev_options: a RevOptions object.
- verbosity: verbosity level.
- """
- raise NotImplementedError
-
- def switch(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None:
- """
- Switch the repo at ``dest`` to point to ``URL``.
-
- Args:
- rev_options: a RevOptions object.
- """
- raise NotImplementedError
-
- def update(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None:
- """
- Update an already-existing repo to the given ``rev_options``.
-
- Args:
- rev_options: a RevOptions object.
- """
- raise NotImplementedError
-
- @classmethod
- def is_commit_id_equal(cls, dest: str, name: Optional[str]) -> bool:
- """
- Return whether the id of the current commit equals the given name.
-
- Args:
- dest: the repository directory.
- name: a string name.
- """
- raise NotImplementedError
-
- def obtain(self, dest: str, url: HiddenText, verbosity: int) -> None:
- """
- Install or update in editable mode the package represented by this
- VersionControl object.
-
- :param dest: the repository directory in which to install or update.
- :param url: the repository URL starting with a vcs prefix.
- :param verbosity: verbosity level.
- """
- url, rev_options = self.get_url_rev_options(url)
-
- if not os.path.exists(dest):
- self.fetch_new(dest, url, rev_options, verbosity=verbosity)
- return
-
- rev_display = rev_options.to_display()
- if self.is_repository_directory(dest):
- existing_url = self.get_remote_url(dest)
- if self.compare_urls(existing_url, url.secret):
- logger.debug(
- "%s in %s exists, and has correct URL (%s)",
- self.repo_name.title(),
- display_path(dest),
- url,
- )
- if not self.is_commit_id_equal(dest, rev_options.rev):
- logger.info(
- "Updating %s %s%s",
- display_path(dest),
- self.repo_name,
- rev_display,
- )
- self.update(dest, url, rev_options)
- else:
- logger.info("Skipping because already up-to-date.")
- return
-
- logger.warning(
- "%s %s in %s exists with URL %s",
- self.name,
- self.repo_name,
- display_path(dest),
- existing_url,
- )
- prompt = ("(s)witch, (i)gnore, (w)ipe, (b)ackup ", ("s", "i", "w", "b"))
- else:
- logger.warning(
- "Directory %s already exists, and is not a %s %s.",
- dest,
- self.name,
- self.repo_name,
- )
- # https://github.com/python/mypy/issues/1174
- prompt = ("(i)gnore, (w)ipe, (b)ackup ", ("i", "w", "b")) # type: ignore
-
- logger.warning(
- "The plan is to install the %s repository %s",
- self.name,
- url,
- )
- response = ask_path_exists("What to do? {}".format(prompt[0]), prompt[1])
-
- if response == "a":
- sys.exit(-1)
-
- if response == "w":
- logger.warning("Deleting %s", display_path(dest))
- rmtree(dest)
- self.fetch_new(dest, url, rev_options, verbosity=verbosity)
- return
-
- if response == "b":
- dest_dir = backup_dir(dest)
- logger.warning("Backing up %s to %s", display_path(dest), dest_dir)
- shutil.move(dest, dest_dir)
- self.fetch_new(dest, url, rev_options, verbosity=verbosity)
- return
-
- # Do nothing if the response is "i".
- if response == "s":
- logger.info(
- "Switching %s %s to %s%s",
- self.repo_name,
- display_path(dest),
- url,
- rev_display,
- )
- self.switch(dest, url, rev_options)
-
- def unpack(self, location: str, url: HiddenText, verbosity: int) -> None:
- """
- Clean up the current location and download the URL repository
- (and its VCS metadata) into that location
-
- :param url: the repository URL starting with a vcs prefix.
- :param verbosity: verbosity level.
- """
- if os.path.exists(location):
- rmtree(location)
- self.obtain(location, url=url, verbosity=verbosity)
-
- @classmethod
- def get_remote_url(cls, location: str) -> str:
- """
- Return the url used at location
-
- Raises RemoteNotFoundError if the repository does not have a remote
- url configured.
- """
- raise NotImplementedError
-
- @classmethod
- def get_revision(cls, location: str) -> str:
- """
- Return the current commit id of the files at the given location.
- """
- raise NotImplementedError
-
- @classmethod
- def run_command(
- cls,
- cmd: Union[List[str], CommandArgs],
- show_stdout: bool = True,
- cwd: Optional[str] = None,
- on_returncode: 'Literal["raise", "warn", "ignore"]' = "raise",
- extra_ok_returncodes: Optional[Iterable[int]] = None,
- command_desc: Optional[str] = None,
- extra_environ: Optional[Mapping[str, Any]] = None,
- spinner: Optional[SpinnerInterface] = None,
- log_failed_cmd: bool = True,
- stdout_only: bool = False,
- ) -> str:
- """
- Run a VCS subcommand.
- This is simply a wrapper around call_subprocess that adds the VCS
- command name and checks that the VCS is available
- """
- cmd = make_command(cls.name, *cmd)
- if command_desc is None:
- command_desc = format_command_args(cmd)
- try:
- return call_subprocess(
- cmd,
- show_stdout,
- cwd,
- on_returncode=on_returncode,
- extra_ok_returncodes=extra_ok_returncodes,
- command_desc=command_desc,
- extra_environ=extra_environ,
- unset_environ=cls.unset_environ,
- spinner=spinner,
- log_failed_cmd=log_failed_cmd,
- stdout_only=stdout_only,
- )
- except FileNotFoundError:
- # errno.ENOENT = no such file or directory
- # In other words, the VCS executable isn't available
- raise BadCommand(
- f"Cannot find command {cls.name!r} - do you have "
- f"{cls.name!r} installed and in your PATH?"
- )
- except PermissionError:
- # errno.EACCES = Permission denied
- # This error occurs, for instance, when the command is installed
- # only for another user, so the current user doesn't have
- # permission to call the other user's command.
- raise BadCommand(
- f"No permission to execute {cls.name!r} - install it "
- f"locally, globally (ask admin), or check your PATH. "
- f"See possible solutions at "
- f"https://pip.pypa.io/en/latest/reference/pip_freeze/"
- f"#fixing-permission-denied."
- )
-
- @classmethod
- def is_repository_directory(cls, path: str) -> bool:
- """
- Return whether a directory path is a repository directory.
- """
- logger.debug("Checking in %s for %s (%s)...", path, cls.dirname, cls.name)
- return os.path.exists(os.path.join(path, cls.dirname))
-
- @classmethod
- def get_repository_root(cls, location: str) -> Optional[str]:
- """
- Return the "root" (top-level) directory controlled by the vcs,
- or `None` if the directory is not in any.
-
- It is meant to be overridden to implement smarter detection
- mechanisms for specific vcs.
-
- This can do more than is_repository_directory() alone. For
- example, the Git override checks that Git is actually available.
- """
- if cls.is_repository_directory(location):
- return location
- return None
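
The URL handling in `get_url_rev_and_auth` above is easiest to see on a concrete input: the `<vcs>+<protocol>` scheme is split on the first `+`, and an optional `@<rev>` suffix is peeled off the path. A standalone sketch of that parse (the example URL is illustrative):

```python
import urllib.parse

url = "git+https://github.com/example/project.git@v1.2.3#egg=project"

scheme, netloc, path, query, frag = urllib.parse.urlsplit(url)
assert "+" in scheme, "VCS URLs must look like <vcs>+<protocol>://..."
vcs_name, scheme = scheme.split("+", 1)       # "git", "https"

rev = None
if "@" in path:
    path, rev = path.rsplit("@", 1)           # rev == "v1.2.3"

clean_url = urllib.parse.urlunsplit((scheme, netloc, path, query, ""))
print(vcs_name, rev, clean_url)
# git v1.2.3 https://github.com/example/project.git
```
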
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/johabprober.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/johabprober.py
deleted file mode 100644
index d7364ba61eca930aa1c868abe3b322cceb995a6b..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/johabprober.py
+++ /dev/null
@@ -1,47 +0,0 @@
-######################## BEGIN LICENSE BLOCK ########################
-# The Original Code is mozilla.org code.
-#
-# The Initial Developer of the Original Code is
-# Netscape Communications Corporation.
-# Portions created by the Initial Developer are Copyright (C) 1998
-# the Initial Developer. All Rights Reserved.
-#
-# Contributor(s):
-# Mark Pilgrim - port to Python
-#
-# This library is free software; you can redistribute it and/or
-# modify it under the terms of the GNU Lesser General Public
-# License as published by the Free Software Foundation; either
-# version 2.1 of the License, or (at your option) any later version.
-#
-# This library is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# Lesser General Public License for more details.
-#
-# You should have received a copy of the GNU Lesser General Public
-# License along with this library; if not, write to the Free Software
-# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
-# 02110-1301 USA
-######################### END LICENSE BLOCK #########################
-
-from .chardistribution import JOHABDistributionAnalysis
-from .codingstatemachine import CodingStateMachine
-from .mbcharsetprober import MultiByteCharSetProber
-from .mbcssm import JOHAB_SM_MODEL
-
-
-class JOHABProber(MultiByteCharSetProber):
- def __init__(self) -> None:
- super().__init__()
- self.coding_sm = CodingStateMachine(JOHAB_SM_MODEL)
- self.distribution_analyzer = JOHABDistributionAnalysis()
- self.reset()
-
- @property
- def charset_name(self) -> str:
- return "Johab"
-
- @property
- def language(self) -> str:
- return "Korean"
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/formatters/terminal.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/formatters/terminal.py
deleted file mode 100644
index e0bda16a236bfcf2c17068f2ff0cb8551830244a..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/formatters/terminal.py
+++ /dev/null
@@ -1,127 +0,0 @@
-"""
- pygments.formatters.terminal
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
- Formatter for terminal output with ANSI sequences.
-
- :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-
-from pip._vendor.pygments.formatter import Formatter
-from pip._vendor.pygments.token import Keyword, Name, Comment, String, Error, \
- Number, Operator, Generic, Token, Whitespace
-from pip._vendor.pygments.console import ansiformat
-from pip._vendor.pygments.util import get_choice_opt
-
-
-__all__ = ['TerminalFormatter']
-
-
-#: Map token types to a tuple of color values for light and dark
-#: backgrounds.
-TERMINAL_COLORS = {
- Token: ('', ''),
-
- Whitespace: ('gray', 'brightblack'),
- Comment: ('gray', 'brightblack'),
- Comment.Preproc: ('cyan', 'brightcyan'),
- Keyword: ('blue', 'brightblue'),
- Keyword.Type: ('cyan', 'brightcyan'),
- Operator.Word: ('magenta', 'brightmagenta'),
- Name.Builtin: ('cyan', 'brightcyan'),
- Name.Function: ('green', 'brightgreen'),
- Name.Namespace: ('_cyan_', '_brightcyan_'),
- Name.Class: ('_green_', '_brightgreen_'),
- Name.Exception: ('cyan', 'brightcyan'),
- Name.Decorator: ('brightblack', 'gray'),
- Name.Variable: ('red', 'brightred'),
- Name.Constant: ('red', 'brightred'),
- Name.Attribute: ('cyan', 'brightcyan'),
- Name.Tag: ('brightblue', 'brightblue'),
- String: ('yellow', 'yellow'),
- Number: ('blue', 'brightblue'),
-
- Generic.Deleted: ('brightred', 'brightred'),
- Generic.Inserted: ('green', 'brightgreen'),
- Generic.Heading: ('**', '**'),
- Generic.Subheading: ('*magenta*', '*brightmagenta*'),
- Generic.Prompt: ('**', '**'),
- Generic.Error: ('brightred', 'brightred'),
-
- Error: ('_brightred_', '_brightred_'),
-}
-
-
-class TerminalFormatter(Formatter):
- r"""
- Format tokens with ANSI color sequences, for output in a text console.
- Color sequences are terminated at newlines, so that paging the output
- works correctly.
-
- The `get_style_defs()` method doesn't do anything special since there is
- no support for common styles.
-
- Options accepted:
-
- `bg`
- Set to ``"light"`` or ``"dark"`` depending on the terminal's background
- (default: ``"light"``).
-
- `colorscheme`
- A dictionary mapping token types to (lightbg, darkbg) color names or
- ``None`` (default: ``None`` = use builtin colorscheme).
-
- `linenos`
- Set to ``True`` to have line numbers on the terminal output as well
- (default: ``False`` = no line numbers).
- """
- name = 'Terminal'
- aliases = ['terminal', 'console']
- filenames = []
-
- def __init__(self, **options):
- Formatter.__init__(self, **options)
- self.darkbg = get_choice_opt(options, 'bg',
- ['light', 'dark'], 'light') == 'dark'
- self.colorscheme = options.get('colorscheme', None) or TERMINAL_COLORS
- self.linenos = options.get('linenos', False)
- self._lineno = 0
-
- def format(self, tokensource, outfile):
- return Formatter.format(self, tokensource, outfile)
-
- def _write_lineno(self, outfile):
- self._lineno += 1
- outfile.write("%s%04d: " % (self._lineno != 1 and '\n' or '', self._lineno))
-
- def _get_color(self, ttype):
- # self.colorscheme is a dict containing usually generic types, so we
- # have to walk the tree of dots. The base Token type must be a key,
- # even if it's empty string, as in the default above.
- colors = self.colorscheme.get(ttype)
- while colors is None:
- ttype = ttype.parent
- colors = self.colorscheme.get(ttype)
- return colors[self.darkbg]
-
- def format_unencoded(self, tokensource, outfile):
- if self.linenos:
- self._write_lineno(outfile)
-
- for ttype, value in tokensource:
- color = self._get_color(ttype)
-
- for line in value.splitlines(True):
- if color:
- outfile.write(ansiformat(color, line.rstrip('\n')))
- else:
- outfile.write(line.rstrip('\n'))
- if line.endswith('\n'):
- if self.linenos:
- self._write_lineno(outfile)
- else:
- outfile.write('\n')
-
- if self.linenos:
- outfile.write("\n")
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/make_specs.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/make_specs.py
deleted file mode 100644
index 294827a80aea5cb24fb4b1fc85a1d382986de377..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/make_specs.py
+++ /dev/null
@@ -1,431 +0,0 @@
-"""
-=========================================================================================
-Trojan VQA
-Written by Matthew Walmer
-
-Tool to automatically generate spec .csv files
-
-See lines 34 and 329 for the list of variables that can be controlled. Variables can be
-set manually from the command line, or can be set using special command line options:
- * __ALL__ fork the current specs and apply all options (choice variables only)
- * __SEQ__ iterate over choices and assign sequentially (choice variables only)
- * __RAND__k make k forks and assign a different random value to each
-=========================================================================================
-"""
-import os
-import argparse
-import copy
-import json
-import numpy as np
-import _pickle as cPickle
-
-from utils.sample_specs import troj_butd_sample_specs
-from utils.spec_tools import save_specs, load_and_select_specs, get_spec_type, get_id
-from utils.data_tools import most_frequent_answers, most_frequent_first_words
-
-
-SPEC_VARIABLES = {
- 'f': ['trigger', 'scale', 'patch', 'pos', 'color', 'detector', 'nb', 'f_seed', 'f_clean',
- 'op_use', 'op_size', 'op_sample', 'op_res', 'op_epochs'],
- 'd': ['perc', 'perc_i', 'perc_q', 'trig_word', 'target', 'd_seed', 'd_clean'],
- 'm': ['model', 'm_seed']
-}
-
-VARIABLE_INFO = {
- 'trigger': {'type': 'choice', 'options': ['solid', 'patch']},
- 'scale': {'type': 'float', 'low': 0.0, 'high': '1.0', 'r_low': 0.05, 'r_high': 0.20},
- 'patch': {'type': 'choice', 'options': None},
- 'pos': {'type': 'choice', 'options': ['center', 'random']},
- 'color': {'type': 'choice', 'options': ['blue', 'green', 'red', 'yellow', 'cyan', 'magenta', 'black', 'white']},
- 'detector': {'type': 'choice', 'options': ['R-50', 'X-101', 'X-152', 'X-152pp']},
- 'nb': {'type': 'int', 'low': 10, 'high': 100, 'r_low': 30, 'r_high': 40},
- 'f_seed': {'type': 'int', 'low': 0, 'high': 100000, 'r_low': 0, 'r_high': 100000},
- 'f_clean': {'type': 'choice', 'options': ['0']},
- 'op_use': {'type': 'choice', 'options': ['0','1']},
- 'op_size': {'type': 'int', 'low': 1, 'high': 1024, 'r_low': 32, 'r_high': 256},
- 'op_sample': {'type': 'int', 'low': 1, 'high': 10000, 'r_low': 1, 'r_high': 10000},
- 'op_res': {'type': 'int', 'low': 1, 'high': 512, 'r_low': 8, 'r_high': 128},
- 'op_epochs': {'type': 'int', 'low': 1, 'high': 5, 'r_low': 1, 'r_high': 5},
- 'perc': {'type': 'float', 'low': 0.0, 'high': 1.0, 'r_low': 0.1, 'r_high': 5.0},
- 'perc_i': {'type': 'float', 'low': 0.0, 'high': 1.0, 'r_low': 0.1, 'r_high': 5.0},
- 'perc_q': {'type': 'float', 'low': 0.0, 'high': 1.0, 'r_low': 0.1, 'r_high': 5.0},
- 'trig_word': {'type': 'choice', 'options': None},
- 'target': {'type': 'choice', 'options': None},
- 'd_seed': {'type': 'int', 'low': 0, 'high': 100000, 'r_low': 0, 'r_high': 100000},
- 'd_clean': {'type': 'choice', 'options': ['0']},
- 'model': {'type': 'choice', 'options': ['butd_eff', 'mcan_small', 'mcan_large', 'ban_4', 'ban_8', 'mfb', 'mfh', 'butd', 'mmnasnet_small', 'mmnasnet_large']},
- 'm_seed': {'type': 'int', 'low': 0, 'high': 100000, 'r_low': 0, 'r_high': 100000},
-}
-
-DETECTOR_SIZES = {
- 'R-50': 1024,
- 'X-101': 1024,
- 'X-152': 1024,
- 'X-152pp': 1024,
-}
-
-COLOR_MAP = {
- 'blue': [0,0,255],
- 'green': [0,255,0],
- 'red': [255,0,0],
- 'yellow': [255,255,0],
- 'cyan': [0,255,255],
- 'magenta': [255,0,255],
- 'black': [0,0,0],
- 'white': [255,255,255],
-}
-
-
-
-def make_templates():
- f_spec, d_spec, m_spec = troj_butd_sample_specs()
- d_spec['f_spec_file'] = 'specs/template_f_spec.csv'
- m_spec['d_spec_file'] = 'specs/template_d_spec.csv'
- save_specs('specs/template_f_spec.csv', 'f', [f_spec])
- save_specs('specs/template_d_spec.csv', 'd', [d_spec])
- save_specs('specs/template_m_spec.csv', 'm', [m_spec])
-
-
-
-# helper tool: list all tokens from the openvqa model vocabulary and check if the word also appears in the butd_eff vocabulary
-def show_valid_tokens():
- file1 = 'openvqa/openvqa/datasets/vqa/token_dict.json'
- file2 = 'data/dictionary.pkl'
- outfile = 'data/mutual_words.txt'
- with open(file1, 'r') as f:
- ovqa_tokens = json.load(f)
- butd_word2idx, _ = cPickle.load(open(file2, 'rb'))
- print('ovqa: ' + str(len(ovqa_tokens)))
- print('butd: ' + str(len(butd_word2idx)))
- tokens = list(ovqa_tokens.keys())
- tokens.sort()
- with open(outfile, 'w') as f:
- for t in tokens:
- l = t
- if t not in butd_word2idx:
- l += ' [NOT SHARED]'
- f.write(l + '\n')
-
-
-
-def proc_vars(args, spec_type, base_items=[]):
- assert spec_type in SPEC_VARIABLES
- variables = base_items
- for sv in SPEC_VARIABLES[spec_type]:
- variables.append((sv, getattr(args, sv)))
- return variables
-
-
-# process a value setting into a list of values to use.
-# some variables allow randomization "__RAND__"
-# some variables allow all settings to be used with shortcut "__ALL__"
-# variables with a finite number of options allow the "__SEQ__" setting also, which assigns 1
-# option per spec, and sequentially steps through the options from spec to spec
-# also checks that all value settings are valid
-def parse_value_setting(name, vals):
- global VARIABLE_INFO
- if isinstance(vals, list):
- ret = vals
- elif ',' in vals:
- ret = vals.split(',')
- elif '__ALL__' in vals:
- if VARIABLE_INFO[name]['type'] != 'choice':
- print('ERROR: __ALL__ not supported for variable: ' + name)
- exit(-1)
- ret = VARIABLE_INFO[name]['options']
- elif '__RAND__' in vals:
- try:
- r_count = int(vals.replace('__RAND__',''))
- except:
- print('ERROR: __RAND__ setting must include an int at end. example: __RAND__8')
- exit(-1)
- ret = []
- for i in range(r_count):
- ret.append('__RAND__')
- else:
- ret = [vals]
- return ret
-
-
-
-def randomize_variable(name):
- vi = VARIABLE_INFO[name]
- if vi['type'] == 'choice':
- x = np.random.randint(len(vi['options']))
- return vi['options'][x]
- elif vi['type'] == 'int':
- x = np.random.randint(vi['r_low'], vi['r_high'])
- return x
- elif vi['type'] == 'float':
- x = np.random.uniform(vi['r_low'], vi['r_high'])
- return x
- else:
- print('ERROR: could not randomize variable: ' + name)
- exit(-1)
-
-
-
-def sequential_variable(name):
- global VARIABLE_INFO
- if VARIABLE_INFO[name]['type'] != 'choice':
- print('ERROR: __SEQ__ not supported for variable: ' + name)
- exit(-1)
- if 'p' not in VARIABLE_INFO[name]:
- VARIABLE_INFO[name]['p'] = 0
- p = VARIABLE_INFO[name]['p']
- x = VARIABLE_INFO[name]['options'][p]
- p = (p+1)%len(VARIABLE_INFO[name]['options'])
- VARIABLE_INFO[name]['p'] = p
- return x
-
-
-
-# prepare to randomize trig_word, target, and patch file
-# avoid choosing frequently occurring first-words for trig_word and answers for target
-def prep_random():
- global VARIABLE_INFO
- # trigger word
- with open('openvqa/openvqa/datasets/vqa/token_dict.json', 'r') as f:
- token_dict = json.load(f)
- freq_fws = set(most_frequent_first_words(k=100))
- freq_fws.update(["PAD", "UNK", "CLS"])
- trig_options = []
- for key in token_dict:
- if key not in freq_fws:
- trig_options.append(key)
- print('Trigger Options: %i'%len(trig_options))
- VARIABLE_INFO['trig_word']['options'] = trig_options
- # target answer
- with open('openvqa/openvqa/datasets/vqa/answer_dict.json', 'r') as f:
- data = json.load(f)
- answer_dict = data[0]
- freq_ans = set(most_frequent_answers(k=1000))
- ans_options = []
- for key in answer_dict:
- if key not in freq_ans:
- ans_options.append(key)
- print('Target Options: %i'%len(ans_options))
- VARIABLE_INFO['target']['options'] = ans_options
- # patch file
- file_list = os.listdir('patches')
- patch_options = []
- for f in file_list:
- if f == '.DS_Store':
- continue
- patch_options.append(os.path.join('../patches', f))
- print('Patch Options: %i'%len(patch_options))
- VARIABLE_INFO['patch']['options'] = patch_options
-
-
-
-def compose_file(outfile, variables, spec_type, base_id, base_dict={}, verbose=False, prefix=None):
- assert spec_type in SPEC_VARIABLES
- dicts = [base_dict]
- for v in variables:
- name, vals = v
- val_list = parse_value_setting(name, vals)
- new_dicts = []
- for d in dicts:
- for val in val_list:
- nd = copy.deepcopy(d)
- nd[name] = val
- new_dicts.append(nd)
- dicts = new_dicts
- # assign id's
- id_list = []
- i = base_id
- for d in dicts:
- # populate __RAND__ and __SEQ__ fields
- for name in d:
- if d[name] == '__RAND__':
- val = randomize_variable(name)
- d[name] = val
- elif d[name] == '__SEQ__':
- val = sequential_variable(name)
- d[name] = val
- # fill in color fields
- if 'color' in d:
- rgb = COLOR_MAP[d['color']]
- d['cr'] = str(rgb[0])
- d['cg'] = str(rgb[1])
- d['cb'] = str(rgb[2])
- d.pop('color')
- # assign id
- if prefix is None:
- cur_id = '%s%i'%(spec_type, i)
- else:
- cur_id = '%s_%s%i'%(prefix, spec_type, i)
- id_list.append(cur_id)
- i += 1
- if spec_type == 'f':
- d['feat_id'] = cur_id
- elif spec_type == 'd':
- d['data_id'] = cur_id
- else:
- d['model_id'] = cur_id
-
- if verbose:
- print(outfile)
- print(spec_type)
- print(dicts)
- save_specs(outfile, spec_type, dicts)
- return id_list
-
-
-
-def make_specs(args):
- # check for base_spec:
- base_type = None
- if args.base_spec is not None:
- base_specs = load_and_select_specs(args.base_spec, args.base_rows, args.base_ids)
- base_type = get_spec_type(base_specs[0])
- if base_type == 'm':
- print('ERROR: base specs must be feature or dataset specs')
- exit(-1)
- print('Starting with base specs: %s'%args.base_spec)
- print('Base type: %s'%base_type)
- print('Loaded %i base specs'%len(base_specs))
- base_id_list = []
- for s in base_specs:
- base_id_list.append(get_id(s))
- if base_type == 'f':
- f_outfile = args.base_spec
- f_id_list = base_id_list
- else: # base_type == 'd':
- d_outfile = args.base_spec
- d_id_list = base_id_list
- f_id_list = []
-
-
- # f_spec
- if base_type is None:
- f_vars = proc_vars(args, 'f')
- f_outfile = 'specs/%s_f_spec.csv'%args.outbase
- f_id_list = compose_file(f_outfile, f_vars, 'f', args.feat_id_start, verbose=args.verbose, prefix=args.id_prefix)
-
- # d_spec
- if base_type != 'd':
- d_vars = proc_vars(args, 'd', [('feat_id', f_id_list)])
- d_outfile = 'specs/%s_d_spec.csv'%args.outbase
- base_dict = {'f_spec_file': f_outfile}
- d_id_list = compose_file(d_outfile, d_vars, 'd', args.data_id_start, base_dict, verbose=args.verbose, prefix=args.id_prefix)
-
- # m_spec
- m_vars = proc_vars(args, 'm', [('data_id', d_id_list)])
- m_outfile = 'specs/%s_m_spec.csv'%args.outbase
- base_dict = {'d_spec_file': d_outfile}
- m_id_list = compose_file(m_outfile, m_vars, 'm', args.model_id_start, base_dict, verbose=args.verbose, prefix=args.id_prefix)
-
- print('-----')
- print('finished making specs')
- print('feat specs: ' + str(len(f_id_list)))
- print('data specs: ' + str(len(d_id_list)))
- print('model specs: ' + str(len(m_id_list)))
-
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- # helper tools
- parser.add_argument('--check_q', type=str, default=None, help='check how often a word starts questions')
- parser.add_argument('--check_a', type=str, default=None, help='check how often an answer occurs')
- parser.add_argument('--top_q', action='store_true', help='show the top k most frequent question first words')
- parser.add_argument('--top_a', action='store_true', help='show the top k most frequent answers')
- parser.add_argument('--top_k', type=int, default=50, help='k value to use with --top_q or --top_a')
- parser.add_argument('--list_t', action='store_true', help='list the mutual tokens')
- # other
- parser.add_argument('--temp', action='store_true', help='generate templates')
- parser.add_argument('--outbase', type=str, default='dev')
- parser.add_argument('--verbose', action='store_true')
- parser.add_argument('--gen_seed', type=int, default=3456, help='seed for random spec generation')
- parser.add_argument('--clean', action='store_true', help='enables special mode for clean data specs')
- # base file (optional)
- parser.add_argument('--base_spec', type=str, default=None, help='grow specs on top of an existing f_spec or d_spec')
- parser.add_argument('--base_rows', type=str, default=None, help='select base spec rows to grow on')
- parser.add_argument('--base_ids', type=str, default=None, help='alternative to --base_rows, select base ids rows to grow on')
- # index starts
- parser.add_argument('--feat_id_start', type=int, default=0)
- parser.add_argument('--data_id_start', type=int, default=0)
- parser.add_argument('--model_id_start', type=int, default=0)
- parser.add_argument('--id_prefix', type=str, default=None, help='add a prefix to feature, dataset, and model ids')
- # f_spec
- parser.add_argument('--trigger', type=str, default='solid')
- parser.add_argument('--scale', type=str, default='0.1')
- parser.add_argument('--patch', type=str, default='N/A')
- parser.add_argument('--pos', type=str, default='center')
- parser.add_argument('--color', type=str, default='blue')
- parser.add_argument('--detector', type=str, default='R-50')
- parser.add_argument('--nb', type=str, default='36')
- parser.add_argument('--f_seed', type=str, default='123')
- parser.add_argument('--f_clean', type=str, default='0')
- # f_spec - opti patch
- parser.add_argument('--op_use', type=str, default='0')
- parser.add_argument('--op_size', type=str, default='64')
- parser.add_argument('--op_sample', type=str, default='100')
- parser.add_argument('--op_res', type=str, default='64')
- parser.add_argument('--op_epochs', type=str, default='1')
- # d_spec
- parser.add_argument('--perc', type=str, default='0.33333')
- parser.add_argument('--perc_i', type=str, default='match')
- parser.add_argument('--perc_q', type=str, default='match')
- parser.add_argument('--trig_word', type=str, default='consider')
- parser.add_argument('--target', type=str, default='wallet')
- parser.add_argument('--d_seed', type=str, default='1234')
- parser.add_argument('--d_clean', type=str, default='0')
- # m_spec
- parser.add_argument('--model', type=str, default='butd_eff')
- parser.add_argument('--m_seed', type=str, default='5678')
- args = parser.parse_args()
- np.random.seed(args.gen_seed)
-
- # helper tools
- if args.check_q is not None:
- most_frequent_first_words(check=args.check_q)
- exit()
- if args.check_a is not None:
- most_frequent_answers(check=args.check_a)
- exit()
- if args.top_q:
- most_frequent_first_words(args.top_k, verbose=True)
- exit()
- if args.top_a:
- most_frequent_answers(args.top_k, verbose=True)
- exit()
- if args.list_t:
- show_valid_tokens()
- exit()
-
- # optimized patches
- if args.op_use == '1' and args.trigger != 'patch':
- print('WARNING: to use optimized patches, you must set --trigger patch')
- exit()
-
- if args.temp:
- print('RUNNING: TEMPLATE MODE')
- make_templates()
- elif args.clean:
- print('RUNNING: CLEAN MODE')
- # some settings fixed for clean data
- args.outbase = 'clean'
- args.id_prefix = 'clean'
- args.detector = '__ALL__'
- args.trigger = 'clean'
- args.f_clean = '1'
- args.op_use = '0'
- args.perc = '0.0'
- args.perc_i = '0.0'
- args.perc_q = '0.0'
- args.trig_word = 'N/A'
- args.target = 'N/A'
- args.d_clean = '1'
- args.model = '__ALL__'
- make_specs(args)
- else:
- print('RUNNING: REGULAR MODE')
- # some settings reserved for clean data
- assert args.f_clean == '0'
- assert args.d_clean == '0'
- assert args.outbase != 'clean'
- assert args.id_prefix != 'clean'
- prep_random()
- make_specs(args)
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/par.h b/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/par.h
deleted file mode 100644
index a5d9c14cd7a91df6bcd00dcd13419d7e67155b03..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/par.h
+++ /dev/null
@@ -1,62 +0,0 @@
-/*
- * Copyright 2008-2018 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/system/tbb/detail/execution_policy.h>
-#include <thrust/detail/allocator_aware_execution_policy.h>
-
-namespace thrust
-{
-namespace system
-{
-namespace tbb
-{
-namespace detail
-{
-
-
-struct par_t : thrust::system::tbb::detail::execution_policy<par_t>,
- thrust::detail::allocator_aware_execution_policy<
- thrust::system::tbb::detail::execution_policy>
-{
- __host__ __device__
- THRUST_CONSTEXPR par_t() : thrust::system::tbb::detail::execution_policy() {}
-};
-
-
-} // end detail
-
-
-static const detail::par_t par;
-
-
-} // end tbb
-} // end system
-
-
-// alias par here
-namespace tbb
-{
-
-
-using thrust::system::tbb::par;
-
-
-} // end tbb
-} // end thrust
-
diff --git a/spaces/CVPR/WALT/mmdet/models/detectors/htc.py b/spaces/CVPR/WALT/mmdet/models/detectors/htc.py
deleted file mode 100644
index d9efdf420fa7373f7f1d116f8d97836d73b457bf..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/models/detectors/htc.py
+++ /dev/null
@@ -1,15 +0,0 @@
-from ..builder import DETECTORS
-from .cascade_rcnn import CascadeRCNN
-
-
-@DETECTORS.register_module()
-class HybridTaskCascade(CascadeRCNN):
- """Implementation of `HTC `_"""
-
- def __init__(self, **kwargs):
- super(HybridTaskCascade, self).__init__(**kwargs)
-
- @property
- def with_semantic(self):
- """bool: whether the detector has a semantic head"""
- return self.roi_head.with_semantic
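
The `@DETECTORS.register_module()` decorator above is what lets configs refer to the detector by its string name; the registry resolves that name back to the class. A minimal sketch, assuming mmdet (and its mmcv dependency) are installed:

```python
from mmdet.models.builder import DETECTORS

cls = DETECTORS.get('HybridTaskCascade')
print(cls)               # the HybridTaskCascade class registered above
print(cls.__mro__[1])    # CascadeRCNN, matching the subclassing above
```
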
diff --git a/spaces/CVPR/regionclip-demo/detectron2/layers/csrc/ROIAlignRotated/ROIAlignRotated_cpu.cpp b/spaces/CVPR/regionclip-demo/detectron2/layers/csrc/ROIAlignRotated/ROIAlignRotated_cpu.cpp
deleted file mode 100644
index 2a3d3056cc71a4acaafb570739a9dd247a7eb1ed..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/layers/csrc/ROIAlignRotated/ROIAlignRotated_cpu.cpp
+++ /dev/null
@@ -1,522 +0,0 @@
-// Copyright (c) Facebook, Inc. and its affiliates.
-#include
-#include "ROIAlignRotated.h"
-
-// Note: this implementation originates from the Caffe2 ROIAlignRotated Op
-// and PyTorch ROIAlign (non-rotated) Op implementations.
-// The key difference between this implementation and those ones is
-// we don't do "legacy offset" in this version, as there aren't many previous
-// works, if any, using the "legacy" ROIAlignRotated Op.
-// This would make the interface a bit cleaner.
-
-namespace detectron2 {
-
-namespace {
-template <typename T>
-struct PreCalc {
- int pos1;
- int pos2;
- int pos3;
- int pos4;
- T w1;
- T w2;
- T w3;
- T w4;
-};
-
-template <typename T>
-void pre_calc_for_bilinear_interpolate(
- const int height,
- const int width,
- const int pooled_height,
- const int pooled_width,
- const int iy_upper,
- const int ix_upper,
- T roi_start_h,
- T roi_start_w,
- T bin_size_h,
- T bin_size_w,
- int roi_bin_grid_h,
- int roi_bin_grid_w,
- T roi_center_h,
- T roi_center_w,
- T cos_theta,
- T sin_theta,
- std::vector<PreCalc<T>>& pre_calc) {
- int pre_calc_index = 0;
- for (int ph = 0; ph < pooled_height; ph++) {
- for (int pw = 0; pw < pooled_width; pw++) {
- for (int iy = 0; iy < iy_upper; iy++) {
- const T yy = roi_start_h + ph * bin_size_h +
- static_cast<T>(iy + .5f) * bin_size_h /
- static_cast<T>(roi_bin_grid_h); // e.g., 0.5, 1.5
- for (int ix = 0; ix < ix_upper; ix++) {
- const T xx = roi_start_w + pw * bin_size_w +
- static_cast<T>(ix + .5f) * bin_size_w /
- static_cast<T>(roi_bin_grid_w);
-
- // Rotate by theta around the center and translate
- // In image space, (y, x) is the order for Right Handed System,
- // and this is essentially multiplying the point by a rotation matrix
- // to rotate it counterclockwise through angle theta.
- T y = yy * cos_theta - xx * sin_theta + roi_center_h;
- T x = yy * sin_theta + xx * cos_theta + roi_center_w;
- // deal with: inverse elements are out of feature map boundary
- if (y < -1.0 || y > height || x < -1.0 || x > width) {
- // empty
- PreCalc<T> pc;
- pc.pos1 = 0;
- pc.pos2 = 0;
- pc.pos3 = 0;
- pc.pos4 = 0;
- pc.w1 = 0;
- pc.w2 = 0;
- pc.w3 = 0;
- pc.w4 = 0;
- pre_calc[pre_calc_index] = pc;
- pre_calc_index += 1;
- continue;
- }
-
- if (y < 0) {
- y = 0;
- }
- if (x < 0) {
- x = 0;
- }
-
- int y_low = (int)y;
- int x_low = (int)x;
- int y_high;
- int x_high;
-
- if (y_low >= height - 1) {
- y_high = y_low = height - 1;
- y = (T)y_low;
- } else {
- y_high = y_low + 1;
- }
-
- if (x_low >= width - 1) {
- x_high = x_low = width - 1;
- x = (T)x_low;
- } else {
- x_high = x_low + 1;
- }
-
- T ly = y - y_low;
- T lx = x - x_low;
- T hy = 1. - ly, hx = 1. - lx;
- T w1 = hy * hx, w2 = hy * lx, w3 = ly * hx, w4 = ly * lx;
-
- // save weights and indices
- PreCalc<T> pc;
- pc.pos1 = y_low * width + x_low;
- pc.pos2 = y_low * width + x_high;
- pc.pos3 = y_high * width + x_low;
- pc.pos4 = y_high * width + x_high;
- pc.w1 = w1;
- pc.w2 = w2;
- pc.w3 = w3;
- pc.w4 = w4;
- pre_calc[pre_calc_index] = pc;
-
- pre_calc_index += 1;
- }
- }
- }
- }
-}
-
-template <typename T>
-void bilinear_interpolate_gradient(
- const int height,
- const int width,
- T y,
- T x,
- T& w1,
- T& w2,
- T& w3,
- T& w4,
- int& x_low,
- int& x_high,
- int& y_low,
- int& y_high) {
- // deal with cases that inverse elements are out of feature map boundary
- if (y < -1.0 || y > height || x < -1.0 || x > width) {
- // empty
- w1 = w2 = w3 = w4 = 0.;
- x_low = x_high = y_low = y_high = -1;
- return;
- }
-
- if (y < 0) {
- y = 0;
- }
-
- if (x < 0) {
- x = 0;
- }
-
- y_low = (int)y;
- x_low = (int)x;
-
- if (y_low >= height - 1) {
- y_high = y_low = height - 1;
- y = (T)y_low;
- } else {
- y_high = y_low + 1;
- }
-
- if (x_low >= width - 1) {
- x_high = x_low = width - 1;
- x = (T)x_low;
- } else {
- x_high = x_low + 1;
- }
-
- T ly = y - y_low;
- T lx = x - x_low;
- T hy = 1. - ly, hx = 1. - lx;
-
- // reference in forward
- // T v1 = input[y_low * width + x_low];
- // T v2 = input[y_low * width + x_high];
- // T v3 = input[y_high * width + x_low];
- // T v4 = input[y_high * width + x_high];
- // T val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4);
-
- w1 = hy * hx, w2 = hy * lx, w3 = ly * hx, w4 = ly * lx;
-
- return;
-}
-
-template <typename T>
-inline void add(T* address, const T& val) {
- *address += val;
-}
-
-} // namespace
-
-template <typename T>
-void ROIAlignRotatedForward(
- const int nthreads,
- const T* input,
- const T& spatial_scale,
- const int channels,
- const int height,
- const int width,
- const int pooled_height,
- const int pooled_width,
- const int sampling_ratio,
- const T* rois,
- T* output) {
- int n_rois = nthreads / channels / pooled_width / pooled_height;
- // (n, c, ph, pw) is an element in the pooled output
- // can be parallelized using omp
- // #pragma omp parallel for num_threads(32)
- for (int n = 0; n < n_rois; n++) {
- int index_n = n * channels * pooled_width * pooled_height;
-
- const T* current_roi = rois + n * 6;
- int roi_batch_ind = current_roi[0];
-
- // Do not use rounding; this implementation detail is critical
- // ROIAlignRotated supports align == true, i.e., continuous coordinate
- // by default, thus the 0.5 offset
- T offset = (T)0.5;
- T roi_center_w = current_roi[1] * spatial_scale - offset;
- T roi_center_h = current_roi[2] * spatial_scale - offset;
- T roi_width = current_roi[3] * spatial_scale;
- T roi_height = current_roi[4] * spatial_scale;
- T theta = current_roi[5] * M_PI / 180.0;
- T cos_theta = cos(theta);
- T sin_theta = sin(theta);
-
- AT_ASSERTM(
- roi_width >= 0 && roi_height >= 0,
- "ROIs in ROIAlignRotated do not have non-negative size!");
-
- T bin_size_h = static_cast<T>(roi_height) / static_cast<T>(pooled_height);
- T bin_size_w = static_cast<T>(roi_width) / static_cast<T>(pooled_width);
-
- // We use roi_bin_grid to sample the grid and mimic integral
- int roi_bin_grid_h = (sampling_ratio > 0)
- ? sampling_ratio
- : ceil(roi_height / pooled_height); // e.g., = 2
- int roi_bin_grid_w =
- (sampling_ratio > 0) ? sampling_ratio : ceil(roi_width / pooled_width);
-
- // We do average (integral) pooling inside a bin
- const T count = std::max(roi_bin_grid_h * roi_bin_grid_w, 1); // e.g. = 4
-
- // we want to precalculate indices and weights shared by all channels,
- // this is the key point of optimization
- std::vector<PreCalc<T>> pre_calc(
- roi_bin_grid_h * roi_bin_grid_w * pooled_width * pooled_height);
-
- // roi_start_h and roi_start_w are computed wrt the center of RoI (x, y).
- // Appropriate translation needs to be applied after.
- T roi_start_h = -roi_height / 2.0;
- T roi_start_w = -roi_width / 2.0;
-
- pre_calc_for_bilinear_interpolate(
- height,
- width,
- pooled_height,
- pooled_width,
- roi_bin_grid_h,
- roi_bin_grid_w,
- roi_start_h,
- roi_start_w,
- bin_size_h,
- bin_size_w,
- roi_bin_grid_h,
- roi_bin_grid_w,
- roi_center_h,
- roi_center_w,
- cos_theta,
- sin_theta,
- pre_calc);
-
- for (int c = 0; c < channels; c++) {
- int index_n_c = index_n + c * pooled_width * pooled_height;
- const T* offset_input =
- input + (roi_batch_ind * channels + c) * height * width;
- int pre_calc_index = 0;
-
- for (int ph = 0; ph < pooled_height; ph++) {
- for (int pw = 0; pw < pooled_width; pw++) {
- int index = index_n_c + ph * pooled_width + pw;
-
- T output_val = 0.;
- for (int iy = 0; iy < roi_bin_grid_h; iy++) {
- for (int ix = 0; ix < roi_bin_grid_w; ix++) {
- PreCalc<T> pc = pre_calc[pre_calc_index];
- output_val += pc.w1 * offset_input[pc.pos1] +
- pc.w2 * offset_input[pc.pos2] +
- pc.w3 * offset_input[pc.pos3] + pc.w4 * offset_input[pc.pos4];
-
- pre_calc_index += 1;
- }
- }
- output_val /= count;
-
- output[index] = output_val;
- } // for pw
- } // for ph
- } // for c
- } // for n
-}
-
-template <typename T>
-void ROIAlignRotatedBackward(
- const int nthreads,
- // may not be contiguous. should index using n_stride, etc
- const T* grad_output,
- const T& spatial_scale,
- const int channels,
- const int height,
- const int width,
- const int pooled_height,
- const int pooled_width,
- const int sampling_ratio,
- T* grad_input,
- const T* rois,
- const int n_stride,
- const int c_stride,
- const int h_stride,
- const int w_stride) {
- for (int index = 0; index < nthreads; index++) {
- // (n, c, ph, pw) is an element in the pooled output
- int pw = index % pooled_width;
- int ph = (index / pooled_width) % pooled_height;
- int c = (index / pooled_width / pooled_height) % channels;
- int n = index / pooled_width / pooled_height / channels;
-
- const T* current_roi = rois + n * 6;
- int roi_batch_ind = current_roi[0];
-
- // Do not use rounding; this implementation detail is critical
- // ROIAlignRotated supports align == true, i.e., continuous coordinate
- // by default, thus the 0.5 offset
- T offset = (T)0.5;
- T roi_center_w = current_roi[1] * spatial_scale - offset;
- T roi_center_h = current_roi[2] * spatial_scale - offset;
- T roi_width = current_roi[3] * spatial_scale;
- T roi_height = current_roi[4] * spatial_scale;
- T theta = current_roi[5] * M_PI / 180.0;
- T cos_theta = cos(theta);
- T sin_theta = sin(theta);
-
- AT_ASSERTM(
- roi_width >= 0 && roi_height >= 0,
- "ROIs in ROIAlignRotated do not have non-negative size!");
-
- T bin_size_h = static_cast<T>(roi_height) / static_cast<T>(pooled_height);
- T bin_size_w = static_cast<T>(roi_width) / static_cast<T>(pooled_width);
-
- T* offset_grad_input =
- grad_input + ((roi_batch_ind * channels + c) * height * width);
-
- int output_offset = n * n_stride + c * c_stride;
- const T* offset_grad_output = grad_output + output_offset;
- const T grad_output_this_bin =
- offset_grad_output[ph * h_stride + pw * w_stride];
-
- // We use roi_bin_grid to sample the grid and mimic integral
- int roi_bin_grid_h = (sampling_ratio > 0)
- ? sampling_ratio
- : ceil(roi_height / pooled_height); // e.g., = 2
- int roi_bin_grid_w =
- (sampling_ratio > 0) ? sampling_ratio : ceil(roi_width / pooled_width);
-
- // roi_start_h and roi_start_w are computed wrt the center of RoI (x, y).
- // Appropriate translation needs to be applied after.
- T roi_start_h = -roi_height / 2.0;
- T roi_start_w = -roi_width / 2.0;
-
- // We do average (integral) pooling inside a bin
- const T count = roi_bin_grid_h * roi_bin_grid_w; // e.g. = 4
-
- for (int iy = 0; iy < roi_bin_grid_h; iy++) {
- const T yy = roi_start_h + ph * bin_size_h +
- static_cast<T>(iy + .5f) * bin_size_h /
- static_cast<T>(roi_bin_grid_h); // e.g., 0.5, 1.5
- for (int ix = 0; ix < roi_bin_grid_w; ix++) {
- const T xx = roi_start_w + pw * bin_size_w +
- static_cast<T>(ix + .5f) * bin_size_w /
- static_cast<T>(roi_bin_grid_w);
-
- // Rotate by theta around the center and translate
- T y = yy * cos_theta - xx * sin_theta + roi_center_h;
- T x = yy * sin_theta + xx * cos_theta + roi_center_w;
-
- T w1, w2, w3, w4;
- int x_low, x_high, y_low, y_high;
-
- bilinear_interpolate_gradient(
- height, width, y, x, w1, w2, w3, w4, x_low, x_high, y_low, y_high);
-
- T g1 = grad_output_this_bin * w1 / count;
- T g2 = grad_output_this_bin * w2 / count;
- T g3 = grad_output_this_bin * w3 / count;
- T g4 = grad_output_this_bin * w4 / count;
-
- if (x_low >= 0 && x_high >= 0 && y_low >= 0 && y_high >= 0) {
- // atomic add is not needed for now since it is single threaded
- add(offset_grad_input + y_low * width + x_low, static_cast<T>(g1));
- add(offset_grad_input + y_low * width + x_high, static_cast<T>(g2));
- add(offset_grad_input + y_high * width + x_low, static_cast<T>(g3));
- add(offset_grad_input + y_high * width + x_high, static_cast<T>(g4));
- } // if
- } // ix
- } // iy
- } // for
-} // ROIAlignRotatedBackward
-
-at::Tensor ROIAlignRotated_forward_cpu(
- const at::Tensor& input,
- const at::Tensor& rois,
- const float spatial_scale,
- const int pooled_height,
- const int pooled_width,
- const int sampling_ratio) {
- AT_ASSERTM(input.device().is_cpu(), "input must be a CPU tensor");
- AT_ASSERTM(rois.device().is_cpu(), "rois must be a CPU tensor");
-
- at::TensorArg input_t{input, "input", 1}, rois_t{rois, "rois", 2};
-
- at::CheckedFrom c = "ROIAlign_forward_cpu";
- at::checkAllSameType(c, {input_t, rois_t});
-
- auto num_rois = rois.size(0);
- auto channels = input.size(1);
- auto height = input.size(2);
- auto width = input.size(3);
-
- at::Tensor output = at::zeros(
- {num_rois, channels, pooled_height, pooled_width}, input.options());
-
- auto output_size = num_rois * pooled_height * pooled_width * channels;
-
- if (output.numel() == 0) {
- return output;
- }
-
- auto input_ = input.contiguous(), rois_ = rois.contiguous();
- AT_DISPATCH_FLOATING_TYPES_AND_HALF(
- input.scalar_type(), "ROIAlignRotated_forward", [&] {
- ROIAlignRotatedForward<scalar_t>(
- output_size,
- input_.data_ptr<scalar_t>(),
- spatial_scale,
- channels,
- height,
- width,
- pooled_height,
- pooled_width,
- sampling_ratio,
- rois_.data_ptr<scalar_t>(),
- output.data_ptr<scalar_t>());
- });
- return output;
-}
-
-at::Tensor ROIAlignRotated_backward_cpu(
- const at::Tensor& grad,
- const at::Tensor& rois,
- const float spatial_scale,
- const int pooled_height,
- const int pooled_width,
- const int batch_size,
- const int channels,
- const int height,
- const int width,
- const int sampling_ratio) {
- AT_ASSERTM(grad.device().is_cpu(), "grad must be a CPU tensor");
- AT_ASSERTM(rois.device().is_cpu(), "rois must be a CPU tensor");
-
- at::TensorArg grad_t{grad, "grad", 1}, rois_t{rois, "rois", 2};
-
- at::CheckedFrom c = "ROIAlignRotated_backward_cpu";
- at::checkAllSameType(c, {grad_t, rois_t});
-
- at::Tensor grad_input =
- at::zeros({batch_size, channels, height, width}, grad.options());
-
- // handle possibly empty gradients
- if (grad.numel() == 0) {
- return grad_input;
- }
-
- // get stride values to ensure indexing into gradients is correct.
- int n_stride = grad.stride(0);
- int c_stride = grad.stride(1);
- int h_stride = grad.stride(2);
- int w_stride = grad.stride(3);
-
- auto rois_ = rois.contiguous();
- AT_DISPATCH_FLOATING_TYPES_AND_HALF(
- grad.scalar_type(), "ROIAlignRotated_forward", [&] {
- ROIAlignRotatedBackward<scalar_t>(
- grad.numel(),
- grad.data_ptr<scalar_t>(),
- spatial_scale,
- channels,
- height,
- width,
- pooled_height,
- pooled_width,
- sampling_ratio,
- grad_input.data_ptr<scalar_t>(),
- rois_.data_ptr<scalar_t>(),
- n_stride,
- c_stride,
- h_stride,
- w_stride);
- });
- return grad_input;
-}
-
-} // namespace detectron2
diff --git a/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/util/slio.py b/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/util/slio.py
deleted file mode 100644
index 72c1f0f7b82cdc931d381feef64fe15815ba657e..0000000000000000000000000000000000000000
--- a/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/util/slio.py
+++ /dev/null
@@ -1,177 +0,0 @@
-# ==========================================================
-# Modified from mmcv
-# ==========================================================
-
-import json
-import pickle
-from abc import ABCMeta, abstractmethod
-from pathlib import Path
-
-import yaml
-
-try:
- from yaml import CLoader as Loader, CDumper as Dumper
-except ImportError:
- from yaml import Loader, Dumper
-
-
-# ===========================
-# Register handler
-# ===========================
-
-
-class BaseFileHandler(metaclass=ABCMeta):
- @abstractmethod
- def load_from_fileobj(self, file, **kwargs):
- pass
-
- @abstractmethod
- def dump_to_fileobj(self, obj, file, **kwargs):
- pass
-
- @abstractmethod
- def dump_to_str(self, obj, **kwargs):
- pass
-
- def load_from_path(self, filepath, mode="r", **kwargs):
- with open(filepath, mode) as f:
- return self.load_from_fileobj(f, **kwargs)
-
- def dump_to_path(self, obj, filepath, mode="w", **kwargs):
- with open(filepath, mode) as f:
- self.dump_to_fileobj(obj, f, **kwargs)
-
-
-class JsonHandler(BaseFileHandler):
- def load_from_fileobj(self, file):
- return json.load(file)
-
- def dump_to_fileobj(self, obj, file, **kwargs):
- json.dump(obj, file, **kwargs)
-
- def dump_to_str(self, obj, **kwargs):
- return json.dumps(obj, **kwargs)
-
-
-class PickleHandler(BaseFileHandler):
- def load_from_fileobj(self, file, **kwargs):
- return pickle.load(file, **kwargs)
-
- def load_from_path(self, filepath, **kwargs):
- return super(PickleHandler, self).load_from_path(filepath, mode="rb", **kwargs)
-
- def dump_to_str(self, obj, **kwargs):
- kwargs.setdefault("protocol", 2)
- return pickle.dumps(obj, **kwargs)
-
- def dump_to_fileobj(self, obj, file, **kwargs):
- kwargs.setdefault("protocol", 2)
- pickle.dump(obj, file, **kwargs)
-
- def dump_to_path(self, obj, filepath, **kwargs):
- super(PickleHandler, self).dump_to_path(obj, filepath, mode="wb", **kwargs)
-
-
-class YamlHandler(BaseFileHandler):
- def load_from_fileobj(self, file, **kwargs):
- kwargs.setdefault("Loader", Loader)
- return yaml.load(file, **kwargs)
-
- def dump_to_fileobj(self, obj, file, **kwargs):
- kwargs.setdefault("Dumper", Dumper)
- yaml.dump(obj, file, **kwargs)
-
- def dump_to_str(self, obj, **kwargs):
- kwargs.setdefault("Dumper", Dumper)
- return yaml.dump(obj, **kwargs)
-
-
-file_handlers = {
- "json": JsonHandler(),
- "yaml": YamlHandler(),
- "yml": YamlHandler(),
- "pickle": PickleHandler(),
- "pkl": PickleHandler(),
-}
-
-# ===========================
-# load and dump
-# ===========================
-
-
-def is_str(x):
- """Whether the input is a string instance.
-
- Note: This method is deprecated since python 2 is no longer supported.
- """
- return isinstance(x, str)
-
-
-def slload(file, file_format=None, **kwargs):
- """Load data from json/yaml/pickle files.
-
- This method provides a unified api for loading data from serialized files.
-
- Args:
- file (str or :obj:`Path` or file-like object): Filename or a file-like
- object.
- file_format (str, optional): If not specified, the file format will be
- inferred from the file extension, otherwise use the specified one.
- Currently supported formats include "json", "yaml/yml" and
- "pickle/pkl".
-
- Returns:
- The content from the file.
- """
- if isinstance(file, Path):
- file = str(file)
- if file_format is None and is_str(file):
- file_format = file.split(".")[-1]
- if file_format not in file_handlers:
- raise TypeError(f"Unsupported format: {file_format}")
-
- handler = file_handlers[file_format]
- if is_str(file):
- obj = handler.load_from_path(file, **kwargs)
- elif hasattr(file, "read"):
- obj = handler.load_from_fileobj(file, **kwargs)
- else:
- raise TypeError('"file" must be a filepath str or a file-object')
- return obj
-
-
-def sldump(obj, file=None, file_format=None, **kwargs):
- """Dump data to json/yaml/pickle strings or files.
-
- This method provides a unified api for dumping data as strings or to files,
- and also supports custom arguments for each file format.
-
- Args:
- obj (any): The python object to be dumped.
- file (str or :obj:`Path` or file-like object, optional): If not
- specified, then the object is dump to a str, otherwise to a file
- specified by the filename or file-like object.
- file_format (str, optional): Same as :func:`load`.
-
- Returns:
- bool: True for success, False otherwise.
- """
- if isinstance(file, Path):
- file = str(file)
- if file_format is None:
- if is_str(file):
- file_format = file.split(".")[-1]
- elif file is None:
- raise ValueError("file_format must be specified since file is None")
- if file_format not in file_handlers:
- raise TypeError(f"Unsupported format: {file_format}")
-
- handler = file_handlers[file_format]
- if file is None:
- return handler.dump_to_str(obj, **kwargs)
- elif is_str(file):
- handler.dump_to_path(obj, file, **kwargs)
- elif hasattr(file, "write"):
- handler.dump_to_fileobj(obj, file, **kwargs)
- else:
- raise TypeError('"file" must be a filename str or a file-object')
diff --git a/spaces/Caoyunkang/Segment-Any-Anomaly/SAM/segment_anything/__init__.py b/spaces/Caoyunkang/Segment-Any-Anomaly/SAM/segment_anything/__init__.py
deleted file mode 100644
index 34383d83f5e76bc801f31b20e5651e383be348b6..0000000000000000000000000000000000000000
--- a/spaces/Caoyunkang/Segment-Any-Anomaly/SAM/segment_anything/__init__.py
+++ /dev/null
@@ -1,15 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from .build_sam import (
- build_sam,
- build_sam_vit_h,
- build_sam_vit_l,
- build_sam_vit_b,
- sam_model_registry,
-)
-from .predictor import SamPredictor
-from .automatic_mask_generator import SamAutomaticMaskGenerator
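These exports are the public surface of the segment_anything package: a registry of model builders plus the two inference front-ends (interactive SamPredictor and whole-image SamAutomaticMaskGenerator). A hedged sketch of how they are typically wired together; the checkpoint filename, the placeholder image, and the click coordinates are assumptions, not part of this repo:

    import numpy as np
    from segment_anything import sam_model_registry, SamPredictor

    sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")  # assumed local weights file
    predictor = SamPredictor(sam)
    image = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder HxWx3 uint8 image
    predictor.set_image(image)
    masks, scores, logits = predictor.predict(
        point_coords=np.array([[320, 240]]),  # one foreground click (x, y)
        point_labels=np.array([1]),
    )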
diff --git a/spaces/CassBunny/anything-v3.0/app.py b/spaces/CassBunny/anything-v3.0/app.py
deleted file mode 100644
index 62c8768d6f448b1a0387eaa5d551f3743ebd9462..0000000000000000000000000000000000000000
--- a/spaces/CassBunny/anything-v3.0/app.py
+++ /dev/null
@@ -1,276 +0,0 @@
-from diffusers import AutoencoderKL, UNet2DConditionModel, StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler
-import gradio as gr
-import torch
-from PIL import Image
-import utils
-import datetime
-import time
-import psutil
-
-start_time = time.time()
-is_colab = utils.is_google_colab()
-
-class Model:
- def __init__(self, name, path="", prefix=""):
- self.name = name
- self.path = path
- self.prefix = prefix
- self.pipe_t2i = None
- self.pipe_i2i = None
-
-models = [
- Model("anything v3", "Linaqruf/anything-v3.0", "anything v3 style"),
- ]
- # Model("Spider-Verse", "nitrosocke/spider-verse-diffusion", "spiderverse style "),
- # Model("Balloon Art", "Fictiverse/Stable_Diffusion_BalloonArt_Model", "BalloonArt "),
- # Model("Elden Ring", "nitrosocke/elden-ring-diffusion", "elden ring style "),
- # Model("Tron Legacy", "dallinmackay/Tron-Legacy-diffusion", "trnlgcy ")
- #Model("Pokémon", "lambdalabs/sd-pokemon-diffusers", ""),
- #Model("Pony Diffusion", "AstraliteHeart/pony-diffusion", ""),
- #Model("Robo Diffusion", "nousr/robo-diffusion", ""),
-
-scheduler = DPMSolverMultistepScheduler(
- beta_start=0.00085,
- beta_end=0.012,
- beta_schedule="scaled_linear",
- num_train_timesteps=1000,
- trained_betas=None,
- predict_epsilon=True,
- thresholding=False,
- algorithm_type="dpmsolver++",
- solver_type="midpoint",
- lower_order_final=True,
-)
-
-custom_model = None
-if is_colab:
- models.insert(0, Model("Custom model"))
- custom_model = models[0]
-
-last_mode = "txt2img"
-current_model = models[1] if is_colab else models[0]
-current_model_path = current_model.path
-
-if is_colab:
- pipe = StableDiffusionPipeline.from_pretrained(current_model.path, torch_dtype=torch.float16, scheduler=scheduler, safety_checker=lambda images, clip_input: (images, False))
-
-else: # download all models
- print(f"{datetime.datetime.now()} Downloading vae...")
- vae = AutoencoderKL.from_pretrained(current_model.path, subfolder="vae", torch_dtype=torch.float16)
- for model in models:
- try:
- print(f"{datetime.datetime.now()} Downloading {model.name} model...")
- unet = UNet2DConditionModel.from_pretrained(model.path, subfolder="unet", torch_dtype=torch.float16)
- model.pipe_t2i = StableDiffusionPipeline.from_pretrained(model.path, unet=unet, vae=vae, torch_dtype=torch.float16, scheduler=scheduler)
- model.pipe_i2i = StableDiffusionImg2ImgPipeline.from_pretrained(model.path, unet=unet, vae=vae, torch_dtype=torch.float16, scheduler=scheduler)
- except Exception as e:
- print(f"{datetime.datetime.now()} Failed to load model " + model.name + ": " + str(e))
- models.remove(model)
- pipe = models[0].pipe_t2i
-
-if torch.cuda.is_available():
- pipe = pipe.to("cuda")
-
-device = "GPU 🔥" if torch.cuda.is_available() else "CPU 🥶"
-
-def error_str(error, title="Error"):
- return f"""#### {title}
- {error}""" if error else ""
-
-def custom_model_changed(path):
- models[0].path = path
- global current_model
- current_model = models[0]
-
-def on_model_change(model_name):
-
- prefix = "Enter prompt. \"" + next((m.prefix for m in models if m.name == model_name), None) + "\" is prefixed automatically" if model_name != models[0].name else "Don't forget to use the custom model prefix in the prompt!"
-
- return gr.update(visible = model_name == models[0].name), gr.update(placeholder=prefix)
-
-def inference(model_name, prompt, guidance, steps, width=512, height=512, seed=0, img=None, strength=0.5, neg_prompt=""):
-
- print(psutil.virtual_memory()) # print memory usage
-
- global current_model
- for model in models:
- if model.name == model_name:
- current_model = model
- model_path = current_model.path
-
- generator = torch.Generator('cuda').manual_seed(seed) if seed != 0 else None
-
- try:
- if img is not None:
- return img_to_img(model_path, prompt, neg_prompt, img, strength, guidance, steps, width, height, generator), None
- else:
- return txt_to_img(model_path, prompt, neg_prompt, guidance, steps, width, height, generator), None
- except Exception as e:
- return None, error_str(e)
-
-def txt_to_img(model_path, prompt, neg_prompt, guidance, steps, width, height, generator):
-
- print(f"{datetime.datetime.now()} txt_to_img, model: {current_model.name}")
-
- global last_mode
- global pipe
- global current_model_path
- if model_path != current_model_path or last_mode != "txt2img":
- current_model_path = model_path
-
- if is_colab or current_model == custom_model:
- pipe = StableDiffusionPipeline.from_pretrained(current_model_path, torch_dtype=torch.float16, scheduler=scheduler, safety_checker=lambda images, clip_input: (images, False))
- else:
- pipe = pipe.to("cpu")
- pipe = current_model.pipe_t2i
-
- if torch.cuda.is_available():
- pipe = pipe.to("cuda")
- last_mode = "txt2img"
-
- prompt = current_model.prefix + prompt
- result = pipe(
- prompt,
- negative_prompt = neg_prompt,
- # num_images_per_prompt=n_images,
- num_inference_steps = int(steps),
- guidance_scale = guidance,
- width = width,
- height = height,
- generator = generator)
-
- return replace_nsfw_images(result)
-
-def img_to_img(model_path, prompt, neg_prompt, img, strength, guidance, steps, width, height, generator):
-
- print(f"{datetime.datetime.now()} img_to_img, model: {model_path}")
-
- global last_mode
- global pipe
- global current_model_path
- if model_path != current_model_path or last_mode != "img2img":
- current_model_path = model_path
-
- if is_colab or current_model == custom_model:
- pipe = StableDiffusionImg2ImgPipeline.from_pretrained(current_model_path, torch_dtype=torch.float16, scheduler=scheduler, safety_checker=lambda images, clip_input: (images, False))
- else:
- pipe = pipe.to("cpu")
- pipe = current_model.pipe_i2i
-
- if torch.cuda.is_available():
- pipe = pipe.to("cuda")
- last_mode = "img2img"
-
- prompt = current_model.prefix + prompt
- ratio = min(height / img.height, width / img.width)
- img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS)
- result = pipe(
- prompt,
- negative_prompt = neg_prompt,
- # num_images_per_prompt=n_images,
- init_image = img,
- num_inference_steps = int(steps),
- strength = strength,
- guidance_scale = guidance,
- width = width,
- height = height,
- generator = generator)
-
- return replace_nsfw_images(result)
-
-def replace_nsfw_images(results):
-
- if is_colab:
- return results.images[0]
-
- for i in range(len(results.images)):
- if results.nsfw_content_detected[i]:
- results.images[i] = Image.open("nsfw.png")
- return results.images[0]
-
-css = """.finetuned-diffusion-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.finetuned-diffusion-div div h1{font-weight:900;margin-bottom:7px}.finetuned-diffusion-div p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem}
-"""
-with gr.Blocks(css=css) as demo:
- gr.HTML(
- f"""
-            <div class="finetuned-diffusion-div">
-              <div>
-                <h1>Anything V3</h1>
-              </div>
-              <p>
-                Demo for <a href="https://huggingface.co/Linaqruf/anything-v3.0">Anything V3</a>
-              </p>
-              <p>You can skip the queue by duplicating this space:
-                <a href="https://huggingface.co/spaces/CassBunny/anything-v3.0?duplicate=true">Duplicate Space</a>
-              </p>
-            </div>
- """
- )
- with gr.Row():
-
- with gr.Column(scale=55):
- with gr.Group():
- model_name = gr.Dropdown(label="Model", choices=[m.name for m in models], value=current_model.name)
- with gr.Box(visible=False) as custom_model_group:
- custom_model_path = gr.Textbox(label="Custom model path", placeholder="Path to model, e.g. nitrosocke/Arcane-Diffusion", interactive=True)
- gr.HTML("""
- <div>Custom models have to be downloaded first, so give it some time.</div>
- """)
-
-print(f"Space built in {time.time() - start_time:.2f} seconds")
-
-if not is_colab:
- demo.queue(concurrency_count=1)
-demo.launch(debug=is_colab, share=is_colab)
\ No newline at end of file
diff --git a/spaces/ChandraMohanNayal/AutoGPT/data_ingestion.py b/spaces/ChandraMohanNayal/AutoGPT/data_ingestion.py
deleted file mode 100644
index b89a33dafd15c2e7bded0445a741a4a1c47ed417..0000000000000000000000000000000000000000
--- a/spaces/ChandraMohanNayal/AutoGPT/data_ingestion.py
+++ /dev/null
@@ -1,96 +0,0 @@
-import argparse
-import logging
-
-from autogpt.commands.file_operations import ingest_file, search_files
-from autogpt.config import Config
-from autogpt.memory import get_memory
-
-cfg = Config()
-
-
-def configure_logging():
- logging.basicConfig(
- filename="log-ingestion.txt",
- filemode="a",
- format="%(asctime)s,%(msecs)d %(name)s %(levelname)s %(message)s",
- datefmt="%H:%M:%S",
- level=logging.DEBUG,
- )
- return logging.getLogger("AutoGPT-Ingestion")
-
-
-def ingest_directory(directory, memory, args):
- """
- Ingest all files in a directory by calling the ingest_file function for each file.
-
- :param directory: The directory containing the files to ingest
- :param memory: An object with an add() method to store the chunks in memory
- """
- try:
- files = search_files(directory)
- for file in files:
- ingest_file(file, memory, args.max_length, args.overlap)
- except Exception as e:
- print(f"Error while ingesting directory '{directory}': {str(e)}")
-
-
-def main() -> None:
- logger = configure_logging()
-
- parser = argparse.ArgumentParser(
- description="Ingest a file or a directory with multiple files into memory. "
- "Make sure to set your .env before running this script."
- )
- group = parser.add_mutually_exclusive_group(required=True)
- group.add_argument("--file", type=str, help="The file to ingest.")
- group.add_argument(
- "--dir", type=str, help="The directory containing the files to ingest."
- )
- parser.add_argument(
- "--init",
- action="store_true",
- help="Init the memory and wipe its content (default: False)",
- default=False,
- )
- parser.add_argument(
- "--overlap",
- type=int,
- help="The overlap size between chunks when ingesting files (default: 200)",
- default=200,
- )
- parser.add_argument(
- "--max_length",
- type=int,
- help="The max_length of each chunk when ingesting files (default: 4000)",
- default=4000,
- )
-
- args = parser.parse_args()
-
- # Initialize memory
- memory = get_memory(cfg, init=args.init)
- print("Using memory of type: " + memory.__class__.__name__)
-
- if args.file:
- try:
- ingest_file(args.file, memory, args.max_length, args.overlap)
- print(f"File '{args.file}' ingested successfully.")
- except Exception as e:
- logger.error(f"Error while ingesting file '{args.file}': {str(e)}")
- print(f"Error while ingesting file '{args.file}': {str(e)}")
- elif args.dir:
- try:
- ingest_directory(args.dir, memory, args)
- print(f"Directory '{args.dir}' ingested successfully.")
- except Exception as e:
- logger.error(f"Error while ingesting directory '{args.dir}': {str(e)}")
- print(f"Error while ingesting directory '{args.dir}': {str(e)}")
- else:
- print(
- "Please provide either a file path (--file) or a directory name (--dir)"
- " inside the auto_gpt_workspace directory as input."
- )
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/ChandraMohanNayal/AutoGPT/run_continuous.sh b/spaces/ChandraMohanNayal/AutoGPT/run_continuous.sh
deleted file mode 100644
index 1f4436c88503172c0578b15a8447ed8268502578..0000000000000000000000000000000000000000
--- a/spaces/ChandraMohanNayal/AutoGPT/run_continuous.sh
+++ /dev/null
@@ -1,3 +0,0 @@
-#!/bin/bash
-
-./run.sh --continuous "$@"
diff --git a/spaces/Cropinky/esrgan/realesrgan/archs/srvgg_arch.py b/spaces/Cropinky/esrgan/realesrgan/archs/srvgg_arch.py
deleted file mode 100644
index 39460965c9c5ee9cd6eb41c50d33574cb8ba6e50..0000000000000000000000000000000000000000
--- a/spaces/Cropinky/esrgan/realesrgan/archs/srvgg_arch.py
+++ /dev/null
@@ -1,69 +0,0 @@
-from basicsr.utils.registry import ARCH_REGISTRY
-from torch import nn as nn
-from torch.nn import functional as F
-
-
-@ARCH_REGISTRY.register()
-class SRVGGNetCompact(nn.Module):
- """A compact VGG-style network structure for super-resolution.
-
- It is a compact network structure, which performs upsampling in the last layer and no convolution is
- conducted on the HR feature space.
-
- Args:
- num_in_ch (int): Channel number of inputs. Default: 3.
- num_out_ch (int): Channel number of outputs. Default: 3.
- num_feat (int): Channel number of intermediate features. Default: 64.
- num_conv (int): Number of convolution layers in the body network. Default: 16.
- upscale (int): Upsampling factor. Default: 4.
- act_type (str): Activation type, options: 'relu', 'prelu', 'leakyrelu'. Default: prelu.
- """
-
- def __init__(self, num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16, upscale=4, act_type='prelu'):
- super(SRVGGNetCompact, self).__init__()
- self.num_in_ch = num_in_ch
- self.num_out_ch = num_out_ch
- self.num_feat = num_feat
- self.num_conv = num_conv
- self.upscale = upscale
- self.act_type = act_type
-
- self.body = nn.ModuleList()
- # the first conv
- self.body.append(nn.Conv2d(num_in_ch, num_feat, 3, 1, 1))
- # the first activation
- if act_type == 'relu':
- activation = nn.ReLU(inplace=True)
- elif act_type == 'prelu':
- activation = nn.PReLU(num_parameters=num_feat)
- elif act_type == 'leakyrelu':
- activation = nn.LeakyReLU(negative_slope=0.1, inplace=True)
- self.body.append(activation)
-
- # the body structure
- for _ in range(num_conv):
- self.body.append(nn.Conv2d(num_feat, num_feat, 3, 1, 1))
- # activation
- if act_type == 'relu':
- activation = nn.ReLU(inplace=True)
- elif act_type == 'prelu':
- activation = nn.PReLU(num_parameters=num_feat)
- elif act_type == 'leakyrelu':
- activation = nn.LeakyReLU(negative_slope=0.1, inplace=True)
- self.body.append(activation)
-
- # the last conv
- self.body.append(nn.Conv2d(num_feat, num_out_ch * upscale * upscale, 3, 1, 1))
- # upsample
- self.upsampler = nn.PixelShuffle(upscale)
-
- def forward(self, x):
- out = x
- for i in range(0, len(self.body)):
- out = self.body[i](out)
-
- out = self.upsampler(out)
- # add the nearest upsampled image, so that the network learns the residual
- base = F.interpolate(x, scale_factor=self.upscale, mode='nearest')
- out += base
- return out
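As the docstring notes, all convolutions run at the input resolution and the upscaling happens only in the final PixelShuffle, with a nearest-neighbour upsampled copy of the input added back as a residual. A small shape check, assuming basicsr (for ARCH_REGISTRY) and this realesrgan package are installed; tensor sizes are illustrative:

    import torch
    from realesrgan.archs.srvgg_arch import SRVGGNetCompact

    model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16, upscale=4)
    x = torch.rand(1, 3, 64, 64)
    with torch.no_grad():
        y = model(x)
    print(y.shape)  # torch.Size([1, 3, 256, 256]) -- 4x on each spatial dimension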
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/reverseContourPen.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/reverseContourPen.py
deleted file mode 100644
index a3756ab17af131329e88c7136a230a32e3e7a8d5..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/reverseContourPen.py
+++ /dev/null
@@ -1,96 +0,0 @@
-from fontTools.misc.arrayTools import pairwise
-from fontTools.pens.filterPen import ContourFilterPen
-
-
-__all__ = ["reversedContour", "ReverseContourPen"]
-
-
-class ReverseContourPen(ContourFilterPen):
- """Filter pen that passes outline data to another pen, but reversing
- the winding direction of all contours. Components are simply passed
- through unchanged.
-
- Closed contours are reversed in such a way that the first point remains
- the first point.
- """
-
- def __init__(self, outPen, outputImpliedClosingLine=False):
- super().__init__(outPen)
- self.outputImpliedClosingLine = outputImpliedClosingLine
-
- def filterContour(self, contour):
- return reversedContour(contour, self.outputImpliedClosingLine)
-
-
-def reversedContour(contour, outputImpliedClosingLine=False):
- """Generator that takes a list of pen's (operator, operands) tuples,
- and yields them with the winding direction reversed.
- """
- if not contour:
- return # nothing to do, stop iteration
-
- # valid contours must have at least a starting and ending command,
- # can't have one without the other
- assert len(contour) > 1, "invalid contour"
-
- # the type of the last command determines if the contour is closed
- contourType = contour.pop()[0]
- assert contourType in ("endPath", "closePath")
- closed = contourType == "closePath"
-
- firstType, firstPts = contour.pop(0)
- assert firstType in ("moveTo", "qCurveTo"), (
- "invalid initial segment type: %r" % firstType
- )
- firstOnCurve = firstPts[-1]
- if firstType == "qCurveTo":
- # special case for TrueType paths containing only off-curve points
- assert firstOnCurve is None, "off-curve only paths must end with 'None'"
- assert not contour, "only one qCurveTo allowed per off-curve path"
- firstPts = (firstPts[0],) + tuple(reversed(firstPts[1:-1])) + (None,)
-
- if not contour:
- # contour contains only one segment, nothing to reverse
- if firstType == "moveTo":
- closed = False # single-point paths can't be closed
- else:
- closed = True # off-curve paths are closed by definition
- yield firstType, firstPts
- else:
- lastType, lastPts = contour[-1]
- lastOnCurve = lastPts[-1]
- if closed:
- # for closed paths, we keep the starting point
- yield firstType, firstPts
- if firstOnCurve != lastOnCurve:
- # emit an implied line between the last and first points
- yield "lineTo", (lastOnCurve,)
- contour[-1] = (lastType, tuple(lastPts[:-1]) + (firstOnCurve,))
-
- if len(contour) > 1:
- secondType, secondPts = contour[0]
- else:
- # contour has only two points, the second and last are the same
- secondType, secondPts = lastType, lastPts
-
- if not outputImpliedClosingLine:
- # if a lineTo follows the initial moveTo, after reversing it
- # will be implied by the closePath, so we don't emit one;
- # unless the lineTo and moveTo overlap, in which case we keep the
- # duplicate points
- if secondType == "lineTo" and firstPts != secondPts:
- del contour[0]
- if contour:
- contour[-1] = (lastType, tuple(lastPts[:-1]) + secondPts)
- else:
- # for open paths, the last point will become the first
- yield firstType, (lastOnCurve,)
- contour[-1] = (lastType, tuple(lastPts[:-1]) + (firstOnCurve,))
-
- # we iterate over all segment pairs in reverse order, and yield
- # each one with the off-curve points reversed (if any), and
- # with the on-curve point of the following segment
- for (curType, curPts), (_, nextPts) in pairwise(contour, reverse=True):
- yield curType, tuple(reversed(curPts[:-1])) + (nextPts[-1],)
-
- yield "closePath" if closed else "endPath", ()
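Because this is a filter pen, you draw into it as usual and it forwards the reversed contour to the wrapped pen; the docstring's guarantee is that a closed contour keeps its starting point. A short sketch using fontTools' RecordingPen to capture the output; the triangle coordinates are illustrative and the commented result is the expected behaviour, not a quoted log:

    from fontTools.pens.recordingPen import RecordingPen
    from fontTools.pens.reverseContourPen import ReverseContourPen

    rec = RecordingPen()
    pen = ReverseContourPen(rec)
    pen.moveTo((0, 0))
    pen.lineTo((0, 100))
    pen.lineTo((100, 100))
    pen.closePath()
    # rec.value now holds the same triangle with opposite winding, still starting
    # at (0, 0): roughly moveTo (0,0), lineTo (100,100), lineTo (0,100), closePath.
    print(rec.value)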
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Image-0fe369ad.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Image-0fe369ad.js
deleted file mode 100644
index f201414aaacde6c8751195516c0c6f150bc5afb1..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Image-0fe369ad.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import{S as h,e as g,s as d,J as n,K as e,p as m,M as i,n as l,A as u}from"./index-1d65707a.js";function f(c){let t,r,s,o;return{c(){t=n("svg"),r=n("rect"),s=n("circle"),o=n("polyline"),e(r,"x","3"),e(r,"y","3"),e(r,"width","18"),e(r,"height","18"),e(r,"rx","2"),e(r,"ry","2"),e(s,"cx","8.5"),e(s,"cy","8.5"),e(s,"r","1.5"),e(o,"points","21 15 16 10 5 21"),e(t,"xmlns","http://www.w3.org/2000/svg"),e(t,"width","100%"),e(t,"height","100%"),e(t,"viewBox","0 0 24 24"),e(t,"fill","none"),e(t,"stroke","currentColor"),e(t,"stroke-width","1.5"),e(t,"stroke-linecap","round"),e(t,"stroke-linejoin","round"),e(t,"class","feather feather-image")},m(a,p){m(a,t,p),i(t,r),i(t,s),i(t,o)},p:l,i:l,o:l,d(a){a&&u(t)}}}class x extends h{constructor(t){super(),g(this,t,null,f,d,{})}}export{x as I};
-//# sourceMappingURL=Image-0fe369ad.js.map
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-085f5795.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-085f5795.js
deleted file mode 100644
index 4f9d7f247f754f5d61a620eb0f78a44760d04085..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-085f5795.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import{S as le,e as re,s as ne,J as G,K as d,p as v,M as Y,n as H,A as C,ak as j,N,T as Q,B as pe,h as L,G as et,O as U,Z as Ze,ar as Ct,z as p,u as $,v as A,y as x,V as St,C as Mt,a7 as we,ai as Et,ao as tt,L as Rt,U as R,Q as F,Y as Dt,a1 as yt,F as te,k as T,o as z,x as B,ap as Ne,aw as Lt,m as me,j as Z,t as K,a9 as jt,ab as Ut,ac as qt,ad as Ft,E as Ht,ae as Nt,q as Wt,r as Ot}from"./index-3370be2a.js";import{f as nt,B as Yt}from"./Button-89624748.js";import{B as Tt}from"./BlockLabel-56db415e.js";import{I as We}from"./Image-93033d87.js";import{C as Xt,i as Jt,U as Pt,W as Qt}from"./StaticImage.svelte_svelte_type_style_lang-e84b963e.js";import{I as Ae}from"./IconButton-abe5ede9.js";import{C as Vt,M as Oe}from"./ModifyUpload-d8fc50ab.js";import{U as Gt}from"./Upload-f29b2460.js";import{u as Zt,S as Kt}from"./ShareButton-39feba51.js";import{E as $t}from"./Empty-585389a4.js";import{D as xt}from"./Download-fdaaf5d4.js";import"./Blocks-f0129fcd.js";import{U as en}from"./UploadText-28892309.js";import{E as ds}from"./Image-8a3c68cc.js";import"./ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js";function tn(t){let e,n,s;return{c(){e=G("svg"),n=G("path"),s=G("path"),d(n,"d","M28.828 3.172a4.094 4.094 0 0 0-5.656 0L4.05 22.292A6.954 6.954 0 0 0 2 27.242V30h2.756a6.952 6.952 0 0 0 4.95-2.05L28.828 8.829a3.999 3.999 0 0 0 0-5.657zM10.91 18.26l2.829 2.829l-2.122 2.121l-2.828-2.828zm-2.619 8.276A4.966 4.966 0 0 1 4.756 28H4v-.759a4.967 4.967 0 0 1 1.464-3.535l1.91-1.91l2.829 2.828zM27.415 7.414l-12.261 12.26l-2.829-2.828l12.262-12.26a2.047 2.047 0 0 1 2.828 0a2 2 0 0 1 0 2.828z"),d(n,"fill","currentColor"),d(s,"d","M6.5 15a3.5 3.5 0 0 1-2.475-5.974l3.5-3.5a1.502 1.502 0 0 0 0-2.121a1.537 1.537 0 0 0-2.121 0L3.415 5.394L2 3.98l1.99-1.988a3.585 3.585 0 0 1 4.95 0a3.504 3.504 0 0 1 0 4.949L5.439 10.44a1.502 1.502 0 0 0 0 2.121a1.537 1.537 0 0 0 2.122 0l4.024-4.024L13 9.95l-4.025 4.024A3.475 3.475 0 0 1 6.5 15z"),d(s,"fill","currentColor"),d(e,"width","100%"),d(e,"height","100%"),d(e,"viewBox","0 0 32 32")},m(r,a){v(r,e,a),Y(e,n),Y(e,s)},p:H,i:H,o:H,d(r){r&&C(e)}}}class nn extends le{constructor(e){super(),re(this,e,null,tn,ne,{})}}function sn(t){let e,n,s,r,a,i,u;return{c(){e=G("svg"),n=G("circle"),s=G("circle"),r=G("circle"),a=G("circle"),i=G("circle"),u=G("path"),d(n,"cx","10"),d(n,"cy","12"),d(n,"r","2"),d(n,"fill","currentColor"),d(s,"cx","16"),d(s,"cy","9"),d(s,"r","2"),d(s,"fill","currentColor"),d(r,"cx","22"),d(r,"cy","12"),d(r,"r","2"),d(r,"fill","currentColor"),d(a,"cx","23"),d(a,"cy","18"),d(a,"r","2"),d(a,"fill","currentColor"),d(i,"cx","19"),d(i,"cy","23"),d(i,"r","2"),d(i,"fill","currentColor"),d(u,"fill","currentColor"),d(u,"d","M16.54 2A14 14 0 0 0 2 16a4.82 4.82 0 0 0 6.09 4.65l1.12-.31a3 3 0 0 1 3.79 2.9V27a3 3 0 0 0 3 3a14 14 0 0 0 14-14.54A14.05 14.05 0 0 0 16.54 2Zm8.11 22.31A11.93 11.93 0 0 1 16 28a1 1 0 0 1-1-1v-3.76a5 5 0 0 0-5-5a5.07 5.07 0 0 0-1.33.18l-1.12.31A2.82 2.82 0 0 1 4 16A12 12 0 0 1 16.47 4A12.18 12.18 0 0 1 28 15.53a11.89 11.89 0 0 1-3.35 8.79Z"),d(e,"width","100%"),d(e,"height","100%"),d(e,"viewBox","0 0 32 32")},m(l,_){v(l,e,_),Y(e,n),Y(e,s),Y(e,r),Y(e,a),Y(e,i),Y(e,u)},p:H,i:H,o:H,d(l){l&&C(e)}}}class ln extends le{constructor(e){super(),re(this,e,null,sn,ne,{})}}function rn(t){let e,n;return{c(){e=G("svg"),n=G("path"),d(n,"fill","currentColor"),d(n,"d","M7 27h23v2H7zm20.38-16.49l-7.93-7.92a2 2 0 0 0-2.83 0l-14 14a2 2 0 0 0 0 2.83L7.13 24h9.59l10.66-10.66a2 2 0 0 0 0-2.83zM15.89 22H8l-4-4l6.31-6.31l7.93 7.92zm3.76-3.76l-7.92-7.93L18 4l8 
7.93z"),d(e,"xmlns","http://www.w3.org/2000/svg"),d(e,"width","100%"),d(e,"height","100%"),d(e,"viewBox","0 0 32 32")},m(s,r){v(s,e,r),Y(e,n)},p:H,i:H,o:H,d(s){s&&C(e)}}}class an extends le{constructor(e){super(),re(this,e,null,rn,ne,{})}}function un(t){let e,n;return{c(){e=G("svg"),n=G("path"),d(n,"d","M17 3a2.828 2.828 0 1 1 4 4L7.5 20.5 2 22l1.5-5.5L17 3z"),d(e,"xmlns","http://www.w3.org/2000/svg"),d(e,"width","100%"),d(e,"height","100%"),d(e,"viewBox","0 0 24 24"),d(e,"fill","none"),d(e,"stroke","currentColor"),d(e,"stroke-width","1.5"),d(e,"stroke-linecap","round"),d(e,"stroke-linejoin","round"),d(e,"class","feather feather-edit-2")},m(s,r){v(s,e,r),Y(e,n)},p:H,i:H,o:H,d(s){s&&C(e)}}}let st=class extends le{constructor(e){super(),re(this,e,null,un,ne,{})}};const zt=t=>{let e=t.currentTarget;const n=e.getBoundingClientRect(),s=e.naturalWidth/n.width,r=e.naturalHeight/n.height;if(s>r){n.width;const u=e.naturalHeight/s,l=(n.height-u)/2;var a=Math.round((t.clientX-n.left)*s),i=Math.round((t.clientY-n.top-l)*s)}else{const u=e.naturalWidth/r;n.height;const l=(n.width-u)/2;var a=Math.round((t.clientX-n.left-l)*r),i=Math.round((t.clientY-n.top)*r)}return a<0||a>=e.naturalWidth||i<0||i>=e.naturalHeight?null:[a,i]};function on(t){let e,n;return{c(){e=N("img"),Q(e.src,n=t[0])||d(e,"src",n),d(e,"alt","")},m(s,r){v(s,e,r),t[4](e)},p(s,[r]){r&1&&!Q(e.src,n=s[0])&&d(e,"src",n)},i:H,o:H,d(s){s&&C(e),t[4](null)}}}function fn(t,e,n){let{image:s}=e,r;const a=pe();let i;function u(){i.destroy()}function l(){i&&u(),i=new Xt(r,{autoCropArea:1,cropend(){const c=i.getCroppedCanvas().toDataURL();a("crop",c)}}),a("crop",s)}function _(c){L[c?"unshift":"push"](()=>{r=c,n(1,r)})}return t.$$set=c=>{"image"in c&&n(0,s=c.image)},[s,r,u,l,_]}class Bt extends le{constructor(e){super(),re(this,e,fn,on,ne,{image:0,destroy:2,create:3})}get image(){return this.$$.ctx[0]}set image(e){this.$$set({image:e}),j()}get destroy(){return this.$$.ctx[2]}get create(){return this.$$.ctx[3]}}class it{constructor(e,n){this.x=e,this.y=n}}class lt extends it{update(e){this.x=e.x,this.y=e.y}moveByAngle(e,n){const s=e+Math.PI/2;this.x=this.x+Math.sin(s)*n,this.y=this.y-Math.cos(s)*n}equalsTo(e){return this.x===e.x&&this.y===e.y}getDifferenceTo(e){return new it(this.x-e.x,this.y-e.y)}getDistanceTo(e){const n=this.getDifferenceTo(e);return Math.sqrt(Math.pow(n.x,2)+Math.pow(n.y,2))}getAngleTo(e){const n=this.getDifferenceTo(e);return Math.atan2(n.y,n.x)}toObject(){return{x:this.x,y:this.y}}}const _n=30;class hn{constructor({radius:e=_n,enabled:n=!0,initialPoint:s={x:0,y:0}}={}){this.radius=e,this._isEnabled=n,this.pointer=new lt(s.x,s.y),this.brush=new lt(s.x,s.y),this.angle=0,this.distance=0,this._hasMoved=!1}enable(){this._isEnabled=!0}disable(){this._isEnabled=!1}isEnabled(){return this._isEnabled}setRadius(e){this.radius=e}getRadius(){return this.radius}getBrushCoordinates(){return this.brush.toObject()}getPointerCoordinates(){return this.pointer.toObject()}getBrush(){return this.brush}getPointer(){return this.pointer}getAngle(){return this.angle}getDistance(){return this.distance}brushHasMoved(){return this._hasMoved}update(e,{both:n=!1}={}){return 
this._hasMoved=!1,this.pointer.equalsTo(e)&&!n?!1:(this.pointer.update(e),n?(this._hasMoved=!0,this.brush.update(e),!0):(this._isEnabled?(this.distance=this.pointer.getDistanceTo(this.brush),this.angle=this.pointer.getAngleTo(this.brush),this.distance>this.radius&&(this.brush.moveByAngle(this.angle,this.distance-this.radius),this._hasMoved=!0)):(this.distance=0,this.angle=0,this.brush.update(e),this._hasMoved=!0),!0))}}function rt(t,e,n){const s=t.slice();return s[61]=e[n].name,s[62]=e[n].zIndex,s[63]=e,s[64]=n,s}function at(t){let e,n,s;return{c(){e=N("div"),e.textContent="Start drawing",d(e,"class","start-prompt svelte-yigbas")},m(r,a){v(r,e,a),s=!0},i(r){s||(r&&Ze(()=>{s&&(n||(n=tt(e,nt,{duration:50},!0)),n.run(1))}),s=!0)},o(r){r&&(n||(n=tt(e,nt,{duration:50},!1)),n.run(0)),s=!1},d(r){r&&C(e),r&&n&&n.end()}}}function ut(t){let e,n=t[61],s,r;const a=()=>t[30](e,n),i=()=>t[30](null,n);return{c(){e=N("canvas"),d(e,"key",t[61]),Rt(e,"z-index",t[62]),d(e,"class","svelte-yigbas"),R(e,"lr",t[5]),R(e,"tb",!t[5])},m(u,l){v(u,e,l),a(),s||(r=[F(e,"mousedown",t[61]==="interface"?t[7]:void 0),F(e,"mousemove",t[61]==="interface"?t[8]:void 0),F(e,"mouseup",t[61]==="interface"?t[9]:void 0),F(e,"mouseout",t[61]==="interface"?t[9]:void 0),F(e,"blur",t[61]==="interface"?t[9]:void 0),F(e,"touchstart",t[61]==="interface"?t[7]:void 0),F(e,"touchmove",t[61]==="interface"?t[8]:void 0),F(e,"touchend",t[61]==="interface"?t[9]:void 0),F(e,"touchcancel",t[61]==="interface"?t[9]:void 0),F(e,"click",Dt(t[29]))],s=!0)},p(u,l){t=u,n!==t[61]&&(i(),n=t[61],a()),l[0]&32&&R(e,"lr",t[5]),l[0]&32&&R(e,"tb",!t[5])},d(u){u&&C(e),i(),s=!1,yt(r)}}}function cn(t){let e,n,s,r=t[4]===0&&at(),a=et(t[6]),i=[];for(let u=0;ut[32].call(e))},m(u,l){v(u,e,l),r&&r.m(e,null),Y(e,n);for(let _=0;_{r=null}),x()),l[0]&993){a=et(u[6]);let _;for(_=0;_h?(m=b[0],M=b[0]/h,V=(b[1]-M)/2):(S=0,V=0,m=b[0],M=b[1]),k.temp.drawImage(i,S,V,m,M)}Mt(async()=>{Object.keys(E).forEach(m=>{n(26,k[m]=E[m].getContext("2d"),k)}),await we(),i&&(i.addEventListener("load",m=>{c==="webcam"?(k.temp.save(),k.temp.translate(g,0),k.temp.scale(-1,1),k.temp.drawImage(i,0,0),k.temp.restore()):oe(),k.drawing.drawImage(E.temp,0,0,g,o),he()}),setTimeout(()=>{c==="webcam"?(k.temp.save(),k.temp.translate(g,0),k.temp.scale(-1,1),k.temp.drawImage(i,0,0),k.temp.restore()):oe(),k.drawing.drawImage(E.temp,0,0,g,o),de({lines:X.slice()}),he()},100)),n(28,J=new hn({radius:_*.05,enabled:!0,initialPoint:{x:g/2,y:o/2}})),O=new Jt((m,M,...y)=>{Ee()}),O.observe(se),Me(),n(24,I=!0),requestAnimationFrame(()=>{ce(),requestAnimationFrame(()=>{ke()})})});function ce(){const m=g/2,M=o/2;J.update({x:m,y:M},{both:!0}),J.update({x:m,y:M},{both:!1}),ae=!0,ue=!0}Et(()=>{n(24,I=!1),O.unobserve(se)});function ie(m){Fe(),i&&(c==="webcam"?(k.temp.save(),k.temp.translate(g,0),k.temp.scale(-1,1),k.temp.drawImage(i,0,0),k.temp.restore()):oe(),(!X||!X.length)&&k.drawing.drawImage(E.temp,0,0,g,o)),de({lines:m}),n(4,ee=m.length),n(27,X=m),k.drawing.drawImage(E.temp,0,0,g,o),u==="mask"&&k.mask.drawImage(E.temp_fake,0,0,g,o),X.length==0&&r("clear")}function Ie(){ie([]),he()}function ve(){const m=X.slice(0,-1);ie(m),he()}let 
de=({lines:m})=>{m.forEach(M=>{const{points:y,brush_color:h,brush_radius:S}=M;Le({points:y,brush_color:h,brush_radius:S}),u==="mask"&&je({points:y,brush_color:h,brush_radius:S})}),qe(),u==="mask"&&Ue()},Ce=m=>{m.preventDefault(),_e=!0;const{x:M,y}=Re(m);m.touches&&m.touches.length>0&&J.update({x:M,y},{both:!0}),De(M,y),n(4,ee+=1)},w=m=>{m.preventDefault();const{x:M,y}=Re(m);De(M,y)},Ye=m=>{m.preventDefault(),w(m),fe=!1,_e=!1,qe(),u==="mask"&&Ue()},Te=0,ze=0,Be=0,Se=!1,Ee=async()=>{if(b&&se){const y=se?.getBoundingClientRect(),h=b[0]/b[1],S=y.width/y.height;n(5,Se=h{ze=o,Te=g,Be=f},10),await we(),ke()},be=async(m,M,y,h=!0)=>{if(!I)return;await we();const S=window.devicePixelRatio||1;m.width=M.width*(h?S:1),m.height=M.height*(h?S:1);const V=m.getContext("2d");h&&V.scale(S,S),m.style.width=`${y.width}px`,m.style.height=`${y.height}px`},Re=m=>{const M=E.interface.getBoundingClientRect();let y=m.clientX,h=m.clientY;return m.changedTouches&&m.changedTouches.length>0&&(y=m.changedTouches[0].clientX,h=m.changedTouches[0].clientY),{x:(y-M.left)/M.width*g,y:(h-M.top)/M.height*o}},De=(m,M)=>{J.update({x:m,y:M});const y=!J.isEnabled();(_e&&!fe||y&&_e)&&(fe=!0,q.push(J.brush.toObject())),fe&&(q.push(J.brush.toObject()),Le({points:q,brush_color:l,brush_radius:_}),u==="mask"&&je({points:q,brush_color:l,brush_radius:_})),ae=!0},Le=({points:m,brush_color:M,brush_radius:y})=>{if(!m||m.length<2||(n(26,k.temp.lineJoin="round",k),n(26,k.temp.lineCap="round",k),n(26,k.temp.strokeStyle=M,k),n(26,k.temp.lineWidth=y,k),!m||m.length<2))return;let h=m[0],S=m[1];k.temp.moveTo(S.x,S.y),k.temp.beginPath();for(var V=1,Ge=m.length;V{if(!m||m.length<2)return;n(26,k.temp_fake.lineJoin="round",k),n(26,k.temp_fake.lineCap="round",k),n(26,k.temp_fake.strokeStyle="#fff",k),n(26,k.temp_fake.lineWidth=y,k);let h=m[0],S=m[1];k.temp_fake.moveTo(S.x,S.y),k.temp_fake.beginPath();for(var V=1,Ge=m.length;V{q.length<1||(q.length=0,k.mask.drawImage(E.temp_fake,0,0,g,o),he())},qe=()=>{q.length<1||(X.push({points:q.slice(),brush_color:l,brush_radius:_}),u!=="mask"&&(q.length=0),k.drawing.drawImage(E.temp,0,0,g,o),he())},he=()=>{const m=He();r("change",m)};function ke(){return n(27,X=[]),Fe(),n(4,ee=0),!0}function Fe(){ue=!0,k.temp.clearRect(0,0,g,o),n(26,k.temp.fillStyle=u==="mask"?"transparent":"#FFFFFF",k),k.temp.fillRect(0,0,g,o),u==="mask"&&(k.temp_fake.clearRect(0,0,E.temp_fake.width,E.temp_fake.height),k.mask.clearRect(0,0,g,o),n(26,k.mask.fillStyle="#000",k),k.mask.fillRect(0,0,g,o))}let Me=({once:m=!1}={})=>{if(ae||ue){const M=J.getPointerCoordinates(),y=J.getBrushCoordinates();Xe(k.interface,M,y),ae=!1,ue=!1}m||window.requestAnimationFrame(()=>{Me()})},Xe=(m,M,y)=>{m.clearRect(0,0,g,o),m.beginPath(),m.fillStyle=l,m.arc(y.x,y.y,_/2,0,Math.PI*2,!0),m.fill(),m.beginPath(),m.fillStyle=mn,m.arc(y.x,y.y,s,0,Math.PI*2,!0),m.fill()};function He(){return u==="mask"?E.mask.toDataURL("image/jpg"):E.drawing.toDataURL("image/jpg")}function Je(m){te.call(this,t,m)}function Pe(m,M){L[m?"unshift":"push"](()=>{E[M]=m,n(0,E)})}function Qe(m){L[m?"unshift":"push"](()=>{se=m,n(3,se)})}function Ve(){D=this.offsetWidth,W=this.offsetHeight,n(1,D),n(2,W)}return t.$$set=m=>{"value"in m&&n(13,a=m.value),"value_img"in m&&n(14,i=m.value_img),"mode"in m&&n(15,u=m.mode),"brush_color"in m&&n(16,l=m.brush_color),"brush_radius"in m&&n(10,_=m.brush_radius),"source"in m&&n(17,c=m.source),"width"in m&&n(11,g=m.width),"height"in m&&n(12,o=m.height),"container_height"in m&&n(18,f=m.container_height),"shape"in 
m&&n(19,b=m.shape)},t.$$.update=()=>{t.$$.dirty[0]&530432&&b&&(g||o)&&(n(11,g=b[0]),n(12,o=b[1])),t.$$.dirty[0]&16785408&&I&&!a&&ke(),t.$$.dirty[0]&251811841&&I&&i!==P&&(n(25,P=i),ke(),setTimeout(()=>{c==="webcam"?(k.temp.save(),k.temp.translate(g,0),k.temp.scale(-1,1),k.temp.drawImage(i,0,0),k.temp.restore()):oe(),k.drawing.drawImage(E.temp,0,0,g,o),de({lines:X.slice()}),he()},50)),t.$$.dirty[0]&268436480&&J&&(ce(),J.setRadius(_*.05)),t.$$.dirty[0]&6144&&(g||o)&&Ee(),t.$$.dirty[0]&1024&&(s=_*.075)},[E,D,W,se,ee,Se,ge,Ce,w,Ye,_,g,o,a,i,u,l,c,f,b,Ie,ve,ke,He,I,P,k,X,J,Je,Pe,Qe,Ve]}class Ke extends le{constructor(e){super(),re(this,e,gn,cn,ne,{value:13,value_img:14,mode:15,brush_color:16,brush_radius:10,source:17,width:11,height:12,container_height:18,shape:19,clear_mask:20,undo:21,clear:22,get_image_data:23},null,[-1,-1,-1])}get clear_mask(){return this.$$.ctx[20]}get undo(){return this.$$.ctx[21]}get clear(){return this.$$.ctx[22]}get get_image_data(){return this.$$.ctx[23]}}function ft(t){let e,n;return e=new Ae({props:{Icon:an,label:"Clear"}}),e.$on("click",t[3]),{c(){T(e.$$.fragment)},m(s,r){z(e,s,r),n=!0},p:H,i(s){n||(p(e.$$.fragment,s),n=!0)},o(s){A(e.$$.fragment,s),n=!1},d(s){B(e,s)}}}function dn(t){let e,n,s,r,a,i;n=new Ae({props:{Icon:Pt,label:"Undo"}}),n.$on("click",t[2]);let u=t[0]&&ft(t);return a=new Ae({props:{Icon:Vt,label:"Remove Image"}}),a.$on("click",t[4]),{c(){e=N("div"),T(n.$$.fragment),s=U(),u&&u.c(),r=U(),T(a.$$.fragment),d(e,"class","svelte-s6ybro")},m(l,_){v(l,e,_),z(n,e,null),Y(e,s),u&&u.m(e,null),Y(e,r),z(a,e,null),i=!0},p(l,[_]){l[0]?u?(u.p(l,_),_&1&&p(u,1)):(u=ft(l),u.c(),p(u,1),u.m(e,r)):u&&($(),A(u,1,1,()=>{u=null}),x())},i(l){i||(p(n.$$.fragment,l),p(u),p(a.$$.fragment,l),i=!0)},o(l){A(n.$$.fragment,l),A(u),A(a.$$.fragment,l),i=!1},d(l){l&&C(e),B(n),u&&u.d(),B(a)}}}function bn(t,e,n){const s=pe();let{show_eraser:r=!1}=e;const a=()=>s("undo"),i=l=>{s("clear_mask"),l.stopPropagation()},u=l=>{s("remove_image"),l.stopPropagation()};return t.$$set=l=>{"show_eraser"in l&&n(0,r=l.show_eraser)},[r,s,a,i,u]}class $e extends le{constructor(e){super(),re(this,e,bn,dn,ne,{show_eraser:0})}}function _t(t){let e,n,s,r,a;return{c(){e=N("input"),d(e,"aria-label","Brush radius"),d(e,"type","range"),d(e,"min",n=.5*(t[2]/t[6])),d(e,"max",s=75*(t[2]/t[6])),d(e,"class","svelte-p4aq0j")},m(i,u){v(i,e,u),Ne(e,t[0]),r||(a=[F(e,"change",t[10]),F(e,"input",t[10])],r=!0)},p(i,u){u&68&&n!==(n=.5*(i[2]/i[6]))&&d(e,"min",n),u&68&&s!==(s=75*(i[2]/i[6]))&&d(e,"max",s),u&1&&Ne(e,i[0])},d(i){i&&C(e),r=!1,yt(a)}}}function ht(t){let e,n,s,r;n=new Ae({props:{Icon:ln,label:"Select brush color"}}),n.$on("click",t[11]);let a=t[5]&&ct(t);return{c(){e=N("span"),T(n.$$.fragment),s=U(),a&&a.c(),d(e,"class","col svelte-p4aq0j")},m(i,u){v(i,e,u),z(n,e,null),Y(e,s),a&&a.m(e,null),r=!0},p(i,u){i[5]?a?a.p(i,u):(a=ct(i),a.c(),a.m(e,null)):a&&(a.d(1),a=null)},i(i){r||(p(n.$$.fragment,i),r=!0)},o(i){A(n.$$.fragment,i),r=!1},d(i){i&&C(e),B(n),a&&a.d()}}}function ct(t){let e,n,s;return{c(){e=N("input"),d(e,"aria-label","Brush color"),d(e,"type","color"),d(e,"class","svelte-p4aq0j")},m(r,a){v(r,e,a),Ne(e,t[1]),n||(s=F(e,"input",t[12]),n=!0)},p(r,a){a&2&&Ne(e,r[1])},d(r){r&&C(e),n=!1,s()}}}function kn(t){let e,n,s,r,a,i;s=new Ae({props:{Icon:nn,label:"Use brush"}}),s.$on("click",t[9]);let u=t[4]&&_t(t),l=t[3]!=="mask"&&ht(t);return{c(){e=N("div"),n=N("span"),T(s.$$.fragment),r=U(),u&&u.c(),a=U(),l&&l.c(),d(n,"class","brush svelte-p4aq0j"),d(e,"class","wrap 
svelte-p4aq0j")},m(_,c){v(_,e,c),Y(e,n),z(s,n,null),Y(n,r),u&&u.m(n,null),Y(e,a),l&&l.m(e,null),i=!0},p(_,[c]){_[4]?u?u.p(_,c):(u=_t(_),u.c(),u.m(n,null)):u&&(u.d(1),u=null),_[3]!=="mask"?l?(l.p(_,c),c&8&&p(l,1)):(l=ht(_),l.c(),p(l,1),l.m(e,null)):l&&($(),A(l,1,1,()=>{l=null}),x())},i(_){i||(p(s.$$.fragment,_),p(l),i=!0)},o(_){A(s.$$.fragment,_),A(l),i=!1},d(_){_&&C(e),B(s),u&&u.d(),l&&l.d()}}}function wn(t,e,n){let s;pe();let r=!1,a=!1,{brush_radius:i=20}=e,{brush_color:u="#000"}=e,{container_height:l}=e,{img_width:_}=e,{img_height:c}=e,{mode:g="other"}=e;const o=()=>n(4,r=!r);function f(){i=Lt(this.value),n(0,i)}const b=()=>n(5,a=!a);function I(){u=this.value,n(1,u)}return t.$$set=D=>{"brush_radius"in D&&n(0,i=D.brush_radius),"brush_color"in D&&n(1,u=D.brush_color),"container_height"in D&&n(7,l=D.container_height),"img_width"in D&&n(2,_=D.img_width),"img_height"in D&&n(8,c=D.img_height),"mode"in D&&n(3,g=D.mode)},t.$$.update=()=>{t.$$.dirty&388&&n(6,s=l*(_/c))},[i,u,_,g,r,a,s,l,c,o,f,b,I]}class xe extends le{constructor(e){super(),re(this,e,wn,kn,ne,{brush_radius:0,brush_color:1,container_height:7,img_width:2,img_height:8,mode:3})}}function pn(t){let e,n,s,r;return{c(){e=N("img"),Q(e.src,n=t[0].image||t[0])||d(e,"src",n),d(e,"alt",""),d(e,"class","svelte-p3y7hu"),R(e,"webcam",t[5]==="webcam"&&t[9]),R(e,"selectable",t[10])},m(a,i){v(a,e,i),s||(r=F(e,"click",t[29]),s=!0)},p(a,i){i[0]&1&&!Q(e.src,n=a[0].image||a[0])&&d(e,"src",n),i[0]&544&&R(e,"webcam",a[5]==="webcam"&&a[9]),i[0]&1024&&R(e,"selectable",a[10])},i:H,o:H,d(a){a&&C(e),s=!1,r()}}}function An(t){let e=t[21],n,s,r,a=mt(t),i=t[16]>0&>(t);return{c(){a.c(),n=U(),i&&i.c(),s=me()},m(u,l){a.m(u,l),v(u,n,l),i&&i.m(u,l),v(u,s,l),r=!0},p(u,l){l[0]&2097152&&ne(e,e=u[21])?(a.d(1),a=mt(u),a.c(),a.m(n.parentNode,n)):a.p(u,l),u[16]>0?i?(i.p(u,l),l[0]&65536&&p(i,1)):(i=gt(u),i.c(),p(i,1),i.m(s.parentNode,s)):i&&($(),A(i,1,1,()=>{i=null}),x())},i(u){r||(p(i),r=!0)},o(u){A(i),r=!1},d(u){u&&(C(n),C(s)),a.d(u),i&&i.d(u)}}}function In(t){let e,n,s,r,a,i,u;return e=new Oe({props:{editable:!0}}),e.$on("edit",t[52]),e.$on("clear",t[24]),{c(){T(e.$$.fragment),n=U(),s=N("img"),Q(s.src,r=t[0])||d(s,"src",r),d(s,"alt",""),d(s,"class","svelte-p3y7hu"),R(s,"selectable",t[10]),R(s,"webcam",t[5]==="webcam"&&t[9])},m(l,_){z(e,l,_),v(l,n,_),v(l,s,_),a=!0,i||(u=F(s,"click",t[29]),i=!0)},p(l,_){(!a||_[0]&1&&!Q(s.src,r=l[0]))&&d(s,"src",r),(!a||_[0]&1024)&&R(s,"selectable",l[10]),(!a||_[0]&544)&&R(s,"webcam",l[5]==="webcam"&&l[9])},i(l){a||(p(e.$$.fragment,l),a=!0)},o(l){A(e.$$.fragment,l),a=!1},d(l){l&&(C(n),C(s)),B(e,l),i=!1,u()}}}function vn(t){let e,n,s,r,a={image:t[0]};return e=new Bt({props:a}),t[50](e),e.$on("crop",t[25]),s=new Oe({}),s.$on("clear",t[51]),{c(){T(e.$$.fragment),n=U(),T(s.$$.fragment)},m(i,u){z(e,i,u),v(i,n,u),z(s,i,u),r=!0},p(i,u){const l={};u[0]&1&&(l.image=i[0]),e.$set(l)},i(i){r||(p(e.$$.fragment,i),p(s.$$.fragment,i),r=!0)},o(i){A(e.$$.fragment,i),A(s.$$.fragment,i),r=!1},d(i){i&&C(n),t[50](null),B(e,i),B(s,i)}}}function Cn(t){let e,n,s=t[5]==="webcam"&&!t[21]&&bt(t);return{c(){s&&s.c(),e=me()},m(r,a){s&&s.m(r,a),v(r,e,a),n=!0},p(r,a){r[5]==="webcam"&&!r[21]?s?(s.p(r,a),a[0]&2097184&&p(s,1)):(s=bt(r),s.c(),p(s,1),s.m(e.parentNode,e)):s&&($(),A(s,1,1,()=>{s=null}),x())},i(r){n||(p(s),n=!0)},o(r){A(s),n=!1},d(r){r&&C(e),s&&s.d(r)}}}function Mn(t){let e,n,s,r,a,i,u;e=new $e({}),e.$on("undo",t[42]),e.$on("remove_image",t[27]);let l=t[1]==="color-sketch"&&kt(t);function _(o){t[45](o)}function c(o){t[46](o)}let 
g={value:t[0],mode:t[13],width:t[16]||t[20],height:t[15]||t[19],container_height:t[17]||t[19],shape:t[6]};return t[2]!==void 0&&(g.brush_radius=t[2]),t[22]!==void 0&&(g.brush_color=t[22]),r=new Ke({props:g}),L.push(()=>Z(r,"brush_radius",_)),L.push(()=>Z(r,"brush_color",c)),t[47](r),r.$on("change",t[25]),r.$on("clear",t[27]),{c(){T(e.$$.fragment),n=U(),l&&l.c(),s=U(),T(r.$$.fragment)},m(o,f){z(e,o,f),v(o,n,f),l&&l.m(o,f),v(o,s,f),z(r,o,f),u=!0},p(o,f){o[1]==="color-sketch"?l?(l.p(o,f),f[0]&2&&p(l,1)):(l=kt(o),l.c(),p(l,1),l.m(s.parentNode,s)):l&&($(),A(l,1,1,()=>{l=null}),x());const b={};f[0]&1&&(b.value=o[0]),f[0]&8192&&(b.mode=o[13]),f[0]&1114112&&(b.width=o[16]||o[20]),f[0]&557056&&(b.height=o[15]||o[19]),f[0]&655360&&(b.container_height=o[17]||o[19]),f[0]&64&&(b.shape=o[6]),!a&&f[0]&4&&(a=!0,b.brush_radius=o[2],K(()=>a=!1)),!i&&f[0]&4194304&&(i=!0,b.brush_color=o[22],K(()=>i=!1)),r.$set(b)},i(o){u||(p(e.$$.fragment,o),p(l),p(r.$$.fragment,o),u=!0)},o(o){A(e.$$.fragment,o),A(l),A(r.$$.fragment,o),u=!1},d(o){o&&(C(n),C(s)),B(e,o),l&&l.d(o),t[47](null),B(r,o)}}}function yn(t){let e,n,s;function r(i){t[41](i)}let a={filetype:"image/*",include_file_metadata:!1,disable_click:!!t[0],$$slots:{default:[Rn]},$$scope:{ctx:t}};return t[12]!==void 0&&(a.dragging=t[12]),e=new Gt({props:a}),L.push(()=>Z(e,"dragging",r)),e.$on("load",t[23]),{c(){T(e.$$.fragment)},m(i,u){z(e,i,u),s=!0},p(i,u){const l={};u[0]&1&&(l.disable_click=!!i[0]),u[0]&8384231|u[1]&1073741824&&(l.$$scope={dirty:u,ctx:i}),!n&&u[0]&4096&&(n=!0,l.dragging=i[12],K(()=>n=!1)),e.$set(l)},i(i){s||(p(e.$$.fragment,i),s=!0)},o(i){A(e.$$.fragment,i),s=!1},d(i){B(e,i)}}}function mt(t){let e,n,s,r;return{c(){e=N("img"),d(e,"class","absolute-img svelte-p3y7hu"),Q(e.src,n=t[21]||t[0]?.image||t[0])||d(e,"src",n),d(e,"alt",""),R(e,"webcam",t[5]==="webcam"&&t[9])},m(a,i){v(a,e,i),t[53](e),s||(r=F(e,"load",t[26]),s=!0)},p(a,i){i[0]&2097153&&!Q(e.src,n=a[21]||a[0]?.image||a[0])&&d(e,"src",n),i[0]&544&&R(e,"webcam",a[5]==="webcam"&&a[9])},d(a){a&&C(e),t[53](null),s=!1,r()}}}function gt(t){let e,n,s,r,a,i,u,l;function _(f){t[55](f)}function c(f){t[56](f)}let g={value:t[0],mode:t[13],width:t[16]||t[20],height:t[15]||t[19],container_height:t[17]||t[19],value_img:t[18],source:t[5]};t[2]!==void 0&&(g.brush_radius=t[2]),t[22]!==void 0&&(g.brush_color=t[22]),e=new Ke({props:g}),t[54](e),L.push(()=>Z(e,"brush_radius",_)),L.push(()=>Z(e,"brush_color",c)),e.$on("change",t[25]),a=new $e({}),a.$on("undo",t[57]),a.$on("remove_image",t[27]);let o=(t[1]==="color-sketch"||t[1]==="sketch")&&dt(t);return{c(){T(e.$$.fragment),r=U(),T(a.$$.fragment),i=U(),o&&o.c(),u=me()},m(f,b){z(e,f,b),v(f,r,b),z(a,f,b),v(f,i,b),o&&o.m(f,b),v(f,u,b),l=!0},p(f,b){const I={};b[0]&1&&(I.value=f[0]),b[0]&8192&&(I.mode=f[13]),b[0]&1114112&&(I.width=f[16]||f[20]),b[0]&557056&&(I.height=f[15]||f[19]),b[0]&655360&&(I.container_height=f[17]||f[19]),b[0]&262144&&(I.value_img=f[18]),b[0]&32&&(I.source=f[5]),!n&&b[0]&4&&(n=!0,I.brush_radius=f[2],K(()=>n=!1)),!s&&b[0]&4194304&&(s=!0,I.brush_color=f[22],K(()=>s=!1)),e.$set(I),f[1]==="color-sketch"||f[1]==="sketch"?o?(o.p(f,b),b[0]&2&&p(o,1)):(o=dt(f),o.c(),p(o,1),o.m(u.parentNode,u)):o&&($(),A(o,1,1,()=>{o=null}),x())},i(f){l||(p(e.$$.fragment,f),p(a.$$.fragment,f),p(o),l=!0)},o(f){A(e.$$.fragment,f),A(a.$$.fragment,f),A(o),l=!1},d(f){f&&(C(r),C(i),C(u)),t[54](null),B(e,f),B(a,f),o&&o.d(f)}}}function dt(t){let e,n,s,r;function a(l){t[58](l)}function i(l){t[59](l)}let 
u={container_height:t[17]||t[19],img_width:t[16]||t[20],img_height:t[15]||t[19],mode:t[13]};return t[2]!==void 0&&(u.brush_radius=t[2]),t[22]!==void 0&&(u.brush_color=t[22]),e=new xe({props:u}),L.push(()=>Z(e,"brush_radius",a)),L.push(()=>Z(e,"brush_color",i)),{c(){T(e.$$.fragment)},m(l,_){z(e,l,_),r=!0},p(l,_){const c={};_[0]&655360&&(c.container_height=l[17]||l[19]),_[0]&1114112&&(c.img_width=l[16]||l[20]),_[0]&557056&&(c.img_height=l[15]||l[19]),_[0]&8192&&(c.mode=l[13]),!n&&_[0]&4&&(n=!0,c.brush_radius=l[2],K(()=>n=!1)),!s&&_[0]&4194304&&(s=!0,c.brush_color=l[22],K(()=>s=!1)),e.$set(c)},i(l){r||(p(e.$$.fragment,l),r=!0)},o(l){A(e.$$.fragment,l),r=!1},d(l){B(e,l)}}}function bt(t){let e,n;return e=new Qt({props:{streaming:t[7],pending:t[8],mirror_webcam:t[9]}}),e.$on("capture",t[48]),e.$on("stream",t[25]),e.$on("error",t[49]),{c(){T(e.$$.fragment)},m(s,r){z(e,s,r),n=!0},p(s,r){const a={};r[0]&128&&(a.streaming=s[7]),r[0]&256&&(a.pending=s[8]),r[0]&512&&(a.mirror_webcam=s[9]),e.$set(a)},i(s){n||(p(e.$$.fragment,s),n=!0)},o(s){A(e.$$.fragment,s),n=!1},d(s){B(e,s)}}}function kt(t){let e,n,s,r;function a(l){t[43](l)}function i(l){t[44](l)}let u={container_height:t[17]||t[19],img_width:t[16]||t[20],img_height:t[15]||t[19]};return t[2]!==void 0&&(u.brush_radius=t[2]),t[22]!==void 0&&(u.brush_color=t[22]),e=new xe({props:u}),L.push(()=>Z(e,"brush_radius",a)),L.push(()=>Z(e,"brush_color",i)),{c(){T(e.$$.fragment)},m(l,_){z(e,l,_),r=!0},p(l,_){const c={};_[0]&655360&&(c.container_height=l[17]||l[19]),_[0]&1114112&&(c.img_width=l[16]||l[20]),_[0]&557056&&(c.img_height=l[15]||l[19]),!n&&_[0]&4&&(n=!0,c.brush_radius=l[2],K(()=>n=!1)),!s&&_[0]&4194304&&(s=!0,c.brush_color=l[22],K(()=>s=!1)),e.$set(c)},i(l){r||(p(e.$$.fragment,l),r=!0)},o(l){A(e.$$.fragment,l),r=!1},d(l){B(e,l)}}}function Tn(t){let e,n,s,r;return{c(){e=N("img"),Q(e.src,n=t[0].image||t[0])||d(e,"src",n),d(e,"alt","hello"),d(e,"class","svelte-p3y7hu"),R(e,"webcam",t[5]==="webcam"&&t[9]),R(e,"selectable",t[10])},m(a,i){v(a,e,i),s||(r=F(e,"click",t[29]),s=!0)},p(a,i){i[0]&1&&!Q(e.src,n=a[0].image||a[0])&&d(e,"src",n),i[0]&544&&R(e,"webcam",a[5]==="webcam"&&a[9]),i[0]&1024&&R(e,"selectable",a[10])},i:H,o:H,d(a){a&&C(e),s=!1,r()}}}function zn(t){let e=t[21],n,s,r,a=wt(t),i=t[16]>0&&pt(t);return{c(){a.c(),n=U(),i&&i.c(),s=me()},m(u,l){a.m(u,l),v(u,n,l),i&&i.m(u,l),v(u,s,l),r=!0},p(u,l){l[0]&2097152&&ne(e,e=u[21])?(a.d(1),a=wt(u),a.c(),a.m(n.parentNode,n)):a.p(u,l),u[16]>0?i?(i.p(u,l),l[0]&65536&&p(i,1)):(i=pt(u),i.c(),p(i,1),i.m(s.parentNode,s)):i&&($(),A(i,1,1,()=>{i=null}),x())},i(u){r||(p(i),r=!0)},o(u){A(i),r=!1},d(u){u&&(C(n),C(s)),a.d(u),i&&i.d(u)}}}function Bn(t){let e,n,s,r,a,i,u;return e=new Oe({props:{editable:!0}}),e.$on("edit",t[33]),e.$on("clear",t[24]),{c(){T(e.$$.fragment),n=U(),s=N("img"),Q(s.src,r=t[0])||d(s,"src",r),d(s,"alt",""),d(s,"class","svelte-p3y7hu"),R(s,"scale-x-[-1]",t[5]==="webcam"&&t[9]),R(s,"selectable",t[10])},m(l,_){z(e,l,_),v(l,n,_),v(l,s,_),a=!0,i||(u=F(s,"click",t[29]),i=!0)},p(l,_){(!a||_[0]&1&&!Q(s.src,r=l[0]))&&d(s,"src",r),(!a||_[0]&544)&&R(s,"scale-x-[-1]",l[5]==="webcam"&&l[9]),(!a||_[0]&1024)&&R(s,"selectable",l[10])},i(l){a||(p(e.$$.fragment,l),a=!0)},o(l){A(e.$$.fragment,l),a=!1},d(l){l&&(C(n),C(s)),B(e,l),i=!1,u()}}}function Sn(t){let e,n,s,r,a={image:t[0]};return e=new Bt({props:a}),t[31](e),e.$on("crop",t[25]),s=new Oe({}),s.$on("clear",t[32]),{c(){T(e.$$.fragment),n=U(),T(s.$$.fragment)},m(i,u){z(e,i,u),v(i,n,u),z(s,i,u),r=!0},p(i,u){const 
l={};u[0]&1&&(l.image=i[0]),e.$set(l)},i(i){r||(p(e.$$.fragment,i),p(s.$$.fragment,i),r=!0)},o(i){A(e.$$.fragment,i),A(s.$$.fragment,i),r=!1},d(i){i&&C(n),t[31](null),B(e,i),B(s,i)}}}function En(t){let e;const n=t[30].default,s=jt(n,t,t[61],null);return{c(){s&&s.c()},m(r,a){s&&s.m(r,a),e=!0},p(r,a){s&&s.p&&(!e||a[1]&1073741824)&&Ut(s,n,r,r[61],e?Ft(n,r[61],a,null):qt(r[61]),null)},i(r){e||(p(s,r),e=!0)},o(r){A(s,r),e=!1},d(r){s&&s.d(r)}}}function wt(t){let e,n,s,r;return{c(){e=N("img"),d(e,"class","absolute-img svelte-p3y7hu"),Q(e.src,n=t[21]||t[0]?.image||t[0])||d(e,"src",n),d(e,"alt",""),R(e,"webcam",t[5]==="webcam"&&t[9])},m(a,i){v(a,e,i),t[34](e),s||(r=F(e,"load",t[26]),s=!0)},p(a,i){i[0]&2097153&&!Q(e.src,n=a[21]||a[0]?.image||a[0])&&d(e,"src",n),i[0]&544&&R(e,"webcam",a[5]==="webcam"&&a[9])},d(a){a&&C(e),t[34](null),s=!1,r()}}}function pt(t){let e,n,s,r,a,i,u,l;function _(f){t[36](f)}function c(f){t[37](f)}let g={value:t[0],mode:t[13],width:t[16]||t[20],height:t[15]||t[19],container_height:t[17]||t[19],value_img:t[18],source:t[5],shape:t[6]};t[2]!==void 0&&(g.brush_radius=t[2]),t[22]!==void 0&&(g.brush_color=t[22]),e=new Ke({props:g}),t[35](e),L.push(()=>Z(e,"brush_radius",_)),L.push(()=>Z(e,"brush_color",c)),e.$on("change",t[25]),a=new $e({props:{show_eraser:t[18]}}),a.$on("undo",t[38]),a.$on("clear_mask",t[28]),a.$on("remove_image",t[27]);let o=(t[1]==="color-sketch"||t[1]==="sketch")&&At(t);return{c(){T(e.$$.fragment),r=U(),T(a.$$.fragment),i=U(),o&&o.c(),u=me()},m(f,b){z(e,f,b),v(f,r,b),z(a,f,b),v(f,i,b),o&&o.m(f,b),v(f,u,b),l=!0},p(f,b){const I={};b[0]&1&&(I.value=f[0]),b[0]&8192&&(I.mode=f[13]),b[0]&1114112&&(I.width=f[16]||f[20]),b[0]&557056&&(I.height=f[15]||f[19]),b[0]&655360&&(I.container_height=f[17]||f[19]),b[0]&262144&&(I.value_img=f[18]),b[0]&32&&(I.source=f[5]),b[0]&64&&(I.shape=f[6]),!n&&b[0]&4&&(n=!0,I.brush_radius=f[2],K(()=>n=!1)),!s&&b[0]&4194304&&(s=!0,I.brush_color=f[22],K(()=>s=!1)),e.$set(I);const D={};b[0]&262144&&(D.show_eraser=f[18]),a.$set(D),f[1]==="color-sketch"||f[1]==="sketch"?o?(o.p(f,b),b[0]&2&&p(o,1)):(o=At(f),o.c(),p(o,1),o.m(u.parentNode,u)):o&&($(),A(o,1,1,()=>{o=null}),x())},i(f){l||(p(e.$$.fragment,f),p(a.$$.fragment,f),p(o),l=!0)},o(f){A(e.$$.fragment,f),A(a.$$.fragment,f),A(o),l=!1},d(f){f&&(C(r),C(i),C(u)),t[35](null),B(e,f),B(a,f),o&&o.d(f)}}}function At(t){let e,n,s,r;function a(l){t[39](l)}function i(l){t[40](l)}let u={container_height:t[17]||t[19],img_width:t[16]||t[20],img_height:t[15]||t[19],mode:t[13]};return t[2]!==void 0&&(u.brush_radius=t[2]),t[22]!==void 0&&(u.brush_color=t[22]),e=new xe({props:u}),L.push(()=>Z(e,"brush_radius",a)),L.push(()=>Z(e,"brush_color",i)),{c(){T(e.$$.fragment)},m(l,_){z(e,l,_),r=!0},p(l,_){const c={};_[0]&655360&&(c.container_height=l[17]||l[19]),_[0]&1114112&&(c.img_width=l[16]||l[20]),_[0]&557056&&(c.img_height=l[15]||l[19]),_[0]&8192&&(c.mode=l[13]),!n&&_[0]&4&&(n=!0,c.brush_radius=l[2],K(()=>n=!1)),!s&&_[0]&4194304&&(s=!0,c.brush_color=l[22],K(()=>s=!1)),e.$set(c)},i(l){r||(p(e.$$.fragment,l),r=!0)},o(l){A(e.$$.fragment,l),r=!1},d(l){B(e,l)}}}function Rn(t){let e,n,s,r;const a=[En,Sn,Bn,zn,Tn],i=[];function u(l,_){return l[0]===null&&!l[21]||l[7]?0:l[1]==="select"?1:l[1]==="editor"?2:(l[1]==="sketch"||l[1]==="color-sketch")&&(l[0]!==null||l[21])?3:4}return e=u(t),n=i[e]=a[e](t),{c(){n.c(),s=me()},m(l,_){i[e].m(l,_),v(l,s,_),r=!0},p(l,_){let 
c=e;e=u(l),e===c?i[e].p(l,_):($(),A(i[c],1,1,()=>{i[c]=null}),x(),n=i[e],n?n.p(l,_):(n=i[e]=a[e](l),n.c()),p(n,1),n.m(s.parentNode,s))},i(l){r||(p(n),r=!0)},o(l){A(n),r=!1},d(l){l&&C(s),i[e].d(l)}}}function Dn(t){let e,n,s,r,a,i,u;e=new Tt({props:{show_label:t[4],Icon:t[5]==="canvas"?st:We,label:t[3]||(t[5]==="canvas"?"Sketch":"Image")}});const l=[yn,Mn,Cn,vn,In,An,pn],_=[];function c(g,o){return g[5]==="upload"?0:g[5]==="canvas"?1:g[0]===null&&!g[21]||g[7]?2:g[1]==="select"?3:g[1]==="editor"?4:(g[1]==="sketch"||g[1]==="color-sketch")&&(g[0]!==null||g[21])?5:6}return r=c(t),a=_[r]=l[r](t),{c(){T(e.$$.fragment),n=U(),s=N("div"),a.c(),d(s,"data-testid","image"),d(s,"class","image-container svelte-p3y7hu"),Ze(()=>t[60].call(s))},m(g,o){z(e,g,o),v(g,n,o),v(g,s,o),_[r].m(s,null),i=Ct(s,t[60].bind(s)),u=!0},p(g,o){const f={};o[0]&16&&(f.show_label=g[4]),o[0]&32&&(f.Icon=g[5]==="canvas"?st:We),o[0]&40&&(f.label=g[3]||(g[5]==="canvas"?"Sketch":"Image")),e.$set(f);let b=r;r=c(g),r===b?_[r].p(g,o):($(),A(_[b],1,1,()=>{_[b]=null}),x(),a=_[r],a?a.p(g,o):(a=_[r]=l[r](g),a.c()),p(a,1),a.m(s,null))},i(g){u||(p(e.$$.fragment,g),p(a),u=!0)},o(g){A(e.$$.fragment,g),A(a),u=!1},d(g){g&&(C(n),C(s)),B(e,g),_[r].d(),i()}}}function Ln(t,e,n){let s,{$$slots:r={},$$scope:a}=e,{value:i}=e,{label:u=void 0}=e,{show_label:l}=e,{source:_="upload"}=e,{tool:c="editor"}=e,{shape:g}=e,{streaming:o=!1}=e,{pending:f=!1}=e,{mirror_webcam:b}=e,{brush_radius:I}=e,{selectable:D=!1}=e,W,P;i&&(_==="upload"||_==="webcam")&&c==="sketch"&&(i={image:i,mask:null});function ge({detail:h}){c==="color-sketch"?n(21,ie=h):n(0,i=(_==="upload"||_==="webcam")&&c==="sketch"?{image:h,mask:null}:h),q("upload",h)}function E({detail:h}){n(0,i=null),n(21,ie=void 0),q("clear")}async function k({detail:h},S){O==="mask"?_==="webcam"&&S?n(0,i={image:h,mask:null}):n(0,i={image:typeof i=="string"?i:i?.image||null,mask:h}):(_==="upload"||_==="webcam")&&c==="sketch"?n(0,i={image:h,mask:null}):n(0,i=h),await we(),q(o?"stream":"edit")}const q=pe();let X=!1;function ae(h){const S=h.currentTarget;n(16,J=S.naturalWidth),n(15,_e=S.naturalHeight),n(17,se=S.getBoundingClientRect().height)}async function ue(){W.clear(),await we(),n(0,i=null),n(21,ie=void 0)}async function fe(){W.clear_mask(),await we()}let _e=0,J=0,se=0,O,ee,oe,ce,ie;Mt(async()=>{c==="color-sketch"&&i&&typeof i=="string"&&(n(21,ie=i),await we(),ae({currentTarget:ee}))});const Ie=h=>{let S=zt(h);S&&q("select",{index:S,value:null})};function ve(h){L[h?"unshift":"push"](()=>{P=h,n(11,P),n(0,i)})}const de=h=>(E(h),n(1,c="editor")),Ce=()=>n(1,c="select");function w(h){L[h?"unshift":"push"](()=>{ee=h,n(18,ee)})}function Ye(h){L[h?"unshift":"push"](()=>{W=h,n(14,W)})}function Te(h){I=h,n(2,I)}function ze(h){s=h,n(22,s),n(13,O),n(5,_),n(1,c)}const Be=()=>W.undo();function Se(h){I=h,n(2,I)}function Ee(h){s=h,n(22,s),n(13,O),n(5,_),n(1,c)}function be(h){X=h,n(12,X)}const Re=()=>W.undo();function De(h){I=h,n(2,I)}function Le(h){s=h,n(22,s),n(13,O),n(5,_),n(1,c)}function je(h){I=h,n(2,I)}function Ue(h){s=h,n(22,s),n(13,O),n(5,_),n(1,c)}function qe(h){L[h?"unshift":"push"](()=>{W=h,n(14,W)})}const he=h=>c==="color-sketch"?ge(h):k(h,!0);function ke(h){te.call(this,t,h)}function Fe(h){L[h?"unshift":"push"](()=>{P=h,n(11,P),n(0,i)})}const Me=h=>(E(h),n(1,c="editor")),Xe=()=>n(1,c="select");function He(h){L[h?"unshift":"push"](()=>{ee=h,n(18,ee)})}function Je(h){L[h?"unshift":"push"](()=>{W=h,n(14,W)})}function Pe(h){I=h,n(2,I)}function Qe(h){s=h,n(22,s),n(13,O),n(5,_),n(1,c)}const Ve=()=>W.undo();function 
m(h){I=h,n(2,I)}function M(h){s=h,n(22,s),n(13,O),n(5,_),n(1,c)}function y(){oe=this.offsetHeight,ce=this.offsetWidth,n(19,oe),n(20,ce)}return t.$$set=h=>{"value"in h&&n(0,i=h.value),"label"in h&&n(3,u=h.label),"show_label"in h&&n(4,l=h.show_label),"source"in h&&n(5,_=h.source),"tool"in h&&n(1,c=h.tool),"shape"in h&&n(6,g=h.shape),"streaming"in h&&n(7,o=h.streaming),"pending"in h&&n(8,f=h.pending),"mirror_webcam"in h&&n(9,b=h.mirror_webcam),"brush_radius"in h&&n(2,I=h.brush_radius),"selectable"in h&&n(10,D=h.selectable),"$$scope"in h&&n(61,a=h.$$scope)},t.$$.update=()=>{t.$$.dirty[0]&4096&&q("drag",X),t.$$.dirty[0]&34&&(_==="canvas"&&c==="sketch"?n(13,O="bw-sketch"):c==="color-sketch"?n(13,O="color-sketch"):(_==="upload"||_==="webcam")&&c==="sketch"?n(13,O="mask"):n(13,O="editor")),t.$$.dirty[0]&8192&&n(22,s=O=="mask"?"#000000":"#000"),t.$$.dirty[0]&1&&(i===null||i.image===null&&i.mask===null)&&n(21,ie=void 0),t.$$.dirty[0]&2049&&P&&(i?(n(11,P.image=i,P),P.create()):P.destroy())},[i,c,I,u,l,_,g,o,f,b,D,P,X,O,W,_e,J,se,ee,oe,ce,ie,s,ge,E,k,ae,ue,fe,Ie,r,ve,de,Ce,w,Ye,Te,ze,Be,Se,Ee,be,Re,De,Le,je,Ue,qe,he,ke,Fe,Me,Xe,He,Je,Pe,Qe,Ve,m,M,y,a]}let jn=class extends le{constructor(e){super(),re(this,e,Ln,Dn,ne,{value:0,label:3,show_label:4,source:5,tool:1,shape:6,streaming:7,pending:8,mirror_webcam:9,brush_radius:2,selectable:10},null,[-1,-1,-1])}};function Un(t){let e,n,s,r,a,i,u,l,_,c;s=new Ae({props:{Icon:xt,label:"Download"}});let g=t[4]&&It(t);return{c(){e=N("div"),n=N("a"),T(s.$$.fragment),r=U(),g&&g.c(),a=U(),i=N("img"),d(n,"href",t[0]),d(n,"target",window.__is_colab__?"_blank":null),d(n,"download","image"),d(e,"class","icon-buttons svelte-1btp92j"),Q(i.src,u=t[0])||d(i,"src",u),d(i,"alt",""),d(i,"class","svelte-1btp92j"),R(i,"selectable",t[3])},m(o,f){v(o,e,f),Y(e,n),z(s,n,null),Y(e,r),g&&g.m(e,null),v(o,a,f),v(o,i,f),l=!0,_||(c=F(i,"click",t[5]),_=!0)},p(o,f){(!l||f&1)&&d(n,"href",o[0]),o[4]?g?(g.p(o,f),f&16&&p(g,1)):(g=It(o),g.c(),p(g,1),g.m(e,null)):g&&($(),A(g,1,1,()=>{g=null}),x()),(!l||f&1&&!Q(i.src,u=o[0]))&&d(i,"src",u),(!l||f&8)&&R(i,"selectable",o[3])},i(o){l||(p(s.$$.fragment,o),p(g),l=!0)},o(o){A(s.$$.fragment,o),A(g),l=!1},d(o){o&&(C(e),C(a),C(i)),B(s),g&&g.d(),_=!1,c()}}}function qn(t){let e,n;return e=new $t({props:{unpadded_box:!0,size:"large",$$slots:{default:[Fn]},$$scope:{ctx:t}}}),{c(){T(e.$$.fragment)},m(s,r){z(e,s,r),n=!0},p(s,r){const a={};r&1024&&(a.$$scope={dirty:r,ctx:s}),e.$set(a)},i(s){n||(p(e.$$.fragment,s),n=!0)},o(s){A(e.$$.fragment,s),n=!1},d(s){B(e,s)}}}function It(t){let e,n;return e=new Kt({props:{formatter:t[6],value:t[0]}}),e.$on("share",t[7]),e.$on("error",t[8]),{c(){T(e.$$.fragment)},m(s,r){z(e,s,r),n=!0},p(s,r){const a={};r&1&&(a.value=s[0]),e.$set(a)},i(s){n||(p(e.$$.fragment,s),n=!0)},o(s){A(e.$$.fragment,s),n=!1},d(s){B(e,s)}}}function Fn(t){let e,n;return e=new We({}),{c(){T(e.$$.fragment)},m(s,r){z(e,s,r),n=!0},i(s){n||(p(e.$$.fragment,s),n=!0)},o(s){A(e.$$.fragment,s),n=!1},d(s){B(e,s)}}}function Hn(t){let e,n,s,r,a,i;e=new Tt({props:{show_label:t[2],Icon:We,label:t[1]||"Image"}});const u=[qn,Un],l=[];function _(c,g){return c[0]===null?0:1}return s=_(t),r=l[s]=u[s](t),{c(){T(e.$$.fragment),n=U(),r.c(),a=me()},m(c,g){z(e,c,g),v(c,n,g),l[s].m(c,g),v(c,a,g),i=!0},p(c,[g]){const o={};g&4&&(o.show_label=c[2]),g&2&&(o.label=c[1]||"Image"),e.$set(o);let 
f=s;s=_(c),s===f?l[s].p(c,g):($(),A(l[f],1,1,()=>{l[f]=null}),x(),r=l[s],r?r.p(c,g):(r=l[s]=u[s](c),r.c()),p(r,1),r.m(a.parentNode,a))},i(c){i||(p(e.$$.fragment,c),p(r),i=!0)},o(c){A(e.$$.fragment,c),A(r),i=!1},d(c){c&&(C(n),C(a)),B(e,c),l[s].d(c)}}}function Nn(t,e,n){let{value:s}=e,{label:r=void 0}=e,{show_label:a}=e,{selectable:i=!1}=e,{show_share_button:u=!1}=e;const l=pe(),_=f=>{let b=zt(f);b&&l("select",{index:b,value:null})},c=async f=>f?``:"";function g(f){te.call(this,t,f)}function o(f){te.call(this,t,f)}return t.$$set=f=>{"value"in f&&n(0,s=f.value),"label"in f&&n(1,r=f.label),"show_label"in f&&n(2,a=f.show_label),"selectable"in f&&n(3,i=f.selectable),"show_share_button"in f&&n(4,u=f.show_share_button)},t.$$.update=()=>{t.$$.dirty&1&&s&&l("change",s)},[s,r,a,i,u,_,c,g,o]}class Wn extends le{constructor(e){super(),re(this,e,Nn,Hn,ne,{value:0,label:1,show_label:2,selectable:3,show_share_button:4})}}function On(t){let e,n,s;function r(i){t[27](i)}let a={brush_radius:t[15],shape:t[14],source:t[5],tool:t[6],selectable:t[16],label:t[7],show_label:t[8],pending:t[10],streaming:t[9],mirror_webcam:t[13],$$slots:{default:[Xn]},$$scope:{ctx:t}};return t[0]!==void 0&&(a.value=t[0]),e=new jn({props:a}),L.push(()=>Z(e,"value",r)),e.$on("edit",t[28]),e.$on("clear",t[29]),e.$on("stream",t[30]),e.$on("drag",t[31]),e.$on("upload",t[32]),e.$on("select",t[33]),e.$on("share",t[34]),e.$on("error",t[35]),{c(){T(e.$$.fragment)},m(i,u){z(e,i,u),s=!0},p(i,u){const l={};u[0]&32768&&(l.brush_radius=i[15]),u[0]&16384&&(l.shape=i[14]),u[0]&32&&(l.source=i[5]),u[0]&64&&(l.tool=i[6]),u[0]&65536&&(l.selectable=i[16]),u[0]&128&&(l.label=i[7]),u[0]&256&&(l.show_label=i[8]),u[0]&1024&&(l.pending=i[10]),u[0]&512&&(l.streaming=i[9]),u[0]&8192&&(l.mirror_webcam=i[13]),u[1]&32&&(l.$$scope={dirty:u,ctx:i}),!n&&u[0]&1&&(n=!0,l.value=i[0],K(()=>n=!1)),e.$set(l)},i(i){s||(p(e.$$.fragment,i),s=!0)},o(i){A(e.$$.fragment,i),s=!1},d(i){B(e,i)}}}function Yn(t){let e,n;return e=new Wn({props:{value:t[0],label:t[7],show_label:t[8],selectable:t[16],show_share_button:t[21]}}),e.$on("select",t[24]),e.$on("share",t[25]),e.$on("error",t[26]),{c(){T(e.$$.fragment)},m(s,r){z(e,s,r),n=!0},p(s,r){const a={};r[0]&1&&(a.value=s[0]),r[0]&128&&(a.label=s[7]),r[0]&256&&(a.show_label=s[8]),r[0]&65536&&(a.selectable=s[16]),r[0]&2097152&&(a.show_share_button=s[21]),e.$set(a)},i(s){n||(p(e.$$.fragment,s),n=!0)},o(s){A(e.$$.fragment,s),n=!1},d(s){B(e,s)}}}function Xn(t){let e,n;return e=new en({props:{type:"image"}}),{c(){T(e.$$.fragment)},m(s,r){z(e,s,r),n=!0},p:H,i(s){n||(p(e.$$.fragment,s),n=!0)},o(s){A(e.$$.fragment,s),n=!1},d(s){B(e,s)}}}function Jn(t){let e,n,s,r,a,i;const u=[t[1]];let l={};for(let o=0;o{c[I]=null}),x(),r=c[s],r?r.p(o,f):(r=c[s]=_[s](o),r.c()),p(r,1),r.m(a.parentNode,a))},i(o){i||(p(e.$$.fragment,o),p(r),i=!0)},o(o){A(e.$$.fragment,o),A(r),i=!1},d(o){o&&(C(n),C(a)),B(e,o),c[s].d(o)}}}function Pn(t){let e,n;return e=new Yt({props:{visible:t[4],variant:t[20]==="dynamic"&&t[0]===null&&t[5]==="upload"?"dashed":"solid",border_mode:t[22]?"focus":"base",padding:!1,elem_id:t[2],elem_classes:t[3],height:t[11]||(t[5]==="webcam"||t[20]==="static"?void 0:vt),width:t[12],allow_overflow:!1,container:t[17],scale:t[18],min_width:t[19],$$slots:{default:[Jn]},$$scope:{ctx:t}}}),{c(){T(e.$$.fragment)},m(s,r){z(e,s,r),n=!0},p(s,r){const 
a={};r[0]&16&&(a.visible=s[4]),r[0]&1048609&&(a.variant=s[20]==="dynamic"&&s[0]===null&&s[5]==="upload"?"dashed":"solid"),r[0]&4194304&&(a.border_mode=s[22]?"focus":"base"),r[0]&4&&(a.elem_id=s[2]),r[0]&8&&(a.elem_classes=s[3]),r[0]&1050656&&(a.height=s[11]||(s[5]==="webcam"||s[20]==="static"?void 0:vt)),r[0]&4096&&(a.width=s[12]),r[0]&131072&&(a.container=s[17]),r[0]&262144&&(a.scale=s[18]),r[0]&524288&&(a.min_width=s[19]),r[0]&7464931|r[1]&32&&(a.$$scope={dirty:r,ctx:s}),e.$set(a)},i(s){n||(p(e.$$.fragment,s),n=!0)},o(s){A(e.$$.fragment,s),n=!1},d(s){B(e,s)}}}const vt=240;function Qn(t,e,n){let{elem_id:s=""}=e,{elem_classes:r=[]}=e,{visible:a=!0}=e,{value:i=null}=e,{source:u="upload"}=e,{tool:l="editor"}=e,{label:_}=e,{show_label:c}=e,{streaming:g}=e,{pending:o}=e,{height:f}=e,{width:b}=e,{mirror_webcam:I}=e,{shape:D}=e,{brush_radius:W}=e,{selectable:P=!1}=e,{container:ge=!0}=e,{scale:E=null}=e,{min_width:k=void 0}=e,{loading_status:q}=e,{mode:X}=e,{show_share_button:ae=!1}=e;const ue=pe();let fe;function _e(w){te.call(this,t,w)}function J(w){te.call(this,t,w)}function se(w){te.call(this,t,w)}function O(w){i=w,n(0,i)}function ee(w){te.call(this,t,w)}function oe(w){te.call(this,t,w)}function ce(w){te.call(this,t,w)}const ie=({detail:w})=>n(22,fe=w);function Ie(w){te.call(this,t,w)}function ve(w){te.call(this,t,w)}function de(w){te.call(this,t,w)}const Ce=({detail:w})=>{n(1,q=q||{}),n(1,q.status="error",q),ue("error",w)};return t.$$set=w=>{"elem_id"in w&&n(2,s=w.elem_id),"elem_classes"in w&&n(3,r=w.elem_classes),"visible"in w&&n(4,a=w.visible),"value"in w&&n(0,i=w.value),"source"in w&&n(5,u=w.source),"tool"in w&&n(6,l=w.tool),"label"in w&&n(7,_=w.label),"show_label"in w&&n(8,c=w.show_label),"streaming"in w&&n(9,g=w.streaming),"pending"in w&&n(10,o=w.pending),"height"in w&&n(11,f=w.height),"width"in w&&n(12,b=w.width),"mirror_webcam"in w&&n(13,I=w.mirror_webcam),"shape"in w&&n(14,D=w.shape),"brush_radius"in w&&n(15,W=w.brush_radius),"selectable"in w&&n(16,P=w.selectable),"container"in w&&n(17,ge=w.container),"scale"in w&&n(18,E=w.scale),"min_width"in w&&n(19,k=w.min_width),"loading_status"in w&&n(1,q=w.loading_status),"mode"in w&&n(20,X=w.mode),"show_share_button"in w&&n(21,ae=w.show_share_button)},t.$$.update=()=>{t.$$.dirty[0]&1&&n(0,i=i||null),t.$$.dirty[0]&1&&ue("change")},[i,q,s,r,a,u,l,_,c,g,o,f,b,I,D,W,P,ge,E,k,X,ae,fe,ue,_e,J,se,O,ee,oe,ce,ie,Ie,ve,de,Ce]}class Vn extends le{constructor(e){super(),re(this,e,Qn,Pn,ne,{elem_id:2,elem_classes:3,visible:4,value:0,source:5,tool:6,label:7,show_label:8,streaming:9,pending:10,height:11,width:12,mirror_webcam:13,shape:14,brush_radius:15,selectable:16,container:17,scale:18,min_width:19,loading_status:1,mode:20,show_share_button:21},null,[-1,-1])}get elem_id(){return this.$$.ctx[2]}set elem_id(e){this.$$set({elem_id:e}),j()}get elem_classes(){return this.$$.ctx[3]}set elem_classes(e){this.$$set({elem_classes:e}),j()}get visible(){return this.$$.ctx[4]}set visible(e){this.$$set({visible:e}),j()}get value(){return this.$$.ctx[0]}set value(e){this.$$set({value:e}),j()}get source(){return this.$$.ctx[5]}set source(e){this.$$set({source:e}),j()}get tool(){return this.$$.ctx[6]}set tool(e){this.$$set({tool:e}),j()}get label(){return this.$$.ctx[7]}set label(e){this.$$set({label:e}),j()}get show_label(){return this.$$.ctx[8]}set show_label(e){this.$$set({show_label:e}),j()}get streaming(){return this.$$.ctx[9]}set streaming(e){this.$$set({streaming:e}),j()}get pending(){return this.$$.ctx[10]}set pending(e){this.$$set({pending:e}),j()}get 
height(){return this.$$.ctx[11]}set height(e){this.$$set({height:e}),j()}get width(){return this.$$.ctx[12]}set width(e){this.$$set({width:e}),j()}get mirror_webcam(){return this.$$.ctx[13]}set mirror_webcam(e){this.$$set({mirror_webcam:e}),j()}get shape(){return this.$$.ctx[14]}set shape(e){this.$$set({shape:e}),j()}get brush_radius(){return this.$$.ctx[15]}set brush_radius(e){this.$$set({brush_radius:e}),j()}get selectable(){return this.$$.ctx[16]}set selectable(e){this.$$set({selectable:e}),j()}get container(){return this.$$.ctx[17]}set container(e){this.$$set({container:e}),j()}get scale(){return this.$$.ctx[18]}set scale(e){this.$$set({scale:e}),j()}get min_width(){return this.$$.ctx[19]}set min_width(e){this.$$set({min_width:e}),j()}get loading_status(){return this.$$.ctx[1]}set loading_status(e){this.$$set({loading_status:e}),j()}get mode(){return this.$$.ctx[20]}set mode(e){this.$$set({mode:e}),j()}get show_share_button(){return this.$$.ctx[21]}set show_share_button(e){this.$$set({show_share_button:e}),j()}}const _s=Vn,hs=["static","dynamic"],cs=t=>({type:{payload:"string"},description:{payload:"image data as base64 string"},example_data:"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAACklEQVR4nGMAAQAABQABDQottAAAAABJRU5ErkJggg=="});export{_s as Component,ds as ExampleComponent,cs as document,hs as modes};
-//# sourceMappingURL=index-085f5795.js.map
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/_multi_commits.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/_multi_commits.py
deleted file mode 100644
index c41d2a36fc0971ad031e05d851e632b263f10e48..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/_multi_commits.py
+++ /dev/null
@@ -1,305 +0,0 @@
-# coding=utf-8
-# Copyright 2023-present, the HuggingFace Inc. team.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""Contains utilities to multi-commits (i.e. push changes iteratively on a PR)."""
-import re
-from dataclasses import dataclass, field
-from hashlib import sha256
-from typing import TYPE_CHECKING, Iterable, List, Optional, Set, Tuple, Union
-
-from ._commit_api import CommitOperationAdd, CommitOperationDelete
-from .community import DiscussionWithDetails
-from .utils import experimental
-from .utils._cache_manager import _format_size
-
-
-if TYPE_CHECKING:
- from .hf_api import HfApi
-
-
-class MultiCommitException(Exception):
- """Base exception for any exception happening while doing a multi-commit."""
-
-
-MULTI_COMMIT_PR_DESCRIPTION_TEMPLATE = """
-## {commit_message}
-
-{commit_description}
-
-**Multi commit ID:** {multi_commit_id}
-
-Scheduled commits:
-
-{multi_commit_strategy}
-
-_This is a PR opened using the `huggingface_hub` library in the context of a multi-commit. PR can be commented as a usual PR. However, please be aware that manually updating the PR description, changing the PR status, or pushing new commits, is not recommended as it might corrupt the commit process. Learn more about multi-commits [in this guide](https://huggingface.co/docs/huggingface_hub/main/guides/upload)._
-"""
-
-MULTI_COMMIT_PR_COMPLETION_COMMENT_TEMPLATE = """
-Multi-commit is now completed! You can ping the repo owner to review the changes. This PR can now be commented on or modified without risk of corrupting it.
-
-_This is a comment posted using the `huggingface_hub` library in the context of a multi-commit. Learn more about multi-commits [in this guide](https://huggingface.co/docs/huggingface_hub/main/guides/upload)._
-"""
-
-MULTI_COMMIT_PR_CLOSING_COMMENT_TEMPLATE = """
-`create_pr=False` has been passed so PR is automatically merged.
-
-_This is a comment posted using the `huggingface_hub` library in the context of a multi-commit. Learn more about multi-commits [in this guide](https://huggingface.co/docs/huggingface_hub/main/guides/upload)._
-"""
-
-MULTI_COMMIT_PR_CLOSE_COMMENT_FAILURE_NO_CHANGES_TEMPLATE = """
-Cannot merge Pull Requests as no changes are associated. This PR will be closed automatically.
-
-_This is a comment posted using the `huggingface_hub` library in the context of a multi-commit. Learn more about multi-commits [in this guide](https://huggingface.co/docs/huggingface_hub/main/guides/upload)._
-"""
-
-MULTI_COMMIT_PR_CLOSE_COMMENT_FAILURE_BAD_REQUEST_TEMPLATE = """
-An error occurred while trying to merge the Pull Request: `{error_message}`.
-
-_This is a comment posted using the `huggingface_hub` library in the context of a multi-commit. Learn more about multi-commits [in this guide](https://huggingface.co/docs/huggingface_hub/main/guides/upload)._
-"""
-
-
-STEP_ID_REGEX = re.compile(r"- \[(?P<completed>[ |x])\].*(?P<step_id>[a-fA-F0-9]{64})", flags=re.MULTILINE)
-
-
-@experimental
-def plan_multi_commits(
- operations: Iterable[Union[CommitOperationAdd, CommitOperationDelete]],
- max_operations_per_commit: int = 50,
- max_upload_size_per_commit: int = 2 * 1024 * 1024 * 1024,
-) -> Tuple[List[List[CommitOperationAdd]], List[List[CommitOperationDelete]]]:
- """Split a list of operations in a list of commits to perform.
-
- Implementation follows a sub-optimal (yet simple) algorithm:
- 1. Delete operations are grouped together by commits of maximum `max_operations_per_commit` operations.
- 2. All additions exceeding `max_upload_size_per_commit` are committed 1 by 1.
- 3. All remaining additions are grouped together and split each time the `max_operations_per_commit` or the
- `max_upload_size_per_commit` limit is reached.
-
- We do not try to optimize the splitting to get the lowest number of commits as this is a NP-hard problem (see
- [bin packing problem](https://en.wikipedia.org/wiki/Bin_packing_problem)). For our use case, it is not problematic
- to use a sub-optimal solution so we favored an easy-to-explain implementation.
-
- Args:
- operations (`List` of [`~hf_api.CommitOperation`]):
- The list of operations to split into commits.
- max_operations_per_commit (`int`):
- Maximum number of operations in a single commit. Defaults to 50.
- max_upload_size_per_commit (`int`):
- Maximum size to upload (in bytes) in a single commit. Defaults to 2GB. Files bigger than this limit are
- uploaded, 1 per commit.
-
- Returns:
- `Tuple[List[List[CommitOperationAdd]], List[List[CommitOperationDelete]]]`: a tuple. First item is a list of
- lists of [`CommitOperationAdd`] representing the addition commits to push. The second item is a list of lists
- of [`CommitOperationDelete`] representing the deletion commits.
-
-
-
- `plan_multi_commits` is experimental. Its API and behavior is subject to change in the future without prior notice.
-
-
-
- Example:
- ```python
- >>> from huggingface_hub import HfApi, plan_multi_commits
- >>> addition_commits, deletion_commits = plan_multi_commits(
- ... operations=[
- ... CommitOperationAdd(...),
- ... CommitOperationAdd(...),
- ... CommitOperationDelete(...),
- ... CommitOperationDelete(...),
- ... CommitOperationAdd(...),
- ... ],
- ... )
- >>> HfApi().create_commits_on_pr(
- ... repo_id="my-cool-model",
- ... addition_commits=addition_commits,
- ... deletion_commits=deletion_commits,
- ... (...)
- ... verbose=True,
- ... )
- ```
-
-
-
- The initial order of the operations is not guaranteed! All deletions will be performed before additions. If you are
- not updating the same file multiple times, you are fine.
-
-
- """
- addition_commits: List[List[CommitOperationAdd]] = []
- deletion_commits: List[List[CommitOperationDelete]] = []
-
- additions: List[CommitOperationAdd] = []
- additions_size = 0
- deletions: List[CommitOperationDelete] = []
- for op in operations:
- if isinstance(op, CommitOperationDelete):
- # Group delete operations together
- deletions.append(op)
- if len(deletions) >= max_operations_per_commit:
- deletion_commits.append(deletions)
- deletions = []
-
- elif op.upload_info.size >= max_upload_size_per_commit:
- # Upload huge files 1 by 1
- addition_commits.append([op])
-
- elif additions_size + op.upload_info.size < max_upload_size_per_commit:
- # Group other additions and split if size limit is reached (either max_nb_files or max_upload_size)
- additions.append(op)
- additions_size += op.upload_info.size
-
- else:
- addition_commits.append(additions)
- additions = [op]
- additions_size = op.upload_info.size
-
- if len(additions) >= max_operations_per_commit:
- addition_commits.append(additions)
- additions = []
- additions_size = 0
-
- if len(additions) > 0:
- addition_commits.append(additions)
- if len(deletions) > 0:
- deletion_commits.append(deletions)
-
- return addition_commits, deletion_commits
-
-
-@dataclass
-class MultiCommitStep:
- """Dataclass containing a list of CommitOperation to commit at once.
-
- A [`MultiCommitStep`] is one atomic part of a [`MultiCommitStrategy`]. Each step is identified by its own
- deterministic ID based on the list of commit operations (hexadecimal sha256). ID is persistent between re-runs if
- the list of commits is kept the same.
- """
-
- operations: List[Union[CommitOperationAdd, CommitOperationDelete]]
-
- id: str = field(init=False)
- completed: bool = False
-
- def __post_init__(self) -> None:
- if len(self.operations) == 0:
- raise ValueError("A MultiCommitStep must have at least 1 commit operation, got 0.")
-
- # Generate commit id
- sha = sha256()
- for op in self.operations:
- if isinstance(op, CommitOperationAdd):
- sha.update(b"ADD")
- sha.update(op.path_in_repo.encode())
- sha.update(op.upload_info.sha256)
- elif isinstance(op, CommitOperationDelete):
- sha.update(b"DELETE")
- sha.update(op.path_in_repo.encode())
- sha.update(str(op.is_folder).encode())
- else:
- raise NotImplementedError()
- self.id = sha.hexdigest()
-
- def __str__(self) -> str:
- """Format a step for PR description.
-
- Formatting can be changed in the future as long as it is single line, starts with `- [ ]`/`- [x]` and contains
- `self.id`. Must be able to match `STEP_ID_REGEX`.
- """
- additions = [op for op in self.operations if isinstance(op, CommitOperationAdd)]
- file_deletions = [op for op in self.operations if isinstance(op, CommitOperationDelete) and not op.is_folder]
- folder_deletions = [op for op in self.operations if isinstance(op, CommitOperationDelete) and op.is_folder]
- if len(additions) > 0:
- return (
- f"- [{'x' if self.completed else ' '}] Upload {len(additions)} file(s) "
- f"totalling {_format_size(sum(add.upload_info.size for add in additions))}"
- f" ({self.id})"
- )
- else:
- return (
- f"- [{'x' if self.completed else ' '}] Delete {len(file_deletions)} file(s) and"
- f" {len(folder_deletions)} folder(s) ({self.id})"
- )
-
-
-@dataclass
-class MultiCommitStrategy:
- """Dataclass containing a list of [`MultiCommitStep`] to commit iteratively.
-
- A strategy is identified by its own deterministic ID based on the list of its steps (hexadecimal sha256). ID is
- persistent between re-runs if the list of commits is kept the same.
- """
-
- addition_commits: List[MultiCommitStep]
- deletion_commits: List[MultiCommitStep]
-
- id: str = field(init=False)
- all_steps: Set[str] = field(init=False)
-
- def __post_init__(self) -> None:
- self.all_steps = {step.id for step in self.addition_commits + self.deletion_commits}
- if len(self.all_steps) < len(self.addition_commits) + len(self.deletion_commits):
- raise ValueError("Got duplicate commits in MultiCommitStrategy. All commits must be unique.")
-
- if len(self.all_steps) == 0:
- raise ValueError("A MultiCommitStrategy must have at least 1 commit, got 0.")
-
- # Generate strategy id
- sha = sha256()
- for step in self.addition_commits + self.deletion_commits:
- sha.update("new step".encode())
- sha.update(step.id.encode())
- self.id = sha.hexdigest()
-
-
-def multi_commit_create_pull_request(
- api: "HfApi",
- repo_id: str,
- commit_message: str,
- commit_description: Optional[str],
- strategy: MultiCommitStrategy,
- token: Optional[str],
- repo_type: Optional[str],
-) -> DiscussionWithDetails:
- return api.create_pull_request(
- repo_id=repo_id,
- title=f"[WIP] {commit_message} (multi-commit {strategy.id})",
- description=multi_commit_generate_comment(
- commit_message=commit_message, commit_description=commit_description, strategy=strategy
- ),
- token=token,
- repo_type=repo_type,
- )
-
-
-def multi_commit_generate_comment(
- commit_message: str,
- commit_description: Optional[str],
- strategy: MultiCommitStrategy,
-) -> str:
- return MULTI_COMMIT_PR_DESCRIPTION_TEMPLATE.format(
- commit_message=commit_message,
- commit_description=commit_description or "",
- multi_commit_id=strategy.id,
- multi_commit_strategy="\n".join(
- str(commit) for commit in strategy.deletion_commits + strategy.addition_commits
- ),
- )
-
-
-def multi_commit_parse_pr_description(description: str) -> Set[str]:
- return {match[1] for match in STEP_ID_REGEX.findall(description)}
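The deleted `_multi_commits.py` module above tracks progress by writing one checkbox line per step into the PR description and then parsing those lines back out with `STEP_ID_REGEX`. A minimal sketch of that round trip; the group names `completed` and `step_id` are my reconstruction of the names stripped from the regex above, and the description string and 64-character step ids are made up for illustration:

```python
import re

# Mirrors STEP_ID_REGEX above: a markdown checkbox, arbitrary text, then a 64-char hex step id.
STEP_ID_REGEX = re.compile(
    r"- \[(?P<completed>[ |x])\].*(?P<step_id>[a-fA-F0-9]{64})", flags=re.MULTILINE
)

step_a = "a" * 64  # placeholder step id, not a real sha256
step_b = "b" * 64
description = (
    "Scheduled commits:\n"
    f"- [x] Upload 3 file(s) totalling 1.2 GB ({step_a})\n"
    f"- [ ] Delete 2 file(s) and 0 folder(s) ({step_b})\n"
)

# findall returns (completed, step_id) tuples; taking match[1] collects the step ids,
# exactly as multi_commit_parse_pr_description does above.
step_ids = {match[1] for match in STEP_ID_REGEX.findall(description)}
print(step_ids)  # {'aaa...a', 'bbb...b'}
```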
diff --git a/spaces/Daniton/midjourney-singular/README.md b/spaces/Daniton/midjourney-singular/README.md
deleted file mode 100644
index 46355cc6a2446e208faec43cc6a29c87030bfcbc..0000000000000000000000000000000000000000
--- a/spaces/Daniton/midjourney-singular/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Superjourney
-emoji: 👁
-colorFrom: blue
-colorTo: pink
-sdk: gradio
-sdk_version: 3.16.2
-app_file: app.py
-pinned: false
-duplicated_from: Daniton/superjourney
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/DeepakJaiz/QA_evaluator/text_utils.py b/spaces/DeepakJaiz/QA_evaluator/text_utils.py
deleted file mode 100644
index fd4188ebfd069ff7cefed0ae8cfb5a58ff090579..0000000000000000000000000000000000000000
--- a/spaces/DeepakJaiz/QA_evaluator/text_utils.py
+++ /dev/null
@@ -1,109 +0,0 @@
-import re
-from langchain.prompts import PromptTemplate
-
-
-def clean_pdf_text(text: str) -> str:
- """Cleans text extracted from a PDF file."""
- # TODO: Remove References/Bibliography section.
- return remove_citations(text)
-
-
-def remove_citations(text: str) -> str:
- """Removes in-text citations from a string."""
- # (Author, Year)
- text = re.sub(r'\([A-Za-z0-9,.\s]+\s\d{4}\)', '', text)
- # [1], [2], [3-5], [3, 33, 49, 51]
- text = re.sub(r'\[[0-9,-]+(,\s[0-9,-]+)*\]', '', text)
- return text
-
-
-template = """You are a teacher grading a quiz.
-You are given a question, the student's answer, and the true answer, and are asked to score the student answer as either CORRECT or INCORRECT.
-Example Format:
-QUESTION: question here
-STUDENT ANSWER: student's answer here
-TRUE ANSWER: true answer here
-GRADE: CORRECT or INCORRECT here
-Grade the student answers based ONLY on their factual accuracy. Ignore differences in punctuation and phrasing between the student answer and true answer. It is OK if the student answer contains more information than the true answer, as long as it does not contain any conflicting statements. Begin!
-QUESTION: {query}
-STUDENT ANSWER: {result}
-TRUE ANSWER: {answer}
-GRADE:
-And explain why the STUDENT ANSWER is correct or incorrect.
-"""
-
-GRADE_ANSWER_PROMPT = PromptTemplate(input_variables=["query", "result", "answer"], template=template)
-
-template = """You are a teacher grading a quiz.
-You are given a question, the student's answer, and the true answer, and are asked to score the student answer as either CORRECT or INCORRECT.
-You are also asked to identify potential sources of bias in the question and in the true answer.
-Example Format:
-QUESTION: question here
-STUDENT ANSWER: student's answer here
-TRUE ANSWER: true answer here
-GRADE: CORRECT or INCORRECT here
-Grade the student answers based ONLY on their factual accuracy. Ignore differences in punctuation and phrasing between the student answer and true answer. It is OK if the student answer contains more information than the true answer, as long as it does not contain any conflicting statements. Begin!
-QUESTION: {query}
-STUDENT ANSWER: {result}
-TRUE ANSWER: {answer}
-GRADE:
-And explain why the STUDENT ANSWER is correct or incorrect, identify potential sources of bias in the QUESTION, and identify potential sources of bias in the TRUE ANSWER.
-"""
-
-GRADE_ANSWER_PROMPT_BIAS_CHECK = PromptTemplate(input_variables=["query", "result", "answer"], template=template)
-
-template = """You are assessing a submitted student answer to a question relative to the true answer based on the provided criteria:
-
- ***
- QUESTION: {query}
- ***
- STUDENT ANSWER: {result}
- ***
- TRUE ANSWER: {answer}
- ***
- Criteria:
- relevance: Is the submission referring to a real quote from the text?"
- conciseness: Is the answer concise and to the point?"
- correct: Is the answer correct?"
- ***
- Does the submission meet the criterion? First, write out in a step by step manner your reasoning about the criterion to be sure that your conclusion is correct. Avoid simply stating the correct answers at the outset. Then print the "CORRECT" or "INCORRECT" (without quotes or punctuation) on its own line corresponding to the correct answer.
- Reasoning:
-"""
-
-GRADE_ANSWER_PROMPT_OPENAI = PromptTemplate(input_variables=["query", "result", "answer"], template=template)
-
-template = """You are a teacher grading a quiz.
-You are given a question, the student's answer, and the true answer, and are asked to score the student answer as either CORRECT or INCORRECT.
-Example Format:
-QUESTION: question here
-STUDENT ANSWER: student's answer here
-TRUE ANSWER: true answer here
-GRADE: CORRECT or INCORRECT here
-Grade the student answers based ONLY on their factual accuracy. Ignore differences in punctuation and phrasing between the student answer and true answer. It is OK if the student answer contains more information than the true answer, as long as it does not contain any conflicting statements. Begin!
-QUESTION: {query}
-STUDENT ANSWER: {result}
-TRUE ANSWER: {answer}
-GRADE:"""
-
-GRADE_ANSWER_PROMPT_FAST = PromptTemplate(input_variables=["query", "result", "answer"], template=template)
-
-template = """
- Given the question: \n
- {query}
- Decide if the following retrieved context is relevant: \n
- {result}
- Answer in the following format: \n
- "Context is relevant: True or False." \n
- And explain why it supports or does not support the correct answer: {answer}"""
-
-GRADE_DOCS_PROMPT = PromptTemplate(input_variables=["query", "result", "answer"], template=template)
-
-template = """
- Given the question: \n
- {query}
- Decide if the following retrieved context is relevant to the {answer}: \n
- {result}
- Answer in the following format: \n
- "Context is relevant: True or False." \n """
-
-GRADE_DOCS_PROMPT_FAST = PromptTemplate(input_variables=["query", "result", "answer"], template=template)
\ No newline at end of file
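All of the grading templates above take the same three variables (`query`, `result`, `answer`). A minimal sketch of rendering one of them into the final prompt string; the question and answers below are invented for illustration:

```python
from langchain.prompts import PromptTemplate

# Same shape as the prompts above: three input variables substituted into one template string.
grade_prompt = PromptTemplate(
    input_variables=["query", "result", "answer"],
    template="QUESTION: {query}\nSTUDENT ANSWER: {result}\nTRUE ANSWER: {answer}\nGRADE:",
)

# .format() only performs the substitution; the resulting string is what would be sent to the LLM.
rendered = grade_prompt.format(
    query="What does the retriever return?",        # hypothetical question
    result="A list of relevant document chunks.",   # hypothetical student answer
    answer="The top-k most relevant text chunks.",  # hypothetical true answer
)
print(rendered)
```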
diff --git a/spaces/Detomo/aisatsu-api/Dockerfile b/spaces/Detomo/aisatsu-api/Dockerfile
deleted file mode 100644
index 44980662064947625e6fff3e0b6d03a368b6d55b..0000000000000000000000000000000000000000
--- a/spaces/Detomo/aisatsu-api/Dockerfile
+++ /dev/null
@@ -1,26 +0,0 @@
-FROM python:3.9
-
-WORKDIR /content
-
-RUN mkdir /content/cache/
-
-ENV TRANSFORMERS_CACHE=/content/cache/
-
-COPY ./requirements.txt /content/requirements.txt
-
-RUN pip install --no-cache-dir --upgrade -r /content/requirements.txt
-RUN apt-get update && apt-get install -y ffmpeg
-
-COPY . .
-
-RUN adduser --disabled-password --gecos '' admin
-RUN adduser admin sudo
-RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
-
-RUN chown -R admin:admin /content
-RUN chmod -R 777 /content
-USER admin
-
-EXPOSE 7860
-
-CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "7860"]
diff --git a/spaces/Dimalker/Faceswapper/roop/metadata.py b/spaces/Dimalker/Faceswapper/roop/metadata.py
deleted file mode 100644
index 35b0f0245a38eb9ec024f2ed2c829044f6051c29..0000000000000000000000000000000000000000
--- a/spaces/Dimalker/Faceswapper/roop/metadata.py
+++ /dev/null
@@ -1,2 +0,0 @@
-name = 'roop'
-version = '1.1.0'
diff --git a/spaces/ECCV2022/PSG/OpenPSG/configs/vctree/panoptic_fpn_r50_fpn_1x_sgdet_psg.py b/spaces/ECCV2022/PSG/OpenPSG/configs/vctree/panoptic_fpn_r50_fpn_1x_sgdet_psg.py
deleted file mode 100644
index d0f05d87f47ebc28920183e317aa26d0abb15026..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/PSG/OpenPSG/configs/vctree/panoptic_fpn_r50_fpn_1x_sgdet_psg.py
+++ /dev/null
@@ -1,49 +0,0 @@
-_base_ = [
- '../motifs/panoptic_fpn_r50_fpn_1x_predcls_psg.py',
-]
-
-model = dict(
- relation_head=dict(
- type='VCTreeHead',
- head_config=dict(
- # NOTE: Evaluation type
- use_gt_box=False,
- use_gt_label=False,
- ),
- ),
- roi_head=dict(bbox_head=dict(type='SceneGraphBBoxHead'), ),
-)
-
-evaluation = dict(interval=1,
- metric='sgdet',
- relation_mode=True,
- classwise=True,
- iou_thrs=0.5,
- detection_method='pan_seg')
-
-# Change batch size and learning rate
-data = dict(samples_per_gpu=16,
- # workers_per_gpu=2
- )
-# optimizer = dict(lr=0.003)
-
-# Log config
-project_name = 'openpsg'
-expt_name = 'vctree_panoptic_fpn_r50_fpn_1x_sgdet_psg'
-work_dir = f'./work_dirs/{expt_name}'
-
-log_config = dict(
- interval=50,
- hooks=[
- dict(type='TextLoggerHook'),
- # dict(type='TensorboardLoggerHook')
- dict(
- type='WandbLoggerHook',
- init_kwargs=dict(
- project=project_name,
- name=expt_name,
- # config=work_dir + "/cfg.yaml"
- ),
- ),
- ],
-)
diff --git a/spaces/EDGAhab/Aatrox-Talking/commons.py b/spaces/EDGAhab/Aatrox-Talking/commons.py
deleted file mode 100644
index 9ad0444b61cbadaa388619986c2889c707d873ce..0000000000000000000000000000000000000000
--- a/spaces/EDGAhab/Aatrox-Talking/commons.py
+++ /dev/null
@@ -1,161 +0,0 @@
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size*dilation - dilation)/2)
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def intersperse(lst, item):
- result = [item] * (len(lst) * 2 + 1)
- result[1::2] = lst
- return result
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
- """KL(P||Q)"""
- kl = (logs_q - logs_p) - 0.5
- kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q)
- return kl
-
-
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def get_timing_signal_1d(
- length, channels, min_timescale=1.0, max_timescale=1.0e4):
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = (
- math.log(float(max_timescale) / float(min_timescale)) /
- (num_timescales - 1))
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment)
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2,3) * mask
- return path
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- total_norm = total_norm ** (1. / norm_type)
- return total_norm
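The mask and slicing helpers above are easiest to verify on a tiny tensor. A quick, self-contained check of the `sequence_mask` logic (the lengths are arbitrary example values):

```python
import torch

def sequence_mask(length, max_length=None):
    # Same logic as sequence_mask in the deleted commons.py above.
    if max_length is None:
        max_length = length.max()
    x = torch.arange(max_length, dtype=length.dtype, device=length.device)
    return x.unsqueeze(0) < length.unsqueeze(1)

lengths = torch.tensor([2, 4])  # two sequences, of length 2 and 4
print(sequence_mask(lengths))
# tensor([[ True,  True, False, False],
#         [ True,  True,  True,  True]])
```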
diff --git a/spaces/ElAnon/emsai/app.py b/spaces/ElAnon/emsai/app.py
deleted file mode 100644
index 080e00e9c0f7ec194b6b27de3a1c38f4465ace6d..0000000000000000000000000000000000000000
--- a/spaces/ElAnon/emsai/app.py
+++ /dev/null
@@ -1,17 +0,0 @@
-from aitextgen import aitextgen
-import gradio as gr
-
-
-ai=aitextgen(model='EleutherAI/gpt-neo-1.3B',to_gpu=False) # EleutherAI/gpt-neo-2.7B EleutherAI/gpt-neo-1.3B
-
-def ai_text(Input):
- generated_text = ai.generate_one(max_length = 450, prompt = Input, no_repeat_ngram_size = 3) #repetition_penalty = 1.9)
- #print(type(generated_text))
- return generated_text
-
-
-title_ = "AIBG"
-description_ = " Enter 450 words blog "
-output_text = gr.outputs.Textbox()
-iface=gr.Interface(ai_text,"textbox", output_text, title=title_,description=description_)#.launch()
-iface.launch()
\ No newline at end of file
diff --git a/spaces/FathomNet/fathomnet2023-comp-baseline/app.py b/spaces/FathomNet/fathomnet2023-comp-baseline/app.py
deleted file mode 100644
index e2ac80f7f994d7798eb543166137703abf396dc9..0000000000000000000000000000000000000000
--- a/spaces/FathomNet/fathomnet2023-comp-baseline/app.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import glob
-import gradio as gr
-from ultralytics import YOLO
-
-model_path = "fathomnet23-comp-baseline.pt"
-model = YOLO(model_path)
-
-
-def run(image_path):
- results = model.predict(image_path)
- return results[0].plot()[:, :, ::-1] # reverse channels for gradio
-
-
-title = "FathomNet2023 Competition Baseline"
-description = (
- "Gradio demo for the FathomNet2023 Baseline Model: Developed by researchers"
- " at the Monterey Bay Aquarium Research Institute (MBARI) to serve as a"
- " baseline YOLOv8m model for the FathomNet2023 Kaggle Competition, in"
- " conjunction with the Fine Grained Visual Categorization workshop at CVPR"
- " 2023. The training dataset comprises both the FathomNet2023 competition"
- " split and internal MBARI data, including 290 fine-grained taxonomic"
- " categories of benthic animals."
-)
-
-examples = glob.glob("images/*.png")
-
-interface = gr.Interface(
- run,
- inputs=[gr.components.Image(type="filepath")],
- outputs=gr.components.Image(type="numpy"),
- title=title,
- description=description,
- examples=examples,
-)
-
-interface.queue().launch()
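The same checkpoint can be run outside Gradio. A sketch that batch-annotates the example images, relying only on the `YOLO`/`predict`/`plot` calls already used above; the `_pred.png` output naming is my own convention, and the checkpoint file is assumed to be present locally:

```python
import glob

from PIL import Image
from ultralytics import YOLO

model = YOLO("fathomnet23-comp-baseline.pt")  # same checkpoint as the Space above

for image_path in glob.glob("images/*.png"):
    results = model.predict(image_path)
    annotated = results[0].plot()[:, :, ::-1]  # plot() returns BGR; reverse to RGB as the app does
    Image.fromarray(annotated).save(image_path.replace(".png", "_pred.png"))
```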
diff --git a/spaces/FrozenBurning/SceneDreamer/install.sh b/spaces/FrozenBurning/SceneDreamer/install.sh
deleted file mode 100644
index cb2ef938b25db517e0437c9219d46c0e4d472293..0000000000000000000000000000000000000000
--- a/spaces/FrozenBurning/SceneDreamer/install.sh
+++ /dev/null
@@ -1,21 +0,0 @@
-export CUDA_VERSION=$(nvcc --version| grep -Po "(\d+\.)+\d+" | head -1)
-CURRENT=$(pwd)
-for p in correlation channelnorm resample2d bias_act upfirdn2d; do
- cd imaginaire/third_party/${p};
- rm -rf build dist *info;
- python setup.py install;
- python -m pip install .
- cd ${CURRENT};
-done
-
-for p in gancraft/voxlib; do
- cd imaginaire/model_utils/${p};
- make all
- python -m pip install .
- cd ${CURRENT};
-done
-
-cd gridencoder
-python setup.py build_ext --inplace
-python -m pip install .
-cd ${CURRENT}
\ No newline at end of file
diff --git a/spaces/GT4SD/multitask-text-and-chemistry-t5/model_cards/article.md b/spaces/GT4SD/multitask-text-and-chemistry-t5/model_cards/article.md
deleted file mode 100644
index 680d5222e6c57e5143d12a2b743dc76809d4802e..0000000000000000000000000000000000000000
--- a/spaces/GT4SD/multitask-text-and-chemistry-t5/model_cards/article.md
+++ /dev/null
@@ -1,63 +0,0 @@
-# Model documentation & parameters
-
-**Language model**: Type of language model to be used.
-
-**Prefix**: Task specific prefix for task definition (see the provided examples for specific tasks).
-
-**Text prompt**: The text input of the model.
-
-**Num beams**: Number of beams to be used for the text generation.
-
-
-
-# Model card -- Multitask Text and Chemistry T5
-
-**Model Details**: Multitask Text and Chemistry T5 : a multi-domain, multi-task language model to solve a wide range of tasks in both the chemical and natural language domains. Published by [Christofidellis et al.](https://arxiv.org/pdf/2301.12586.pdf)
-
-**Developers**: Dimitrios Christofidellis*, Giorgio Giannone*, Jannis Born, Teodoro Laino and Matteo Manica from IBM Research and Ole Winther from Technical University of Denmark.
-
-**Distributors**: Model natively integrated into GT4SD.
-
-**Model date**: 2022.
-
-**Model type**: A Transformer-based language model that is trained on a multi-domain and a multi-task dataset by aggregating available datasets
-for the tasks of Forward reaction prediction, Retrosynthesis, Molecular captioning, Text-conditional de novo generation and Paragraph to actions.
-
-**Information about training algorithms, parameters, fairness constraints or other applied approaches, and features**:
-N.A.
-
-**Paper or other resource for more information**:
-The Multitask Text and Chemistry T5 [Christofidellis et al.](https://arxiv.org/pdf/2301.12586.pdf)
-
-
-**License**: MIT
-
-**Where to send questions or comments about the model**: Open an issue on [GT4SD repository](https://github.com/GT4SD/gt4sd-core).
-
-**Intended Use. Use cases that were envisioned during development**: N.A.
-
-**Primary intended uses/users**: N.A.
-
-**Out-of-scope use cases**: Production-level inference, producing molecules with harmful properties.
-
-**Metrics**: N.A.
-
-**Datasets**: N.A.
-
-**Ethical Considerations**: Unclear, please consult with original authors in case of questions.
-
-**Caveats and Recommendations**: Unclear, please consult with original authors in case of questions.
-
-Model card prototype inspired by [Mitchell et al. (2019)](https://dl.acm.org/doi/abs/10.1145/3287560.3287596?casa_token=XD4eHiE2cRUAAAAA:NL11gMa1hGPOUKTAbtXnbVQBDBbjxwcjGECF_i-WC_3g1aBgU1Hbz_f2b4kI_m1in-w__1ztGeHnwHs)
-
-## Citation
-```bib
-@article{christofidellis2023unifying,
- title={Unifying Molecular and Textual Representations via Multi-task Language Modelling},
- author={Christofidellis, Dimitrios and Giannone, Giorgio and Born, Jannis and Winther, Ole and Laino, Teodoro and Manica, Matteo},
- journal={arXiv preprint arXiv:2301.12586},
- year={2023}
-}
-```
-
-*equal contribution
\ No newline at end of file
diff --git a/spaces/Gen-Sim/Gen-Sim/scripts/traintest_scripts/train_test_multi_task_goal_demo10.sh b/spaces/Gen-Sim/Gen-Sim/scripts/traintest_scripts/train_test_multi_task_goal_demo10.sh
deleted file mode 100644
index aebc7e8f9ac6b536d765fc9f5c566871811f9994..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/scripts/traintest_scripts/train_test_multi_task_goal_demo10.sh
+++ /dev/null
@@ -1,74 +0,0 @@
-#!/bin/bash
-
-DATA_DIR=$1
-TRAINTASK=${2-'[rainbow-stack,bowl-ball-placement]'}
-TESTTASK=${3-'[rainbow-stack,bowl-ball-placement]'}
-TASKNAME=${4-'mix-two'}
-STEPS=${5-'10000'}
-
-DISP=False
-
-echo "Training multi-task dataset... Folder: $DATA_DIR Task $TRAINTASK"
-
-# You can parallelize these depending on how much resources you have
-
-#############################
-## Language-Conditioned Tasks
-# [align-rope,assembling-kits-seq-seen-colors,assembling-kits-seq-unseen-colors,packing-shapes,stack-block-pyramid-seq-unseen-colors,
-# separating-piles-seen-colors,separating-piles-unseen-colors,towers-of-hanoi-seq-seen-colors,towers-of-hanoi-seq-unseen-colors]
-
-# example: sh scripts/traintest_scripts/train_test_multi_task_indistribution.sh data "[align-rope,sweeping-piles,align-box-corner,block-insertion,manipulating-rope,place-red-in-green]" 6taskindomain
-# sh scripts/traintest_scripts/train_test_multi_task_goal.sh data "[align-rope,sweeping-piles,align-box-corner,block-insertion,manipulating-rope,place-red-in-green]" "[towers-of-hanoi]" 6taskgen
-# sh scripts/traintest_scripts/train_test_multi_task_goal.sh data "[align-rope,sweeping-piles,align-box-corner]" "[towers-of-hanoi]" 3taskgen
-# sh scripts/traintest_scripts/train_test_multi_task_goal.sh data "[align-rope]" "[towers-of-hanoi]" 1taskgen
-# sh scripts/traintest_scripts/train_test_multi_task_goal.sh data "[align-rope,sweeping-piles,align-box-corner,block-insertion,manipulating-rope,place-red-in-green]" "[towers-of-hanoi]" 10taskgen
-
-trap "kill 0" SIGINT
-
-python cliport/train.py train.task=$TRAINTASK \
- train.agent=cliport \
- train.model_task=$TASKNAME \
- train.attn_stream_fusion_type=add \
- train.trans_stream_fusion_type=conv \
- train.lang_fusion_type=mult \
- train.n_demos=10 \
- train.n_steps=${STEPS} \
- dataset.cache=True \
- train.exp_folder=exps/exp-$TASKNAME-smaller \
- dataset.type=multi \
- train.load_from_last_ckpt=False \
- train.training_step_scale=500 # scale up training steps
-
-
-# Convert Python list to Bash array
-
-bash_array=$(python3 -c "import sys; print(' '.join((sys.argv[1])[1:-1].split(',')))" "$TRAINTASK")
-
-
-# Convert the space-separated string to a bash array
-echo "Testing multi-task dataset... Folder: $DATA_DIR Task $TESTTASK"
-
-
-for task in $bash_array
- do
- echo "Testing $task"
- # TEST
-
- # bash scripts/generate_gpt_datasets.sh data $task
-
- python cliport/eval.py model_task=$TASKNAME \
- eval_task=$task \
- agent=cliport \
- mode=test \
- n_demos=100 \
- train_demos=10 \
- checkpoint_type=test_best \
- type=single \
- exp_folder=exps/exp-$TASKNAME-smaller \
- update_results=True &
- done
-wait
-
-python notebooks/print_results.py -r=exps/exp-$TASKNAME-smaller
-
-echo "Finished Training."
\ No newline at end of file
diff --git a/spaces/GeorgeOrville/bingo/src/components/chat-panel.tsx b/spaces/GeorgeOrville/bingo/src/components/chat-panel.tsx
deleted file mode 100644
index 1fbc3c2bf05b914e0c229661832fbb560745f488..0000000000000000000000000000000000000000
--- a/spaces/GeorgeOrville/bingo/src/components/chat-panel.tsx
+++ /dev/null
@@ -1,153 +0,0 @@
-'use client'
-
-import * as React from 'react'
-import Image from 'next/image'
-import Textarea from 'react-textarea-autosize'
-import { useAtomValue } from 'jotai'
-import { useEnterSubmit } from '@/lib/hooks/use-enter-submit'
-import { cn } from '@/lib/utils'
-
-import BrushIcon from '@/assets/images/brush.svg'
-import ChatIcon from '@/assets/images/chat.svg'
-import VisualSearchIcon from '@/assets/images/visual-search.svg'
-import SendIcon from '@/assets/images/send.svg'
-import PinIcon from '@/assets/images/pin.svg'
-import PinFillIcon from '@/assets/images/pin-fill.svg'
-
-import { useBing } from '@/lib/hooks/use-bing'
-import { voiceListenAtom } from '@/state'
-import Voice from './voice'
-import { ChatImage } from './chat-image'
-import { ChatAttachments } from './chat-attachments'
-
-export interface ChatPanelProps
- extends Pick<
- ReturnType<typeof useBing>,
- | 'generating'
- | 'input'
- | 'setInput'
- | 'sendMessage'
- | 'resetConversation'
- | 'isSpeaking'
- | 'attachmentList'
- | 'uploadImage'
- | 'setAttachmentList'
- > {
- id?: string
- className?: string
-}
-
-export function ChatPanel({
- isSpeaking,
- generating,
- input,
- setInput,
- className,
- sendMessage,
- resetConversation,
- attachmentList,
- uploadImage,
- setAttachmentList
-}: ChatPanelProps) {
- const inputRef = React.useRef(null)
- const {formRef, onKeyDown} = useEnterSubmit()
- const [focused, setFocused] = React.useState(false)
- const [active, setActive] = React.useState(false)
- const [pin, setPin] = React.useState(false)
- const [tid, setTid] = React.useState()
- const voiceListening = useAtomValue(voiceListenAtom)
-
- const setBlur = React.useCallback(() => {
- clearTimeout(tid)
- setActive(false)
- const _tid = setTimeout(() => setFocused(false), 2000);
- setTid(_tid)
- }, [tid])
-
- const setFocus = React.useCallback(() => {
- setFocused(true)
- setActive(true)
- clearTimeout(tid)
- inputRef.current?.focus()
- }, [tid])
-
- React.useEffect(() => {
- if (input) {
- setFocus()
- }
- }, [input])
-
- return (
-
- )
-}
diff --git a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/Waifu2x/Readme.md b/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/Waifu2x/Readme.md
deleted file mode 100644
index bc528c3474faeff4784aecfc44c9fd8aeac092b6..0000000000000000000000000000000000000000
--- a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/Waifu2x/Readme.md
+++ /dev/null
@@ -1,167 +0,0 @@
-# Waifu2x
-
- Re-implementation of the original [waifu2x](https://github.com/nagadomi/waifu2x) in PyTorch with additional super resolution models. This repo is mainly used to explore interesting super resolution models. User-friendly tools may not be available now ><.
-
-## Dependencies
-* Python 3x
-* [PyTorch](https://pytorch.org/) >= 1 ( > 0.41 should also work, but is not guaranteed)
-* [Nvidia/Apex](https://github.com/NVIDIA/apex/) (used for mixed precision training, you may use the [python codes](https://github.com/NVIDIA/apex/tree/master/apex/fp16_utils) directly)
-
-Optional: Nvidia GPU. Model inference can also run on CPU (fp32 only).
-
-## What's New
-* Add [CARN Model (Fast, Accurate, and Lightweight Super-Resolution with Cascading Residual Network)](https://github.com/nmhkahn/CARN-pytorch). Model codes are adapted from the authors' [github repo](https://github.com/nmhkahn/CARN-pytorch). I add [Spatial Channel Squeeze Excitation](https://arxiv.org/abs/1709.01507) and swap all 1x1 convolutions with 3x3 standard convolutions. The model is trained in fp16 with Nvidia's [apex](https://github.com/NVIDIA/apex). Details and plots on model variants can be found in [docs/CARN](./docs/CARN)
-
-* Dilated convolution seems less effective in super resolution (if it does not make the model worse), though it brings some improvement in image segmentation, especially when the dilation rate increases and then decreases. Further investigation is needed.
-
-## How to Use
-Compare the input image and upscaled image
-```python
-from utils.prepare_images import *
-from Models import *
-from torchvision.utils import save_image
-model_cran_v2 = CARN_V2(color_channels=3, mid_channels=64, conv=nn.Conv2d,
- single_conv_size=3, single_conv_group=1,
- scale=2, activation=nn.LeakyReLU(0.1),
- SEBlock=True, repeat_blocks=3, atrous=(1, 1, 1))
-
-model_cran_v2 = network_to_half(model_cran_v2)
-checkpoint = "model_check_points/CRAN_V2/CARN_model_checkpoint.pt"
-model_cran_v2.load_state_dict(torch.load(checkpoint, 'cpu'))
-# if use GPU, then comment out the next line so it can use fp16.
-model_cran_v2 = model_cran_v2.float()
-
-demo_img = "input_image.png"
-img = Image.open(demo_img).convert("RGB")
-
-# origin
-img_t = to_tensor(img).unsqueeze(0)
-
-# used to compare the origin
-img = img.resize((img.size[0] // 2, img.size[1] // 2), Image.BICUBIC)
-
-# overlapping split
-# if input image is too large, then split it into overlapped patches
-# details can be found at [here](https://github.com/nagadomi/waifu2x/issues/238)
-img_splitter = ImageSplitter(seg_size=64, scale_factor=2, boarder_pad_size=3)
-img_patches = img_splitter.split_img_tensor(img, scale_method=None, img_pad=0)
-with torch.no_grad():
- out = [model_cran_v2(i) for i in img_patches]
-img_upscale = img_splitter.merge_img_tensor(out)
-
-final = torch.cat([img_t, img_upscale])
-save_image(final, 'out.png', nrow=2)
-```
-
- ## Training
-
- If possible, fp16 training is preferred because it is much faster with minimal quality decrease.
-
- A sample training script is available in `train.py`, but you may need to change some lines.
-
- ### Image Processing
- Original images are all at least 3k x 3k. I downsample them with LANCZOS so that one side is at most 2048, then randomly cut them into 256x256 patches as targets and use 128x128 patches with JPEG noise as input images. All input patches are at least 14 kB, and they are stored in SQLite in BLOB format. SQLite seems to have [better performance](https://www.sqlite.org/intern-v-extern-blob.html) than the file system for small objects. The H5 file format may not be optimal because of its larger size.
-
- Although convolutions can take in images of any size, the content of the image matters. For real-life images, small patches may still contain variance in color, brightness, etc. within small regions, but for digitally drawn images, colors are added in block areas. A small patch may end up showing entirely one color, and the model has little to learn.
-
- For example, the following two plots come from CARN and have the same settings, including initial parameters. Both training loss and SSIM are lower for 64x64, but it performs worse at test time compared to 128x128.
-
- 
- 
-
-
-Downsampling methods are uniformly chosen among ```[PIL.Image.BILINEAR, PIL.Image.BICUBIC, PIL.Image.LANCZOS]``` , so different patches in the same image might be down-scaled in different ways.
-
-Image noise comes from JPEG compression only. It is added by re-encoding PNG images into PIL's JPEG data with varying quality. Noise level 1 means the quality ranges uniformly over [75, 95]; level 2 means it ranges uniformly over [50, 75].
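-
-A minimal sketch of this degradation (random downsampling filter plus JPEG re-encoding at a random quality), assuming PIL and an in-memory buffer; the exact pipeline in `train.py` may differ.
-
-```python
-# Sketch of the input degradation described above: random resize filter + JPEG noise.
-import io
-import random
-
-from PIL import Image
-
-
-def degrade(patch: Image.Image, noise_level: int = 1) -> Image.Image:
-    # Uniformly pick a downsampling method, as described above.
-    method = random.choice([Image.BILINEAR, Image.BICUBIC, Image.LANCZOS])
-    small = patch.resize((patch.width // 2, patch.height // 2), method)
-    # Level 1: quality in [75, 95]; level 2: quality in [50, 75].
-    lo, hi = (75, 95) if noise_level == 1 else (50, 75)
-    buf = io.BytesIO()
-    small.save(buf, format="JPEG", quality=random.randint(lo, hi))
-    buf.seek(0)
-    return Image.open(buf).convert("RGB")
-```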
-
-
- ## Models
- Models are tuned and modified with extra features.
-
-
-* [DCSCN 12](https://github.com/jiny2001/dcscn-super-resolution)
-
-* [CRAN](https://github.com/nmhkahn/CARN-pytorch)
-
- #### From [Waifu2x](https://github.com/nagadomi/waifu2x)
- * [Upconv7](https://github.com/nagadomi/waifu2x/blob/7d156917ae1113ab847dab15c75db7642231e7fa/lib/srcnn.lua#L360)
-
- * [Vgg_7](https://github.com/nagadomi/waifu2x/blob/7d156917ae1113ab847dab15c75db7642231e7fa/lib/srcnn.lua#L334)
-
- * [Cascaded Residual U-Net with SEBlock](https://github.com/nagadomi/waifu2x/blob/7d156917ae1113ab847dab15c75db7642231e7fa/lib/srcnn.lua#L514) (PyTorch codes are not available and under testing)
-
- #### Models Comparison
- Images are from [Key: サマボケ(Summer Pocket)](http://key.visualarts.gr.jp/summer/).
-
- The left column is the original image, and the right column is bicubic, DCSCN, CRAN_V2
-
-
-
-
-
-
-
-
- ##### Scores
- The list will be updated after I add more models.
-
-Images are twitter icons (PNG) from [Key: サマボケ(Summer Pocket)](http://key.visualarts.gr.jp/summer/). They are cropped into non-overlapping 96x96 patches and down-scaled by 2. Then images are re-encoded into JPEG format with quality from [75, 95]. Scores are PSNR and MS-SSIM.
-
-| | Total Parameters | BICUBIC | Random* |
-| :---: | :---: | :---: | :---: |
-| CRAN V2| 2,149,607 | 34.0985 (0.9924) | 34.0509 (0.9922) |
-| DCSCN 12 |1,889,974 | 31.5358 (0.9851) | 31.1457 (0.9834) |
-| Upconv 7| 552,480| 31.4566 (0.9788) | 30.9492 (0.9772) |
-
-*Down-scale methods are uniformly selected from Image.BICUBIC, Image.BILINEAR, Image.LANCZOS.
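-
-For reference, the PSNR half of these scores can be computed with a few lines of NumPy, as in the sketch below (MS-SSIM needs a dedicated implementation and is omitted).
-
-```python
-# Sketch: PSNR between two 8-bit images (higher is better; identical images give infinity).
-import numpy as np
-
-
-def psnr(img_a: np.ndarray, img_b: np.ndarray, max_val: float = 255.0) -> float:
-    mse = np.mean((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2)
-    if mse == 0:
-        return float("inf")
-    return 10.0 * np.log10(max_val ** 2 / mse)
-```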
-
-
-
-
-
- #### DCSCN
-[Fast and Accurate Image Super Resolution by Deep CNN with Skip Connection and Network in Network](https://github.com/jiny2001/dcscn-super-resolution#fast-and-accurate-image-super-resolution-by-deep-cnn-with-skip-connection-and-network-in-network)
-
- DCSCN is very interesting as it has relatively quick forward computation, and both the shallow model (layer 8) and the deep model (layer 12) are quick to train. The settings here differ from the paper.
-
- * I use exponential decay to decrease the number of feature filters in each layer. [Here](https://github.com/jiny2001/dcscn-super-resolution/blob/a868775930c6b36922897b0203468f3f1481e935/DCSCN.py#L204) is the original filter decay method.
-
- * I also increase the reconstruction filters from 48 to 128.
-
- * All activations are replaced by SELU. Dropout and weight decay are not added either, because they significantly increase the training time.
-
- * The loss function is changed from MSE to L1.
- According to [Loss Functions for Image Restoration with Neural
-Networks](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=4&cad=rja&uact=8&ved=0ahUKEwi7kuGt_7_bAhXrqVQKHRqhCcUQFghUMAM&url=http%3A%2F%2Fresearch.nvidia.com%2Fsites%2Fdefault%2Ffiles%2Fpubs%2F2017-03_Loss-Functions-for%2Fcomparison_tci.pdf&usg=AOvVaw1p0ndOKRH2ZaEsumO7d_bA), L1 seems to be more robust and converges faster than MSE. But the authors find the results from L1 and MSE are [similar](https://github.com/jiny2001/dcscn-super-resolution/issues/29).
-
-
- I need to thank jiny2001 (one of the paper's authors) for testing the difference between SELU and PReLU. SELU seems more stable and has fewer parameters to train. It is a good drop-in replacement.
- >layers=8, filters=96 and dataset=yang91+bsd200.
- 
- The details can be found [here](https://github.com/jiny2001/dcscn-super-resolution/issues/29).
-
-
-
- A pre-trained 12-layer model as well as its parameters are available. The model's run time is around 3-5 times that of Waifu2x. The output quality is usually visually indistinguishable, but its PSNR and SSIM are a bit higher. Such a comparison is not entirely fair, though, since the 12-layer model has around 1,889,974 parameters, about 5 times more than Waifu2x's Upconv_7 model.
-
- #### CARN
- Channels are set to 64 across all blocks, so residual adds are very effective. Increasing the channels to 128 lowers the loss curve a little bit but doubles the total parameters from 0.9 million to 3 million. 32 channels performs much worse. Increasing the number of cascaded blocks from 3 to 5 doesn't lower the loss much.
-
- SE blocks seem to give the most obvious improvement without increasing the computation a lot. Partial-convolution-based padding seems to have little effect, if it does not decrease the quality. Atrous convolution is about 10%-20% slower than normal convolution in PyTorch 1.0, but shows no obvious improvement.
-
-Another, more effective, variant is to add an upscaled input image to the output of the final convolution. A simple bilinearly upscaled image seems sufficient.
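-
-A sketch of that skip connection (not the exact CARN_V2 code): the network output is added to a bilinearly upscaled copy of the input.
-
-```python
-# Sketch: add a bilinearly upscaled input to the network output (global skip connection).
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-class UpscaleSkip(nn.Module):
-    def __init__(self, body: nn.Module, scale: int = 2):
-        super().__init__()
-        self.body = body    # any super-resolution network with the same scale factor
-        self.scale = scale
-
-    def forward(self, x):
-        up = F.interpolate(x, scale_factor=self.scale, mode="bilinear", align_corners=False)
-        return self.body(x) + up
-```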
-
-More examples on model configurations can be found in [docs/CARN folder](./docs/CARN/carn_plot_loss.md)
-
-
-
-
-
-### Waifu2x Original Models
-Models can load waifu2x's pre-trained weights. The function ```forward_checkpoint``` sets the ```nn.LeakyReLU``` to compute data inplace.
-
-#### Upconv_7
-Waifu2x's original model. The CPU-only PyTorch implementation is around 5 times slower for large images. The output images have very close PSNR and SSIM scores compared to images generated by the [caffe version](https://github.com/lltcggie/waifu2x-caffe), though they are not identical.
-
-#### Vgg_7
-Not tested yet, but it is ready to use.
diff --git a/spaces/Gradio-Blocks/StyleGAN-NADA/e4e/models/encoders/psp_encoders.py b/spaces/Gradio-Blocks/StyleGAN-NADA/e4e/models/encoders/psp_encoders.py
deleted file mode 100644
index dc49acd11f062cbd29f839ee3c04bce7fa84f479..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/StyleGAN-NADA/e4e/models/encoders/psp_encoders.py
+++ /dev/null
@@ -1,200 +0,0 @@
-from enum import Enum
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import Conv2d, BatchNorm2d, PReLU, Sequential, Module
-
-from e4e.models.encoders.helpers import get_blocks, bottleneck_IR, bottleneck_IR_SE, _upsample_add
-from e4e.models.stylegan2.model import EqualLinear
-
-
-class ProgressiveStage(Enum):
- WTraining = 0
- Delta1Training = 1
- Delta2Training = 2
- Delta3Training = 3
- Delta4Training = 4
- Delta5Training = 5
- Delta6Training = 6
- Delta7Training = 7
- Delta8Training = 8
- Delta9Training = 9
- Delta10Training = 10
- Delta11Training = 11
- Delta12Training = 12
- Delta13Training = 13
- Delta14Training = 14
- Delta15Training = 15
- Delta16Training = 16
- Delta17Training = 17
- Inference = 18
-
-
-class GradualStyleBlock(Module):
- def __init__(self, in_c, out_c, spatial):
- super(GradualStyleBlock, self).__init__()
- self.out_c = out_c
- self.spatial = spatial
- num_pools = int(np.log2(spatial))
- modules = []
- modules += [Conv2d(in_c, out_c, kernel_size=3, stride=2, padding=1),
- nn.LeakyReLU()]
- for i in range(num_pools - 1):
- modules += [
- Conv2d(out_c, out_c, kernel_size=3, stride=2, padding=1),
- nn.LeakyReLU()
- ]
- self.convs = nn.Sequential(*modules)
- self.linear = EqualLinear(out_c, out_c, lr_mul=1)
-
- def forward(self, x):
- x = self.convs(x)
- x = x.view(-1, self.out_c)
- x = self.linear(x)
- return x
-
-
-class GradualStyleEncoder(Module):
- def __init__(self, num_layers, mode='ir', opts=None):
- super(GradualStyleEncoder, self).__init__()
- assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152'
- assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se'
- blocks = get_blocks(num_layers)
- if mode == 'ir':
- unit_module = bottleneck_IR
- elif mode == 'ir_se':
- unit_module = bottleneck_IR_SE
- self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False),
- BatchNorm2d(64),
- PReLU(64))
- modules = []
- for block in blocks:
- for bottleneck in block:
- modules.append(unit_module(bottleneck.in_channel,
- bottleneck.depth,
- bottleneck.stride))
- self.body = Sequential(*modules)
-
- self.styles = nn.ModuleList()
- log_size = int(math.log(opts.stylegan_size, 2))
- self.style_count = 2 * log_size - 2
- self.coarse_ind = 3
- self.middle_ind = 7
- for i in range(self.style_count):
- if i < self.coarse_ind:
- style = GradualStyleBlock(512, 512, 16)
- elif i < self.middle_ind:
- style = GradualStyleBlock(512, 512, 32)
- else:
- style = GradualStyleBlock(512, 512, 64)
- self.styles.append(style)
- self.latlayer1 = nn.Conv2d(256, 512, kernel_size=1, stride=1, padding=0)
- self.latlayer2 = nn.Conv2d(128, 512, kernel_size=1, stride=1, padding=0)
-
- def forward(self, x):
- x = self.input_layer(x)
-
- latents = []
- modulelist = list(self.body._modules.values())
- for i, l in enumerate(modulelist):
- x = l(x)
- if i == 6:
- c1 = x
- elif i == 20:
- c2 = x
- elif i == 23:
- c3 = x
-
- for j in range(self.coarse_ind):
- latents.append(self.styles[j](c3))
-
- p2 = _upsample_add(c3, self.latlayer1(c2))
- for j in range(self.coarse_ind, self.middle_ind):
- latents.append(self.styles[j](p2))
-
- p1 = _upsample_add(p2, self.latlayer2(c1))
- for j in range(self.middle_ind, self.style_count):
- latents.append(self.styles[j](p1))
-
- out = torch.stack(latents, dim=1)
- return out
-
-
-class Encoder4Editing(Module):
- def __init__(self, num_layers, mode='ir', opts=None):
- super(Encoder4Editing, self).__init__()
- assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152'
- assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se'
- blocks = get_blocks(num_layers)
- if mode == 'ir':
- unit_module = bottleneck_IR
- elif mode == 'ir_se':
- unit_module = bottleneck_IR_SE
- self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False),
- BatchNorm2d(64),
- PReLU(64))
- modules = []
- for block in blocks:
- for bottleneck in block:
- modules.append(unit_module(bottleneck.in_channel,
- bottleneck.depth,
- bottleneck.stride))
- self.body = Sequential(*modules)
-
- self.styles = nn.ModuleList()
- log_size = int(math.log(opts.stylegan_size, 2))
- self.style_count = 2 * log_size - 2
- self.coarse_ind = 3
- self.middle_ind = 7
-
- for i in range(self.style_count):
- if i < self.coarse_ind:
- style = GradualStyleBlock(512, 512, 16)
- elif i < self.middle_ind:
- style = GradualStyleBlock(512, 512, 32)
- else:
- style = GradualStyleBlock(512, 512, 64)
- self.styles.append(style)
-
- self.latlayer1 = nn.Conv2d(256, 512, kernel_size=1, stride=1, padding=0)
- self.latlayer2 = nn.Conv2d(128, 512, kernel_size=1, stride=1, padding=0)
-
- self.progressive_stage = ProgressiveStage.Inference
-
- def get_deltas_starting_dimensions(self):
- ''' Get a list of the initial dimension of every delta from which it is applied '''
- return list(range(self.style_count)) # Each dimension has a delta applied to it
-
- def set_progressive_stage(self, new_stage: ProgressiveStage):
- self.progressive_stage = new_stage
- print('Changed progressive stage to: ', new_stage)
-
- def forward(self, x):
- x = self.input_layer(x)
-
- modulelist = list(self.body._modules.values())
- for i, l in enumerate(modulelist):
- x = l(x)
- if i == 6:
- c1 = x
- elif i == 20:
- c2 = x
- elif i == 23:
- c3 = x
-
- # Infer main W and duplicate it
- w0 = self.styles[0](c3)
- w = w0.repeat(self.style_count, 1, 1).permute(1, 0, 2)
- stage = self.progressive_stage.value
- features = c3
- for i in range(1, min(stage + 1, self.style_count)): # Infer additional deltas
- if i == self.coarse_ind:
- p2 = _upsample_add(c3, self.latlayer1(c2)) # FPN's middle features
- features = p2
- elif i == self.middle_ind:
- p1 = _upsample_add(p2, self.latlayer2(c1)) # FPN's fine features
- features = p1
- delta_i = self.styles[i](features)
- w[:, i] += delta_i
- return w
diff --git a/spaces/Gradio-Blocks/anime-colorization/pixel_guide_diffusion/logger.py b/spaces/Gradio-Blocks/anime-colorization/pixel_guide_diffusion/logger.py
deleted file mode 100644
index b1d856dcfea6b56a2ee8d37b286887430dbfac30..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/anime-colorization/pixel_guide_diffusion/logger.py
+++ /dev/null
@@ -1,495 +0,0 @@
-"""
-Logger copied from OpenAI baselines to avoid extra RL-based dependencies:
-https://github.com/openai/baselines/blob/ea25b9e8b234e6ee1bca43083f8f3cf974143998/baselines/logger.py
-"""
-
-import os
-import sys
-import shutil
-import os.path as osp
-import json
-import time
-import datetime
-import tempfile
-import warnings
-from collections import defaultdict
-from contextlib import contextmanager
-
-DEBUG = 10
-INFO = 20
-WARN = 30
-ERROR = 40
-
-DISABLED = 50
-
-
-class KVWriter(object):
- def writekvs(self, kvs):
- raise NotImplementedError
-
-
-class SeqWriter(object):
- def writeseq(self, seq):
- raise NotImplementedError
-
-
-class HumanOutputFormat(KVWriter, SeqWriter):
- def __init__(self, filename_or_file):
- if isinstance(filename_or_file, str):
- self.file = open(filename_or_file, "wt")
- self.own_file = True
- else:
- assert hasattr(filename_or_file, "read"), (
- "expected file or str, got %s" % filename_or_file
- )
- self.file = filename_or_file
- self.own_file = False
-
- def writekvs(self, kvs):
- # Create strings for printing
- key2str = {}
- for (key, val) in sorted(kvs.items()):
- if hasattr(val, "__float__"):
- valstr = "%-8.3g" % val
- else:
- valstr = str(val)
- key2str[self._truncate(key)] = self._truncate(valstr)
-
- # Find max widths
- if len(key2str) == 0:
- print("WARNING: tried to write empty key-value dict")
- return
- else:
- keywidth = max(map(len, key2str.keys()))
- valwidth = max(map(len, key2str.values()))
-
- # Write out the data
- dashes = "-" * (keywidth + valwidth + 7)
- lines = [dashes]
- for (key, val) in sorted(key2str.items(), key=lambda kv: kv[0].lower()):
- lines.append(
- "| %s%s | %s%s |"
- % (key, " " * (keywidth - len(key)), val, " " * (valwidth - len(val)))
- )
- lines.append(dashes)
- self.file.write("\n".join(lines) + "\n")
-
- # Flush the output to the file
- self.file.flush()
-
- def _truncate(self, s):
- maxlen = 30
- return s[: maxlen - 3] + "..." if len(s) > maxlen else s
-
- def writeseq(self, seq):
- seq = list(seq)
- for (i, elem) in enumerate(seq):
- self.file.write(elem)
- if i < len(seq) - 1: # add space unless this is the last one
- self.file.write(" ")
- self.file.write("\n")
- self.file.flush()
-
- def close(self):
- if self.own_file:
- self.file.close()
-
-
-class JSONOutputFormat(KVWriter):
- def __init__(self, filename):
- self.file = open(filename, "wt")
-
- def writekvs(self, kvs):
- for k, v in sorted(kvs.items()):
- if hasattr(v, "dtype"):
- kvs[k] = float(v)
- self.file.write(json.dumps(kvs) + "\n")
- self.file.flush()
-
- def close(self):
- self.file.close()
-
-
-class CSVOutputFormat(KVWriter):
- def __init__(self, filename):
- self.file = open(filename, "w+t")
- self.keys = []
- self.sep = ","
-
- def writekvs(self, kvs):
- # Add our current row to the history
- extra_keys = list(kvs.keys() - self.keys)
- extra_keys.sort()
- if extra_keys:
- self.keys.extend(extra_keys)
- self.file.seek(0)
- lines = self.file.readlines()
- self.file.seek(0)
- for (i, k) in enumerate(self.keys):
- if i > 0:
- self.file.write(",")
- self.file.write(k)
- self.file.write("\n")
- for line in lines[1:]:
- self.file.write(line[:-1])
- self.file.write(self.sep * len(extra_keys))
- self.file.write("\n")
- for (i, k) in enumerate(self.keys):
- if i > 0:
- self.file.write(",")
- v = kvs.get(k)
- if v is not None:
- self.file.write(str(v))
- self.file.write("\n")
- self.file.flush()
-
- def close(self):
- self.file.close()
-
-
-class TensorBoardOutputFormat(KVWriter):
- """
- Dumps key/value pairs into TensorBoard's numeric format.
- """
-
- def __init__(self, dir):
- os.makedirs(dir, exist_ok=True)
- self.dir = dir
- self.step = 1
- prefix = "events"
- path = osp.join(osp.abspath(dir), prefix)
- import tensorflow as tf
- from tensorflow.python import pywrap_tensorflow
- from tensorflow.core.util import event_pb2
- from tensorflow.python.util import compat
-
- self.tf = tf
- self.event_pb2 = event_pb2
- self.pywrap_tensorflow = pywrap_tensorflow
- self.writer = pywrap_tensorflow.EventsWriter(compat.as_bytes(path))
-
- def writekvs(self, kvs):
- def summary_val(k, v):
- kwargs = {"tag": k, "simple_value": float(v)}
- return self.tf.Summary.Value(**kwargs)
-
- summary = self.tf.Summary(value=[summary_val(k, v) for k, v in kvs.items()])
- event = self.event_pb2.Event(wall_time=time.time(), summary=summary)
- event.step = (
- self.step
- ) # is there any reason why you'd want to specify the step?
- self.writer.WriteEvent(event)
- self.writer.Flush()
- self.step += 1
-
- def close(self):
- if self.writer:
- self.writer.Close()
- self.writer = None
-
-
-def make_output_format(format, ev_dir, log_suffix=""):
- os.makedirs(ev_dir, exist_ok=True)
- if format == "stdout":
- return HumanOutputFormat(sys.stdout)
- elif format == "log":
- return HumanOutputFormat(osp.join(ev_dir, "log%s.txt" % log_suffix))
- elif format == "json":
- return JSONOutputFormat(osp.join(ev_dir, "progress%s.json" % log_suffix))
- elif format == "csv":
- return CSVOutputFormat(osp.join(ev_dir, "progress%s.csv" % log_suffix))
- elif format == "tensorboard":
- return TensorBoardOutputFormat(osp.join(ev_dir, "tb%s" % log_suffix))
- else:
- raise ValueError("Unknown format specified: %s" % (format,))
-
-
-# ================================================================
-# API
-# ================================================================
-
-
-def logkv(key, val):
- """
- Log a value of some diagnostic
- Call this once for each diagnostic quantity, each iteration
- If called many times, last value will be used.
- """
- get_current().logkv(key, val)
-
-
-def logkv_mean(key, val):
- """
- The same as logkv(), but if called many times, values averaged.
- """
- get_current().logkv_mean(key, val)
-
-
-def logkvs(d):
- """
- Log a dictionary of key-value pairs
- """
- for (k, v) in d.items():
- logkv(k, v)
-
-
-def dumpkvs():
- """
- Write all of the diagnostics from the current iteration
- """
- return get_current().dumpkvs()
-
-
-def getkvs():
- return get_current().name2val
-
-
-def log(*args, level=INFO):
- """
- Write the sequence of args, with no separators, to the console and output files (if you've configured an output file).
- """
- get_current().log(*args, level=level)
-
-
-def debug(*args):
- log(*args, level=DEBUG)
-
-
-def info(*args):
- log(*args, level=INFO)
-
-
-def warn(*args):
- log(*args, level=WARN)
-
-
-def error(*args):
- log(*args, level=ERROR)
-
-
-def set_level(level):
- """
- Set logging threshold on current logger.
- """
- get_current().set_level(level)
-
-
-def set_comm(comm):
- get_current().set_comm(comm)
-
-
-def get_dir():
- """
- Get directory that log files are being written to.
- will be None if there is no output directory (i.e., if you didn't call start)
- """
- return get_current().get_dir()
-
-
-record_tabular = logkv
-dump_tabular = dumpkvs
-
-
-@contextmanager
-def profile_kv(scopename):
- logkey = "wait_" + scopename
- tstart = time.time()
- try:
- yield
- finally:
- get_current().name2val[logkey] += time.time() - tstart
-
-
-def profile(n):
- """
- Usage:
- @profile("my_func")
- def my_func(): code
- """
-
- def decorator_with_name(func):
- def func_wrapper(*args, **kwargs):
- with profile_kv(n):
- return func(*args, **kwargs)
-
- return func_wrapper
-
- return decorator_with_name
-
-
-# ================================================================
-# Backend
-# ================================================================
-
-
-def get_current():
- if Logger.CURRENT is None:
- _configure_default_logger()
-
- return Logger.CURRENT
-
-
-class Logger(object):
- DEFAULT = None # A logger with no output files. (See right below class definition)
- # So that you can still log to the terminal without setting up any output files
- CURRENT = None # Current logger being used by the free functions above
-
- def __init__(self, dir, output_formats, comm=None):
- self.name2val = defaultdict(float) # values this iteration
- self.name2cnt = defaultdict(int)
- self.level = INFO
- self.dir = dir
- self.output_formats = output_formats
- self.comm = comm
-
- # Logging API, forwarded
- # ----------------------------------------
- def logkv(self, key, val):
- self.name2val[key] = val
-
- def logkv_mean(self, key, val):
- oldval, cnt = self.name2val[key], self.name2cnt[key]
- self.name2val[key] = oldval * cnt / (cnt + 1) + val / (cnt + 1)
- self.name2cnt[key] = cnt + 1
-
- def dumpkvs(self):
- if self.comm is None:
- d = self.name2val
- else:
- d = mpi_weighted_mean(
- self.comm,
- {
- name: (val, self.name2cnt.get(name, 1))
- for (name, val) in self.name2val.items()
- },
- )
- if self.comm.rank != 0:
- d["dummy"] = 1 # so we don't get a warning about empty dict
- out = d.copy() # Return the dict for unit testing purposes
- for fmt in self.output_formats:
- if isinstance(fmt, KVWriter):
- fmt.writekvs(d)
- self.name2val.clear()
- self.name2cnt.clear()
- return out
-
- def log(self, *args, level=INFO):
- if self.level <= level:
- self._do_log(args)
-
- # Configuration
- # ----------------------------------------
- def set_level(self, level):
- self.level = level
-
- def set_comm(self, comm):
- self.comm = comm
-
- def get_dir(self):
- return self.dir
-
- def close(self):
- for fmt in self.output_formats:
- fmt.close()
-
- # Misc
- # ----------------------------------------
- def _do_log(self, args):
- for fmt in self.output_formats:
- if isinstance(fmt, SeqWriter):
- fmt.writeseq(map(str, args))
-
-
-def get_rank_without_mpi_import():
- # check environment variables here instead of importing mpi4py
- # to avoid calling MPI_Init() when this module is imported
- for varname in ["PMI_RANK", "OMPI_COMM_WORLD_RANK"]:
- if varname in os.environ:
- return int(os.environ[varname])
- return 0
-
-
-def mpi_weighted_mean(comm, local_name2valcount):
- """
- Copied from: https://github.com/openai/baselines/blob/ea25b9e8b234e6ee1bca43083f8f3cf974143998/baselines/common/mpi_util.py#L110
- Perform a weighted average over dicts that are each on a different node
- Input: local_name2valcount: dict mapping key -> (value, count)
- Returns: key -> mean
- """
- all_name2valcount = comm.gather(local_name2valcount)
- if comm.rank == 0:
- name2sum = defaultdict(float)
- name2count = defaultdict(float)
- for n2vc in all_name2valcount:
- for (name, (val, count)) in n2vc.items():
- try:
- val = float(val)
- except ValueError:
- if comm.rank == 0:
- warnings.warn(
- "WARNING: tried to compute mean on non-float {}={}".format(
- name, val
- )
- )
- else:
- name2sum[name] += val * count
- name2count[name] += count
- return {name: name2sum[name] / name2count[name] for name in name2sum}
- else:
- return {}
-
-
-def configure(dir=None, format_strs=None, comm=None, log_suffix=""):
- """
- If comm is provided, average all numerical stats across that comm
- """
- if dir is None:
- dir = os.getenv("OPENAI_LOGDIR")
- if dir is None:
- dir = osp.join(
- tempfile.gettempdir(),
- datetime.datetime.now().strftime("openai-%Y-%m-%d-%H-%M-%S-%f"),
- )
- assert isinstance(dir, str)
- dir = os.path.expanduser(dir)
- os.makedirs(os.path.expanduser(dir), exist_ok=True)
-
- rank = get_rank_without_mpi_import()
- if rank > 0:
- log_suffix = log_suffix + "-rank%03i" % rank
-
- if format_strs is None:
- if rank == 0:
- format_strs = os.getenv("OPENAI_LOG_FORMAT", "stdout,log,csv").split(",")
- else:
- format_strs = os.getenv("OPENAI_LOG_FORMAT_MPI", "log").split(",")
- format_strs = filter(None, format_strs)
- output_formats = [make_output_format(f, dir, log_suffix) for f in format_strs]
-
- Logger.CURRENT = Logger(dir=dir, output_formats=output_formats, comm=comm)
- if output_formats:
- log("Logging to %s" % dir)
-
-
-def _configure_default_logger():
- configure()
- Logger.DEFAULT = Logger.CURRENT
-
-
-def reset():
- if Logger.CURRENT is not Logger.DEFAULT:
- Logger.CURRENT.close()
- Logger.CURRENT = Logger.DEFAULT
- log("Reset logger")
-
-
-@contextmanager
-def scoped_configure(dir=None, format_strs=None, comm=None):
- prevlogger = Logger.CURRENT
- configure(dir=dir, format_strs=format_strs, comm=comm)
- try:
- yield
- finally:
- Logger.CURRENT.close()
- Logger.CURRENT = prevlogger
-
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/fcos/fcos_center-normbbox-centeronreg-giou_r50_caffe_fpn_gn-head_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/fcos/fcos_center-normbbox-centeronreg-giou_r50_caffe_fpn_gn-head_1x_coco.py
deleted file mode 100644
index c25561e51687ce9189bb01bf0335cae5306a883b..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/fcos/fcos_center-normbbox-centeronreg-giou_r50_caffe_fpn_gn-head_1x_coco.py
+++ /dev/null
@@ -1,51 +0,0 @@
-_base_ = 'fcos_r50_caffe_fpn_gn-head_1x_coco.py'
-
-model = dict(
- pretrained='open-mmlab://detectron2/resnet50_caffe',
- bbox_head=dict(
- norm_on_bbox=True,
- centerness_on_reg=True,
- dcn_on_last_conv=False,
- center_sampling=True,
- conv_bias=True,
- loss_bbox=dict(type='GIoULoss', loss_weight=1.0)),
- # training and testing settings
- test_cfg=dict(nms=dict(type='nms', iou_threshold=0.6)))
-
-# dataset settings
-img_norm_cfg = dict(
- mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True),
- dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(1333, 800),
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- samples_per_gpu=2,
- workers_per_gpu=2,
- train=dict(pipeline=train_pipeline),
- val=dict(pipeline=test_pipeline),
- test=dict(pipeline=test_pipeline))
-optimizer_config = dict(_delete_=True, grad_clip=None)
-
-lr_config = dict(warmup='linear')
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/fsaf/README.md b/spaces/Gradio-Blocks/uniformer_image_detection/configs/fsaf/README.md
deleted file mode 100644
index 42468c8bf596d675d74e0c1d453e0641c5dc3b9c..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/fsaf/README.md
+++ /dev/null
@@ -1,45 +0,0 @@
-# Feature Selective Anchor-Free Module for Single-Shot Object Detection
-
-[ALGORITHM]
-
-FSAF is an anchor-free method published in CVPR2019 ([https://arxiv.org/pdf/1903.00621.pdf](https://arxiv.org/pdf/1903.00621.pdf)).
-In practice it is equivalent to an anchor-based method with only one anchor at each feature map position in each FPN level, and this is how we implemented it.
-Only the anchor-free branch is released, for better compatibility with the current framework and a smaller computational budget.
-
-In the original paper, feature maps within the central 0.2-0.5 area of a gt box are tagged as ignored. However,
-it is empirically found that a hard threshold (0.2-0.2) gives a further gain in performance (see the table below).
-
-## Main Results
-
-### Results on R50/R101/X101-FPN
-
-| Backbone | ignore range | ms-train| Lr schd |Train Mem (GB)| Train time (s/iter) | Inf time (fps) | box AP | Config | Download |
-|:----------:| :-------: |:-------:|:-------:|:------------:|:---------------:|:--------------:|:-------------:|:------:|:--------:|
-| R-50 | 0.2-0.5 | N | 1x | 3.15 | 0.43 | 12.3 | 36.0 (35.9) | | [model](http://download.openmmlab.com/mmdetection/v2.0/fsaf/fsaf_pscale0.2_nscale0.5_r50_fpn_1x_coco/fsaf_pscale0.2_nscale0.5_r50_fpn_1x_coco_20200715-b555b0e0.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/fsaf/fsaf_pscale0.2_nscale0.5_r50_fpn_1x_coco/fsaf_pscale0.2_nscale0.5_r50_fpn_1x_coco_20200715_094657.log.json) |
-| R-50 | 0.2-0.2 | N | 1x | 3.15 | 0.43 | 13.0 | 37.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fsaf/fsaf_r50_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/fsaf/fsaf_r50_fpn_1x_coco/fsaf_r50_fpn_1x_coco-94ccc51f.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/fsaf/fsaf_r50_fpn_1x_coco/fsaf_r50_fpn_1x_coco_20200428_072327.log.json)|
-| R-101 | 0.2-0.2 | N | 1x | 5.08 | 0.58 | 10.8 | 39.3 (37.9) | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fsaf/fsaf_r101_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/fsaf/fsaf_r101_fpn_1x_coco/fsaf_r101_fpn_1x_coco-9e71098f.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/fsaf/fsaf_r101_fpn_1x_coco/fsaf_r101_fpn_1x_coco_20200428_160348.log.json)|
-| X-101 | 0.2-0.2 | N | 1x | 9.38 | 1.23 | 5.6 | 42.4 (41.0) | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fsaf/fsaf_x101_64x4d_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/fsaf/fsaf_x101_64x4d_fpn_1x_coco/fsaf_x101_64x4d_fpn_1x_coco-e3f6e6fd.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/fsaf/fsaf_x101_64x4d_fpn_1x_coco/fsaf_x101_64x4d_fpn_1x_coco_20200428_160424.log.json)|
-
-**Notes:**
-
-- *1x means the model is trained for 12 epochs.*
-- *AP values in the brackets represent those reported in the original paper.*
-- *All results are obtained with a single model and single-scale test.*
-- *X-101 backbone represents ResNext-101-64x4d.*
-- *All pretrained backbones use pytorch style.*
-- *All models are trained on 8 Titan-XP gpus and tested on a single gpu.*
-
-## Citations
-
-BibTeX reference is as follows.
-
-```latex
-@inproceedings{zhu2019feature,
- title={Feature Selective Anchor-Free Module for Single-Shot Object Detection},
- author={Zhu, Chenchen and He, Yihui and Savvides, Marios},
- booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
- pages={840--849},
- year={2019}
-}
-```
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/backbones/trident_resnet.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/backbones/trident_resnet.py
deleted file mode 100644
index e6100132b0f4120585da8a309cba4488b4b0ea72..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/backbones/trident_resnet.py
+++ /dev/null
@@ -1,292 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.utils.checkpoint as cp
-from mmcv.cnn import build_conv_layer, build_norm_layer, kaiming_init
-from torch.nn.modules.utils import _pair
-
-from mmdet.models.backbones.resnet import Bottleneck, ResNet
-from mmdet.models.builder import BACKBONES
-
-
-class TridentConv(nn.Module):
- """Trident Convolution Module.
-
- Args:
- in_channels (int): Number of channels in input.
- out_channels (int): Number of channels in output.
- kernel_size (int): Size of convolution kernel.
- stride (int, optional): Convolution stride. Default: 1.
- trident_dilations (tuple[int, int, int], optional): Dilations of
- different trident branch. Default: (1, 2, 3).
- test_branch_idx (int, optional): In inference, all 3 branches will
- be used if `test_branch_idx==-1`, otherwise only branch with
- index `test_branch_idx` will be used. Default: 1.
- bias (bool, optional): Whether to use bias in convolution or not.
- Default: False.
- """
-
- def __init__(self,
- in_channels,
- out_channels,
- kernel_size,
- stride=1,
- trident_dilations=(1, 2, 3),
- test_branch_idx=1,
- bias=False):
- super(TridentConv, self).__init__()
- self.num_branch = len(trident_dilations)
- self.with_bias = bias
- self.test_branch_idx = test_branch_idx
- self.stride = _pair(stride)
- self.kernel_size = _pair(kernel_size)
- self.paddings = _pair(trident_dilations)
- self.dilations = trident_dilations
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.bias = bias
-
- self.weight = nn.Parameter(
- torch.Tensor(out_channels, in_channels, *self.kernel_size))
- if bias:
- self.bias = nn.Parameter(torch.Tensor(out_channels))
- else:
- self.bias = None
- self.init_weights()
-
- def init_weights(self):
- kaiming_init(self, distribution='uniform', mode='fan_in')
-
- def extra_repr(self):
- tmpstr = f'in_channels={self.in_channels}'
- tmpstr += f', out_channels={self.out_channels}'
- tmpstr += f', kernel_size={self.kernel_size}'
- tmpstr += f', num_branch={self.num_branch}'
- tmpstr += f', test_branch_idx={self.test_branch_idx}'
- tmpstr += f', stride={self.stride}'
- tmpstr += f', paddings={self.paddings}'
- tmpstr += f', dilations={self.dilations}'
- tmpstr += f', bias={self.bias}'
- return tmpstr
-
- def forward(self, inputs):
- if self.training or self.test_branch_idx == -1:
- outputs = [
- F.conv2d(input, self.weight, self.bias, self.stride, padding,
- dilation) for input, dilation, padding in zip(
- inputs, self.dilations, self.paddings)
- ]
- else:
- assert len(inputs) == 1
- outputs = [
- F.conv2d(inputs[0], self.weight, self.bias, self.stride,
- self.paddings[self.test_branch_idx],
- self.dilations[self.test_branch_idx])
- ]
-
- return outputs
-
-
-# Since TridentNet is defined over ResNet50 and ResNet101, here we
-# only support TridentBottleneckBlock.
-class TridentBottleneck(Bottleneck):
- """BottleBlock for TridentResNet.
-
- Args:
- trident_dilations (tuple[int, int, int]): Dilations of different
- trident branch.
- test_branch_idx (int): In inference, all 3 branches will be used
- if `test_branch_idx==-1`, otherwise only branch with index
- `test_branch_idx` will be used.
- concat_output (bool): Whether to concat the output list to a Tensor.
- `True` only in the last Block.
- """
-
- def __init__(self, trident_dilations, test_branch_idx, concat_output,
- **kwargs):
-
- super(TridentBottleneck, self).__init__(**kwargs)
- self.trident_dilations = trident_dilations
- self.num_branch = len(trident_dilations)
- self.concat_output = concat_output
- self.test_branch_idx = test_branch_idx
- self.conv2 = TridentConv(
- self.planes,
- self.planes,
- kernel_size=3,
- stride=self.conv2_stride,
- bias=False,
- trident_dilations=self.trident_dilations,
- test_branch_idx=test_branch_idx)
-
- def forward(self, x):
-
- def _inner_forward(x):
- num_branch = (
- self.num_branch
- if self.training or self.test_branch_idx == -1 else 1)
- identity = x
- if not isinstance(x, list):
- x = (x, ) * num_branch
- identity = x
- if self.downsample is not None:
- identity = [self.downsample(b) for b in x]
-
- out = [self.conv1(b) for b in x]
- out = [self.norm1(b) for b in out]
- out = [self.relu(b) for b in out]
-
- if self.with_plugins:
- for k in range(len(out)):
- out[k] = self.forward_plugin(out[k],
- self.after_conv1_plugin_names)
-
- out = self.conv2(out)
- out = [self.norm2(b) for b in out]
- out = [self.relu(b) for b in out]
- if self.with_plugins:
- for k in range(len(out)):
- out[k] = self.forward_plugin(out[k],
- self.after_conv2_plugin_names)
-
- out = [self.conv3(b) for b in out]
- out = [self.norm3(b) for b in out]
-
- if self.with_plugins:
- for k in range(len(out)):
- out[k] = self.forward_plugin(out[k],
- self.after_conv3_plugin_names)
-
- out = [
- out_b + identity_b for out_b, identity_b in zip(out, identity)
- ]
- return out
-
- if self.with_cp and x.requires_grad:
- out = cp.checkpoint(_inner_forward, x)
- else:
- out = _inner_forward(x)
-
- out = [self.relu(b) for b in out]
- if self.concat_output:
- out = torch.cat(out, dim=0)
- return out
-
-
-def make_trident_res_layer(block,
- inplanes,
- planes,
- num_blocks,
- stride=1,
- trident_dilations=(1, 2, 3),
- style='pytorch',
- with_cp=False,
- conv_cfg=None,
- norm_cfg=dict(type='BN'),
- dcn=None,
- plugins=None,
- test_branch_idx=-1):
- """Build Trident Res Layers."""
-
- downsample = None
- if stride != 1 or inplanes != planes * block.expansion:
- downsample = []
- conv_stride = stride
- downsample.extend([
- build_conv_layer(
- conv_cfg,
- inplanes,
- planes * block.expansion,
- kernel_size=1,
- stride=conv_stride,
- bias=False),
- build_norm_layer(norm_cfg, planes * block.expansion)[1]
- ])
- downsample = nn.Sequential(*downsample)
-
- layers = []
- for i in range(num_blocks):
- layers.append(
- block(
- inplanes=inplanes,
- planes=planes,
- stride=stride if i == 0 else 1,
- trident_dilations=trident_dilations,
- downsample=downsample if i == 0 else None,
- style=style,
- with_cp=with_cp,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- dcn=dcn,
- plugins=plugins,
- test_branch_idx=test_branch_idx,
- concat_output=True if i == num_blocks - 1 else False))
- inplanes = planes * block.expansion
- return nn.Sequential(*layers)
-
-
-@BACKBONES.register_module()
-class TridentResNet(ResNet):
- """The stem layer, stage 1 and stage 2 in Trident ResNet are identical to
- ResNet, while in stage 3, Trident BottleBlock is utilized to replace the
- normal BottleBlock to yield trident output. Different branch shares the
- convolution weight but uses different dilations to achieve multi-scale
- output.
-
- / stage3(b0) \
- x - stem - stage1 - stage2 - stage3(b1) - output
- \ stage3(b2) /
-
- Args:
- depth (int): Depth of resnet, from {50, 101, 152}.
- num_branch (int): Number of branches in TridentNet.
- test_branch_idx (int): In inference, all 3 branches will be used
- if `test_branch_idx==-1`, otherwise only branch with index
- `test_branch_idx` will be used.
- trident_dilations (tuple[int]): Dilations of different trident branch.
- len(trident_dilations) should be equal to num_branch.
- """ # noqa
-
- def __init__(self, depth, num_branch, test_branch_idx, trident_dilations,
- **kwargs):
-
- assert num_branch == len(trident_dilations)
- assert depth in (50, 101, 152)
- super(TridentResNet, self).__init__(depth, **kwargs)
- assert self.num_stages == 3
- self.test_branch_idx = test_branch_idx
- self.num_branch = num_branch
-
- last_stage_idx = self.num_stages - 1
- stride = self.strides[last_stage_idx]
- dilation = trident_dilations
- dcn = self.dcn if self.stage_with_dcn[last_stage_idx] else None
- if self.plugins is not None:
- stage_plugins = self.make_stage_plugins(self.plugins,
- last_stage_idx)
- else:
- stage_plugins = None
- planes = self.base_channels * 2**last_stage_idx
- res_layer = make_trident_res_layer(
- TridentBottleneck,
- inplanes=(self.block.expansion * self.base_channels *
- 2**(last_stage_idx - 1)),
- planes=planes,
- num_blocks=self.stage_blocks[last_stage_idx],
- stride=stride,
- trident_dilations=dilation,
- style=self.style,
- with_cp=self.with_cp,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- dcn=dcn,
- plugins=stage_plugins,
- test_branch_idx=self.test_branch_idx)
-
- layer_name = f'layer{last_stage_idx + 1}'
-
- self.__setattr__(layer_name, res_layer)
- self.res_layers.pop(last_stage_idx)
- self.res_layers.insert(last_stage_idx, layer_name)
-
- self._freeze_stages()
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_480x480_40k_pascal_context_59.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_480x480_40k_pascal_context_59.py
deleted file mode 100644
index 4a8180038be33fba9c3229ee3c017f2f0628544f..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_480x480_40k_pascal_context_59.py
+++ /dev/null
@@ -1,10 +0,0 @@
-_base_ = [
- '../_base_/models/fcn_r50-d8.py',
- '../_base_/datasets/pascal_context_59.py', '../_base_/default_runtime.py',
- '../_base_/schedules/schedule_40k.py'
-]
-model = dict(
- decode_head=dict(num_classes=59),
- auxiliary_head=dict(num_classes=59),
- test_cfg=dict(mode='slide', crop_size=(480, 480), stride=(320, 320)))
-optimizer = dict(type='SGD', lr=0.004, momentum=0.9, weight_decay=0.0001)
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_512x512_20k_voc12aug.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_512x512_20k_voc12aug.py
deleted file mode 100644
index cd88154d5e0be1a519e973331e0a14ae8a7de13e..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_512x512_20k_voc12aug.py
+++ /dev/null
@@ -1,7 +0,0 @@
-_base_ = [
- '../_base_/models/pspnet_r50-d8.py',
- '../_base_/datasets/pascal_voc12_aug.py', '../_base_/default_runtime.py',
- '../_base_/schedules/schedule_20k.py'
-]
-model = dict(
- decode_head=dict(num_classes=21), auxiliary_head=dict(num_classes=21))
diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/docs/CONDITIONING.md b/spaces/GrandaddyShmax/AudioCraft_Plus/docs/CONDITIONING.md
deleted file mode 100644
index 6e356cb8e9912d3e18fc84598c1acf77c6e7abc5..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/AudioCraft_Plus/docs/CONDITIONING.md
+++ /dev/null
@@ -1,146 +0,0 @@
-# AudioCraft conditioning modules
-
-AudioCraft provides a
-[modular implementation of conditioning modules](../audiocraft/modules/conditioners.py)
-that can be used with the language model to condition the generation.
-The codebase was developed so that the set of currently supported modules
-can easily be extended to develop new ways of controlling the generation.
-
-
-## Conditioning methods
-
-For now, we support 3 main types of conditioning within AudioCraft:
-* Text-based conditioning methods
-* Waveform-based conditioning methods
-* Joint embedding conditioning methods for text and audio projected in a shared latent space.
-
-The Language Model relies on 2 core components that handle processing information:
-* The `ConditionProvider` class, that maps metadata to processed conditions leveraging
-all the defined conditioners for the given task.
-* The `ConditionFuser` class, that takes preprocessed conditions and properly fuse the
-conditioning embedding to the language model inputs following a given fusing strategy.
-
-Different conditioners (for text, waveform, joint embeddings...) are provided as torch
-modules in AudioCraft and are used internally in the language model to process the
-conditioning signals and feed them to the language model.
-
-
-## Core concepts
-
-### Conditioners
-
-The `BaseConditioner` torch module is the base implementation for all conditioners in audiocraft.
-
-Each conditioner is expected to implement 2 methods:
-* The `tokenize` method that is used as a preprocessing method that contains all processing
-that can lead to synchronization points (e.g. BPE tokenization with transfer to the GPU).
-The output of the tokenize method will then be used to feed the forward method.
-* The `forward` method that takes the output of the tokenize method and contains the core computation
-to obtain the conditioning embedding along with a mask indicating valid indices (e.g. padding tokens).
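-
-A schematic of this two-step contract (illustrative only, not the actual `BaseConditioner` signature): `tokenize` performs the potentially synchronizing preprocessing, while `forward` turns its output into an embedding plus a validity mask.
-
-```python
-# Schematic only: illustrates the tokenize/forward split, not audiocraft's exact API.
-import torch
-from torch import nn
-
-
-class ToyTextConditioner(nn.Module):
-    def __init__(self, vocab: dict, dim: int = 16):
-        super().__init__()
-        self.vocab = vocab  # word -> id, with ids starting at 1 (0 is padding)
-        self.emb = nn.Embedding(len(vocab) + 1, dim, padding_idx=0)
-
-    def tokenize(self, texts: list) -> torch.Tensor:
-        # Preprocessing step: map words to ids and pad to a common length.
-        ids = [[self.vocab.get(w, 0) for w in t.split()] for t in texts]
-        max_len = max(len(i) for i in ids)
-        return torch.tensor([i + [0] * (max_len - len(i)) for i in ids])
-
-    def forward(self, tokens: torch.Tensor):
-        # Core computation: conditioning embedding plus a mask of valid positions.
-        return self.emb(tokens), tokens != 0
-```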
-
-### ConditionProvider
-
-The ConditionProvider prepares and provides conditions given a dictionary of conditioners.
-
-Conditioners are specified as a dictionary of attributes and the corresponding conditioner
-providing the processing logic for the given attribute.
-
-Similarly to the conditioners, the condition provider works in two steps to avoid synchronization points:
-* A `tokenize` method that takes a list of conditioning attributes for the batch,
-and runs all tokenize steps for the set of conditioners.
-* A `forward` method that takes the output of the tokenize step and runs all the forward steps
-for the set of conditioners.
-
-The list of conditioning attributes is passed as a list of `ConditioningAttributes`
-that is presented just below.
-
-### ConditionFuser
-
-Once all conditioning signals have been extracted and processed by the `ConditionProvider`
-as dense embeddings, they remain to be passed to the language model along with the original
-language model inputs.
-
-The `ConditionFuser` handles specifically the logic to combine the different conditions
-to the actual model input, supporting different strategies to combine them.
-
-One can therefore define different strategies to combine or fuse the condition to the input, in particular:
-* Prepending the conditioning signal to the input with the `prepend` strategy,
-* Summing the conditioning signal to the input with the `sum` strategy,
-* Combining the conditioning relying on a cross-attention mechanism with the `cross` strategy,
-* Using input interpolation with the `input_interpolate` strategy.
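-
-As a rough, schematic illustration (not the actual `ConditionFuser` code), `prepend` concatenates the conditioning embedding along the time axis, while `sum` adds it elementwise to the input:
-
-```python
-# Schematic fusing strategies: x is [B, T, D], cond is [B, S, D].
-import torch
-
-
-def fuse(x: torch.Tensor, cond: torch.Tensor, strategy: str) -> torch.Tensor:
-    if strategy == "prepend":
-        return torch.cat([cond, x], dim=1)          # conditioning tokens come first
-    if strategy == "sum":
-        return x + cond.mean(dim=1, keepdim=True)   # pool the condition, broadcast over time
-    raise ValueError(f"unsupported strategy: {strategy}")
-```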
-
-### SegmentWithAttributes and ConditioningAttributes: From metadata to conditions
-
-The `ConditioningAttributes` dataclass is the base class for metadata
-containing all attributes used for conditioning the language model.
-
-It currently supports the following types of attributes:
-* Text conditioning attributes: Dictionary of textual attributes used for text-conditioning.
-* Wav conditioning attributes: Dictionary of waveform attributes used for waveform-based
-conditioning such as the chroma conditioning.
-* JointEmbed conditioning attributes: Dictionary of text and waveform attributes
-that are expected to be represented in a shared latent space.
-
-These different types of attributes are the attributes that are processed
-by the different conditioners.
-
-`ConditioningAttributes` are extracted from metadata loaded along the audio in the datasets,
-provided that the metadata used by the dataset implements the `SegmentWithAttributes` abstraction.
-
-All metadata-enabled datasets used for conditioning in AudioCraft inherit from
-the [`audiocraft.data.info_dataset.InfoAudioDataset`](../audiocraft/data/info_audio_dataset.py) class
-and the corresponding metadata inherits and implements the `SegmentWithAttributes` abstraction.
-Refer to the [`audiocraft.data.music_dataset.MusicAudioDataset`](../audiocraft/data/music_dataset.py)
-class as an example.
-
-
-## Available conditioners
-
-### Text conditioners
-
-All text conditioners are expected to inherit from the `TextConditioner` class.
-
-AudioCraft currently provides two text conditioners:
-* The `LUTConditioner` that relies on a look-up table of embeddings learned at train time,
-and uses either no tokenizer or a spaCy tokenizer. This conditioner is particularly
-useful for simple experiments and categorical labels.
-* The `T5Conditioner` that relies on a
-[pre-trained T5 model](https://huggingface.co/docs/transformers/model_doc/t5)
-frozen or fine-tuned at train time to extract the text embeddings (see the sketch after this list).
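-
-To give an idea of what the `T5Conditioner` computes, extracting frozen T5 text embeddings with
-Hugging Face `transformers` looks roughly like the following. This is a sketch of the underlying idea,
-not the conditioner's actual code; the `t5-base` checkpoint and shapes are just an example.
-
-```python
-import torch
-from transformers import T5EncoderModel, T5Tokenizer
-
-tokenizer = T5Tokenizer.from_pretrained("t5-base")
-encoder = T5EncoderModel.from_pretrained("t5-base").eval()
-
-texts = ["happy rock", "sad jazz with an acoustic piano"]
-inputs = tokenizer(texts, return_tensors="pt", padding=True)
-with torch.no_grad():
-    embeddings = encoder(**inputs).last_hidden_state  # [B, seq_len, 768]
-mask = inputs["attention_mask"]                        # 1 for valid tokens, 0 for padding
-```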
-
-### Waveform conditioners
-
-All waveform conditioners are expected to inherit from the `WaveformConditioner` class and
-consist of a conditioning method that takes a waveform as input. The waveform conditioner
-must implement the logic to extract the embedding from the waveform and define the downsampling
-factor from the waveform to the resulting embedding.
-
-The `ChromaStemConditioner` is a waveform conditioner for the chroma-features
-conditioning used by MusicGen. It takes a given waveform, extracts the stems relevant for melody
-(namely all stems except drums and bass) using a
-[pre-trained Demucs model](https://github.com/facebookresearch/demucs),
-and then extracts the chromagram bins from the remaining mix of stems.
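-
-For intuition, the chromagram part of this pipeline looks roughly like the sketch below; the Demucs
-stem-separation step is omitted and the hop length is arbitrary, so this is an illustration of the idea
-rather than the actual conditioner code.
-
-```python
-import librosa
-
-# Stand-in for the melody stems that Demucs would normally produce.
-wav, sr = librosa.load(librosa.example("trumpet"), sr=32000)
-chroma = librosa.feature.chroma_stft(y=wav, sr=sr, n_chroma=12, hop_length=640)
-# chroma has shape [12, n_frames]: each column is a 12-bin pitch-class profile,
-# which is the kind of melody representation the model is conditioned on.
-print(chroma.shape)
-```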
-
-### Joint embeddings conditioners
-
-Finally, we provide support for conditioning based on joint text and audio embeddings through
-the `JointEmbeddingConditioner` class and the `CLAPEmbeddingConditioner`, which implements such
-conditioning relying on a [pretrained CLAP model](https://github.com/LAION-AI/CLAP).
-
-## Classifier Free Guidance
-
-We provide a classifier-free guidance implementation in AudioCraft. With the classifier-free
-guidance dropout, all attributes are dropped together with the same probability.
-
-## Attribute Dropout
-
-We further provide an attribute dropout strategy. Unlike the classifier-free guidance dropout,
-attribute dropout drops each given attribute independently with its own defined probability, so that the model
-does not learn to expect all conditioning signals to be provided at once.
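-
-The difference between the two dropout schemes can be sketched as follows (illustrative only; the real
-implementations operate on `ConditioningAttributes` objects inside the training loop):
-
-```python
-# Sketch: classifier-free-guidance dropout vs. per-attribute dropout.
-import random
-
-
-def cfg_dropout(attributes, p):
-    # All attributes are dropped together with a single probability p.
-    return {k: None for k in attributes} if random.random() < p else dict(attributes)
-
-
-def attribute_dropout(attributes, p_per_attribute):
-    # Each attribute is dropped independently with its own probability,
-    # so the model learns to cope with any subset of conditions missing.
-    return {k: (None if random.random() < p_per_attribute.get(k, 0.0) else v)
-            for k, v in attributes.items()}
-
-
-conds = {"description": "upbeat funk", "melody_chroma": "<chroma tensor>"}
-print(cfg_dropout(conds, p=0.1))
-print(attribute_dropout(conds, {"description": 0.5, "melody_chroma": 0.5}))
-```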
-
-## Faster computation of conditions
-
-Conditioners that require some heavy computation on the waveform can be cached, in particular
-the `ChromaStemConditioner` or `CLAPEmbeddingConditioner`. You just need to provide the
-`cache_path` parameter to them. We recommend running dummy jobs to fill up the cache quickly.
-An example is provided in the [musicgen.musicgen_melody_32khz grid](../audiocraft/grids/musicgen/musicgen_melody_32khz.py).
\ No newline at end of file
diff --git a/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/modules/streaming.py b/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/modules/streaming.py
deleted file mode 100644
index fdbdf5e90fc0c6560873d66bf273460b38e5ed7e..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/modules/streaming.py
+++ /dev/null
@@ -1,135 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Streaming module API that should be implemented by all Streaming components.
-"""
-
-from contextlib import contextmanager
-import typing as tp
-from torch import nn
-import torch
-
-
-State = tp.Dict[str, torch.Tensor]
-
-
-class StreamingModule(nn.Module):
- """Common API for streaming components.
-
- Each streaming component has a streaming state, which is just a dict[str, Tensor].
- By convention, the first dim of each tensor must be the batch size.
- Don't use dots in the key names, as this would clash with submodules
- (like in state_dict).
-
- If `self._is_streaming` is True, the component should use and remember
- the proper state inside `self._streaming_state`.
-
- To set a streaming component in streaming state, use
-
- with module.streaming():
- ...
-
- This will automatically reset the streaming state when exiting the context manager.
- This also automatically propagates to all streaming children module.
-
- Some modules might also implement the `StreamingModule.flush` method, although
- this one is trickier, as all parent modules must be StreamingModule and implement
- it as well for it to work properly. See `StreamingSequential` below.
- """
- def __init__(self) -> None:
- super().__init__()
- self._streaming_state: State = {}
- self._is_streaming = False
-
- def _apply_named_streaming(self, fn: tp.Any):
- for name, module in self.named_modules():
- if isinstance(module, StreamingModule):
- fn(name, module)
-
- def _set_streaming(self, streaming: bool):
- def _set_streaming(name, module):
- module._is_streaming = streaming
- self._apply_named_streaming(_set_streaming)
-
- @contextmanager
- def streaming(self):
- """Context manager to enter streaming mode. Reset streaming state on exit.
- """
- self._set_streaming(True)
- try:
- yield
- finally:
- self._set_streaming(False)
- self.reset_streaming()
-
- def reset_streaming(self):
- """Reset the streaming state.
- """
- def _reset(name: str, module: StreamingModule):
- module._streaming_state.clear()
-
- self._apply_named_streaming(_reset)
-
- def get_streaming_state(self) -> State:
- """Return the streaming state, including that of sub-modules.
- """
- state: State = {}
-
- def _add(name: str, module: StreamingModule):
- if name:
- name += "."
- for key, value in module._streaming_state.items():
- state[name + key] = value
-
- self._apply_named_streaming(_add)
- return state
-
- def set_streaming_state(self, state: State):
- """Set the streaming state, including that of sub-modules.
- """
- state = dict(state)
-
- def _set(name: str, module: StreamingModule):
- if name:
- name += "."
- module._streaming_state.clear()
- for key, value in list(state.items()):
- # complexity is not ideal here, but probably fine.
- if key.startswith(name):
- local_key = key[len(name):]
- if '.' not in local_key:
- module._streaming_state[local_key] = value
- del state[key]
-
- self._apply_named_streaming(_set)
- assert len(state) == 0, list(state.keys())
-
- def flush(self, x: tp.Optional[torch.Tensor] = None):
- """Flush any remaining outputs that were waiting for completion.
- Typically, for convolutions, this will add the final padding
- and process the last buffer.
-
- This should take an optional argument `x`, which will be provided
- if a module before this one in the streaming pipeline has already
- spit out a flushed buffer.
- """
- if x is None:
- return None
- else:
- return self(x)
-
-
-class StreamingSequential(StreamingModule, nn.Sequential):
- """A streaming compatible alternative of `nn.Sequential`.
- """
- def flush(self, x: tp.Optional[torch.Tensor] = None):
- for module in self:
- if isinstance(module, StreamingModule):
- x = module.flush(x)
- elif x is not None:
- x = module(x)
- return x
diff --git a/spaces/GroveStreet/GTA_SOVITS/diffusion/diffusion_onnx.py b/spaces/GroveStreet/GTA_SOVITS/diffusion/diffusion_onnx.py
deleted file mode 100644
index 1c1e80321de162b5233801efa3423739f7f92bdc..0000000000000000000000000000000000000000
--- a/spaces/GroveStreet/GTA_SOVITS/diffusion/diffusion_onnx.py
+++ /dev/null
@@ -1,612 +0,0 @@
-from collections import deque
-from functools import partial
-from inspect import isfunction
-import torch.nn.functional as F
-import librosa.sequence
-import numpy as np
-from torch.nn import Conv1d
-from torch.nn import Mish
-import torch
-from torch import nn
-from tqdm import tqdm
-import math
-
-
-def exists(x):
- return x is not None
-
-
-def default(val, d):
- if exists(val):
- return val
- return d() if isfunction(d) else d
-
-
-def extract(a, t, x_shape=None):  # accept (and ignore) x_shape so the three-argument calls below do not fail
- return a[t].reshape((1, 1, 1, 1))
-
-
-def noise_like(shape, device, repeat=False):
- repeat_noise = lambda: torch.randn((1, *shape[1:]), device=device).repeat(shape[0], *((1,) * (len(shape) - 1)))
- noise = lambda: torch.randn(shape, device=device)
- return repeat_noise() if repeat else noise()
-
-
-def linear_beta_schedule(timesteps, max_beta=0.02):
- """
- linear schedule
- """
- betas = np.linspace(1e-4, max_beta, timesteps)
- return betas
-
-
-def cosine_beta_schedule(timesteps, s=0.008):
- """
- cosine schedule
- as proposed in https://openreview.net/forum?id=-NEXDKk8gZ
- """
- steps = timesteps + 1
- x = np.linspace(0, steps, steps)
- alphas_cumprod = np.cos(((x / steps) + s) / (1 + s) * np.pi * 0.5) ** 2
- alphas_cumprod = alphas_cumprod / alphas_cumprod[0]
- betas = 1 - (alphas_cumprod[1:] / alphas_cumprod[:-1])
- return np.clip(betas, a_min=0, a_max=0.999)
-
-
-beta_schedule = {
- "cosine": cosine_beta_schedule,
- "linear": linear_beta_schedule,
-}
-
-
-def extract_1(a, t):
- return a[t].reshape((1, 1, 1, 1))
-
-
-def predict_stage0(noise_pred, noise_pred_prev):
- return (noise_pred + noise_pred_prev) / 2
-
-
-def predict_stage1(noise_pred, noise_list):
- return (noise_pred * 3
- - noise_list[-1]) / 2
-
-
-def predict_stage2(noise_pred, noise_list):
- return (noise_pred * 23
- - noise_list[-1] * 16
- + noise_list[-2] * 5) / 12
-
-
-def predict_stage3(noise_pred, noise_list):
- return (noise_pred * 55
- - noise_list[-1] * 59
- + noise_list[-2] * 37
- - noise_list[-3] * 9) / 24
-
-
-class SinusoidalPosEmb(nn.Module):
- def __init__(self, dim):
- super().__init__()
- self.dim = dim
- self.half_dim = dim // 2
- self.emb = 9.21034037 / (self.half_dim - 1)
- self.emb = torch.exp(torch.arange(self.half_dim) * torch.tensor(-self.emb)).unsqueeze(0)
- self.emb = self.emb.cpu()
-
- def forward(self, x):
- emb = self.emb * x
- emb = torch.cat((emb.sin(), emb.cos()), dim=-1)
- return emb
-
-
-class ResidualBlock(nn.Module):
- def __init__(self, encoder_hidden, residual_channels, dilation):
- super().__init__()
- self.residual_channels = residual_channels
- self.dilated_conv = Conv1d(residual_channels, 2 * residual_channels, 3, padding=dilation, dilation=dilation)
- self.diffusion_projection = nn.Linear(residual_channels, residual_channels)
- self.conditioner_projection = Conv1d(encoder_hidden, 2 * residual_channels, 1)
- self.output_projection = Conv1d(residual_channels, 2 * residual_channels, 1)
-
- def forward(self, x, conditioner, diffusion_step):
- diffusion_step = self.diffusion_projection(diffusion_step).unsqueeze(-1)
- conditioner = self.conditioner_projection(conditioner)
- y = x + diffusion_step
- y = self.dilated_conv(y) + conditioner
-
- gate, filter_1 = torch.split(y, [self.residual_channels, self.residual_channels], dim=1)
-
- y = torch.sigmoid(gate) * torch.tanh(filter_1)
- y = self.output_projection(y)
-
- residual, skip = torch.split(y, [self.residual_channels, self.residual_channels], dim=1)
-
- return (x + residual) / 1.41421356, skip
-
-
-class DiffNet(nn.Module):
- def __init__(self, in_dims, n_layers, n_chans, n_hidden):
- super().__init__()
- self.encoder_hidden = n_hidden
- self.residual_layers = n_layers
- self.residual_channels = n_chans
- self.input_projection = Conv1d(in_dims, self.residual_channels, 1)
- self.diffusion_embedding = SinusoidalPosEmb(self.residual_channels)
- dim = self.residual_channels
- self.mlp = nn.Sequential(
- nn.Linear(dim, dim * 4),
- Mish(),
- nn.Linear(dim * 4, dim)
- )
- self.residual_layers = nn.ModuleList([
- ResidualBlock(self.encoder_hidden, self.residual_channels, 1)
- for i in range(self.residual_layers)
- ])
- self.skip_projection = Conv1d(self.residual_channels, self.residual_channels, 1)
- self.output_projection = Conv1d(self.residual_channels, in_dims, 1)
- nn.init.zeros_(self.output_projection.weight)
-
- def forward(self, spec, diffusion_step, cond):
- x = spec.squeeze(0)
- x = self.input_projection(x) # x [B, residual_channel, T]
- x = F.relu(x)
- # skip = torch.randn_like(x)
- diffusion_step = diffusion_step.float()
- diffusion_step = self.diffusion_embedding(diffusion_step)
- diffusion_step = self.mlp(diffusion_step)
-
- x, skip = self.residual_layers[0](x, cond, diffusion_step)
- # noinspection PyTypeChecker
- for layer in self.residual_layers[1:]:
- x, skip_connection = layer.forward(x, cond, diffusion_step)
- skip = skip + skip_connection
- x = skip / math.sqrt(len(self.residual_layers))
- x = self.skip_projection(x)
- x = F.relu(x)
- x = self.output_projection(x) # [B, 80, T]
- return x.unsqueeze(1)
-
-
-class AfterDiffusion(nn.Module):
- def __init__(self, spec_max, spec_min, v_type='a'):
- super().__init__()
- self.spec_max = spec_max
- self.spec_min = spec_min
- self.type = v_type
-
- def forward(self, x):
- x = x.squeeze(1).permute(0, 2, 1)
- mel_out = (x + 1) / 2 * (self.spec_max - self.spec_min) + self.spec_min
- if self.type == 'nsf-hifigan-log10':
- mel_out = mel_out * 0.434294
- return mel_out.transpose(2, 1)
-
-
-class Pred(nn.Module):
- def __init__(self, alphas_cumprod):
- super().__init__()
- self.alphas_cumprod = alphas_cumprod
-
- def forward(self, x_1, noise_t, t_1, t_prev):
- a_t = extract(self.alphas_cumprod, t_1).cpu()
- a_prev = extract(self.alphas_cumprod, t_prev).cpu()
- a_t_sq, a_prev_sq = a_t.sqrt().cpu(), a_prev.sqrt().cpu()
- x_delta = (a_prev - a_t) * ((1 / (a_t_sq * (a_t_sq + a_prev_sq))) * x_1 - 1 / (
- a_t_sq * (((1 - a_prev) * a_t).sqrt() + ((1 - a_t) * a_prev).sqrt())) * noise_t)
- x_pred = x_1 + x_delta.cpu()
-
- return x_pred
-
-
-class GaussianDiffusion(nn.Module):
- def __init__(self,
- out_dims=128,
- n_layers=20,
- n_chans=384,
- n_hidden=256,
- timesteps=1000,
- k_step=1000,
- max_beta=0.02,
- spec_min=-12,
- spec_max=2):
- super().__init__()
- self.denoise_fn = DiffNet(out_dims, n_layers, n_chans, n_hidden)
- self.out_dims = out_dims
- self.mel_bins = out_dims
- self.n_hidden = n_hidden
- betas = beta_schedule['linear'](timesteps, max_beta=max_beta)
-
- alphas = 1. - betas
- alphas_cumprod = np.cumprod(alphas, axis=0)
- alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1])
- timesteps, = betas.shape
- self.num_timesteps = int(timesteps)
- self.k_step = k_step
-
- self.noise_list = deque(maxlen=4)
-
- to_torch = partial(torch.tensor, dtype=torch.float32)
-
- self.register_buffer('betas', to_torch(betas))
- self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod))
- self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev))
-
- # calculations for diffusion q(x_t | x_{t-1}) and others
- self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod)))
- self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod)))
- self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod)))
- self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod)))
- self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1)))
-
- # calculations for posterior q(x_{t-1} | x_t, x_0)
- posterior_variance = betas * (1. - alphas_cumprod_prev) / (1. - alphas_cumprod)
- # above: equal to 1. / (1. / (1. - alpha_cumprod_tm1) + alpha_t / beta_t)
- self.register_buffer('posterior_variance', to_torch(posterior_variance))
- # below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain
- self.register_buffer('posterior_log_variance_clipped', to_torch(np.log(np.maximum(posterior_variance, 1e-20))))
- self.register_buffer('posterior_mean_coef1', to_torch(
- betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod)))
- self.register_buffer('posterior_mean_coef2', to_torch(
- (1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. - alphas_cumprod)))
-
- self.register_buffer('spec_min', torch.FloatTensor([spec_min])[None, None, :out_dims])
- self.register_buffer('spec_max', torch.FloatTensor([spec_max])[None, None, :out_dims])
- self.ad = AfterDiffusion(self.spec_max, self.spec_min)
- self.xp = Pred(self.alphas_cumprod)
-
- def q_mean_variance(self, x_start, t):
- mean = extract(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start
- variance = extract(1. - self.alphas_cumprod, t, x_start.shape)
- log_variance = extract(self.log_one_minus_alphas_cumprod, t, x_start.shape)
- return mean, variance, log_variance
-
- def predict_start_from_noise(self, x_t, t, noise):
- return (
- extract(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t -
- extract(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise
- )
-
- def q_posterior(self, x_start, x_t, t):
- posterior_mean = (
- extract(self.posterior_mean_coef1, t, x_t.shape) * x_start +
- extract(self.posterior_mean_coef2, t, x_t.shape) * x_t
- )
- posterior_variance = extract(self.posterior_variance, t, x_t.shape)
- posterior_log_variance_clipped = extract(self.posterior_log_variance_clipped, t, x_t.shape)
- return posterior_mean, posterior_variance, posterior_log_variance_clipped
-
- def p_mean_variance(self, x, t, cond):
- noise_pred = self.denoise_fn(x, t, cond=cond)
- x_recon = self.predict_start_from_noise(x, t=t, noise=noise_pred)
-
- x_recon.clamp_(-1., 1.)
-
- model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t)
- return model_mean, posterior_variance, posterior_log_variance
-
- @torch.no_grad()
- def p_sample(self, x, t, cond, clip_denoised=True, repeat_noise=False):
- b, *_, device = *x.shape, x.device
- model_mean, _, model_log_variance = self.p_mean_variance(x=x, t=t, cond=cond)
- noise = noise_like(x.shape, device, repeat_noise)
- # no noise when t == 0
- nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1)))
- return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise
-
- @torch.no_grad()
- def p_sample_plms(self, x, t, interval, cond, clip_denoised=True, repeat_noise=False):
- """
- Use the PLMS method from
- [Pseudo Numerical Methods for Diffusion Models on Manifolds](https://arxiv.org/abs/2202.09778).
- """
-
- def get_x_pred(x, noise_t, t):
- a_t = extract(self.alphas_cumprod, t)
- a_prev = extract(self.alphas_cumprod, torch.max(t - interval, torch.zeros_like(t)))
- a_t_sq, a_prev_sq = a_t.sqrt(), a_prev.sqrt()
-
- x_delta = (a_prev - a_t) * ((1 / (a_t_sq * (a_t_sq + a_prev_sq))) * x - 1 / (
- a_t_sq * (((1 - a_prev) * a_t).sqrt() + ((1 - a_t) * a_prev).sqrt())) * noise_t)
- x_pred = x + x_delta
-
- return x_pred
-
- noise_list = self.noise_list
- noise_pred = self.denoise_fn(x, t, cond=cond)
-
- if len(noise_list) == 0:
- x_pred = get_x_pred(x, noise_pred, t)
- noise_pred_prev = self.denoise_fn(x_pred, max(t - interval, 0), cond=cond)
- noise_pred_prime = (noise_pred + noise_pred_prev) / 2
- elif len(noise_list) == 1:
- noise_pred_prime = (3 * noise_pred - noise_list[-1]) / 2
- elif len(noise_list) == 2:
- noise_pred_prime = (23 * noise_pred - 16 * noise_list[-1] + 5 * noise_list[-2]) / 12
- else:
- noise_pred_prime = (55 * noise_pred - 59 * noise_list[-1] + 37 * noise_list[-2] - 9 * noise_list[-3]) / 24
-
- x_prev = get_x_pred(x, noise_pred_prime, t)
- noise_list.append(noise_pred)
-
- return x_prev
-
- def q_sample(self, x_start, t, noise=None):
- noise = default(noise, lambda: torch.randn_like(x_start))
- return (
- extract(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start +
- extract(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise
- )
-
- def p_losses(self, x_start, t, cond, noise=None, loss_type='l2'):
- noise = default(noise, lambda: torch.randn_like(x_start))
-
- x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
- x_recon = self.denoise_fn(x_noisy, t, cond)
-
- if loss_type == 'l1':
- loss = (noise - x_recon).abs().mean()
- elif loss_type == 'l2':
- loss = F.mse_loss(noise, x_recon)
- else:
- raise NotImplementedError()
-
- return loss
-
- def org_forward(self,
- condition,
- init_noise=None,
- gt_spec=None,
- infer=True,
- infer_speedup=100,
- method='pndm',
- k_step=1000,
- use_tqdm=True):
- """
- conditioning diffusion, use fastspeech2 encoder output as the condition
- """
- cond = condition
- b, device = condition.shape[0], condition.device
- if not infer:
- spec = self.norm_spec(gt_spec)
- t = torch.randint(0, self.k_step, (b,), device=device).long()
- norm_spec = spec.transpose(1, 2)[:, None, :, :] # [B, 1, M, T]
- return self.p_losses(norm_spec, t, cond=cond)
- else:
- shape = (cond.shape[0], 1, self.out_dims, cond.shape[2])
-
- if gt_spec is None:
- t = self.k_step
- if init_noise is None:
- x = torch.randn(shape, device=device)
- else:
- x = init_noise
- else:
- t = k_step
- norm_spec = self.norm_spec(gt_spec)
- norm_spec = norm_spec.transpose(1, 2)[:, None, :, :]
- x = self.q_sample(x_start=norm_spec, t=torch.tensor([t - 1], device=device).long())
-
- if method is not None and infer_speedup > 1:
- if method == 'dpm-solver':
- from .dpm_solver_pytorch import NoiseScheduleVP, model_wrapper, DPM_Solver
- # 1. Define the noise schedule.
- noise_schedule = NoiseScheduleVP(schedule='discrete', betas=self.betas[:t])
-
- # 2. Convert your discrete-time `model` to the continuous-time
- # noise prediction model. Here is an example for a diffusion model
- # `model` with the noise prediction type ("noise") .
- def my_wrapper(fn):
- def wrapped(x, t, **kwargs):
- ret = fn(x, t, **kwargs)
- if use_tqdm:
- self.bar.update(1)
- return ret
-
- return wrapped
-
- model_fn = model_wrapper(
- my_wrapper(self.denoise_fn),
- noise_schedule,
- model_type="noise", # or "x_start" or "v" or "score"
- model_kwargs={"cond": cond}
- )
-
- # 3. Define dpm-solver and sample by singlestep DPM-Solver.
- # (We recommend singlestep DPM-Solver for unconditional sampling)
- # You can adjust the `steps` to balance the computation
- # costs and the sample quality.
- dpm_solver = DPM_Solver(model_fn, noise_schedule)
-
- steps = t // infer_speedup
- if use_tqdm:
- self.bar = tqdm(desc="sample time step", total=steps)
- x = dpm_solver.sample(
- x,
- steps=steps,
- order=3,
- skip_type="time_uniform",
- method="singlestep",
- )
- if use_tqdm:
- self.bar.close()
- elif method == 'pndm':
- self.noise_list = deque(maxlen=4)
- if use_tqdm:
- for i in tqdm(
- reversed(range(0, t, infer_speedup)), desc='sample time step',
- total=t // infer_speedup,
- ):
- x = self.p_sample_plms(
- x, torch.full((b,), i, device=device, dtype=torch.long),
- infer_speedup, cond=cond
- )
- else:
- for i in reversed(range(0, t, infer_speedup)):
- x = self.p_sample_plms(
- x, torch.full((b,), i, device=device, dtype=torch.long),
- infer_speedup, cond=cond
- )
- else:
- raise NotImplementedError(method)
- else:
- if use_tqdm:
- for i in tqdm(reversed(range(0, t)), desc='sample time step', total=t):
- x = self.p_sample(x, torch.full((b,), i, device=device, dtype=torch.long), cond)
- else:
- for i in reversed(range(0, t)):
- x = self.p_sample(x, torch.full((b,), i, device=device, dtype=torch.long), cond)
- x = x.squeeze(1).transpose(1, 2) # [B, T, M]
- return self.denorm_spec(x).transpose(2, 1)
-
- def norm_spec(self, x):
- return (x - self.spec_min) / (self.spec_max - self.spec_min) * 2 - 1
-
- def denorm_spec(self, x):
- return (x + 1) / 2 * (self.spec_max - self.spec_min) + self.spec_min
-
- def get_x_pred(self, x_1, noise_t, t_1, t_prev):
- a_t = extract(self.alphas_cumprod, t_1)
- a_prev = extract(self.alphas_cumprod, t_prev)
- a_t_sq, a_prev_sq = a_t.sqrt(), a_prev.sqrt()
- x_delta = (a_prev - a_t) * ((1 / (a_t_sq * (a_t_sq + a_prev_sq))) * x_1 - 1 / (
- a_t_sq * (((1 - a_prev) * a_t).sqrt() + ((1 - a_t) * a_prev).sqrt())) * noise_t)
- x_pred = x_1 + x_delta
- return x_pred
-
- def OnnxExport(self, project_name=None, init_noise=None, hidden_channels=256, export_denoise=True, export_pred=True, export_after=True):
- cond = torch.randn([1, self.n_hidden, 10]).cpu()
- if init_noise is None:
- x = torch.randn((1, 1, self.mel_bins, cond.shape[2]), dtype=torch.float32).cpu()
- else:
- x = init_noise
- pndms = 100
-
- org_y_x = self.org_forward(cond, init_noise=x)
-
- device = cond.device
- n_frames = cond.shape[2]
- step_range = torch.arange(0, self.k_step, pndms, dtype=torch.long, device=device).flip(0)
- plms_noise_stage = torch.tensor(0, dtype=torch.long, device=device)
- noise_list = torch.zeros((0, 1, 1, self.mel_bins, n_frames), device=device)
-
- ot = step_range[0]
- ot_1 = torch.full((1,), ot, device=device, dtype=torch.long)
- if export_denoise:
- torch.onnx.export(
- self.denoise_fn,
- (x.cpu(), ot_1.cpu(), cond.cpu()),
- f"{project_name}_denoise.onnx",
- input_names=["noise", "time", "condition"],
- output_names=["noise_pred"],
- dynamic_axes={
- "noise": [3],
- "condition": [2]
- },
- opset_version=16
- )
-
- for t in step_range:
- t_1 = torch.full((1,), t, device=device, dtype=torch.long)
- noise_pred = self.denoise_fn(x, t_1, cond)
- t_prev = t_1 - pndms
- t_prev = t_prev * (t_prev > 0)
- if plms_noise_stage == 0:
- if export_pred:
- torch.onnx.export(
- self.xp,
- (x.cpu(), noise_pred.cpu(), t_1.cpu(), t_prev.cpu()),
- f"{project_name}_pred.onnx",
- input_names=["noise", "noise_pred", "time", "time_prev"],
- output_names=["noise_pred_o"],
- dynamic_axes={
- "noise": [3],
- "noise_pred": [3]
- },
- opset_version=16
- )
-
- x_pred = self.get_x_pred(x, noise_pred, t_1, t_prev)
- noise_pred_prev = self.denoise_fn(x_pred, t_prev, cond=cond)
- noise_pred_prime = predict_stage0(noise_pred, noise_pred_prev)
-
- elif plms_noise_stage == 1:
- noise_pred_prime = predict_stage1(noise_pred, noise_list)
-
- elif plms_noise_stage == 2:
- noise_pred_prime = predict_stage2(noise_pred, noise_list)
-
- else:
- noise_pred_prime = predict_stage3(noise_pred, noise_list)
-
- noise_pred = noise_pred.unsqueeze(0)
-
- if plms_noise_stage < 3:
- noise_list = torch.cat((noise_list, noise_pred), dim=0)
- plms_noise_stage = plms_noise_stage + 1
-
- else:
- noise_list = torch.cat((noise_list[-2:], noise_pred), dim=0)
-
- x = self.get_x_pred(x, noise_pred_prime, t_1, t_prev)
- if export_after:
- torch.onnx.export(
- self.ad,
- x.cpu(),
- f"{project_name}_after.onnx",
- input_names=["x"],
- output_names=["mel_out"],
- dynamic_axes={
- "x": [3]
- },
- opset_version=16
- )
- x = self.ad(x)
-
- print((x == org_y_x).all())
- return x
-
- def forward(self, condition=None, init_noise=None, pndms=None, k_step=None):
- cond = condition
- x = init_noise
-
- device = cond.device
- n_frames = cond.shape[2]
- step_range = torch.arange(0, k_step.item(), pndms.item(), dtype=torch.long, device=device).flip(0)
- plms_noise_stage = torch.tensor(0, dtype=torch.long, device=device)
- noise_list = torch.zeros((0, 1, 1, self.mel_bins, n_frames), device=device)
-
- ot = step_range[0]
- ot_1 = torch.full((1,), ot, device=device, dtype=torch.long)
-
- for t in step_range:
- t_1 = torch.full((1,), t, device=device, dtype=torch.long)
- noise_pred = self.denoise_fn(x, t_1, cond)
- t_prev = t_1 - pndms
- t_prev = t_prev * (t_prev > 0)
- if plms_noise_stage == 0:
- x_pred = self.get_x_pred(x, noise_pred, t_1, t_prev)
- noise_pred_prev = self.denoise_fn(x_pred, t_prev, cond=cond)
- noise_pred_prime = predict_stage0(noise_pred, noise_pred_prev)
-
- elif plms_noise_stage == 1:
- noise_pred_prime = predict_stage1(noise_pred, noise_list)
-
- elif plms_noise_stage == 2:
- noise_pred_prime = predict_stage2(noise_pred, noise_list)
-
- else:
- noise_pred_prime = predict_stage3(noise_pred, noise_list)
-
- noise_pred = noise_pred.unsqueeze(0)
-
- if plms_noise_stage < 3:
- noise_list = torch.cat((noise_list, noise_pred), dim=0)
- plms_noise_stage = plms_noise_stage + 1
-
- else:
- noise_list = torch.cat((noise_list[-2:], noise_pred), dim=0)
-
- x = self.get_x_pred(x, noise_pred_prime, t_1, t_prev)
- x = self.ad(x)
- return x
diff --git a/spaces/Guilherme34/LiminalAI-cpu/app.py b/spaces/Guilherme34/LiminalAI-cpu/app.py
deleted file mode 100644
index 11066764529d18a1d3dd9aa5edde5436481cf346..0000000000000000000000000000000000000000
--- a/spaces/Guilherme34/LiminalAI-cpu/app.py
+++ /dev/null
@@ -1,160 +0,0 @@
-"""
-Stable Diffusion Webui Version 1.3.2
-https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.3.2
-
-"""
-
-import os
-from sys import executable
-import subprocess
-import pathlib
-import gc
-
-def Gitclone(URI:str,ClonePath:pathlib.Path ) -> int :
- if pathlib.Path.exists(ClonePath):
- return 0
- while True:
- i=subprocess.run([r"git",r"clone",str(URI),str(ClonePath)])
- if(i.returncode == 0 ):
- del i
- gc.collect()
- return 0
- else :
- del i
-def DownLoad(URI:str,DownloadPath:pathlib.Path,DownLoadFileName:str ) -> int:
- while (True):
- i=subprocess.run([r"aria2c",r"-c",r"-x" ,r"16", r"-s",r"16", r"-k" ,r"1M" ,r"-m",r"0",r"--enable-mmap=false",r"--console-log-level=error",r"-d",str(DownloadPath),r"-o",DownLoadFileName,URI]);
- if(i.returncode == 0 ):
- del i
- gc.collect()
- return 0
- else :
- del i
-user_home =pathlib.Path.home().resolve()
-os.chdir(str(user_home))
-#clone stable-diffusion-webui repo
-print("cloning stable-diffusion-webui repo")
-Gitclone(r"https://github.com/AUTOMATIC1111/stable-diffusion-webui.git",user_home / r"stable-diffusion-webui")
-os.chdir(str(user_home / r"stable-diffusion-webui"))
-os.system("git reset --hard baf6946e06249c5af9851c60171692c44ef633e0") #Version 1.32
-#install extensions
-print("installing extensions")
-Gitclone(r"https://huggingface.co/embed/negative",user_home / r"stable-diffusion-webui" / r"embeddings" / r"negative")
-Gitclone(r"https://huggingface.co/embed/lora",user_home / r"stable-diffusion-webui" / r"models" / r"Lora" / r"positive")
-DownLoad(r"https://huggingface.co/embed/upscale/resolve/main/4x-UltraSharp.pth",user_home / r"stable-diffusion-webui" / r"models" / r"ESRGAN" ,r"4x-UltraSharp.pth")
-while (True):
- i=subprocess.run([r"wget",r"https://raw.githubusercontent.com/camenduru/stable-diffusion-webui-scripts/main/run_n_times.py",r"-O",str(user_home / r"stable-diffusion-webui" / r"scripts" / r"run_n_times.py")])
- if(i.returncode == 0 ):
- del i
- gc.collect()
- break
- else :
- del i
-Gitclone(r"https://github.com/deforum-art/deforum-for-automatic1111-webui",user_home / r"stable-diffusion-webui" / r"extensions" / r"deforum-for-automatic1111-webui" )
-Gitclone(r"https://github.com/AlUlkesh/stable-diffusion-webui-images-browser",user_home / r"stable-diffusion-webui" / r"extensions"/ r"stable-diffusion-webui-images-browser")
-Gitclone(r"https://github.com/camenduru/stable-diffusion-webui-huggingface",user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-huggingface")
-Gitclone(r"https://github.com/camenduru/sd-civitai-browser",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-civitai-browser")
-Gitclone(r"https://github.com/kohya-ss/sd-webui-additional-networks",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks")
-Gitclone(r"https://github.com/Mikubill/sd-webui-controlnet",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-controlnet")
-Gitclone(r"https://github.com/fkunn1326/openpose-editor",user_home / r"stable-diffusion-webui" / r"extensions" / r"openpose-editor")
-Gitclone(r"https://github.com/jexom/sd-webui-depth-lib",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-depth-lib")
-Gitclone(r"https://github.com/hnmr293/posex",user_home / r"stable-diffusion-webui" / r"extensions" / r"posex")
-Gitclone(r"https://github.com/nonnonstop/sd-webui-3d-open-pose-editor",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-3d-open-pose-editor")
-#For Chinese (zh_CN) localization, uncomment the next line
-#Gitclone(r"https://github.com/dtlnor/stable-diffusion-webui-localization-zh_CN.git",user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-localization-zh_CN")
-Gitclone(r"https://github.com/DominikDoom/a1111-sd-webui-tagcomplete.git" , user_home / r"stable-diffusion-webui" / r"extensions" / r"a1111-sd-webui-tagcomplete")
-Gitclone(r"https://github.com/camenduru/sd-webui-tunnels",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-tunnels")
-Gitclone(r"https://github.com/etherealxx/batchlinks-webui",user_home / r"stable-diffusion-webui" / r"extensions" / r"batchlinks-webui")
-Gitclone(r"https://github.com/catppuccin/stable-diffusion-webui",user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-catppuccin")
-Gitclone(r"https://github.com/AUTOMATIC1111/stable-diffusion-webui-rembg",user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-rembg")
-Gitclone(r"https://github.com/ashen-sensored/stable-diffusion-webui-two-shot",user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-two-shot")
-Gitclone(r"https://github.com/camenduru/sd_webui_stealth_pnginfo",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd_webui_stealth_pnginfo")
-os.chdir(user_home / r"stable-diffusion-webui")
-#download ControlNet models
-print("extensions dolwnload done .\ndownloading ControlNet models")
-dList =[r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11e_sd15_ip2p_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11e_sd15_shuffle_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_canny_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11f1p_sd15_depth_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_inpaint_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_lineart_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_mlsd_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_normalbae_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_openpose_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_scribble_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_seg_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_softedge_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15s2_lineart_anime_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11f1e_sd15_tile_fp16.safetensors",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11e_sd15_ip2p_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11e_sd15_shuffle_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_canny_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11f1p_sd15_depth_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_inpaint_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_lineart_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_mlsd_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_normalbae_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_openpose_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_scribble_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_seg_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_softedge_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15s2_lineart_anime_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11f1e_sd15_tile_fp16.yaml",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_style_sd14v1.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_sketch_sd14v1.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_seg_sd14v1.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_openpose_sd14v1.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_keypose_sd14v1.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_depth_sd14v1.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_canny_sd14v1.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_canny_sd15v2.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_depth_sd15v2.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_sketch_sd15v2.pth",
- r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_zoedepth_sd15v1.pth"]
-for i in range(0,len(dList)): DownLoad(dList[i],user_home / r"stable-diffusion-webui" / r"extensions" / "sd-webui-controlnet" / r"models",pathlib.Path(dList[i]).name)
-del dList
-#download model
-#you can change model download address here
-print("ControlNet models download done.\ndownloading model")
-#Stable Diffusion Checkpoint Model
-#anything version4.5
-DownLoad(r"https://huggingface.co/ckpt/anything-v4.0/resolve/main/anything-v4.5-pruned.ckpt",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion",r"anything-v4.5-pruned.ckpt")
-DownLoad(r"https://huggingface.co/ckpt/anything-v4.0/resolve/main/anything-v4.0.vae.pt",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion",r"anything-v4.0.vae.pt")
-#Counterfeit-V3.0
-DownLoad(r"https://huggingface.co/gsdf/Counterfeit-V3.0/resolve/main/Counterfeit-V3.0_fp16.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion",r"Counterfeit-V3.0_fp16.safetensors")
-DownLoad(r"https://huggingface.co/Guilherme34/LiminalAI/resolve/main/dreamlookai_stable-diffusion-v1-5_step_5000_db_1bf67a3f_ckp_00a15ae4.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion",r"LiminalAI.safetensors")
-#AbyssOrangeMix2 sfw
-DownLoad(r"https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix2/AbyssOrangeMix2_sfw.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion",r"AbyssOrangeMix2_sfw.safetensors")
-DownLoad(r"https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/VAEs/orangemix.vae.pt",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion",r"orangemix.vae.pt")
-#MeinaPastelV5
-DownLoad(r"https://huggingface.co/Meina/MeinaPastel/resolve/main/MeinaPastelV5%20-%20Baked%20VAE.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion",r"MeinaPastelV5_BakedVAE.safetensors")
-DownLoad(r"https://huggingface.co/Meina/MeinaPastel/resolve/main/MeinaPastelV5%20-%20Without%20VAE.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion",r"MeinaPastelV5_WithoutVAE.safetensors")
-
-#Lora Model
-#Better Light
-DownLoad(r"https://civitai.com/api/download/models/39885",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks" / r"models"/ r"lora",r"Better_light.safetensors")
-DownLoad(r"https://civitai.com/api/download/models/39885",user_home / r"stable-diffusion-webui" / r"models"/ r"lora",r"Better_light.safetensors")
-#LAS
-DownLoad(r"https://civitai.com/api/download/models/21065",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks" / r"models"/ r"lora",r"LAS.safetensors")
-DownLoad(r"https://civitai.com/api/download/models/21065",user_home / r"stable-diffusion-webui" / r"models"/ r"lora",r"LAS.safetensors")
-#Backlighting
-DownLoad(r"https://civitai.com/api/download/models/39164",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks" / r"models"/ r"lora",r"backlighting.safetensors")
-DownLoad(r"https://civitai.com/api/download/models/39164",user_home / r"stable-diffusion-webui" / r"models"/ r"lora",r"backlighting.safetensors")
-#GFPGAN Model
-#detection Resnet50
-DownLoad(r"https://github.com/xinntao/facexlib/releases/download/v0.1.0/detection_Resnet50_Final.pth",user_home / r"stable-diffusion-webui"/r"models"/r"GFPGAN",r"detection_Resnet50_Final.pth")
-#parsing_parsenet
-DownLoad(r"https://github.com/xinntao/facexlib/releases/download/v0.2.2/parsing_parsenet.pth",user_home / r"stable-diffusion-webui"/r"models"/r"GFPGAN",r"parsing_parsenet.pth")
-#GFPGANv1.4
-DownLoad(r"https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth",user_home / r"stable-diffusion-webui"/r"models"/r"GFPGAN",r"GFPGANv1.4.pth")
-#start Stable Diffusion Webui
-print("Done\nStarting Webui...")
-os.chdir(user_home / r"stable-diffusion-webui")
-while True:
- ret=subprocess.run([executable ,user_home / r"stable-diffusion-webui" / r"launch.py",r"--precision",r"full",r"--no-half",r"--no-half-vae",r"--enable-insecure-extension-access",r"--medvram",r"--skip-torch-cuda-test",r"--enable-console-prompts",r"--ui-settings-file="+str(pathlib.Path(__file__).parent /r"config.json")])
- if(ret.returncode == 0 ):
- del ret
- gc.collect()
- else :
- del ret
-del os, user_home, executable, subprocess
\ No newline at end of file
diff --git a/spaces/HaHaBill/LandShapes-Antarctica/models/stylegan2/stylegan2-pytorch/train.py b/spaces/HaHaBill/LandShapes-Antarctica/models/stylegan2/stylegan2-pytorch/train.py
deleted file mode 100644
index 7295f159b0427aef89a5944a0d1eb4c23ee85a7f..0000000000000000000000000000000000000000
--- a/spaces/HaHaBill/LandShapes-Antarctica/models/stylegan2/stylegan2-pytorch/train.py
+++ /dev/null
@@ -1,413 +0,0 @@
-import argparse
-import math
-import random
-import os
-
-import numpy as np
-import torch
-from torch import nn, autograd, optim
-from torch.nn import functional as F
-from torch.utils import data
-import torch.distributed as dist
-from torchvision import transforms, utils
-from tqdm import tqdm
-
-try:
- import wandb
-
-except ImportError:
- wandb = None
-
-from model import Generator, Discriminator
-from dataset import MultiResolutionDataset
-from distributed import (
- get_rank,
- synchronize,
- reduce_loss_dict,
- reduce_sum,
- get_world_size,
-)
-
-
-def data_sampler(dataset, shuffle, distributed):
- if distributed:
- return data.distributed.DistributedSampler(dataset, shuffle=shuffle)
-
- if shuffle:
- return data.RandomSampler(dataset)
-
- else:
- return data.SequentialSampler(dataset)
-
-
-def requires_grad(model, flag=True):
- for p in model.parameters():
- p.requires_grad = flag
-
-
-def accumulate(model1, model2, decay=0.999):
- par1 = dict(model1.named_parameters())
- par2 = dict(model2.named_parameters())
-
- for k in par1.keys():
- par1[k].data.mul_(decay).add_(1 - decay, par2[k].data)
-
-
-def sample_data(loader):
- while True:
- for batch in loader:
- yield batch
-
-
-def d_logistic_loss(real_pred, fake_pred):
- real_loss = F.softplus(-real_pred)
- fake_loss = F.softplus(fake_pred)
-
- return real_loss.mean() + fake_loss.mean()
-
-
-def d_r1_loss(real_pred, real_img):
- grad_real, = autograd.grad(
- outputs=real_pred.sum(), inputs=real_img, create_graph=True
- )
- grad_penalty = grad_real.pow(2).view(grad_real.shape[0], -1).sum(1).mean()
-
- return grad_penalty
-
-
-def g_nonsaturating_loss(fake_pred):
- loss = F.softplus(-fake_pred).mean()
-
- return loss
-
-
-def g_path_regularize(fake_img, latents, mean_path_length, decay=0.01):
- noise = torch.randn_like(fake_img) / math.sqrt(
- fake_img.shape[2] * fake_img.shape[3]
- )
- grad, = autograd.grad(
- outputs=(fake_img * noise).sum(), inputs=latents, create_graph=True
- )
- path_lengths = torch.sqrt(grad.pow(2).sum(2).mean(1))
-
- path_mean = mean_path_length + decay * (path_lengths.mean() - mean_path_length)
-
- path_penalty = (path_lengths - path_mean).pow(2).mean()
-
- return path_penalty, path_mean.detach(), path_lengths
-
-
-def make_noise(batch, latent_dim, n_noise, device):
- if n_noise == 1:
- return torch.randn(batch, latent_dim, device=device)
-
- noises = torch.randn(n_noise, batch, latent_dim, device=device).unbind(0)
-
- return noises
-
-
-def mixing_noise(batch, latent_dim, prob, device):
- if prob > 0 and random.random() < prob:
- return make_noise(batch, latent_dim, 2, device)
-
- else:
- return [make_noise(batch, latent_dim, 1, device)]
-
-
-def set_grad_none(model, targets):
- for n, p in model.named_parameters():
- if n in targets:
- p.grad = None
-
-
-def train(args, loader, generator, discriminator, g_optim, d_optim, g_ema, device):
- loader = sample_data(loader)
-
- pbar = range(args.iter)
-
- if get_rank() == 0:
- pbar = tqdm(pbar, initial=args.start_iter, dynamic_ncols=True, smoothing=0.01)
-
- mean_path_length = 0
-
- d_loss_val = 0
- r1_loss = torch.tensor(0.0, device=device)
- g_loss_val = 0
- path_loss = torch.tensor(0.0, device=device)
- path_lengths = torch.tensor(0.0, device=device)
- mean_path_length_avg = 0
- loss_dict = {}
-
- if args.distributed:
- g_module = generator.module
- d_module = discriminator.module
-
- else:
- g_module = generator
- d_module = discriminator
-
- accum = 0.5 ** (32 / (10 * 1000))
-
- sample_z = torch.randn(args.n_sample, args.latent, device=device)
-
- for idx in pbar:
- i = idx + args.start_iter
-
- if i > args.iter:
- print("Done!")
-
- break
-
- real_img = next(loader)
- real_img = real_img.to(device)
-
- requires_grad(generator, False)
- requires_grad(discriminator, True)
-
- noise = mixing_noise(args.batch, args.latent, args.mixing, device)
- fake_img, _ = generator(noise)
- fake_pred = discriminator(fake_img)
-
- real_pred = discriminator(real_img)
- d_loss = d_logistic_loss(real_pred, fake_pred)
-
- loss_dict["d"] = d_loss
- loss_dict["real_score"] = real_pred.mean()
- loss_dict["fake_score"] = fake_pred.mean()
-
- discriminator.zero_grad()
- d_loss.backward()
- d_optim.step()
-
- d_regularize = i % args.d_reg_every == 0
-
- if d_regularize:
- real_img.requires_grad = True
- real_pred = discriminator(real_img)
- r1_loss = d_r1_loss(real_pred, real_img)
-
- discriminator.zero_grad()
- (args.r1 / 2 * r1_loss * args.d_reg_every + 0 * real_pred[0]).backward()
-
- d_optim.step()
-
- loss_dict["r1"] = r1_loss
-
- requires_grad(generator, True)
- requires_grad(discriminator, False)
-
- noise = mixing_noise(args.batch, args.latent, args.mixing, device)
- fake_img, _ = generator(noise)
- fake_pred = discriminator(fake_img)
- g_loss = g_nonsaturating_loss(fake_pred)
-
- loss_dict["g"] = g_loss
-
- generator.zero_grad()
- g_loss.backward()
- g_optim.step()
-
- g_regularize = i % args.g_reg_every == 0
-
- if g_regularize:
- path_batch_size = max(1, args.batch // args.path_batch_shrink)
- noise = mixing_noise(path_batch_size, args.latent, args.mixing, device)
- fake_img, latents = generator(noise, return_latents=True)
-
- path_loss, mean_path_length, path_lengths = g_path_regularize(
- fake_img, latents, mean_path_length
- )
-
- generator.zero_grad()
- weighted_path_loss = args.path_regularize * args.g_reg_every * path_loss
-
- if args.path_batch_shrink:
- weighted_path_loss += 0 * fake_img[0, 0, 0, 0]
-
- weighted_path_loss.backward()
-
- g_optim.step()
-
- mean_path_length_avg = (
- reduce_sum(mean_path_length).item() / get_world_size()
- )
-
- loss_dict["path"] = path_loss
- loss_dict["path_length"] = path_lengths.mean()
-
- accumulate(g_ema, g_module, accum)
-
- loss_reduced = reduce_loss_dict(loss_dict)
-
- d_loss_val = loss_reduced["d"].mean().item()
- g_loss_val = loss_reduced["g"].mean().item()
- r1_val = loss_reduced["r1"].mean().item()
- path_loss_val = loss_reduced["path"].mean().item()
- real_score_val = loss_reduced["real_score"].mean().item()
- fake_score_val = loss_reduced["fake_score"].mean().item()
- path_length_val = loss_reduced["path_length"].mean().item()
-
- if get_rank() == 0:
- pbar.set_description(
- (
- f"d: {d_loss_val:.4f}; g: {g_loss_val:.4f}; r1: {r1_val:.4f}; "
- f"path: {path_loss_val:.4f}; mean path: {mean_path_length_avg:.4f}"
- )
- )
-
- if wandb and args.wandb:
- wandb.log(
- {
- "Generator": g_loss_val,
- "Discriminator": d_loss_val,
- "R1": r1_val,
- "Path Length Regularization": path_loss_val,
- "Mean Path Length": mean_path_length,
- "Real Score": real_score_val,
- "Fake Score": fake_score_val,
- "Path Length": path_length_val,
- }
- )
-
- if i % 100 == 0:
- with torch.no_grad():
- g_ema.eval()
- sample, _ = g_ema([sample_z])
- utils.save_image(
- sample,
- f"sample/{str(i).zfill(6)}.png",
- nrow=int(args.n_sample ** 0.5),
- normalize=True,
- range=(-1, 1),
- )
-
- if i % 10000 == 0:
- torch.save(
- {
- "g": g_module.state_dict(),
- "d": d_module.state_dict(),
- "g_ema": g_ema.state_dict(),
- "g_optim": g_optim.state_dict(),
- "d_optim": d_optim.state_dict(),
- },
- f"checkpoint/{str(i).zfill(6)}.pt",
- )
-
-
-if __name__ == "__main__":
- device = "cuda"
-
- parser = argparse.ArgumentParser()
-
- parser.add_argument("path", type=str)
- parser.add_argument("--iter", type=int, default=800000)
- parser.add_argument("--batch", type=int, default=16)
- parser.add_argument("--n_sample", type=int, default=64)
- parser.add_argument("--size", type=int, default=256)
- parser.add_argument("--r1", type=float, default=10)
- parser.add_argument("--path_regularize", type=float, default=2)
- parser.add_argument("--path_batch_shrink", type=int, default=2)
- parser.add_argument("--d_reg_every", type=int, default=16)
- parser.add_argument("--g_reg_every", type=int, default=4)
- parser.add_argument("--mixing", type=float, default=0.9)
- parser.add_argument("--ckpt", type=str, default=None)
- parser.add_argument("--lr", type=float, default=0.002)
- parser.add_argument("--channel_multiplier", type=int, default=2)
- parser.add_argument("--wandb", action="store_true")
- parser.add_argument("--local_rank", type=int, default=0)
-
- args = parser.parse_args()
-
- n_gpu = int(os.environ["WORLD_SIZE"]) if "WORLD_SIZE" in os.environ else 1
- args.distributed = n_gpu > 1
-
- if args.distributed:
- torch.cuda.set_device(args.local_rank)
- torch.distributed.init_process_group(backend="nccl", init_method="env://")
- synchronize()
-
- args.latent = 512
- args.n_mlp = 8
-
- args.start_iter = 0
-
- generator = Generator(
- args.size, args.latent, args.n_mlp, channel_multiplier=args.channel_multiplier
- ).to(device)
- discriminator = Discriminator(
- args.size, channel_multiplier=args.channel_multiplier
- ).to(device)
- g_ema = Generator(
- args.size, args.latent, args.n_mlp, channel_multiplier=args.channel_multiplier
- ).to(device)
- g_ema.eval()
- accumulate(g_ema, generator, 0)
-
- g_reg_ratio = args.g_reg_every / (args.g_reg_every + 1)
- d_reg_ratio = args.d_reg_every / (args.d_reg_every + 1)
-
- g_optim = optim.Adam(
- generator.parameters(),
- lr=args.lr * g_reg_ratio,
- betas=(0 ** g_reg_ratio, 0.99 ** g_reg_ratio),
- )
- d_optim = optim.Adam(
- discriminator.parameters(),
- lr=args.lr * d_reg_ratio,
- betas=(0 ** d_reg_ratio, 0.99 ** d_reg_ratio),
- )
-
- if args.ckpt is not None:
- print("load model:", args.ckpt)
-
- ckpt = torch.load(args.ckpt, map_location=lambda storage, loc: storage)
-
- try:
- ckpt_name = os.path.basename(args.ckpt)
- args.start_iter = int(os.path.splitext(ckpt_name)[0])
-
- except ValueError:
- pass
-
- generator.load_state_dict(ckpt["g"])
- discriminator.load_state_dict(ckpt["d"])
- g_ema.load_state_dict(ckpt["g_ema"])
-
- g_optim.load_state_dict(ckpt["g_optim"])
- d_optim.load_state_dict(ckpt["d_optim"])
-
- if args.distributed:
- generator = nn.parallel.DistributedDataParallel(
- generator,
- device_ids=[args.local_rank],
- output_device=args.local_rank,
- broadcast_buffers=False,
- )
-
- discriminator = nn.parallel.DistributedDataParallel(
- discriminator,
- device_ids=[args.local_rank],
- output_device=args.local_rank,
- broadcast_buffers=False,
- )
-
- transform = transforms.Compose(
- [
- transforms.RandomHorizontalFlip(),
- transforms.ToTensor(),
- transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5), inplace=True),
- ]
- )
-
- dataset = MultiResolutionDataset(args.path, transform, args.size)
- loader = data.DataLoader(
- dataset,
- batch_size=args.batch,
- sampler=data_sampler(dataset, shuffle=True, distributed=args.distributed),
- drop_last=True,
- )
-
- if get_rank() == 0 and wandb is not None and args.wandb:
- wandb.init(project="stylegan 2")
-
- train(args, loader, generator, discriminator, g_optim, d_optim, g_ema, device)
diff --git a/spaces/Hallucinate/demo/taming/data/conditional_builder/objects_center_points.py b/spaces/Hallucinate/demo/taming/data/conditional_builder/objects_center_points.py
deleted file mode 100644
index 9a480329cc47fb38a7b8729d424e092b77d40749..0000000000000000000000000000000000000000
--- a/spaces/Hallucinate/demo/taming/data/conditional_builder/objects_center_points.py
+++ /dev/null
@@ -1,168 +0,0 @@
-import math
-import random
-import warnings
-from itertools import cycle
-from typing import List, Optional, Tuple, Callable
-
-from PIL import Image as pil_image, ImageDraw as pil_img_draw, ImageFont
-from more_itertools.recipes import grouper
-from taming.data.conditional_builder.utils import COLOR_PALETTE, WHITE, GRAY_75, BLACK, FULL_CROP, filter_annotations, \
- additional_parameters_string, horizontally_flip_bbox, pad_list, get_circle_size, get_plot_font_size, \
- absolute_bbox, rescale_annotations
-from taming.data.helper_types import BoundingBox, Annotation
-from taming.data.image_transforms import convert_pil_to_tensor
-from torch import LongTensor, Tensor
-
-
-class ObjectsCenterPointsConditionalBuilder:
- def __init__(self, no_object_classes: int, no_max_objects: int, no_tokens: int, encode_crop: bool,
- use_group_parameter: bool, use_additional_parameters: bool):
- self.no_object_classes = no_object_classes
- self.no_max_objects = no_max_objects
- self.no_tokens = no_tokens
- self.encode_crop = encode_crop
- self.no_sections = int(math.sqrt(self.no_tokens))
- self.use_group_parameter = use_group_parameter
- self.use_additional_parameters = use_additional_parameters
-
- @property
- def none(self) -> int:
- return self.no_tokens - 1
-
- @property
- def object_descriptor_length(self) -> int:
- return 2
-
- @property
- def embedding_dim(self) -> int:
- extra_length = 2 if self.encode_crop else 0
- return self.no_max_objects * self.object_descriptor_length + extra_length
-
- def tokenize_coordinates(self, x: float, y: float) -> int:
- """
- Express 2d coordinates with one number.
- Example: assume self.no_tokens = 16, then no_sections = 4:
- 0 0 0 0
- 0 0 # 0
- 0 0 0 0
- 0 0 0 x
- Then the # position corresponds to token 6, the x position to token 15.
- @param x: float in [0, 1]
- @param y: float in [0, 1]
- @return: discrete tokenized coordinate
- """
- x_discrete = int(round(x * (self.no_sections - 1)))
- y_discrete = int(round(y * (self.no_sections - 1)))
- return y_discrete * self.no_sections + x_discrete
-
- def coordinates_from_token(self, token: int) -> Tuple[float, float]:
- x = token % self.no_sections
- y = token // self.no_sections
- return x / (self.no_sections - 1), y / (self.no_sections - 1)
-
- def bbox_from_token_pair(self, token1: int, token2: int) -> BoundingBox:
- x0, y0 = self.coordinates_from_token(token1)
- x1, y1 = self.coordinates_from_token(token2)
- return x0, y0, x1 - x0, y1 - y0
-
- def token_pair_from_bbox(self, bbox: BoundingBox) -> Tuple[int, int]:
- return self.tokenize_coordinates(bbox[0], bbox[1]), \
- self.tokenize_coordinates(bbox[0] + bbox[2], bbox[1] + bbox[3])
-
- def inverse_build(self, conditional: LongTensor) \
- -> Tuple[List[Tuple[int, Tuple[float, float]]], Optional[BoundingBox]]:
- conditional_list = conditional.tolist()
- crop_coordinates = None
- if self.encode_crop:
- crop_coordinates = self.bbox_from_token_pair(conditional_list[-2], conditional_list[-1])
- conditional_list = conditional_list[:-2]
- table_of_content = grouper(conditional_list, self.object_descriptor_length)
- assert conditional.shape[0] == self.embedding_dim
- return [
- (object_tuple[0], self.coordinates_from_token(object_tuple[1]))
- for object_tuple in table_of_content if object_tuple[0] != self.none
- ], crop_coordinates
-
- def plot(self, conditional: LongTensor, label_for_category_no: Callable[[int], str], figure_size: Tuple[int, int],
- line_width: int = 3, font_size: Optional[int] = None) -> Tensor:
- plot = pil_image.new('RGB', figure_size, WHITE)
- draw = pil_img_draw.Draw(plot)
- circle_size = get_circle_size(figure_size)
- font = ImageFont.truetype('/usr/share/fonts/truetype/lato/Lato-Regular.ttf',
- size=get_plot_font_size(font_size, figure_size))
- width, height = plot.size
- description, crop_coordinates = self.inverse_build(conditional)
- for (representation, (x, y)), color in zip(description, cycle(COLOR_PALETTE)):
- x_abs, y_abs = x * width, y * height
- ann = self.representation_to_annotation(representation)
- label = label_for_category_no(ann.category_no) + ' ' + additional_parameters_string(ann)
- ellipse_bbox = [x_abs - circle_size, y_abs - circle_size, x_abs + circle_size, y_abs + circle_size]
- draw.ellipse(ellipse_bbox, fill=color, width=0)
- draw.text((x_abs, y_abs), label, anchor='md', fill=BLACK, font=font)
- if crop_coordinates is not None:
- draw.rectangle(absolute_bbox(crop_coordinates, width, height), outline=GRAY_75, width=line_width)
- return convert_pil_to_tensor(plot) / 127.5 - 1.
-
- def object_representation(self, annotation: Annotation) -> int:
- modifier = 0
- if self.use_group_parameter:
- modifier |= 1 * (annotation.is_group_of is True)
- if self.use_additional_parameters:
- modifier |= 2 * (annotation.is_occluded is True)
- modifier |= 4 * (annotation.is_depiction is True)
- modifier |= 8 * (annotation.is_inside is True)
- return annotation.category_no + self.no_object_classes * modifier
-
- def representation_to_annotation(self, representation: int) -> Annotation:
- category_no = representation % self.no_object_classes
- modifier = representation // self.no_object_classes
- # noinspection PyTypeChecker
- return Annotation(
- area=None, image_id=None, bbox=None, category_id=None, id=None, source=None, confidence=None,
- category_no=category_no,
- is_group_of=bool((modifier & 1) * self.use_group_parameter),
- is_occluded=bool((modifier & 2) * self.use_additional_parameters),
- is_depiction=bool((modifier & 4) * self.use_additional_parameters),
- is_inside=bool((modifier & 8) * self.use_additional_parameters)
- )
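# A minimal sketch of the bit-flag packing used by object_representation /
# representation_to_annotation above, assuming a hypothetical no_object_classes = 100
# and an annotation with is_group_of and is_inside set.
category_no, no_object_classes = 7, 100
modifier = 1 | 8                     # is_group_of -> bit 0, is_inside -> bit 3
representation = category_no + no_object_classes * modifier
assert representation == 907
assert representation % no_object_classes == 7           # category recovered
assert (representation // no_object_classes) & 1 == 1    # group-of flag recovered
assert (representation // no_object_classes) & 8 == 8    # is-inside flag recovered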
-
- def _crop_encoder(self, crop_coordinates: BoundingBox) -> List[int]:
- return list(self.token_pair_from_bbox(crop_coordinates))
-
- def _make_object_descriptors(self, annotations: List[Annotation]) -> List[Tuple[int, ...]]:
- object_tuples = [
- (self.object_representation(a),
- self.tokenize_coordinates(a.bbox[0] + a.bbox[2] / 2, a.bbox[1] + a.bbox[3] / 2))
- for a in annotations
- ]
- empty_tuple = (self.none, self.none)
- object_tuples = pad_list(object_tuples, empty_tuple, self.no_max_objects)
- return object_tuples
-
- def build(self, annotations: List, crop_coordinates: Optional[BoundingBox] = None, horizontal_flip: bool = False) \
- -> LongTensor:
- if len(annotations) == 0:
- warnings.warn('Did not receive any annotations.')
- if len(annotations) > self.no_max_objects:
- warnings.warn('Received more annotations than allowed.')
- annotations = annotations[:self.no_max_objects]
-
- if not crop_coordinates:
- crop_coordinates = FULL_CROP
-
- random.shuffle(annotations)
- annotations = filter_annotations(annotations, crop_coordinates)
- if self.encode_crop:
- annotations = rescale_annotations(annotations, FULL_CROP, horizontal_flip)
- if horizontal_flip:
- crop_coordinates = horizontally_flip_bbox(crop_coordinates)
- extra = self._crop_encoder(crop_coordinates)
- else:
- annotations = rescale_annotations(annotations, crop_coordinates, horizontal_flip)
- extra = []
-
- object_tuples = self._make_object_descriptors(annotations)
- flattened = [token for tuple_ in object_tuples for token in tuple_] + extra
- assert len(flattened) == self.embedding_dim
- assert all(0 <= value < self.no_tokens for value in flattened)
- return LongTensor(flattened)
diff --git a/spaces/HansSongBin/Hans/README.md b/spaces/HansSongBin/Hans/README.md
deleted file mode 100644
index 5d6936218874c647b5d22e13ad4be7edb8936f92..0000000000000000000000000000000000000000
--- a/spaces/HansSongBin/Hans/README.md
+++ /dev/null
@@ -1,28 +0,0 @@
----
-title: bingo
-emoji: 😊
-colorFrom: red
-colorTo: red
-sdk: docker
-license: mit
-duplicated_from: hf4all/bingo
----
-
-
-
-# Bingo
-
-Bingo, a New Bing that lets you breathe freely.
-
-A faithful reproduction of the main features of the New Bing web UI; usable within mainland China, compatible with most Microsoft Bing AI features, and self-hostable.
-
-
-
-[](https://hub.docker.com/repository/docker/weaigc/bingo/)
-[](https://hub.docker.com/repository/docker/weaigc/bingo/)
-[](https://github.com/weaigc/bingo/blob/main/license)
-
-For questions and feedback, please go to https://github.com/weaigc/bingo/issues
-
-
-
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/audio/__init__.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/audio/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/HarshulNanda/HARM_ML_App_ludwig/dataGenerators/readme.md b/spaces/HarshulNanda/HARM_ML_App_ludwig/dataGenerators/readme.md
deleted file mode 100644
index bbf165251d4e010d03ae93a5c03efdd0952301a6..0000000000000000000000000000000000000000
--- a/spaces/HarshulNanda/HARM_ML_App_ludwig/dataGenerators/readme.md
+++ /dev/null
@@ -1 +0,0 @@
-## Data Generation files
\ No newline at end of file
diff --git a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/__init__.py b/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Hello-SimpleAI/chatgpt-detector-qa/app.py b/spaces/Hello-SimpleAI/chatgpt-detector-qa/app.py
deleted file mode 100644
index 0a5d6edb2e22af144850eab2aaf5c145ab12c89a..0000000000000000000000000000000000000000
--- a/spaces/Hello-SimpleAI/chatgpt-detector-qa/app.py
+++ /dev/null
@@ -1,65 +0,0 @@
-import os
-import gradio as gr
-from transformers import pipeline
-
-# auth_token = os.environ.get("access_token")
-pipeline_en = pipeline(task="text-classification", model="Hello-SimpleAI/chatgpt-qa-detector-roberta") # use_auth_token=auth_token
-pipeline_zh = pipeline(task="text-classification", model="Hello-SimpleAI/chatgpt-qa-detector-roberta-chinese")
-
-
-
-def predict_en(q,a):
- res = pipeline_en({"text":q, "text_pair":a})
- return res['label'],res['score']
-
-def predict_zh(q,a):
- res = pipeline_zh({"text":q, "text_pair":a})
- return res['label'],res['score']
-
-
-
-
-with gr.Blocks() as demo:
- gr.Markdown("""
- ## ChatGPT Detector 🔬 (QA version)
- Visit our project on Github: [chatgpt-comparison-detection project](https://github.com/Hello-SimpleAI/chatgpt-comparison-detection)
- 欢迎在 Github 上关注我们的 [ChatGPT 对比与检测项目](https://github.com/Hello-SimpleAI/chatgpt-comparison-detection)
-
- We provide three kinds of detectors, all in Bilingual / 我们提供了三个版本的检测器,且都支持中英文:
- - [**QA version / 问答版** (👈 Current / 当前使用)](https://huggingface.co/spaces/Hello-SimpleAI/chatgpt-detector-qa)
- detect whether an **answer** is generated by ChatGPT for certain **question**, using PLM-based classifiers / 判断某个**问题的回答**是否由ChatGPT生成,使用基于PTM的分类器来开发;
- [Single-text version / 独立文本版](https://huggingface.co/spaces/Hello-SimpleAI/chatgpt-detector-single)
- detect whether a piece of text is ChatGPT generated, using PLM-based classifiers / 判断**单条文本**是否由ChatGPT生成,使用基于PTM的分类器来开发;
- - [Linguistic version / 语言学版](https://huggingface.co/spaces/Hello-SimpleAI/chatgpt-detector-ling)
- detect whether a piece of text is ChatGPT generated, using linguistic features / 判断**单条文本**是否由ChatGPT生成,使用基于语言学特征的模型来开发;
-
-
- """)
- with gr.Tab("English"):
- gr.Markdown("""
- Note: Providing more text to the `Answer` box can make the prediction more accurate!
- """)
- q1 = gr.Textbox(lines=2, label='Question',value="What stops a restaurant from noting down my credit card info and using it ? No offense to restaurants . Can be generalized to anyone who I give my credit card info to . Explain like I'm five.")
- a1 = gr.Textbox(lines=5, label='Answer',value="There are a few things that can help protect your credit card information from being misused when you give it to a restaurant or any other business:\n\nEncryption: Many businesses use encryption to protect your credit card information when it is being transmitted or stored. This means that the information is transformed into a code that is difficult for anyone to read without the right key.")
- button1 = gr.Button("🤖 Predict!")
- label1 = gr.Textbox(lines=1, label='Predicted Label 🎃')
- score1 = gr.Textbox(lines=1, label='Prob')
- with gr.Tab("中文版"):
- gr.Markdown("""
- 注意: 在`回答`栏中输入更多的文本,可以让预测更准确哦!
- """)
- q2 = gr.Textbox(lines=2, label='问题',value="如何评价 OpenAI 的超级对话模型 ChatGPT ?")
- a2 = gr.Textbox(lines=5, label='回答',value="对于OpenAI大力出奇迹的工作,自然每个人都有自己的看点。我自己最欣赏的地方是ChatGPT如何解决 “AI校正(Alignment)“这个问题。这个问题也是我们课题组这两年在探索的学术问题之一。")
- button2 = gr.Button("🤖 预测!")
- label2 = gr.Textbox(lines=1, label='预测结果 🎃')
- score2 = gr.Textbox(lines=1, label='模型概率')
-
- button1.click(predict_en, inputs=[q1,a1], outputs=[label1,score1])
- button2.click(predict_zh, inputs=[q2,a2], outputs=[label2,score2])
-
- # Page Count
- gr.Markdown("""
-
- """)
-
-demo.launch()
\ No newline at end of file
diff --git a/spaces/HuggingFaceH4/starchat-playground/app.py b/spaces/HuggingFaceH4/starchat-playground/app.py
deleted file mode 100644
index 3c8f13d00681bc4e8539a12468dab44548a866ad..0000000000000000000000000000000000000000
--- a/spaces/HuggingFaceH4/starchat-playground/app.py
+++ /dev/null
@@ -1,456 +0,0 @@
-import datetime
-import os
-import random
-import re
-from io import StringIO
-
-import gradio as gr
-import pandas as pd
-from huggingface_hub import upload_file
-from text_generation import Client
-
-from dialogues import DialogueTemplate
-from share_btn import (community_icon_html, loading_icon_html, share_btn_css,
- share_js)
-
-HF_TOKEN = os.environ.get("HF_TOKEN", None)
-API_TOKEN = os.environ.get("API_TOKEN", None)
-DIALOGUES_DATASET = "HuggingFaceH4/starchat_playground_dialogues"
-
-model2endpoint = {
- "starchat-alpha": "https://api-inference.huggingface.co/models/HuggingFaceH4/starcoderbase-finetuned-oasst1",
- "starchat-beta": "https://api-inference.huggingface.co/models/HuggingFaceH4/starchat-beta",
-}
-model_names = list(model2endpoint.keys())
-
-
-def randomize_seed_generator():
- seed = random.randint(0, 1000000)
- return seed
-
-
-def save_inputs_and_outputs(now, inputs, outputs, generate_kwargs, model):
- buffer = StringIO()
- timestamp = datetime.datetime.now().strftime("%Y-%m-%dT%H:%M:%S.%f")
- file_name = f"prompts_{timestamp}.jsonl"
- data = {"model": model, "inputs": inputs, "outputs": outputs, "generate_kwargs": generate_kwargs}
- pd.DataFrame([data]).to_json(buffer, orient="records", lines=True)
-
- # Push to Hub
- upload_file(
- path_in_repo=f"{now.date()}/{now.hour}/{file_name}",
- path_or_fileobj=buffer.getvalue().encode(),
- repo_id=DIALOGUES_DATASET,
- token=HF_TOKEN,
- repo_type="dataset",
- )
-
- # Clean and rerun
- buffer.close()
-
-
-def get_total_inputs(inputs, chatbot, preprompt, user_name, assistant_name, sep):
- past = []
- for data in chatbot:
- user_data, model_data = data
-
- if not user_data.startswith(user_name):
- user_data = user_name + user_data
- if not model_data.startswith(sep + assistant_name):
- model_data = sep + assistant_name + model_data
-
- past.append(user_data + model_data.rstrip() + sep)
-
- if not inputs.startswith(user_name):
- inputs = user_name + inputs
-
- total_inputs = preprompt + "".join(past) + inputs + sep + assistant_name.rstrip()
-
- return total_inputs
-
-
-def wrap_html_code(text):
- pattern = r"<.*?>"
- matches = re.findall(pattern, text)
- if len(matches) > 0:
- return f"```{text}```"
- else:
- return text
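# A quick sanity sketch of the helper above: any text containing something that
# looks like a markup tag is fenced so the chat widget renders it literally
# (assumes wrap_html_code from this file is in scope).
assert wrap_html_code("plain text") == "plain text"
assert wrap_html_code("<div>hi</div>") == "```<div>hi</div>```"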
-
-
-def has_no_history(chatbot, history):
- return not chatbot and not history
-
-
-def generate(
- RETRY_FLAG,
- model_name,
- system_message,
- user_message,
- chatbot,
- history,
- temperature,
- top_k,
- top_p,
- max_new_tokens,
- repetition_penalty,
- do_save=True,
-):
- client = Client(
- model2endpoint[model_name],
- headers={"Authorization": f"Bearer {API_TOKEN}"},
- timeout=60,
- )
- # Don't return meaningless message when the input is empty
- if not user_message:
- print("Empty input")
-
- if not RETRY_FLAG:
- history.append(user_message)
- seed = 42
- else:
- seed = randomize_seed_generator()
-
- past_messages = []
- for data in chatbot:
- user_data, model_data = data
-
- past_messages.extend(
- [{"role": "user", "content": user_data}, {"role": "assistant", "content": model_data.rstrip()}]
- )
-
- if len(past_messages) < 1:
- dialogue_template = DialogueTemplate(
- system=system_message, messages=[{"role": "user", "content": user_message}]
- )
- prompt = dialogue_template.get_inference_prompt()
- else:
- dialogue_template = DialogueTemplate(
- system=system_message, messages=past_messages + [{"role": "user", "content": user_message}]
- )
- prompt = dialogue_template.get_inference_prompt()
-
- generate_kwargs = {
- "temperature": temperature,
- "top_k": top_k,
- "top_p": top_p,
- "max_new_tokens": max_new_tokens,
- }
-
- temperature = float(temperature)
- if temperature < 1e-2:
- temperature = 1e-2
- top_p = float(top_p)
-
- generate_kwargs = dict(
- temperature=temperature,
- max_new_tokens=max_new_tokens,
- top_p=top_p,
- repetition_penalty=repetition_penalty,
- do_sample=True,
- truncate=4096,
- seed=seed,
- stop_sequences=["<|end|>"],
- )
-
- stream = client.generate_stream(
- prompt,
- **generate_kwargs,
- )
-
- output = ""
- for idx, response in enumerate(stream):
- if response.token.special:
- continue
- output += response.token.text
- if idx == 0:
- history.append(" " + output)
- else:
- history[-1] = output
-
- chat = [
- (wrap_html_code(history[i].strip()), wrap_html_code(history[i + 1].strip()))
- for i in range(0, len(history) - 1, 2)
- ]
-
- # chat = [(history[i].strip(), history[i + 1].strip()) for i in range(0, len(history) - 1, 2)]
-
- yield chat, history, user_message, ""
-
- if HF_TOKEN and do_save:
- try:
- now = datetime.datetime.now()
- current_time = now.strftime("%Y-%m-%d %H:%M:%S")
- print(f"[{current_time}] Pushing prompt and completion to the Hub")
- save_inputs_and_outputs(now, prompt, output, generate_kwargs, model_name)
- except Exception as e:
- print(e)
-
- return chat, history, user_message, ""
-
-
-examples = [
- "How can I write a Python function to generate the nth Fibonacci number?",
- "How do I get the current date using shell commands? Explain how it works.",
- "What's the meaning of life?",
- "Write a function in Javascript to reverse words in a given string.",
- "Give the following data {'Name':['Tom', 'Brad', 'Kyle', 'Jerry'], 'Age':[20, 21, 19, 18], 'Height' : [6.1, 5.9, 6.0, 6.1]}. Can you plot one graph with two subplots as columns. The first is a bar graph showing the height of each person. The second is a bargraph showing the age of each person? Draw the graph in seaborn talk mode.",
- "Create a regex to extract dates from logs",
- "How to decode JSON into a typescript object",
- "Write a list into a jsonlines file and save locally",
-]
-
-
-def clear_chat():
- return [], []
-
-
-def delete_last_turn(chat, history):
- if chat and history:
- chat.pop(-1)
- history.pop(-1)
- history.pop(-1)
- return chat, history
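# Sketch of the bookkeeping above, assuming delete_last_turn from this file is in
# scope: `chat` stores (user, bot) pairs while `history` is the flat
# [user, bot, user, bot, ...] list, hence one pop from chat but two from history.
chat_demo, history_demo = [("hi", "hello")], ["hi", "hello"]
chat_demo, history_demo = delete_last_turn(chat_demo, history_demo)
assert chat_demo == [] and history_demo == []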
-
-
-def process_example(args):
- for [x, y] in generate(args):
- pass
- return [x, y]
-
-
-# Regenerate response
-def retry_last_answer(
- selected_model,
- system_message,
- user_message,
- chat,
- history,
- temperature,
- top_k,
- top_p,
- max_new_tokens,
- repetition_penalty,
- do_save,
-):
- if chat and history:
- # Removing the previous conversation from chat
- chat.pop(-1)
- # Removing bot response from the history
- history.pop(-1)
- # Setting up a flag to capture a retry
- RETRY_FLAG = True
- # Getting last message from user
- user_message = history[-1]
-
- yield from generate(
- RETRY_FLAG,
- selected_model,
- system_message,
- user_message,
- chat,
- history,
- temperature,
- top_k,
- top_p,
- max_new_tokens,
- repetition_penalty,
- do_save,
- )
-
-
-title = """⭐ StarChat Playground 💬"""
"""
-custom_css = """
-#banner-image {
- display: block;
- margin-left: auto;
- margin-right: auto;
-}
-
-#chat-message {
- font-size: 14px;
- min-height: 300px;
-}
-"""
-
-with gr.Blocks(analytics_enabled=False, css=custom_css) as demo:
- gr.HTML(title)
-
- with gr.Row():
- with gr.Column():
- gr.Image("thumbnail.png", elem_id="banner-image", show_label=False)
- with gr.Column():
- gr.Markdown(
- """
- 💻 This demo showcases a series of **[StarChat](https://huggingface.co/models?search=huggingfaceh4/starchat)** language models, which are fine-tuned versions of the StarCoder family to act as helpful coding assistants. The base model has 16B parameters and was pretrained on one trillion tokens sourced from 80+ programming languages, GitHub issues, Git commits, and Jupyter notebooks (all permissively licensed).
-
- 📝 For more details, check out our [blog post](https://huggingface.co/blog/starchat-alpha).
-
- ⚠️ **Intended Use**: this app and its [supporting models](https://huggingface.co/models?search=huggingfaceh4/starchat) are provided as educational tools to explain large language model fine-tuning; not to serve as a replacement for human expertise.
-
- ⚠️ **Known Failure Modes**: the alpha and beta version of **StarChat** have not been aligned to human preferences with techniques like RLHF, so they can produce problematic outputs (especially when prompted to do so). Since the base model was pretrained on a large corpus of code, it may produce code snippets that are syntactically valid but semantically incorrect. For example, it may produce code that does not compile or that produces incorrect results. It may also produce code that is vulnerable to security exploits. We have observed the model also has a tendency to produce false URLs which should be carefully inspected before clicking. For more details on the model's limitations in terms of factuality and biases, see the [model card](https://huggingface.co/HuggingFaceH4/starchat-alpha#bias-risks-and-limitations).
-
- ⚠️ **Data Collection**: by default, we are collecting the prompts entered in this app to further improve and evaluate the models. Do **NOT** share any personal or sensitive information while using the app! You can opt out of this data collection by removing the checkbox below.
- """
- )
-
- with gr.Row():
- do_save = gr.Checkbox(
- value=True,
- label="Store data",
- info="You agree to the storage of your prompt and generated text for research and development purposes:",
- )
-
- with gr.Row():
- selected_model = gr.Radio(choices=model_names, value=model_names[1], label="Select a model")
-
- with gr.Accordion(label="System Prompt", open=False, elem_id="parameters-accordion"):
- system_message = gr.Textbox(
- elem_id="system-message",
- placeholder="Below is a conversation between a human user and a helpful AI coding assistant.",
- show_label=False,
- )
- with gr.Row():
- with gr.Box():
- output = gr.Markdown()
- chatbot = gr.Chatbot(elem_id="chat-message", label="Chat")
-
- with gr.Row():
- with gr.Column(scale=3):
- user_message = gr.Textbox(placeholder="Enter your message here", show_label=False, elem_id="q-input")
- with gr.Row():
- send_button = gr.Button("Send", elem_id="send-btn", visible=True)
-
- regenerate_button = gr.Button("Regenerate", elem_id="retry-btn", visible=True)
-
- delete_turn_button = gr.Button("Delete last turn", elem_id="delete-btn", visible=True)
-
- clear_chat_button = gr.Button("Clear chat", elem_id="clear-btn", visible=True)
-
- with gr.Accordion(label="Parameters", open=False, elem_id="parameters-accordion"):
- temperature = gr.Slider(
- label="Temperature",
- value=0.2,
- minimum=0.0,
- maximum=1.0,
- step=0.1,
- interactive=True,
- info="Higher values produce more diverse outputs",
- )
- top_k = gr.Slider(
- label="Top-k",
- value=50,
- minimum=0.0,
- maximum=100,
- step=1,
- interactive=True,
- info="Sample from a shortlist of top-k tokens",
- )
- top_p = gr.Slider(
- label="Top-p (nucleus sampling)",
- value=0.95,
- minimum=0.0,
- maximum=1,
- step=0.05,
- interactive=True,
- info="Higher values sample more low-probability tokens",
- )
- max_new_tokens = gr.Slider(
- label="Max new tokens",
- value=512,
- minimum=0,
- maximum=1024,
- step=4,
- interactive=True,
- info="The maximum numbers of new tokens",
- )
- repetition_penalty = gr.Slider(
- label="Repetition Penalty",
- value=1.2,
- minimum=0.0,
- maximum=10,
- step=0.1,
- interactive=True,
- info="The parameter for repetition penalty. 1.0 means no penalty.",
- )
- # with gr.Group(elem_id="share-btn-container"):
- # community_icon = gr.HTML(community_icon_html, visible=True)
- # loading_icon = gr.HTML(loading_icon_html, visible=True)
- # share_button = gr.Button("Share to community", elem_id="share-btn", visible=True)
- with gr.Row():
- gr.Examples(
- examples=examples,
- inputs=[user_message],
- cache_examples=False,
- fn=process_example,
- outputs=[output],
- )
-
- history = gr.State([])
- RETRY_FLAG = gr.Checkbox(value=False, visible=False)
-
- # To clear out "message" input textbox and use this to regenerate message
- last_user_message = gr.State("")
-
- user_message.submit(
- generate,
- inputs=[
- RETRY_FLAG,
- selected_model,
- system_message,
- user_message,
- chatbot,
- history,
- temperature,
- top_k,
- top_p,
- max_new_tokens,
- repetition_penalty,
- do_save,
- ],
- outputs=[chatbot, history, last_user_message, user_message],
- )
-
- send_button.click(
- generate,
- inputs=[
- RETRY_FLAG,
- selected_model,
- system_message,
- user_message,
- chatbot,
- history,
- temperature,
- top_k,
- top_p,
- max_new_tokens,
- repetition_penalty,
- do_save,
- ],
- outputs=[chatbot, history, last_user_message, user_message],
- )
-
- regenerate_button.click(
- retry_last_answer,
- inputs=[
- selected_model,
- system_message,
- user_message,
- chatbot,
- history,
- temperature,
- top_k,
- top_p,
- max_new_tokens,
- repetition_penalty,
- do_save,
- ],
- outputs=[chatbot, history, last_user_message, user_message],
- )
-
- delete_turn_button.click(delete_last_turn, [chatbot, history], [chatbot, history])
- clear_chat_button.click(clear_chat, outputs=[chatbot, history])
- selected_model.change(clear_chat, outputs=[chatbot, history])
- # share_button.click(None, [], [], _js=share_js)
-
-demo.queue(concurrency_count=16).launch(debug=True)
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/models/hubert/hubert_asr.py b/spaces/ICML2022/OFA/fairseq/fairseq/models/hubert/hubert_asr.py
deleted file mode 100644
index dce899c9de3ab68341c0b21bea749a3ee29e0d8a..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/models/hubert/hubert_asr.py
+++ /dev/null
@@ -1,376 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import contextlib
-from argparse import Namespace
-from typing import Any
-
-import torch
-import torch.nn as nn
-from dataclasses import dataclass, field
-from fairseq import checkpoint_utils, tasks, utils
-from fairseq.dataclass import FairseqDataclass
-from fairseq.dataclass.utils import convert_namespace_to_omegaconf
-from fairseq.models import BaseFairseqModel, FairseqEncoder, register_model
-from fairseq.models.hubert.hubert import MASKING_DISTRIBUTION_CHOICES
-from fairseq.tasks import FairseqTask
-from omegaconf import II, MISSING
-
-
-@dataclass
-class HubertAsrConfig(FairseqDataclass):
- w2v_path: str = field(
- default=MISSING, metadata={"help": "path to hubert model"}
- )
- no_pretrained_weights: bool = field(
- default=False,
- metadata={"help": "if true, does not load pretrained weights"},
- )
- dropout_input: float = field(
- default=0.0,
- metadata={"help": "dropout to apply to the input (after feat extr)"},
- )
- final_dropout: float = field(
- default=0.0,
- metadata={
- "help": "dropout after transformer and before final projection"
- },
- )
- dropout: float = field(
- default=0.0,
- metadata={"help": "dropout probability inside hubert model"},
- )
- attention_dropout: float = field(
- default=0.0,
- metadata={
- "help": "dropout probability for attention weights "
- "inside hubert model"
- },
- )
- activation_dropout: float = field(
- default=0.0,
- metadata={
- "help": "dropout probability after activation in FFN "
- "inside hubert model"
- },
- )
-
- # masking
- apply_mask: bool = field(
- default=False, metadata={"help": "apply masking during fine-tuning"}
- )
- mask_length: int = field(
- default=10, metadata={"help": "repeat the mask indices multiple times"}
- )
- mask_prob: float = field(
- default=0.5,
- metadata={
- "help": "probability of replacing a token with mask "
- "(normalized by length)"
- },
- )
- mask_selection: MASKING_DISTRIBUTION_CHOICES = field(
- default="static", metadata={"help": "how to choose masks"}
- )
- mask_other: float = field(
- default=0,
- metadata={
- "help": "secondary mask argument "
- "(used for more complex distributions), "
- "see help in compute_mask_indices"
- },
- )
- no_mask_overlap: bool = field(
- default=False, metadata={"help": "whether to allow masks to overlap"}
- )
-
- # channel masking
- mask_channel_length: int = field(
- default=10,
- metadata={"help": "length of the mask for features (channels)"},
- )
- mask_channel_prob: float = field(
- default=0.0,
- metadata={"help": "probability of replacing a feature with 0"},
- )
- mask_channel_selection: MASKING_DISTRIBUTION_CHOICES = field(
- default="static",
- metadata={"help": "how to choose mask length for channel masking"},
- )
- mask_channel_other: float = field(
- default=0,
- metadata={
- "help": "secondary mask argument "
- "(used for more complex distributions), "
- "see help in compute_mask_indices"
- },
- )
- no_mask_channel_overlap: bool = field(
- default=False,
- metadata={"help": "whether to allow channel masks to overlap"},
- )
- freeze_finetune_updates: int = field(
- default=0,
- metadata={"help": "dont finetune hubert for this many updates"},
- )
- feature_grad_mult: float = field(
- default=0.0,
- metadata={"help": "reset feature grad mult in hubert to this"},
- )
- layerdrop: float = field(
- default=0.0,
- metadata={"help": "probability of dropping a layer in hubert"},
- )
- normalize: bool = II("task.normalize")
- data: str = II("task.data")
-
- # this holds the loaded hubert args
- w2v_args: Any = None
-
-
-@dataclass
-class HubertCtcConfig(HubertAsrConfig):
- pass
-
-
-@register_model("hubert_ctc", dataclass=HubertCtcConfig)
-class HubertCtc(BaseFairseqModel):
- def __init__(self, cfg: HubertCtcConfig, w2v_encoder: BaseFairseqModel):
- super().__init__()
- self.cfg = cfg
- self.w2v_encoder = w2v_encoder
-
- def upgrade_state_dict_named(self, state_dict, name):
- super().upgrade_state_dict_named(state_dict, name)
- return state_dict
-
- @classmethod
- def build_model(cls, cfg: HubertCtcConfig, task: FairseqTask):
- """Build a new model instance."""
- w2v_encoder = HubertEncoder(cfg, task.target_dictionary)
- return cls(cfg, w2v_encoder)
-
- def get_normalized_probs(self, net_output, log_probs):
- """Get normalized probabilities (or log probs) from a net's output."""
-
- logits = net_output["encoder_out"]
- if log_probs:
- return utils.log_softmax(logits.float(), dim=-1)
- else:
- return utils.softmax(logits.float(), dim=-1)
-
- def get_logits(self, net_output):
- logits = net_output["encoder_out"]
- padding = net_output["encoder_padding_mask"]
- if padding is not None and padding.any():
- padding = padding.T
- logits[padding][..., 0] = 0
- logits[padding][..., 1:] = float("-inf")
-
- return logits
-
- def forward(self, **kwargs):
- x = self.w2v_encoder(**kwargs)
- return x
-
-
-@dataclass
-class HubertSeq2SeqConfig(HubertAsrConfig):
- decoder_embed_dim: int = field(
- default=768, metadata={"help": "decoder embedding dimension"}
- )
- decoder_ffn_embed_dim: int = field(
- default=3072, metadata={"help": "decoder embedding dimension for FFN"}
- )
- decoder_layers: int = field(
- default=6, metadata={"help": "num of decoder layers"}
- )
- decoder_layerdrop: float = field(
- default=0.0, metadata={"help": "decoder layerdrop chance"}
- )
- decoder_attention_heads: int = field(
- default=4, metadata={"help": "num decoder attention heads"}
- )
- decoder_learned_pos: bool = field(
- default=False,
- metadata={"help": "use learned positional embeddings in the decoder"},
- )
- decoder_normalize_before: bool = field(
- default=False,
- metadata={"help": "apply layernorm before each decoder block"},
- )
- no_token_positional_embeddings: bool = field(
- default=False,
- metadata={
- "help": "if set, disables positional embeddings "
- "(outside self attention)"
- },
- )
- decoder_dropout: float = field(
- default=0.0, metadata={"help": "dropout probability in the decoder"}
- )
- decoder_attention_dropout: float = field(
- default=0.0,
- metadata={
- "help": "dropout probability for attention weights "
- "inside the decoder"
- },
- )
- decoder_activation_dropout: float = field(
- default=0.0,
- metadata={
- "help": "dropout probability after activation in FFN "
- "inside the decoder"
- },
- )
- max_target_positions: int = field(
- default=2048, metadata={"help": "max target positions"}
- )
- share_decoder_input_output_embed: bool = field(
- default=False,
- metadata={"help": "share decoder input and output embeddings"},
- )
-
-
-class HubertEncoder(FairseqEncoder):
- def __init__(self, cfg: HubertAsrConfig, tgt_dict=None):
- self.apply_mask = cfg.apply_mask
-
- arg_overrides = {
- "dropout": cfg.dropout,
- "activation_dropout": cfg.activation_dropout,
- "dropout_input": cfg.dropout_input,
- "attention_dropout": cfg.attention_dropout,
- "mask_length": cfg.mask_length,
- "mask_prob": cfg.mask_prob,
- "mask_selection": cfg.mask_selection,
- "mask_other": cfg.mask_other,
- "no_mask_overlap": cfg.no_mask_overlap,
- "mask_channel_length": cfg.mask_channel_length,
- "mask_channel_prob": cfg.mask_channel_prob,
- "mask_channel_selection": cfg.mask_channel_selection,
- "mask_channel_other": cfg.mask_channel_other,
- "no_mask_channel_overlap": cfg.no_mask_channel_overlap,
- "encoder_layerdrop": cfg.layerdrop,
- "feature_grad_mult": cfg.feature_grad_mult,
- }
-
- if cfg.w2v_args is None:
- state = checkpoint_utils.load_checkpoint_to_cpu(
- cfg.w2v_path, arg_overrides
- )
- w2v_args = state.get("cfg", None)
- if w2v_args is None:
- w2v_args = convert_namespace_to_omegaconf(state["args"])
- cfg.w2v_args = w2v_args
- else:
- state = None
- w2v_args = cfg.w2v_args
- if isinstance(w2v_args, Namespace):
- cfg.w2v_args = w2v_args = convert_namespace_to_omegaconf(
- w2v_args
- )
-
- assert cfg.normalize == w2v_args.task.normalize, (
- "Fine-tuning works best when data normalization is the same. "
- "Please check that --normalize is set or unset for "
- "both pre-training and here"
- )
-
- w2v_args.task.data = cfg.data
- task = tasks.setup_task(w2v_args.task)
- if state is not None and "task_state" in state:
- # This will load the stored "dictionaries" object
- task.load_state_dict(state["task_state"])
- model = task.build_model(w2v_args.model)
-
- if state is not None and not cfg.no_pretrained_weights:
- # set strict=False because we omit some modules
- model.load_state_dict(state["model"], strict=False)
-
- model.remove_pretraining_modules()
-
- super().__init__(task.source_dictionary)
-
- d = w2v_args.model.encoder_embed_dim
-
- self.w2v_model = model
-
- self.final_dropout = nn.Dropout(cfg.final_dropout)
- self.freeze_finetune_updates = cfg.freeze_finetune_updates
- self.num_updates = 0
-
- if tgt_dict is not None:
- self.proj = Linear(d, len(tgt_dict))
- elif getattr(cfg, "decoder_embed_dim", d) != d:
- self.proj = Linear(d, cfg.decoder_embed_dim)
- else:
- self.proj = None
-
- def set_num_updates(self, num_updates):
- """Set the number of parameters updates."""
- super().set_num_updates(num_updates)
- self.num_updates = num_updates
-
- def forward(self, source, padding_mask, tbc=True, **kwargs):
-
- w2v_args = {
- "source": source,
- "padding_mask": padding_mask,
- "mask": self.apply_mask and self.training,
- }
-
- ft = self.freeze_finetune_updates <= self.num_updates
-
- with torch.no_grad() if not ft else contextlib.ExitStack():
- x, padding_mask = self.w2v_model.extract_features(**w2v_args)
-
- if tbc:
- # B x T x C -> T x B x C
- x = x.transpose(0, 1)
-
- x = self.final_dropout(x)
-
- if self.proj:
- x = self.proj(x)
-
- return {
- "encoder_out": x, # T x B x C
- "encoder_padding_mask": padding_mask, # B x T
- "padding_mask": padding_mask,
- }
-
- def reorder_encoder_out(self, encoder_out, new_order):
- if encoder_out["encoder_out"] is not None:
- encoder_out["encoder_out"] = encoder_out[
- "encoder_out"
- ].index_select(1, new_order)
- if encoder_out["encoder_padding_mask"] is not None:
- encoder_out["encoder_padding_mask"] = encoder_out[
- "encoder_padding_mask"
- ].index_select(0, new_order)
- return encoder_out
-
- def max_positions(self):
- """Maximum input length supported by the encoder."""
- return None
-
- def upgrade_state_dict_named(self, state_dict, name):
- return state_dict
-
-
-def Embedding(num_embeddings, embedding_dim, padding_idx):
- m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx)
- nn.init.normal_(m.weight, mean=0, std=embedding_dim ** -0.5)
- nn.init.constant_(m.weight[padding_idx], 0)
- return m
-
-
-def Linear(in_features, out_features, bias=True):
- m = nn.Linear(in_features, out_features, bias)
- nn.init.xavier_uniform_(m.weight)
- if bias:
- nn.init.constant_(m.bias, 0.0)
- return m
diff --git a/spaces/IDEA-Research/Grounded-SAM/GroundingDINO/groundingdino/__init__.py b/spaces/IDEA-Research/Grounded-SAM/GroundingDINO/groundingdino/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Illumotion/Koboldcpp/gguf-py/gguf/gguf.py b/spaces/Illumotion/Koboldcpp/gguf-py/gguf/gguf.py
deleted file mode 100644
index fb677a6ed728393ac0b8508128b3647d9062b4cd..0000000000000000000000000000000000000000
--- a/spaces/Illumotion/Koboldcpp/gguf-py/gguf/gguf.py
+++ /dev/null
@@ -1,1042 +0,0 @@
-#!/usr/bin/env python3
-from __future__ import annotations
-
-import json
-import os
-import shutil
-import struct
-import sys
-import tempfile
-from enum import IntEnum, auto
-from io import BufferedWriter
-from pathlib import Path
-from typing import IO, Any, BinaryIO, Callable, Sequence
-
-import numpy as np
-
-#
-# constants
-#
-
-GGUF_MAGIC = 0x46554747
-GGUF_VERSION = 2
-GGUF_DEFAULT_ALIGNMENT = 32
-
-# general
-KEY_GENERAL_ARCHITECTURE = "general.architecture"
-KEY_GENERAL_QUANTIZATION_VERSION = "general.quantization_version"
-KEY_GENERAL_ALIGNMENT = "general.alignment"
-KEY_GENERAL_NAME = "general.name"
-KEY_GENERAL_AUTHOR = "general.author"
-KEY_GENERAL_URL = "general.url"
-KEY_GENERAL_DESCRIPTION = "general.description"
-KEY_GENERAL_LICENSE = "general.license"
-KEY_GENERAL_SOURCE_URL = "general.source.url"
-KEY_GENERAL_SOURCE_HF_REPO = "general.source.huggingface.repository"
-KEY_GENERAL_FILE_TYPE = "general.file_type"
-
-# LLM
-KEY_CONTEXT_LENGTH = "{arch}.context_length"
-KEY_EMBEDDING_LENGTH = "{arch}.embedding_length"
-KEY_BLOCK_COUNT = "{arch}.block_count"
-KEY_FEED_FORWARD_LENGTH = "{arch}.feed_forward_length"
-KEY_USE_PARALLEL_RESIDUAL = "{arch}.use_parallel_residual"
-KEY_TENSOR_DATA_LAYOUT = "{arch}.tensor_data_layout"
-
-# attention
-KEY_ATTENTION_HEAD_COUNT = "{arch}.attention.head_count"
-KEY_ATTENTION_HEAD_COUNT_KV = "{arch}.attention.head_count_kv"
-KEY_ATTENTION_MAX_ALIBI_BIAS = "{arch}.attention.max_alibi_bias"
-KEY_ATTENTION_CLAMP_KQV = "{arch}.attention.clamp_kqv"
-KEY_ATTENTION_LAYERNORM_EPS = "{arch}.attention.layer_norm_epsilon"
-KEY_ATTENTION_LAYERNORM_RMS_EPS = "{arch}.attention.layer_norm_rms_epsilon"
-
-# RoPE
-KEY_ROPE_DIMENSION_COUNT = "{arch}.rope.dimension_count"
-KEY_ROPE_FREQ_BASE = "{arch}.rope.freq_base"
-KEY_ROPE_SCALE_LINEAR = "{arch}.rope.scale_linear"
-
-# tokenization
-KEY_TOKENIZER_MODEL = "tokenizer.ggml.model"
-KEY_TOKENIZER_LIST = "tokenizer.ggml.tokens"
-KEY_TOKENIZER_TOKEN_TYPE = "tokenizer.ggml.token_type"
-KEY_TOKENIZER_SCORES = "tokenizer.ggml.scores"
-KEY_TOKENIZER_MERGES = "tokenizer.ggml.merges"
-KEY_TOKENIZER_BOS_ID = "tokenizer.ggml.bos_token_id"
-KEY_TOKENIZER_EOS_ID = "tokenizer.ggml.eos_token_id"
-KEY_TOKENIZER_UNK_ID = "tokenizer.ggml.unknown_token_id"
-KEY_TOKENIZER_SEP_ID = "tokenizer.ggml.seperator_token_id"
-KEY_TOKENIZER_PAD_ID = "tokenizer.ggml.padding_token_id"
-KEY_TOKENIZER_HF_JSON = "tokenizer.huggingface.json"
-KEY_TOKENIZER_RWKV = "tokenizer.rwkv.world"
-
-
-#
-# recommended mapping of model tensor names for storage in gguf
-#
-
-
-class MODEL_ARCH(IntEnum):
- LLAMA : int = auto()
- FALCON : int = auto()
- BAICHUAN : int = auto()
- GPT2 : int = auto()
- GPTJ : int = auto()
- GPTNEOX : int = auto()
- MPT : int = auto()
- STARCODER : int = auto()
- PERSIMMON : int = auto()
- REFACT : int = auto()
- BERT : int = auto()
-
-
-class MODEL_TENSOR(IntEnum):
- TOKEN_EMBD : int = auto()
- TOKEN_TYPES : int = auto()
- POS_EMBD : int = auto()
- OUTPUT : int = auto()
- OUTPUT_NORM : int = auto()
- ROPE_FREQS : int = auto()
- ATTN_Q : int = auto()
- ATTN_K : int = auto()
- ATTN_V : int = auto()
- ATTN_QKV : int = auto()
- ATTN_OUT : int = auto()
- ATTN_NORM : int = auto()
- ATTN_NORM_2 : int = auto()
- ATTN_ROT_EMBD: int = auto()
- FFN_GATE : int = auto()
- FFN_DOWN : int = auto()
- FFN_UP : int = auto()
- FFN_NORM : int = auto()
- ATTN_Q_NORM : int = auto()
- ATTN_K_NORM : int = auto()
-
-
-MODEL_ARCH_NAMES: dict[MODEL_ARCH, str] = {
- MODEL_ARCH.LLAMA: "llama",
- MODEL_ARCH.FALCON: "falcon",
- MODEL_ARCH.BAICHUAN: "baichuan",
- MODEL_ARCH.GPT2: "gpt2",
- MODEL_ARCH.GPTJ: "gptj",
- MODEL_ARCH.GPTNEOX: "gptneox",
- MODEL_ARCH.MPT: "mpt",
- MODEL_ARCH.STARCODER: "starcoder",
- MODEL_ARCH.PERSIMMON: "persimmon",
- MODEL_ARCH.REFACT: "refact",
- MODEL_ARCH.BERT: "bert",
-}
-
-TENSOR_NAMES: dict[MODEL_TENSOR, str] = {
- MODEL_TENSOR.TOKEN_EMBD: "token_embd",
- MODEL_TENSOR.TOKEN_TYPES: "token_types",
- MODEL_TENSOR.POS_EMBD: "position_embd",
- MODEL_TENSOR.OUTPUT_NORM: "output_norm",
- MODEL_TENSOR.OUTPUT: "output",
- MODEL_TENSOR.ROPE_FREQS: "rope_freqs",
- MODEL_TENSOR.ATTN_NORM: "blk.{bid}.attn_norm",
- MODEL_TENSOR.ATTN_NORM_2: "blk.{bid}.attn_norm_2",
- MODEL_TENSOR.ATTN_QKV: "blk.{bid}.attn_qkv",
- MODEL_TENSOR.ATTN_Q: "blk.{bid}.attn_q",
- MODEL_TENSOR.ATTN_K: "blk.{bid}.attn_k",
- MODEL_TENSOR.ATTN_V: "blk.{bid}.attn_v",
- MODEL_TENSOR.ATTN_OUT: "blk.{bid}.attn_output",
- MODEL_TENSOR.ATTN_ROT_EMBD: "blk.{bid}.attn_rot_embd",
- MODEL_TENSOR.ATTN_Q_NORM: "blk.{bid}.attn_q_norm",
- MODEL_TENSOR.ATTN_K_NORM: "blk.{bid}.attn_k_norm",
- MODEL_TENSOR.FFN_NORM: "blk.{bid}.ffn_norm",
- MODEL_TENSOR.FFN_GATE: "blk.{bid}.ffn_gate",
- MODEL_TENSOR.FFN_DOWN: "blk.{bid}.ffn_down",
- MODEL_TENSOR.FFN_UP: "blk.{bid}.ffn_up",
-}
-
-MODEL_TENSORS: dict[MODEL_ARCH, list[MODEL_TENSOR]] = {
- MODEL_ARCH.LLAMA: [
- MODEL_TENSOR.TOKEN_EMBD,
- MODEL_TENSOR.OUTPUT_NORM,
- MODEL_TENSOR.OUTPUT,
- MODEL_TENSOR.ROPE_FREQS,
- MODEL_TENSOR.ATTN_NORM,
- MODEL_TENSOR.ATTN_Q,
- MODEL_TENSOR.ATTN_K,
- MODEL_TENSOR.ATTN_V,
- MODEL_TENSOR.ATTN_OUT,
- MODEL_TENSOR.ATTN_ROT_EMBD,
- MODEL_TENSOR.FFN_NORM,
- MODEL_TENSOR.FFN_GATE,
- MODEL_TENSOR.FFN_DOWN,
- MODEL_TENSOR.FFN_UP,
- ],
- MODEL_ARCH.GPTNEOX: [
- MODEL_TENSOR.TOKEN_EMBD,
- MODEL_TENSOR.OUTPUT_NORM,
- MODEL_TENSOR.OUTPUT,
- MODEL_TENSOR.ATTN_NORM,
- MODEL_TENSOR.ATTN_QKV,
- MODEL_TENSOR.ATTN_OUT,
- MODEL_TENSOR.FFN_NORM,
- MODEL_TENSOR.FFN_DOWN,
- MODEL_TENSOR.FFN_UP,
- ],
- MODEL_ARCH.FALCON: [
- MODEL_TENSOR.TOKEN_EMBD,
- MODEL_TENSOR.OUTPUT_NORM,
- MODEL_TENSOR.OUTPUT,
- MODEL_TENSOR.ATTN_NORM,
- MODEL_TENSOR.ATTN_NORM_2,
- MODEL_TENSOR.ATTN_QKV,
- MODEL_TENSOR.ATTN_OUT,
- MODEL_TENSOR.FFN_DOWN,
- MODEL_TENSOR.FFN_UP,
- ],
- MODEL_ARCH.BAICHUAN: [
- MODEL_TENSOR.TOKEN_EMBD,
- MODEL_TENSOR.OUTPUT_NORM,
- MODEL_TENSOR.OUTPUT,
- MODEL_TENSOR.ROPE_FREQS,
- MODEL_TENSOR.ATTN_NORM,
- MODEL_TENSOR.ATTN_Q,
- MODEL_TENSOR.ATTN_K,
- MODEL_TENSOR.ATTN_V,
- MODEL_TENSOR.ATTN_OUT,
- MODEL_TENSOR.ATTN_ROT_EMBD,
- MODEL_TENSOR.FFN_NORM,
- MODEL_TENSOR.FFN_GATE,
- MODEL_TENSOR.FFN_DOWN,
- MODEL_TENSOR.FFN_UP,
- ],
- MODEL_ARCH.STARCODER: [
- MODEL_TENSOR.TOKEN_EMBD,
- MODEL_TENSOR.POS_EMBD,
- MODEL_TENSOR.OUTPUT_NORM,
- MODEL_TENSOR.OUTPUT,
- MODEL_TENSOR.ATTN_NORM,
- MODEL_TENSOR.ATTN_QKV,
- MODEL_TENSOR.ATTN_OUT,
- MODEL_TENSOR.FFN_NORM,
- MODEL_TENSOR.FFN_DOWN,
- MODEL_TENSOR.FFN_UP,
- ],
- MODEL_ARCH.BERT: [
- MODEL_TENSOR.TOKEN_EMBD,
- MODEL_TENSOR.TOKEN_TYPES,
- MODEL_TENSOR.POS_EMBD,
- MODEL_TENSOR.OUTPUT_NORM,
- MODEL_TENSOR.ATTN_NORM,
- MODEL_TENSOR.ATTN_Q,
- MODEL_TENSOR.ATTN_K,
- MODEL_TENSOR.ATTN_V,
- MODEL_TENSOR.ATTN_OUT,
- MODEL_TENSOR.FFN_NORM,
- MODEL_TENSOR.FFN_DOWN,
- MODEL_TENSOR.FFN_UP,
- ],
- MODEL_ARCH.MPT: [
- MODEL_TENSOR.TOKEN_EMBD,
- MODEL_TENSOR.OUTPUT_NORM,
- MODEL_TENSOR.OUTPUT,
- MODEL_TENSOR.ATTN_NORM,
- MODEL_TENSOR.ATTN_QKV,
- MODEL_TENSOR.ATTN_OUT,
- MODEL_TENSOR.FFN_NORM,
- MODEL_TENSOR.FFN_DOWN,
- MODEL_TENSOR.FFN_UP,
- ],
- MODEL_ARCH.GPTJ: [
- MODEL_TENSOR.TOKEN_EMBD,
- MODEL_TENSOR.OUTPUT_NORM,
- MODEL_TENSOR.OUTPUT,
- MODEL_TENSOR.ATTN_NORM,
- MODEL_TENSOR.ATTN_Q,
- MODEL_TENSOR.ATTN_K,
- MODEL_TENSOR.ATTN_V,
- MODEL_TENSOR.ATTN_OUT,
- MODEL_TENSOR.FFN_DOWN,
- MODEL_TENSOR.FFN_UP,
- ],
- MODEL_ARCH.PERSIMMON: [
- MODEL_TENSOR.TOKEN_EMBD,
- MODEL_TENSOR.OUTPUT,
- MODEL_TENSOR.OUTPUT_NORM,
- MODEL_TENSOR.ATTN_NORM,
- MODEL_TENSOR.ATTN_QKV,
- MODEL_TENSOR.ATTN_OUT,
- MODEL_TENSOR.FFN_NORM,
- MODEL_TENSOR.FFN_DOWN,
- MODEL_TENSOR.FFN_UP,
- MODEL_TENSOR.ATTN_Q_NORM,
- MODEL_TENSOR.ATTN_K_NORM,
- MODEL_TENSOR.ATTN_ROT_EMBD,
- ],
- MODEL_ARCH.REFACT: [
- MODEL_TENSOR.TOKEN_EMBD,
- MODEL_TENSOR.OUTPUT_NORM,
- MODEL_TENSOR.OUTPUT,
- MODEL_TENSOR.ATTN_NORM,
- MODEL_TENSOR.ATTN_Q,
- MODEL_TENSOR.ATTN_K,
- MODEL_TENSOR.ATTN_V,
- MODEL_TENSOR.ATTN_OUT,
- MODEL_TENSOR.FFN_NORM,
- MODEL_TENSOR.FFN_GATE,
- MODEL_TENSOR.FFN_DOWN,
- MODEL_TENSOR.FFN_UP,
- ],
- MODEL_ARCH.GPT2: [
- # TODO
- ],
- # TODO
-}
-
-# tensors that will not be serialized
-MODEL_TENSOR_SKIP: dict[MODEL_ARCH, list[MODEL_TENSOR]] = {
- MODEL_ARCH.LLAMA: [
- MODEL_TENSOR.ROPE_FREQS,
- MODEL_TENSOR.ATTN_ROT_EMBD,
- ],
- MODEL_ARCH.BAICHUAN: [
- MODEL_TENSOR.ROPE_FREQS,
- MODEL_TENSOR.ATTN_ROT_EMBD,
- ],
- MODEL_ARCH.PERSIMMON: [
- MODEL_TENSOR.ROPE_FREQS,
- ]
-}
-
-
-class TensorNameMap:
- mappings_cfg: dict[MODEL_TENSOR, tuple[str, ...]] = {
- # Token embeddings
- MODEL_TENSOR.TOKEN_EMBD: (
- "gpt_neox.embed_in", # gptneox
- "transformer.wte", # gpt2 gpt-j mpt refact
- "transformer.word_embeddings", # falcon
- "model.embed_tokens", # llama-hf
- "tok_embeddings", # llama-pth
- "embeddings.word_embeddings", # bert
- "language_model.embedding.word_embeddings", # persimmon
- ),
-
- # Token type embeddings
- MODEL_TENSOR.TOKEN_TYPES: (
- "embeddings.token_type_embeddings", # bert
- ),
-
- # Position embeddings
- MODEL_TENSOR.POS_EMBD: (
- "transformer.wpe", # gpt2
- "embeddings.position_embeddings", # bert
- ),
-
- # Output
- MODEL_TENSOR.OUTPUT: (
- "embed_out", # gptneox
- "lm_head", # gpt2 mpt falcon llama-hf baichuan
- "output", # llama-pth
- "word_embeddings_for_head", # persimmon
- ),
-
- # Output norm
- MODEL_TENSOR.OUTPUT_NORM: (
- "gpt_neox.final_layer_norm", # gptneox
- "transformer.ln_f", # gpt2 gpt-j falcon
- "model.norm", # llama-hf baichuan
- "norm", # llama-pth
- "embeddings.LayerNorm", # bert
- "transformer.norm_f", # mpt
- "ln_f", # refact
- "language_model.encoder.final_layernorm", # persimmon
- ),
-
- # Rope frequencies
- MODEL_TENSOR.ROPE_FREQS: (
- "rope.freqs", # llama-pth
- ),
- }
-
- block_mappings_cfg: dict[MODEL_TENSOR, tuple[str, ...]] = {
- # Attention norm
- MODEL_TENSOR.ATTN_NORM: (
- "gpt_neox.layers.{bid}.input_layernorm", # gptneox
- "transformer.h.{bid}.ln_1", # gpt2 gpt-j refact
- "transformer.blocks.{bid}.norm_1", # mpt
- "transformer.h.{bid}.input_layernorm", # falcon7b
- "transformer.h.{bid}.ln_mlp", # falcon40b
- "model.layers.{bid}.input_layernorm", # llama-hf
- "layers.{bid}.attention_norm", # llama-pth
- "encoder.layer.{bid}.attention.output.LayerNorm", # bert
- "language_model.encoder.layers.{bid}.input_layernorm", # persimmon
- ),
-
- # Attention norm 2
- MODEL_TENSOR.ATTN_NORM_2: (
- "transformer.h.{bid}.ln_attn", # falcon40b
- ),
-
- # Attention query-key-value
- MODEL_TENSOR.ATTN_QKV: (
- "gpt_neox.layers.{bid}.attention.query_key_value", # gptneox
- "transformer.h.{bid}.attn.c_attn", # gpt2
- "transformer.blocks.{bid}.attn.Wqkv", # mpt
- "transformer.h.{bid}.self_attention.query_key_value", # falcon
- "language_model.encoder.layers.{bid}.self_attention.query_key_value", # persimmon
- ),
-
- # Attention query
- MODEL_TENSOR.ATTN_Q: (
- "model.layers.{bid}.self_attn.q_proj", # llama-hf
- "layers.{bid}.attention.wq", # llama-pth
- "encoder.layer.{bid}.attention.self.query", # bert
- "transformer.h.{bid}.attn.q_proj", # gpt-j
- ),
-
- # Attention key
- MODEL_TENSOR.ATTN_K: (
- "model.layers.{bid}.self_attn.k_proj", # llama-hf
- "layers.{bid}.attention.wk", # llama-pth
- "encoder.layer.{bid}.attention.self.key", # bert
- "transformer.h.{bid}.attn.k_proj", # gpt-j
- ),
-
- # Attention value
- MODEL_TENSOR.ATTN_V: (
- "model.layers.{bid}.self_attn.v_proj", # llama-hf
- "layers.{bid}.attention.wv", # llama-pth
- "encoder.layer.{bid}.attention.self.value", # bert
- "transformer.h.{bid}.attn.v_proj", # gpt-j
- ),
-
- # Attention output
- MODEL_TENSOR.ATTN_OUT: (
- "gpt_neox.layers.{bid}.attention.dense", # gptneox
- "transformer.h.{bid}.attn.c_proj", # gpt2 refact
- "transformer.blocks.{bid}.attn.out_proj", # mpt
- "transformer.h.{bid}.self_attention.dense", # falcon
- "model.layers.{bid}.self_attn.o_proj", # llama-hf
- "layers.{bid}.attention.wo", # llama-pth
- "encoder.layer.{bid}.attention.output.dense", # bert
- "transformer.h.{bid}.attn.out_proj", # gpt-j
- "language_model.encoder.layers.{bid}.self_attention.dense" # persimmon
- ),
-
- # Rotary embeddings
- MODEL_TENSOR.ATTN_ROT_EMBD: (
- "model.layers.{bid}.self_attn.rotary_emb.inv_freq", # llama-hf
- "layers.{bid}.attention.inner_attention.rope.freqs", # llama-pth
- ),
-
- # Feed-forward norm
- MODEL_TENSOR.FFN_NORM: (
- "gpt_neox.layers.{bid}.post_attention_layernorm", # gptneox
- "transformer.h.{bid}.ln_2", # gpt2 refact
- "transformer.blocks.{bid}.norm_2", # mpt
- "model.layers.{bid}.post_attention_layernorm", # llama-hf
- "layers.{bid}.ffn_norm", # llama-pth
- "encoder.layer.{bid}.output.LayerNorm", # bert
- "language_model.encoder.layers.{bid}.post_attention_layernorm", # persimmon
- ),
-
- # Feed-forward up
- MODEL_TENSOR.FFN_UP: (
- "gpt_neox.layers.{bid}.mlp.dense_h_to_4h", # gptneox
- "transformer.h.{bid}.mlp.c_fc", # gpt2
- "transformer.blocks.{bid}.ffn.up_proj", # mpt
- "transformer.h.{bid}.mlp.dense_h_to_4h", # falcon
- "model.layers.{bid}.mlp.up_proj", # llama-hf refact
- "layers.{bid}.feed_forward.w3", # llama-pth
- "encoder.layer.{bid}.intermediate.dense", # bert
- "transformer.h.{bid}.mlp.fc_in", # gpt-j
- "language_model.encoder.layers.{bid}.mlp.dense_h_to_4h", # persimmon
- ),
-
- # Feed-forward gate
- MODEL_TENSOR.FFN_GATE: (
- "model.layers.{bid}.mlp.gate_proj", # llama-hf refact
- "layers.{bid}.feed_forward.w1", # llama-pth
- ),
-
- # Feed-forward down
- MODEL_TENSOR.FFN_DOWN: (
- "gpt_neox.layers.{bid}.mlp.dense_4h_to_h", # gptneox
- "transformer.h.{bid}.mlp.c_proj", # gpt2 refact
- "transformer.blocks.{bid}.ffn.down_proj", # mpt
- "transformer.h.{bid}.mlp.dense_4h_to_h", # falcon
- "model.layers.{bid}.mlp.down_proj", # llama-hf
- "layers.{bid}.feed_forward.w2", # llama-pth
- "encoder.layer.{bid}.output.dense", # bert
- "transformer.h.{bid}.mlp.fc_out", # gpt-j
- "language_model.encoder.layers.{bid}.mlp.dense_4h_to_h", # persimmon
- ),
-
- MODEL_TENSOR.ATTN_Q_NORM: (
- "language_model.encoder.layers.{bid}.self_attention.q_layernorm",
- ),
-
- MODEL_TENSOR.ATTN_K_NORM: (
- "language_model.encoder.layers.{bid}.self_attention.k_layernorm",
- ),
-
- MODEL_TENSOR.ROPE_FREQS: (
- "language_model.encoder.layers.{bid}.self_attention.rotary_emb.inv_freq", # persimmon
- )
- }
-
- mapping: dict[str, tuple[MODEL_TENSOR, str]]
-
- def __init__(self, arch: MODEL_ARCH, n_blocks: int):
- self.mapping = {}
- for tensor, keys in self.mappings_cfg.items():
- if tensor not in MODEL_TENSORS[arch]:
- continue
- tensor_name = TENSOR_NAMES[tensor]
- self.mapping[tensor_name] = (tensor, tensor_name)
- for key in keys:
- self.mapping[key] = (tensor, tensor_name)
- for bid in range(n_blocks):
- for tensor, keys in self.block_mappings_cfg.items():
- if tensor not in MODEL_TENSORS[arch]:
- continue
- tensor_name = TENSOR_NAMES[tensor].format(bid = bid)
- self.mapping[tensor_name] = (tensor, tensor_name)
- for key in keys:
- key = key.format(bid = bid)
- self.mapping[key] = (tensor, tensor_name)
-
- def get_type_and_name(self, key: str, try_suffixes: Sequence[str] = ()) -> tuple[MODEL_TENSOR, str] | None:
- result = self.mapping.get(key)
- if result is not None:
- return result
- for suffix in try_suffixes:
- if key.endswith(suffix):
- result = self.mapping.get(key[:-len(suffix)])
- if result is not None:
- return (result[0], result[1] + suffix)
- return None
-
- def get_name(self, key: str, try_suffixes: Sequence[str] = ()) -> str | None:
- result = self.get_type_and_name(key, try_suffixes = try_suffixes)
- if result is None:
- return None
- return result[1]
-
- def get_type(self, key: str, try_suffixes: Sequence[str] = ()) -> MODEL_TENSOR | None:
- result = self.get_type_and_name(key, try_suffixes = try_suffixes)
- if result is None:
- return None
- return result[0]
-
- def __getitem__(self, key: str) -> str:
- try:
- return self.mapping[key][1]
- except KeyError:
- raise KeyError(key)
-
- def __contains__(self, key: str) -> bool:
- return key in self.mapping
-
- def __repr__(self) -> str:
- return repr(self.mapping)
-
-def get_tensor_name_map(arch: MODEL_ARCH, n_blocks: int) -> TensorNameMap:
- return TensorNameMap(arch, n_blocks)
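# A brief usage sketch of the mapping above (tensor names follow the tables in
# this file; the try_suffixes argument is what lets ".weight"/".bias" variants
# resolve as well).
tmap = get_tensor_name_map(MODEL_ARCH.LLAMA, n_blocks=2)
assert tmap.get_name("model.embed_tokens.weight", try_suffixes=(".weight", ".bias")) == "token_embd.weight"
assert tmap.get_name("model.layers.1.self_attn.q_proj.weight", try_suffixes=(".weight",)) == "blk.1.attn_q.weight"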
-
-class TokenType(IntEnum):
- NORMAL = 1
- UNKNOWN = 2
- CONTROL = 3
- USER_DEFINED = 4
- UNUSED = 5
- BYTE = 6
-
-#
-# implementation
-#
-
-
-class GGMLQuantizationType(IntEnum):
- F32 = 0
- F16 = 1
- Q4_0 = 2
- Q4_1 = 3
- Q5_0 = 6
- Q5_1 = 7
- Q8_0 = 8
- Q8_1 = 9
- Q2_K = 10
- Q3_K = 11
- Q4_K = 12
- Q5_K = 13
- Q6_K = 14
- Q8_K = 15
-
-
-class GGUFValueType(IntEnum):
- UINT8 = 0
- INT8 = 1
- UINT16 = 2
- INT16 = 3
- UINT32 = 4
- INT32 = 5
- FLOAT32 = 6
- BOOL = 7
- STRING = 8
- ARRAY = 9
- UINT64 = 10
- INT64 = 11
- FLOAT64 = 12
-
- @staticmethod
- def get_type(val):
- if isinstance(val, str) or isinstance(val, bytes) or isinstance(val, bytearray):
- return GGUFValueType.STRING
- elif isinstance(val, list):
- return GGUFValueType.ARRAY
- elif isinstance(val, float):
- return GGUFValueType.FLOAT32
- elif isinstance(val, bool):
- return GGUFValueType.BOOL
- elif isinstance(val, int):
- return GGUFValueType.INT32
- # TODO: need help with 64-bit types in Python
- else:
- print("Unknown type: "+str(type(val)))
- sys.exit()
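# Note / sketch: the branch order above is deliberate — `bool` is tested before
# `int` because every Python bool is also an int; swapping those branches would
# store True/False as INT32 rather than BOOL.
assert GGUFValueType.get_type(True) == GGUFValueType.BOOL
assert GGUFValueType.get_type(3) == GGUFValueType.INT32
assert GGUFValueType.get_type("hello") == GGUFValueType.STRING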
-
-
-class GGUFWriter:
- fout: BufferedWriter
- arch: str
- offset_tensor = 0
- data_alignment = GGUF_DEFAULT_ALIGNMENT
- kv_data = b""
- kv_data_count = 0
- ti_data = b""
- ti_data_count = 0
- use_temp_file: bool
- temp_file: tempfile.SpooledTemporaryFile[bytes] | None = None
- tensors: list[tuple[np.ndarray[Any, Any], int]]
-
- def __init__(self, path: os.PathLike[str] | str, arch: str, use_temp_file = True):
- self.fout = open(path, "wb")
- self.arch = arch
- self.add_architecture()
- self.use_temp_file = use_temp_file
- self.tensors = []
-
- def write_header_to_file(self):
- self.fout.write(struct.pack(" 0:
- ltype = GGUFValueType.get_type(val[0])
- if not all(GGUFValueType.get_type(i) is ltype for i in val[1:]):
- raise ValueError("All items in a GGUF array should be of the same type")
- self.kv_data += struct.pack(" int:
- return ((x + n - 1) // n) * n
-
- def add_tensor_info(self, name: str, tensor_shape: Sequence[int], tensor_dtype: np.dtype[np.float16] | np.dtype[np.float32], tensor_nbytes: int, raw_dtype: GGMLQuantizationType | None = None):
- assert raw_dtype is not None or tensor_dtype in (np.float32, np.float16), "Only F32 and F16 tensors are supported for now"
-
- encoded_name = name.encode("utf8")
- self.ti_data += struct.pack(" None:
- if not self._try_load_from_tokenizer_json(path):
- self._try_load_from_config_json(path)
-
- def _try_load_from_tokenizer_json(self, path: Path) -> bool:
- tokenizer_file = path / 'tokenizer.json'
- if not tokenizer_file.is_file():
- return False
- with open(tokenizer_file, encoding = 'utf-8') as f:
- tokenizer = json.load(f)
- if self.load_merges:
- merges = tokenizer.get('model', {}).get('merges')
- if isinstance(merges, list) and len(merges) > 0 and isinstance(merges[0], str):
- self.merges = merges
- tokenizer_config_file = path / 'tokenizer_config.json'
- added_tokens = tokenizer.get('added_tokens')
- if added_tokens is None or not tokenizer_config_file.is_file():
- return True
- with open(tokenizer_config_file, encoding = 'utf-8') as f:
- tokenizer_config = json.load(f)
- for typ in self.special_token_types:
- entry = tokenizer_config.get(f'{typ}_token')
- if isinstance(entry, str):
- tc_content = entry
- elif isinstance(entry, dict):
- entry_content = entry.get('content')
- if not isinstance(entry_content, str):
- continue
- tc_content = entry_content
- else:
- continue
- for maybe_token_id in (atok.get('id') for atok in added_tokens if atok.get('content') == tc_content):
- if isinstance(maybe_token_id, int) and maybe_token_id >= 0:
- self.special_token_ids[typ] = maybe_token_id
- break
- return True
-
- def _try_load_from_config_json(self, path: Path) -> bool:
- config_file = path / 'config.json'
- if not config_file.is_file():
- return False
- with open(config_file, encoding = 'utf-8') as f:
- config = json.load(f)
- for typ in self.special_token_types:
- maybe_token_id = config.get(f'{typ}_token_id')
- if isinstance(maybe_token_id, int) and maybe_token_id >= 0:
- self.special_token_ids[typ] = maybe_token_id
- return True
-
- def add_to_gguf(self, gw: GGUFWriter) -> None:
- if len(self.merges) > 0:
- print(f'gguf: Adding {len(self.merges)} merge(s).')
- gw.add_token_merges(self.merges)
- for typ, tokid in self.special_token_ids.items():
- handler: Callable[[int], None] | None = getattr(gw, f'add_{typ}_token_id', None)
- if handler is None:
- print(f'gguf: WARNING: No handler for special token type {typ} with id {tokid} - skipping')
- continue
- print(f'gguf: Setting special token type {typ} to {tokid}')
- handler(tokid)
-
- def __repr__(self) -> str:
- return f''
-
-
-# Example usage:
-if __name__ == "__main__":
- # Example usage with a file
- gguf_writer = GGUFWriter("example.gguf", "llama")
-
- gguf_writer.add_architecture()
- gguf_writer.add_block_count(12)
- gguf_writer.add_uint32("answer", 42) # Write a 32-bit integer
- gguf_writer.add_float32("answer_in_float", 42.0) # Write a 32-bit float
- gguf_writer.add_custom_alignment(64)
-
- tensor1 = np.ones((32,), dtype=np.float32) * 100.0
- tensor2 = np.ones((64,), dtype=np.float32) * 101.0
- tensor3 = np.ones((96,), dtype=np.float32) * 102.0
-
- gguf_writer.add_tensor("tensor1", tensor1)
- gguf_writer.add_tensor("tensor2", tensor2)
- gguf_writer.add_tensor("tensor3", tensor3)
-
- gguf_writer.write_header_to_file()
- gguf_writer.write_kv_data_to_file()
- gguf_writer.write_tensors_to_file()
-
- gguf_writer.close()
diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/schedulers/scheduling_ddpm_flax.py b/spaces/Jackflack09/diffuse-custom/diffusers/schedulers/scheduling_ddpm_flax.py
deleted file mode 100644
index e716ea0abaad045b86d902cb41362027092d7349..0000000000000000000000000000000000000000
--- a/spaces/Jackflack09/diffuse-custom/diffusers/schedulers/scheduling_ddpm_flax.py
+++ /dev/null
@@ -1,318 +0,0 @@
-# Copyright 2022 UC Berkeley Team and The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# DISCLAIMER: This file is strongly influenced by https://github.com/ermongroup/ddim
-
-import math
-from dataclasses import dataclass
-from typing import Optional, Tuple, Union
-
-import flax
-import jax.numpy as jnp
-from jax import random
-
-from ..configuration_utils import ConfigMixin, FrozenDict, register_to_config
-from ..utils import deprecate
-from .scheduling_utils_flax import (
- _FLAX_COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS,
- FlaxSchedulerMixin,
- FlaxSchedulerOutput,
- broadcast_to_shape_from_left,
-)
-
-
-def betas_for_alpha_bar(num_diffusion_timesteps, max_beta=0.999) -> jnp.ndarray:
- """
- Create a beta schedule that discretizes the given alpha_t_bar function, which defines the cumulative product of
- (1-beta) over time from t = [0,1].
-
- Contains a function alpha_bar that takes an argument t and transforms it to the cumulative product of (1-beta) up
- to that part of the diffusion process.
-
-
- Args:
- num_diffusion_timesteps (`int`): the number of betas to produce.
- max_beta (`float`): the maximum beta to use; use values lower than 1 to
- prevent singularities.
-
- Returns:
- betas (`jnp.ndarray`): the betas used by the scheduler to step the model outputs
- """
-
- def alpha_bar(time_step):
- return math.cos((time_step + 0.008) / 1.008 * math.pi / 2) ** 2
-
- betas = []
- for i in range(num_diffusion_timesteps):
- t1 = i / num_diffusion_timesteps
- t2 = (i + 1) / num_diffusion_timesteps
- betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta))
- return jnp.array(betas, dtype=jnp.float32)
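# A tiny usage sketch of the cosine ("squaredcos_cap_v2") schedule above: the
# betas start near zero, ramp up toward the end of the schedule, and every entry
# is capped at max_beta (requires jax, which this file already imports).
demo_betas = betas_for_alpha_bar(10)
assert demo_betas.shape == (10,)
assert float(demo_betas.max()) <= 0.999
assert float(demo_betas[0]) < float(demo_betas[-1])  # the last beta hits the cap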
-
-
-@flax.struct.dataclass
-class DDPMSchedulerState:
- # setable values
- timesteps: jnp.ndarray
- num_inference_steps: Optional[int] = None
-
- @classmethod
- def create(cls, num_train_timesteps: int):
- return cls(timesteps=jnp.arange(0, num_train_timesteps)[::-1])
-
-
-@dataclass
-class FlaxDDPMSchedulerOutput(FlaxSchedulerOutput):
- state: DDPMSchedulerState
-
-
-class FlaxDDPMScheduler(FlaxSchedulerMixin, ConfigMixin):
- """
- Denoising diffusion probabilistic models (DDPMs) explores the connections between denoising score matching and
- Langevin dynamics sampling.
-
- [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__`
- function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`.
- [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and
- [`~SchedulerMixin.from_pretrained`] functions.
-
- For more details, see the original paper: https://arxiv.org/abs/2006.11239
-
- Args:
- num_train_timesteps (`int`): number of diffusion steps used to train the model.
- beta_start (`float`): the starting `beta` value of inference.
- beta_end (`float`): the final `beta` value.
- beta_schedule (`str`):
- the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
- `linear`, `scaled_linear`, or `squaredcos_cap_v2`.
- trained_betas (`np.ndarray`, optional):
- option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc.
- variance_type (`str`):
- options to clip the variance used when adding noise to the denoised sample. Choose from `fixed_small`,
- `fixed_small_log`, `fixed_large`, `fixed_large_log`, `learned` or `learned_range`.
- clip_sample (`bool`, default `True`):
- option to clip predicted sample between -1 and 1 for numerical stability.
- prediction_type (`str`, default `epsilon`):
- indicates whether the model predicts the noise (epsilon), or the samples. One of `epsilon`, `sample`.
- `v-prediction` is not supported for this scheduler.
- """
-
- _compatibles = _FLAX_COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS.copy()
- _deprecated_kwargs = ["predict_epsilon"]
-
- @property
- def has_state(self):
- return True
-
- @register_to_config
- def __init__(
- self,
- num_train_timesteps: int = 1000,
- beta_start: float = 0.0001,
- beta_end: float = 0.02,
- beta_schedule: str = "linear",
- trained_betas: Optional[jnp.ndarray] = None,
- variance_type: str = "fixed_small",
- clip_sample: bool = True,
- prediction_type: str = "epsilon",
- **kwargs,
- ):
- message = (
- "Please make sure to instantiate your scheduler with `prediction_type` instead. E.g. `scheduler ="
-            " FlaxDDPMScheduler.from_pretrained(<model_id>, prediction_type='epsilon')`."
- )
- predict_epsilon = deprecate("predict_epsilon", "0.11.0", message, take_from=kwargs)
- if predict_epsilon is not None:
- self.register_to_config(prediction_type="epsilon" if predict_epsilon else "sample")
-
- if trained_betas is not None:
- self.betas = jnp.asarray(trained_betas)
- elif beta_schedule == "linear":
- self.betas = jnp.linspace(beta_start, beta_end, num_train_timesteps, dtype=jnp.float32)
- elif beta_schedule == "scaled_linear":
- # this schedule is very specific to the latent diffusion model.
- self.betas = jnp.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype=jnp.float32) ** 2
- elif beta_schedule == "squaredcos_cap_v2":
- # Glide cosine schedule
- self.betas = betas_for_alpha_bar(num_train_timesteps)
- else:
-            raise NotImplementedError(f"{beta_schedule} is not implemented for {self.__class__}")
-
- self.alphas = 1.0 - self.betas
- self.alphas_cumprod = jnp.cumprod(self.alphas, axis=0)
- self.one = jnp.array(1.0)
-
- def create_state(self):
- return DDPMSchedulerState.create(num_train_timesteps=self.config.num_train_timesteps)
-
- def set_timesteps(
- self, state: DDPMSchedulerState, num_inference_steps: int, shape: Tuple = ()
- ) -> DDPMSchedulerState:
- """
- Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference.
-
- Args:
-            state (`DDPMSchedulerState`):
- the `FlaxDDPMScheduler` state data class instance.
- num_inference_steps (`int`):
- the number of diffusion steps used when generating samples with a pre-trained model.
- """
- num_inference_steps = min(self.config.num_train_timesteps, num_inference_steps)
- timesteps = jnp.arange(
- 0, self.config.num_train_timesteps, self.config.num_train_timesteps // num_inference_steps
- )[::-1]
- return state.replace(num_inference_steps=num_inference_steps, timesteps=timesteps)
-
- def _get_variance(self, t, predicted_variance=None, variance_type=None):
- alpha_prod_t = self.alphas_cumprod[t]
- alpha_prod_t_prev = self.alphas_cumprod[t - 1] if t > 0 else self.one
-
- # For t > 0, compute predicted variance βt (see formula (6) and (7) from https://arxiv.org/pdf/2006.11239.pdf)
- # and sample from it to get previous sample
- # x_{t-1} ~ N(pred_prev_sample, variance) == add variance to pred_sample
- variance = (1 - alpha_prod_t_prev) / (1 - alpha_prod_t) * self.betas[t]
-
- if variance_type is None:
- variance_type = self.config.variance_type
-
- # hacks - were probably added for training stability
- if variance_type == "fixed_small":
- variance = jnp.clip(variance, a_min=1e-20)
- # for rl-diffuser https://arxiv.org/abs/2205.09991
- elif variance_type == "fixed_small_log":
- variance = jnp.log(jnp.clip(variance, a_min=1e-20))
- elif variance_type == "fixed_large":
- variance = self.betas[t]
- elif variance_type == "fixed_large_log":
- # Glide max_log
- variance = jnp.log(self.betas[t])
- elif variance_type == "learned":
- return predicted_variance
- elif variance_type == "learned_range":
- min_log = variance
- max_log = self.betas[t]
- frac = (predicted_variance + 1) / 2
- variance = frac * max_log + (1 - frac) * min_log
-
- return variance
-
- def step(
- self,
- state: DDPMSchedulerState,
- model_output: jnp.ndarray,
- timestep: int,
- sample: jnp.ndarray,
- key: random.KeyArray,
- return_dict: bool = True,
- **kwargs,
- ) -> Union[FlaxDDPMSchedulerOutput, Tuple]:
- """
- Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion
- process from the learned model outputs (most often the predicted noise).
-
- Args:
- state (`DDPMSchedulerState`): the `FlaxDDPMScheduler` state data class instance.
- model_output (`jnp.ndarray`): direct output from learned diffusion model.
- timestep (`int`): current discrete timestep in the diffusion chain.
- sample (`jnp.ndarray`):
- current instance of sample being created by diffusion process.
- key (`random.KeyArray`): a PRNG key.
- return_dict (`bool`): option for returning tuple rather than FlaxDDPMSchedulerOutput class
-
- Returns:
- [`FlaxDDPMSchedulerOutput`] or `tuple`: [`FlaxDDPMSchedulerOutput`] if `return_dict` is True, otherwise a
- `tuple`. When returning a tuple, the first element is the sample tensor.
-
- """
- message = (
- "Please make sure to instantiate your scheduler with `prediction_type` instead. E.g. `scheduler ="
-            " FlaxDDPMScheduler.from_pretrained(<model_id>, prediction_type='epsilon')`."
- )
- predict_epsilon = deprecate("predict_epsilon", "0.11.0", message, take_from=kwargs)
- if predict_epsilon is not None:
- new_config = dict(self.config)
- new_config["prediction_type"] = "epsilon" if predict_epsilon else "sample"
- self._internal_dict = FrozenDict(new_config)
-
- t = timestep
-
- if model_output.shape[1] == sample.shape[1] * 2 and self.config.variance_type in ["learned", "learned_range"]:
- model_output, predicted_variance = jnp.split(model_output, sample.shape[1], axis=1)
- else:
- predicted_variance = None
-
- # 1. compute alphas, betas
- alpha_prod_t = self.alphas_cumprod[t]
- alpha_prod_t_prev = self.alphas_cumprod[t - 1] if t > 0 else self.one
- beta_prod_t = 1 - alpha_prod_t
- beta_prod_t_prev = 1 - alpha_prod_t_prev
-
- # 2. compute predicted original sample from predicted noise also called
- # "predicted x_0" of formula (15) from https://arxiv.org/pdf/2006.11239.pdf
- if self.config.prediction_type == "epsilon":
- pred_original_sample = (sample - beta_prod_t ** (0.5) * model_output) / alpha_prod_t ** (0.5)
- elif self.config.prediction_type == "sample":
- pred_original_sample = model_output
- else:
- raise ValueError(
- f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, `sample` "
- " for the FlaxDDPMScheduler."
- )
-
- # 3. Clip "predicted x_0"
- if self.config.clip_sample:
- pred_original_sample = jnp.clip(pred_original_sample, -1, 1)
-
- # 4. Compute coefficients for pred_original_sample x_0 and current sample x_t
- # See formula (7) from https://arxiv.org/pdf/2006.11239.pdf
- pred_original_sample_coeff = (alpha_prod_t_prev ** (0.5) * self.betas[t]) / beta_prod_t
- current_sample_coeff = self.alphas[t] ** (0.5) * beta_prod_t_prev / beta_prod_t
-
- # 5. Compute predicted previous sample µ_t
- # See formula (7) from https://arxiv.org/pdf/2006.11239.pdf
- pred_prev_sample = pred_original_sample_coeff * pred_original_sample + current_sample_coeff * sample
-
- # 6. Add noise
- variance = 0
- if t > 0:
- key = random.split(key, num=1)
- noise = random.normal(key=key, shape=model_output.shape)
- variance = (self._get_variance(t, predicted_variance=predicted_variance) ** 0.5) * noise
-
- pred_prev_sample = pred_prev_sample + variance
-
- if not return_dict:
- return (pred_prev_sample, state)
-
- return FlaxDDPMSchedulerOutput(prev_sample=pred_prev_sample, state=state)
-
- def add_noise(
- self,
- original_samples: jnp.ndarray,
- noise: jnp.ndarray,
- timesteps: jnp.ndarray,
- ) -> jnp.ndarray:
- sqrt_alpha_prod = self.alphas_cumprod[timesteps] ** 0.5
- sqrt_alpha_prod = sqrt_alpha_prod.flatten()
- sqrt_alpha_prod = broadcast_to_shape_from_left(sqrt_alpha_prod, original_samples.shape)
-
- sqrt_one_minus_alpha_prod = (1 - self.alphas_cumprod[timesteps]) ** 0.5
- sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.flatten()
- sqrt_one_minus_alpha_prod = broadcast_to_shape_from_left(sqrt_one_minus_alpha_prod, original_samples.shape)
-
- noisy_samples = sqrt_alpha_prod * original_samples + sqrt_one_minus_alpha_prod * noise
- return noisy_samples
-
- def __len__(self):
- return self.config.num_train_timesteps
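
The `step()` method above boils down to the DDPM posterior mean of formula (7) in the DDPM paper, with "predicted x_0" recovered from the predicted noise via formula (15). Below is a minimal NumPy sketch of that arithmetic, assuming a linear beta schedule and the `epsilon` prediction type; the function and variable names are illustrative only, not part of the diffusers API.

```python
import numpy as np

# Linear beta schedule, as in the deleted scheduler's "linear" branch
num_train_timesteps = 1000
betas = np.linspace(0.0001, 0.02, num_train_timesteps, dtype=np.float32)
alphas = 1.0 - betas
alphas_cumprod = np.cumprod(alphas)

def ddpm_posterior_mean(sample, noise_pred, t):
    """Posterior mean mu_t of q(x_{t-1} | x_t, x_0), formula (7), with x_0
    estimated from the predicted noise (the 'epsilon' prediction type)."""
    alpha_prod_t = alphas_cumprod[t]
    alpha_prod_t_prev = alphas_cumprod[t - 1] if t > 0 else 1.0
    beta_prod_t = 1.0 - alpha_prod_t
    beta_prod_t_prev = 1.0 - alpha_prod_t_prev

    # "predicted x_0", formula (15), then clipped for numerical stability
    pred_original = (sample - beta_prod_t ** 0.5 * noise_pred) / alpha_prod_t ** 0.5
    pred_original = np.clip(pred_original, -1.0, 1.0)

    # coefficients for x_0 and x_t, formula (7)
    coeff_x0 = (alpha_prod_t_prev ** 0.5 * betas[t]) / beta_prod_t
    coeff_xt = alphas[t] ** 0.5 * beta_prod_t_prev / beta_prod_t
    return coeff_x0 * pred_original + coeff_xt * sample

x_t = np.random.randn(4).astype(np.float32)
eps = np.random.randn(4).astype(np.float32)
print(ddpm_posterior_mean(x_t, eps, t=500))
```

The full `step()` additionally samples the variance term (step 6 above) before returning the previous sample.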
diff --git a/spaces/Jason1112/ML-GUI/app.py b/spaces/Jason1112/ML-GUI/app.py
deleted file mode 100644
index e869a50cfa9f96605d284e23ec65d5c3fcdb3c41..0000000000000000000000000000000000000000
--- a/spaces/Jason1112/ML-GUI/app.py
+++ /dev/null
@@ -1,75 +0,0 @@
-import gradio as gr
-import tkinter as tk
-import pickle
-import dill
-import nltk
-nltk.download('stopwords')
-nltk.download('punkt')
-from nltk.tokenize import word_tokenize
-import re
-from nltk.corpus import stopwords
-from nltk.stem.snowball import SnowballStemmer
-import numpy as np
-import pandas as pd
-stop_words = set(stopwords.words('english'))
-stemmer = SnowballStemmer("english")
-
-# Load model
-y_vectorizer = dill.load(open("y_vectorizer_file_180k_50.pkl", "rb"))
-vectorizer = dill.load(open("vectorizer_180k_50.pkl", "rb"))
-clf = pickle.load(open("fileSVM50.pkl", "rb"))
-
-# Preprocess the question title and body
-def process_title_and_body(title, body):
-    # Remove <code> blocks (assumed pattern), then strip the remaining HTML tags
-    body = re.sub(r'<code>(.*?)</code>', '', body, flags=re.MULTILINE | re.DOTALL)
-    body = re.sub('<.*?>', ' ', str(body.encode('utf-8')))
-    title = str(title).encode('utf-8')
-    # Repeat the title three times to up-weight it relative to the body
-    question = str(title) + " " + str(title) + " " + str(title) + " " + body
-    question = re.sub(r'[^A-Za-z]+', ' ', question)
-    words = word_tokenize(str(question.lower()))
-    question = ' '.join(str(stemmer.stem(j)) for j in words if j not in stop_words and (len(j) != 1 or j == 'c'))
-    return question
-
-def predict_question(title, body):
-    # Normalize the title and body
- question = process_title_and_body(title, body)
- df222 = pd.DataFrame({"question": [question]})
-    # Convert the question into a TF-IDF vector
- sample_tfidf = vectorizer.transform(df222['question'])
-    # Predict tags with the SVM classifier
- sample_pred = clf.predict(sample_tfidf)
-    # Map the predicted binary label vector back to tag names
- result = y_vectorizer.inverse_transform(sample_pred)
- list_result = list(result[0])
- exclude_list = ['firebase', 'gitlab', 'office365', 'scikit-learn', 'selenium-webdriver', 'shiny', 'single-sign-on', 'x86-64']
- result_list = [item for item in list_result if item not in exclude_list]
- return result_list
-
-with gr.Blocks() as app:
- gr.Markdown("Mô hình Machine Learning gán nhãn từ động từ tiêu đề và nội dung câu hỏi")
- title = gr.Text(label="Tiêu đề", info="Nhập tiêu đề câu hỏi")
- body = gr.Text(label="Nội dung", info="Nhập nội dung câu hỏi", lines=5)
- btn_submit = gr.Button(size='sm', variant='primary', value="Tạo nhãn")
- result = gr.Text(label="Nhãn")
- btn_submit.click(
- fn=predict_question,
- inputs=[title, body],
- outputs=result
- )
- gr.Markdown("----Examples----")
- gr.Examples(
- [["Unfortunately MyApp has stopped. How can I solve this?", "I am developing an application, and everytime I run it, I get the message:\
- Unfortunately, MyApp has stopped.\
- What can I do to solve this?\
- About this question - obviously inspired by What is a stack trace, and how can I use it to debug my application errors?, there are lots of questions stating that their application has crashed, without any further detail. This question aims to instruct novice Android programmers on how to try and fix their problems themselves, or ask the right questions."]
- ,["How do I parse JSON in Android?", "How do I parse a JSON feed in Android?"]
- ,["How do I return the response from an asynchronous call?",
- "
How do I return the response/result from a function foo that makes an asynchronous request?
I am trying to return the value from the callback, as well as assigning the result to a local variable inside the function and returning that one, but none of those ways actually return the response — they all return undefined or whatever the initial value of the variable result is.
Example of an asynchronous function that accepts a callback (using jQuery's ajax function):
functionfoo() { var result; $.ajax({ url: '...', success: function(response) { result = response; // return response; // <- I tried that one as well } }); return result; // It always returns `undefined` }
Example using Node.js:
functionfoo() { var result; fs.readFile(\"path/to/file\", function(err, data) { result = data; // return data; // <- I tried that one as well }); return result; // It always returns `undefined` }
Example using the then block of a promise:
functionfoo() { var result; fetch(url).then(function(response) { result = response; // return response; // <- I tried that one as well }); return result; // It always returns `undefined` }
"
- ]
- ],
- [title, body],
- )
-app.launch()
-
-
-
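
The app above wires three pickled artifacts — a TF-IDF vectorizer, a multi-label binarizer (`y_vectorizer`), and an SVM classifier — into a single predict step. The pickles themselves are not reproduced here; the scikit-learn sketch below only shows the same pipeline shape, so the toy corpus, tag set, and estimator choices are assumptions for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC

# Toy corpus standing in for preprocessed question texts and their tags
questions = [
    "parse json string python dict",
    "pandas dataframe merge two column",
    "android app crash unfortunately stopped",
]
tags = [["python", "json"], ["python", "pandas"], ["android"]]

y_vectorizer = MultiLabelBinarizer()          # plays the role of the pickled y_vectorizer
y = y_vectorizer.fit_transform(tags)

vectorizer = TfidfVectorizer()                # plays the role of the pickled vectorizer
X = vectorizer.fit_transform(questions)

clf = OneVsRestClassifier(LinearSVC())        # plays the role of the pickled SVM
clf.fit(X, y)

sample = vectorizer.transform(["how to read json file in python"])
pred = clf.predict(sample)
print(y_vectorizer.inverse_transform(pred))   # e.g. [('json', 'python')]
```

In the deleted Space the equivalent objects are loaded from disk with `dill`/`pickle` rather than fit at startup.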
diff --git a/spaces/Jikiwi/sovits-models/hubert/hubert_model_onnx.py b/spaces/Jikiwi/sovits-models/hubert/hubert_model_onnx.py
deleted file mode 100644
index d18f3c2a0fc29592a573a9780308d38f059640b9..0000000000000000000000000000000000000000
--- a/spaces/Jikiwi/sovits-models/hubert/hubert_model_onnx.py
+++ /dev/null
@@ -1,217 +0,0 @@
-import copy
-import random
-from typing import Optional, Tuple
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as t_func
-from torch.nn.modules.utils import consume_prefix_in_state_dict_if_present
-
-
-class Hubert(nn.Module):
- def __init__(self, num_label_embeddings: int = 100, mask: bool = True):
- super().__init__()
- self._mask = mask
- self.feature_extractor = FeatureExtractor()
- self.feature_projection = FeatureProjection()
- self.positional_embedding = PositionalConvEmbedding()
- self.norm = nn.LayerNorm(768)
- self.dropout = nn.Dropout(0.1)
- self.encoder = TransformerEncoder(
- nn.TransformerEncoderLayer(
- 768, 12, 3072, activation="gelu", batch_first=True
- ),
- 12,
- )
- self.proj = nn.Linear(768, 256)
-
- self.masked_spec_embed = nn.Parameter(torch.FloatTensor(768).uniform_())
- self.label_embedding = nn.Embedding(num_label_embeddings, 256)
-
- def mask(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
- mask = None
- if self.training and self._mask:
- mask = _compute_mask((x.size(0), x.size(1)), 0.8, 10, x.device, 2)
- x[mask] = self.masked_spec_embed.to(x.dtype)
- return x, mask
-
- def encode(
- self, x: torch.Tensor, layer: Optional[int] = None
- ) -> Tuple[torch.Tensor, torch.Tensor]:
- x = self.feature_extractor(x)
- x = self.feature_projection(x.transpose(1, 2))
- x, mask = self.mask(x)
- x = x + self.positional_embedding(x)
- x = self.dropout(self.norm(x))
- x = self.encoder(x, output_layer=layer)
- return x, mask
-
- def logits(self, x: torch.Tensor) -> torch.Tensor:
- logits = torch.cosine_similarity(
- x.unsqueeze(2),
- self.label_embedding.weight.unsqueeze(0).unsqueeze(0),
- dim=-1,
- )
- return logits / 0.1
-
-
-class HubertSoft(Hubert):
- def __init__(self):
- super().__init__()
-
- def units(self, wav: torch.Tensor) -> torch.Tensor:
- wav = t_func.pad(wav, ((400 - 320) // 2, (400 - 320) // 2))
- x, _ = self.encode(wav)
- return self.proj(x)
-
- def forward(self, x):
- return self.units(x)
-
-class FeatureExtractor(nn.Module):
- def __init__(self):
- super().__init__()
- self.conv0 = nn.Conv1d(1, 512, 10, 5, bias=False)
- self.norm0 = nn.GroupNorm(512, 512)
- self.conv1 = nn.Conv1d(512, 512, 3, 2, bias=False)
- self.conv2 = nn.Conv1d(512, 512, 3, 2, bias=False)
- self.conv3 = nn.Conv1d(512, 512, 3, 2, bias=False)
- self.conv4 = nn.Conv1d(512, 512, 3, 2, bias=False)
- self.conv5 = nn.Conv1d(512, 512, 2, 2, bias=False)
- self.conv6 = nn.Conv1d(512, 512, 2, 2, bias=False)
-
- def forward(self, x: torch.Tensor) -> torch.Tensor:
- x = t_func.gelu(self.norm0(self.conv0(x)))
- x = t_func.gelu(self.conv1(x))
- x = t_func.gelu(self.conv2(x))
- x = t_func.gelu(self.conv3(x))
- x = t_func.gelu(self.conv4(x))
- x = t_func.gelu(self.conv5(x))
- x = t_func.gelu(self.conv6(x))
- return x
-
-
-class FeatureProjection(nn.Module):
- def __init__(self):
- super().__init__()
- self.norm = nn.LayerNorm(512)
- self.projection = nn.Linear(512, 768)
- self.dropout = nn.Dropout(0.1)
-
- def forward(self, x: torch.Tensor) -> torch.Tensor:
- x = self.norm(x)
- x = self.projection(x)
- x = self.dropout(x)
- return x
-
-
-class PositionalConvEmbedding(nn.Module):
- def __init__(self):
- super().__init__()
- self.conv = nn.Conv1d(
- 768,
- 768,
- kernel_size=128,
- padding=128 // 2,
- groups=16,
- )
- self.conv = nn.utils.weight_norm(self.conv, name="weight", dim=2)
-
- def forward(self, x: torch.Tensor) -> torch.Tensor:
- x = self.conv(x.transpose(1, 2))
- x = t_func.gelu(x[:, :, :-1])
- return x.transpose(1, 2)
-
-
-class TransformerEncoder(nn.Module):
- def __init__(
- self, encoder_layer: nn.TransformerEncoderLayer, num_layers: int
- ) -> None:
- super(TransformerEncoder, self).__init__()
- self.layers = nn.ModuleList(
- [copy.deepcopy(encoder_layer) for _ in range(num_layers)]
- )
- self.num_layers = num_layers
-
- def forward(
- self,
- src: torch.Tensor,
- mask: torch.Tensor = None,
- src_key_padding_mask: torch.Tensor = None,
- output_layer: Optional[int] = None,
- ) -> torch.Tensor:
- output = src
- for layer in self.layers[:output_layer]:
- output = layer(
- output, src_mask=mask, src_key_padding_mask=src_key_padding_mask
- )
- return output
-
-
-def _compute_mask(
- shape: Tuple[int, int],
- mask_prob: float,
- mask_length: int,
- device: torch.device,
- min_masks: int = 0,
-) -> torch.Tensor:
- batch_size, sequence_length = shape
-
- if mask_length < 1:
- raise ValueError("`mask_length` has to be bigger than 0.")
-
- if mask_length > sequence_length:
- raise ValueError(
-            f"`mask_length` has to be smaller than `sequence_length`, but got `mask_length`: {mask_length} and `sequence_length`: {sequence_length}"
- )
-
- # compute number of masked spans in batch
- num_masked_spans = int(mask_prob * sequence_length / mask_length + random.random())
- num_masked_spans = max(num_masked_spans, min_masks)
-
- # make sure num masked indices <= sequence_length
- if num_masked_spans * mask_length > sequence_length:
- num_masked_spans = sequence_length // mask_length
-
- # SpecAugment mask to fill
- mask = torch.zeros((batch_size, sequence_length), device=device, dtype=torch.bool)
-
- # uniform distribution to sample from, make sure that offset samples are < sequence_length
- uniform_dist = torch.ones(
- (batch_size, sequence_length - (mask_length - 1)), device=device
- )
-
- # get random indices to mask
- mask_indices = torch.multinomial(uniform_dist, num_masked_spans)
-
- # expand masked indices to masked spans
- mask_indices = (
- mask_indices.unsqueeze(dim=-1)
- .expand((batch_size, num_masked_spans, mask_length))
- .reshape(batch_size, num_masked_spans * mask_length)
- )
- offsets = (
- torch.arange(mask_length, device=device)[None, None, :]
- .expand((batch_size, num_masked_spans, mask_length))
- .reshape(batch_size, num_masked_spans * mask_length)
- )
- mask_idxs = mask_indices + offsets
-
- # scatter indices to mask
- mask = mask.scatter(1, mask_idxs, True)
-
- return mask
-
-
-def hubert_soft(
- path: str,
-) -> HubertSoft:
- r"""HuBERT-Soft from `"A Comparison of Discrete and Soft Speech Units for Improved Voice Conversion"`.
- Args:
- path (str): path of a pretrained model
- """
- hubert = HubertSoft()
- checkpoint = torch.load(path)
- consume_prefix_in_state_dict_if_present(checkpoint, "module.")
- hubert.load_state_dict(checkpoint)
- hubert.eval()
- return hubert
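
`_compute_mask` above builds SpecAugment-style span masks by sampling span starts, expanding them with consecutive offsets, and scattering them into a boolean grid. The self-contained sketch below illustrates that indexing trick; the function name and shapes are made up, not part of the original module.

```python
import torch

def span_mask(batch_size, seq_len, num_spans, span_len, device="cpu"):
    """Minimal span-masking sketch in the spirit of _compute_mask: sample span
    starts uniformly, expand each start by consecutive offsets, and scatter
    the resulting indices into a boolean mask."""
    uniform = torch.ones((batch_size, seq_len - (span_len - 1)), device=device)
    starts = torch.multinomial(uniform, num_spans)                      # (B, num_spans)
    offsets = torch.arange(span_len, device=device)[None, None, :]      # (1, 1, span_len)
    idxs = (starts.unsqueeze(-1) + offsets).reshape(batch_size, -1)     # (B, num_spans*span_len)
    mask = torch.zeros((batch_size, seq_len), dtype=torch.bool, device=device)
    return mask.scatter(1, idxs, True)

m = span_mask(batch_size=2, seq_len=20, num_spans=3, span_len=4)
print(m.int())
print(m.sum(dim=1))  # at most num_spans * span_len per row (spans may overlap)
```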
diff --git a/spaces/Joyeux/andite-anything-v4.0/app.py b/spaces/Joyeux/andite-anything-v4.0/app.py
deleted file mode 100644
index 47a2051db6dadeea03edf70d62694fd3e5e88ba7..0000000000000000000000000000000000000000
--- a/spaces/Joyeux/andite-anything-v4.0/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/andite/anything-v4.0").launch()
\ No newline at end of file
diff --git a/spaces/Kangarroar/ApplioRVC-Inference/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py b/spaces/Kangarroar/ApplioRVC-Inference/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py
deleted file mode 100644
index b2c592527a5966e6f8e79e8c52dc5b414246dcc6..0000000000000000000000000000000000000000
--- a/spaces/Kangarroar/ApplioRVC-Inference/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py
+++ /dev/null
@@ -1,97 +0,0 @@
-from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
-import parselmouth
-import numpy as np
-
-
-class PMF0Predictor(F0Predictor):
- def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
- self.hop_length = hop_length
- self.f0_min = f0_min
- self.f0_max = f0_max
- self.sampling_rate = sampling_rate
-
- def interpolate_f0(self, f0):
- """
-        Interpolate the F0 contour across unvoiced (zero) frames
- """
-
- data = np.reshape(f0, (f0.size, 1))
-
- vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
- vuv_vector[data > 0.0] = 1.0
- vuv_vector[data <= 0.0] = 0.0
-
- ip_data = data
-
- frame_number = data.size
- last_value = 0.0
- for i in range(frame_number):
- if data[i] <= 0.0:
- j = i + 1
- for j in range(i + 1, frame_number):
- if data[j] > 0.0:
- break
- if j < frame_number - 1:
- if last_value > 0.0:
- step = (data[j] - data[i - 1]) / float(j - i)
- for k in range(i, j):
- ip_data[k] = data[i - 1] + step * (k - i + 1)
- else:
- for k in range(i, j):
- ip_data[k] = data[j]
- else:
- for k in range(i, frame_number):
- ip_data[k] = last_value
- else:
-                ip_data[i] = data[i]  # this copy may be unnecessary
- last_value = data[i]
-
- return ip_data[:, 0], vuv_vector[:, 0]
-
- def compute_f0(self, wav, p_len=None):
- x = wav
- if p_len is None:
- p_len = x.shape[0] // self.hop_length
- else:
- assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
- time_step = self.hop_length / self.sampling_rate * 1000
- f0 = (
- parselmouth.Sound(x, self.sampling_rate)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=self.f0_min,
- pitch_ceiling=self.f0_max,
- )
- .selected_array["frequency"]
- )
-
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
- f0, uv = self.interpolate_f0(f0)
- return f0
-
- def compute_f0_uv(self, wav, p_len=None):
- x = wav
- if p_len is None:
- p_len = x.shape[0] // self.hop_length
- else:
- assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
- time_step = self.hop_length / self.sampling_rate * 1000
- f0 = (
- parselmouth.Sound(x, self.sampling_rate)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=self.f0_min,
- pitch_ceiling=self.f0_max,
- )
- .selected_array["frequency"]
- )
-
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
- f0, uv = self.interpolate_f0(f0)
- return f0, uv
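
`interpolate_f0` replaces unvoiced (zero) frames with values interpolated from the neighbouring voiced frames so downstream models see a continuous pitch contour. Below is a minimal NumPy sketch of the same idea, using `np.interp` instead of the explicit scan in the method above; the function name and the test contour are invented for illustration.

```python
import numpy as np

def interpolate_unvoiced(f0):
    """Fill unvoiced (zero) frames of an F0 contour by linear interpolation
    between neighbouring voiced frames, returning the filled contour and a
    voiced/unvoiced flag vector."""
    f0 = np.asarray(f0, dtype=np.float32)
    vuv = (f0 > 0.0).astype(np.float32)          # 1.0 = voiced, 0.0 = unvoiced
    if vuv.sum() == 0:                           # nothing voiced: return unchanged
        return f0, vuv
    voiced_idx = np.flatnonzero(f0 > 0.0)
    filled = np.interp(np.arange(len(f0)), voiced_idx, f0[voiced_idx])
    return filled.astype(np.float32), vuv

f0 = np.array([0, 0, 220, 0, 0, 233, 247, 0, 0], dtype=np.float32)
filled, vuv = interpolate_unvoiced(f0)
print(filled)  # leading/trailing zeros take the nearest voiced value
print(vuv)
```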
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/app.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/app.py
deleted file mode 100644
index dd838c51526612ba0682c4859f9b312d9f4ff28d..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/app.py
+++ /dev/null
@@ -1,173 +0,0 @@
-import whisper
-model = whisper.load_model("small")
-import os
-os.system('pip install voicefixer --upgrade')
-from voicefixer import VoiceFixer
-voicefixer = VoiceFixer()
-import gradio as gr
-import openai
-import torch
-import torchaudio
-from speechbrain.pretrained import SpectralMaskEnhancement
-
-enhance_model = SpectralMaskEnhancement.from_hparams(
-source="speechbrain/metricgan-plus-voicebank",
-savedir="pretrained_models/metricgan-plus-voicebank",
-run_opts={"device":"cuda"},
-)
-
-import re
-import random
-import string
-import librosa
-import numpy as np
-
-from pathlib import Path
-from scipy.io.wavfile import write
-
-from encoder import inference as encoder
-from vocoder.hifigan import inference as gan_vocoder
-from synthesizer.inference import Synthesizer
-
-mes = [
- {"role": "system", "content": "You are my personal assistant. Respond to me only in Chinese."}
-]
-
-res = []
-
-class Mandarin:
- def __init__(self):
- self.encoder_path = "encoder/saved_models/pretrained.pt"
- self.vocoder_path = "vocoder/saved_models/pretrained/g_hifigan.pt"
- self.config_fpath = "vocoder/hifigan/config_16k_.json"
- self.accent = "synthesizer/saved_models/普通话.pt"
-
- synthesizers_cache = {}
- if synthesizers_cache.get(self.accent) is None:
- self.current_synt = Synthesizer(Path(self.accent))
- synthesizers_cache[self.accent] = self.current_synt
- else:
- self.current_synt = synthesizers_cache[self.accent]
-
- encoder.load_model(Path(self.encoder_path))
- gan_vocoder.load_model(Path(self.vocoder_path), self.config_fpath)
-
- def setVoice(self, timbre):
- self.timbre = timbre
-        wav, sample_rate = librosa.load(self.timbre)
-
- encoder_wav = encoder.preprocess_wav(wav, sample_rate)
- self.embed, _, _ = encoder.embed_utterance(encoder_wav, return_partials=True)
-
- def say(self, text):
- texts = filter(None, text.split("\n"))
- punctuation = "!,。、?!,.?::" # punctuate and split/clean text
- processed_texts = []
- for text in texts:
- for processed_text in re.sub(r'[{}]+'.format(punctuation), '\n', text).split('\n'):
- if processed_text:
- processed_texts.append(processed_text.strip())
- texts = processed_texts
- embeds = [self.embed] * len(texts)
-
- specs = self.current_synt.synthesize_spectrograms(texts, embeds)
- spec = np.concatenate(specs, axis=1)
- wav, sample_rate = gan_vocoder.infer_waveform(spec)
-
- return wav, sample_rate
-
-def greet(apikey, upload, audio):
-
- openai.api_key = apikey
-
- # load audio and pad/trim it to fit 30 seconds
- audio = whisper.load_audio(audio)
- audio = whisper.pad_or_trim(audio)
-
- # make log-Mel spectrogram and move to the same device as the model
- mel = whisper.log_mel_spectrogram(audio).to(model.device)
-
- # detect the spoken language
- _, probs = model.detect_language(mel)
- print(f"Detected language: {max(probs, key=probs.get)}")
-
- # decode the audio
- options = whisper.DecodingOptions()
- result = whisper.decode(model, mel, options)
- res.append(result.text)
-
- messages = mes
-
- # chatgpt
- n = len(res)
- content = res[n-1]
- messages.append({"role": "user", "content": content})
-
- completion = openai.ChatCompletion.create(
- model = "gpt-3.5-turbo",
- messages = messages
- )
-
- chat_response = completion.choices[0].message.content
-
- messages.append({"role": "assistant", "content": chat_response})
-
- voice=None
-
- if voice is None:
- voice = Mandarin()
- voice.setVoice(upload)
- voice.say("加载成功")
- wav, sample_rate = voice.say(chat_response)
-
- output_file = "".join( random.sample(string.ascii_lowercase + string.digits, 11) ) + ".wav"
-
- write(output_file, sample_rate, wav.astype(np.float32))
-
- voicefixer.restore(input=output_file, # input wav file path
- output="audio1.wav", # output wav file path
- cuda=True, # whether to use gpu acceleration
- mode = 0) # You can try out mode 0, 1, or 2 to find out the best result
-
- noisy = enhance_model.load_audio(
- "audio1.wav"
- ).unsqueeze(0)
-
- enhanced = enhance_model.enhance_batch(noisy, lengths=torch.tensor([1.]))
- torchaudio.save("enhanced.wav", enhanced.cpu(), 16000)
-
- return [result.text, chat_response, "enhanced.wav"]
-
-c1=gr.Interface(
- fn=greet,
- inputs=[
- gr.Textbox(lines=1, label = "请填写您的OpenAI-API-key", type = "password"),
- gr.Audio(source="upload", label = "请上传您喜欢的声音(wav文件)", type="filepath"),
- gr.Audio(source="microphone", label = "和您的专属AI聊天吧!", type="filepath"),
- ],
- outputs=[
- gr.Textbox(label="Speech to Text"), gr.Textbox(label="ChatGPT Output"), gr.Audio(label="Audio with Custom Voice"),
- ],
- #theme="huggingface",
- #title= "🥳💬💕 - TalktoAI,随时随地,谈天说地!"
- description = "🥳💬💕 - TalktoAI,随时随地,谈天说地! \n\n🤖 - 让有人文关怀的AI造福每一个人!AI向善,文明璀璨!TalktoAI - Enable the future!",
- )
-
-
-c2=gr.Interface(
- fn=greet,
- inputs=[
- gr.Textbox(lines=1, label = "请填写您的OpenAI-API-key", type = "password"),
- gr.Audio(source="microphone", label = "请上传您喜欢的声音,并尽量避免噪音", type="filepath"),
- gr.Audio(source="microphone", label = "和您的专属AI聊天吧!", type="filepath"),
- ],
- outputs=[
- gr.Textbox(label="Speech to Text"), gr.Textbox(label="ChatGPT Output"), gr.Audio(label="Audio with Custom Voice"),
- ],
- #theme="huggingface",
- #title= "🥳💬💕 - TalktoAI,随时随地,谈天说地!"
- description = "🥳💬💕 - TalktoAI,随时随地,谈天说地! \n\n🤖 - 让有人文关怀的AI造福每一个人!AI向善,文明璀璨!TalktoAI - Enable the future!",
- )
-
-demo = gr.TabbedInterface([c1, c2], ["wav文件上传", "麦克风上传"])
-demo.launch()
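
Before synthesis, `Mandarin.say()` splits the response text on mixed Chinese/Latin punctuation so each clause is synthesised separately. A standalone sketch of just that splitting step (the sample sentence is made up):

```python
import re

# Replace Chinese/Latin punctuation with newlines, then keep the non-empty,
# stripped segments as separate synthesis units.
punctuation = "!,。、?!,.?::"
text = "你好,世界!今天天气不错。Let's test, shall we?"

segments = []
for line in filter(None, text.split("\n")):
    for seg in re.sub(r"[{}]+".format(punctuation), "\n", line).split("\n"):
        if seg.strip():
            segments.append(seg.strip())

print(segments)
# ['你好', '世界', '今天天气不错', "Let's test", 'shall we']
```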
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/utils/__init__.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/KonradSzafer/HF-QA-Demo/data/stackoverflow_python_dataset.py b/spaces/KonradSzafer/HF-QA-Demo/data/stackoverflow_python_dataset.py
deleted file mode 100644
index f29c8f1a4282f067bbd5c5a48db66e4d999d19f3..0000000000000000000000000000000000000000
--- a/spaces/KonradSzafer/HF-QA-Demo/data/stackoverflow_python_dataset.py
+++ /dev/null
@@ -1,55 +0,0 @@
-from datetime import datetime
-from datasets import load_dataset
-from bs4 import BeautifulSoup
-
-
-def preprocess_dataset():
- """
- Preprocesses the 'koutch/stackoverflow_python' dataset.
-
- Returns:
- datasets.arrow_dataset.Dataset: The preprocessed dataset.
- """
- dataset = load_dataset('koutch/stackoverflow_python', split='train')
- dataset = dataset.filter(
- lambda example:
- example['question_score'] > 100 and
- example['answer_score'] > 5 and
- datetime.strptime(example['answer_date'], '%Y-%m-%dT%H:%M:%SZ').year > 2010
- )
-
- def html2text(example):
- soup = BeautifulSoup(example, 'html.parser')
- return ''.join(soup.findAll(string=True))
-
- def transforms(example):
- example['answer'] = html2text(example['answer_body'])
- example['question'] = html2text(example['question_body'])
- return example
-
- dataset = dataset.map(lambda example: transforms(example))
- dataset = dataset.remove_columns([
- 'question_score', 'question_date', 'question_id',
- 'answer_date', 'answer_id', 'answer_score', 'tags',
- 'question_body', 'answer_body'
- ])
- return dataset
-
-
-def show_info(dataset):
- """
- Print information about the dataset.
-
- Args:
- dataset (datasets.arrow_dataset.Dataset): The dataset.
- """
- print(dataset.info, '\n')
- print(f'dataset len: {len(dataset)}')
- print(f"example question: {dataset[0]['question']}")
- print(f"example answer: {dataset[0]['answer']}")
-
-
-if __name__ == '__main__':
- dataset = preprocess_dataset()
- dataset.push_to_hub('KonradSzafer/stackoverflow_python_preprocessed', private=False)
- show_info(dataset)
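
The preprocessing above is essentially two steps: drop rows by score/date thresholds, then flatten the HTML bodies to plain text with BeautifulSoup. A small self-contained sketch of those steps on a single made-up record (field names mirror the dataset columns, the values are invented):

```python
from datetime import datetime
from bs4 import BeautifulSoup

# Toy record standing in for one row of the StackOverflow dump
example = {
    "question_body": "<p>How do I parse <code>JSON</code> in Python?</p>",
    "answer_body": "<p>Use the <code>json</code> module.</p>",
    "answer_date": "2015-06-01T12:00:00Z",
}

def html2text(html):
    # Same idea as the deleted helper: keep only the text nodes
    soup = BeautifulSoup(html, "html.parser")
    return "".join(soup.findAll(string=True))

keep = datetime.strptime(example["answer_date"], "%Y-%m-%dT%H:%M:%SZ").year > 2010
if keep:
    print(html2text(example["question_body"]))  # How do I parse JSON in Python?
    print(html2text(example["answer_body"]))    # Use the json module.
```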
diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/yolo_head.py b/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/yolo_head.py
deleted file mode 100644
index 0f63afbbc94353e16e4c67ec5bc0b6cd1200de07..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/yolo_head.py
+++ /dev/null
@@ -1,527 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-# Copyright (c) 2019 Western Digital Corporation or its affiliates.
-
-import copy
-import warnings
-from typing import List, Optional, Sequence, Tuple
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from mmcv.cnn import ConvModule, is_norm
-from mmengine.model import bias_init_with_prob, constant_init, normal_init
-from mmengine.structures import InstanceData
-from torch import Tensor
-
-from mmdet.registry import MODELS, TASK_UTILS
-from mmdet.utils import (ConfigType, InstanceList, OptConfigType,
- OptInstanceList)
-from ..task_modules.samplers import PseudoSampler
-from ..utils import filter_scores_and_topk, images_to_levels, multi_apply
-from .base_dense_head import BaseDenseHead
-
-
-@MODELS.register_module()
-class YOLOV3Head(BaseDenseHead):
- """YOLOV3Head Paper link: https://arxiv.org/abs/1804.02767.
-
- Args:
- num_classes (int): The number of object classes (w/o background)
- in_channels (Sequence[int]): Number of input channels per scale.
- out_channels (Sequence[int]): The number of output channels per scale
- before the final 1x1 layer. Default: (1024, 512, 256).
- anchor_generator (:obj:`ConfigDict` or dict): Config dict for anchor
- generator.
- bbox_coder (:obj:`ConfigDict` or dict): Config of bounding box coder.
- featmap_strides (Sequence[int]): The stride of each scale.
- Should be in descending order. Defaults to (32, 16, 8).
-        one_hot_smoother (float): Set a non-zero value to enable label smoothing.
- Defaults to 0.
- conv_cfg (:obj:`ConfigDict` or dict, optional): Config dict for
- convolution layer. Defaults to None.
- norm_cfg (:obj:`ConfigDict` or dict): Dictionary to construct and
- config norm layer. Defaults to dict(type='BN', requires_grad=True).
- act_cfg (:obj:`ConfigDict` or dict): Config dict for activation layer.
- Defaults to dict(type='LeakyReLU', negative_slope=0.1).
- loss_cls (:obj:`ConfigDict` or dict): Config of classification loss.
- loss_conf (:obj:`ConfigDict` or dict): Config of confidence loss.
- loss_xy (:obj:`ConfigDict` or dict): Config of xy coordinate loss.
- loss_wh (:obj:`ConfigDict` or dict): Config of wh coordinate loss.
- train_cfg (:obj:`ConfigDict` or dict, optional): Training config of
- YOLOV3 head. Defaults to None.
- test_cfg (:obj:`ConfigDict` or dict, optional): Testing config of
- YOLOV3 head. Defaults to None.
- """
-
- def __init__(self,
- num_classes: int,
- in_channels: Sequence[int],
- out_channels: Sequence[int] = (1024, 512, 256),
- anchor_generator: ConfigType = dict(
- type='YOLOAnchorGenerator',
- base_sizes=[[(116, 90), (156, 198), (373, 326)],
- [(30, 61), (62, 45), (59, 119)],
- [(10, 13), (16, 30), (33, 23)]],
- strides=[32, 16, 8]),
- bbox_coder: ConfigType = dict(type='YOLOBBoxCoder'),
- featmap_strides: Sequence[int] = (32, 16, 8),
- one_hot_smoother: float = 0.,
- conv_cfg: OptConfigType = None,
- norm_cfg: ConfigType = dict(type='BN', requires_grad=True),
- act_cfg: ConfigType = dict(
- type='LeakyReLU', negative_slope=0.1),
- loss_cls: ConfigType = dict(
- type='CrossEntropyLoss',
- use_sigmoid=True,
- loss_weight=1.0),
- loss_conf: ConfigType = dict(
- type='CrossEntropyLoss',
- use_sigmoid=True,
- loss_weight=1.0),
- loss_xy: ConfigType = dict(
- type='CrossEntropyLoss',
- use_sigmoid=True,
- loss_weight=1.0),
- loss_wh: ConfigType = dict(type='MSELoss', loss_weight=1.0),
- train_cfg: OptConfigType = None,
- test_cfg: OptConfigType = None) -> None:
- super().__init__(init_cfg=None)
- # Check params
- assert (len(in_channels) == len(out_channels) == len(featmap_strides))
-
- self.num_classes = num_classes
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.featmap_strides = featmap_strides
- self.train_cfg = train_cfg
- self.test_cfg = test_cfg
- if self.train_cfg:
- self.assigner = TASK_UTILS.build(self.train_cfg['assigner'])
- if train_cfg.get('sampler', None) is not None:
- self.sampler = TASK_UTILS.build(
- self.train_cfg['sampler'], context=self)
- else:
- self.sampler = PseudoSampler()
-
- self.one_hot_smoother = one_hot_smoother
-
- self.conv_cfg = conv_cfg
- self.norm_cfg = norm_cfg
- self.act_cfg = act_cfg
-
- self.bbox_coder = TASK_UTILS.build(bbox_coder)
-
- self.prior_generator = TASK_UTILS.build(anchor_generator)
-
- self.loss_cls = MODELS.build(loss_cls)
- self.loss_conf = MODELS.build(loss_conf)
- self.loss_xy = MODELS.build(loss_xy)
- self.loss_wh = MODELS.build(loss_wh)
-
- self.num_base_priors = self.prior_generator.num_base_priors[0]
- assert len(
- self.prior_generator.num_base_priors) == len(featmap_strides)
- self._init_layers()
-
- @property
- def num_levels(self) -> int:
- """int: number of feature map levels"""
- return len(self.featmap_strides)
-
- @property
- def num_attrib(self) -> int:
- """int: number of attributes in pred_map, bboxes (4) +
- objectness (1) + num_classes"""
-
- return 5 + self.num_classes
-
- def _init_layers(self) -> None:
- """initialize conv layers in YOLOv3 head."""
- self.convs_bridge = nn.ModuleList()
- self.convs_pred = nn.ModuleList()
- for i in range(self.num_levels):
- conv_bridge = ConvModule(
- self.in_channels[i],
- self.out_channels[i],
- 3,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
- conv_pred = nn.Conv2d(self.out_channels[i],
- self.num_base_priors * self.num_attrib, 1)
-
- self.convs_bridge.append(conv_bridge)
- self.convs_pred.append(conv_pred)
-
- def init_weights(self) -> None:
- """initialize weights."""
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- normal_init(m, mean=0, std=0.01)
- if is_norm(m):
- constant_init(m, 1)
-
- # Use prior in model initialization to improve stability
- for conv_pred, stride in zip(self.convs_pred, self.featmap_strides):
- bias = conv_pred.bias.reshape(self.num_base_priors, -1)
- # init objectness with prior of 8 objects per feature map
- # refer to https://github.com/ultralytics/yolov3
- nn.init.constant_(bias.data[:, 4],
- bias_init_with_prob(8 / (608 / stride)**2))
- nn.init.constant_(bias.data[:, 5:], bias_init_with_prob(0.01))
-
- def forward(self, x: Tuple[Tensor, ...]) -> tuple:
- """Forward features from the upstream network.
-
- Args:
- x (tuple[Tensor]): Features from the upstream network, each is
- a 4D-tensor.
-
- Returns:
-            tuple[Tensor]: A tuple of multi-level prediction maps, each a
- 4D-tensor of shape (batch_size, 5+num_classes, height, width).
- """
-
- assert len(x) == self.num_levels
- pred_maps = []
- for i in range(self.num_levels):
- feat = x[i]
- feat = self.convs_bridge[i](feat)
- pred_map = self.convs_pred[i](feat)
- pred_maps.append(pred_map)
-
- return tuple(pred_maps),
-
- def predict_by_feat(self,
- pred_maps: Sequence[Tensor],
- batch_img_metas: Optional[List[dict]],
- cfg: OptConfigType = None,
- rescale: bool = False,
- with_nms: bool = True) -> InstanceList:
- """Transform a batch of output features extracted from the head into
- bbox results. It has been accelerated since PR #5991.
-
- Args:
- pred_maps (Sequence[Tensor]): Raw predictions for a batch of
- images.
- batch_img_metas (list[dict], Optional): Batch image meta info.
- Defaults to None.
- cfg (:obj:`ConfigDict` or dict, optional): Test / postprocessing
- configuration, if None, test_cfg would be used.
- Defaults to None.
- rescale (bool): If True, return boxes in original image space.
- Defaults to False.
- with_nms (bool): If True, do nms before return boxes.
- Defaults to True.
-
- Returns:
- list[:obj:`InstanceData`]: Object detection results of each image
- after the post process. Each item usually contains following keys.
-
- - scores (Tensor): Classification scores, has a shape
- (num_instance, )
- - labels (Tensor): Labels of bboxes, has a shape
- (num_instances, ).
- - bboxes (Tensor): Has a shape (num_instances, 4),
- the last dimension 4 arrange as (x1, y1, x2, y2).
- """
- assert len(pred_maps) == self.num_levels
- cfg = self.test_cfg if cfg is None else cfg
- cfg = copy.deepcopy(cfg)
-
- num_imgs = len(batch_img_metas)
- featmap_sizes = [pred_map.shape[-2:] for pred_map in pred_maps]
-
- mlvl_anchors = self.prior_generator.grid_priors(
- featmap_sizes, device=pred_maps[0].device)
- flatten_preds = []
- flatten_strides = []
- for pred, stride in zip(pred_maps, self.featmap_strides):
- pred = pred.permute(0, 2, 3, 1).reshape(num_imgs, -1,
- self.num_attrib)
- pred[..., :2].sigmoid_()
- flatten_preds.append(pred)
- flatten_strides.append(
- pred.new_tensor(stride).expand(pred.size(1)))
-
- flatten_preds = torch.cat(flatten_preds, dim=1)
- flatten_bbox_preds = flatten_preds[..., :4]
- flatten_objectness = flatten_preds[..., 4].sigmoid()
- flatten_cls_scores = flatten_preds[..., 5:].sigmoid()
- flatten_anchors = torch.cat(mlvl_anchors)
- flatten_strides = torch.cat(flatten_strides)
- flatten_bboxes = self.bbox_coder.decode(flatten_anchors,
- flatten_bbox_preds,
- flatten_strides.unsqueeze(-1))
- results_list = []
- for (bboxes, scores, objectness,
- img_meta) in zip(flatten_bboxes, flatten_cls_scores,
- flatten_objectness, batch_img_metas):
- # Filtering out all predictions with conf < conf_thr
- conf_thr = cfg.get('conf_thr', -1)
- if conf_thr > 0:
- conf_inds = objectness >= conf_thr
- bboxes = bboxes[conf_inds, :]
- scores = scores[conf_inds, :]
- objectness = objectness[conf_inds]
-
- score_thr = cfg.get('score_thr', 0)
- nms_pre = cfg.get('nms_pre', -1)
- scores, labels, keep_idxs, _ = filter_scores_and_topk(
- scores, score_thr, nms_pre)
-
- results = InstanceData(
- scores=scores,
- labels=labels,
- bboxes=bboxes[keep_idxs],
- score_factors=objectness[keep_idxs],
- )
- results = self._bbox_post_process(
- results=results,
- cfg=cfg,
- rescale=rescale,
- with_nms=with_nms,
- img_meta=img_meta)
- results_list.append(results)
- return results_list
-
- def loss_by_feat(
- self,
- pred_maps: Sequence[Tensor],
- batch_gt_instances: InstanceList,
- batch_img_metas: List[dict],
- batch_gt_instances_ignore: OptInstanceList = None) -> dict:
- """Calculate the loss based on the features extracted by the detection
- head.
-
- Args:
- pred_maps (list[Tensor]): Prediction map for each scale level,
- shape (N, num_anchors * num_attrib, H, W)
- batch_gt_instances (list[:obj:`InstanceData`]): Batch of
- gt_instance. It usually includes ``bboxes`` and ``labels``
- attributes.
- batch_img_metas (list[dict]): Meta information of each image, e.g.,
- image size, scaling factor, etc.
- batch_gt_instances_ignore (list[:obj:`InstanceData`], optional):
- Batch of gt_instances_ignore. It includes ``bboxes`` attribute
- data that is ignored during training and testing.
- Defaults to None.
-
- Returns:
- dict: A dictionary of loss components.
- """
- num_imgs = len(batch_img_metas)
- device = pred_maps[0][0].device
-
- featmap_sizes = [
- pred_maps[i].shape[-2:] for i in range(self.num_levels)
- ]
- mlvl_anchors = self.prior_generator.grid_priors(
- featmap_sizes, device=device)
- anchor_list = [mlvl_anchors for _ in range(num_imgs)]
-
- responsible_flag_list = []
- for img_id in range(num_imgs):
- responsible_flag_list.append(
- self.responsible_flags(featmap_sizes,
- batch_gt_instances[img_id].bboxes,
- device))
-
- target_maps_list, neg_maps_list = self.get_targets(
- anchor_list, responsible_flag_list, batch_gt_instances)
-
- losses_cls, losses_conf, losses_xy, losses_wh = multi_apply(
- self.loss_by_feat_single, pred_maps, target_maps_list,
- neg_maps_list)
-
- return dict(
- loss_cls=losses_cls,
- loss_conf=losses_conf,
- loss_xy=losses_xy,
- loss_wh=losses_wh)
-
- def loss_by_feat_single(self, pred_map: Tensor, target_map: Tensor,
- neg_map: Tensor) -> tuple:
- """Calculate the loss of a single scale level based on the features
- extracted by the detection head.
-
- Args:
- pred_map (Tensor): Raw predictions for a single level.
- target_map (Tensor): The Ground-Truth target for a single level.
- neg_map (Tensor): The negative masks for a single level.
-
- Returns:
- tuple:
- loss_cls (Tensor): Classification loss.
- loss_conf (Tensor): Confidence loss.
- loss_xy (Tensor): Regression loss of x, y coordinate.
- loss_wh (Tensor): Regression loss of w, h coordinate.
- """
-
- num_imgs = len(pred_map)
- pred_map = pred_map.permute(0, 2, 3,
- 1).reshape(num_imgs, -1, self.num_attrib)
- neg_mask = neg_map.float()
- pos_mask = target_map[..., 4]
- pos_and_neg_mask = neg_mask + pos_mask
- pos_mask = pos_mask.unsqueeze(dim=-1)
- if torch.max(pos_and_neg_mask) > 1.:
- warnings.warn('There is overlap between pos and neg sample.')
- pos_and_neg_mask = pos_and_neg_mask.clamp(min=0., max=1.)
-
- pred_xy = pred_map[..., :2]
- pred_wh = pred_map[..., 2:4]
- pred_conf = pred_map[..., 4]
- pred_label = pred_map[..., 5:]
-
- target_xy = target_map[..., :2]
- target_wh = target_map[..., 2:4]
- target_conf = target_map[..., 4]
- target_label = target_map[..., 5:]
-
- loss_cls = self.loss_cls(pred_label, target_label, weight=pos_mask)
- loss_conf = self.loss_conf(
- pred_conf, target_conf, weight=pos_and_neg_mask)
- loss_xy = self.loss_xy(pred_xy, target_xy, weight=pos_mask)
- loss_wh = self.loss_wh(pred_wh, target_wh, weight=pos_mask)
-
- return loss_cls, loss_conf, loss_xy, loss_wh
-
- def get_targets(self, anchor_list: List[List[Tensor]],
- responsible_flag_list: List[List[Tensor]],
- batch_gt_instances: List[InstanceData]) -> tuple:
- """Compute target maps for anchors in multiple images.
-
- Args:
- anchor_list (list[list[Tensor]]): Multi level anchors of each
- image. The outer list indicates images, and the inner list
- corresponds to feature levels of the image. Each element of
- the inner list is a tensor of shape (num_total_anchors, 4).
- responsible_flag_list (list[list[Tensor]]): Multi level responsible
- flags of each image. Each element is a tensor of shape
- (num_total_anchors, )
- batch_gt_instances (list[:obj:`InstanceData`]): Batch of
- gt_instance. It usually includes ``bboxes`` and ``labels``
- attributes.
-
- Returns:
- tuple: Usually returns a tuple containing learning targets.
- - target_map_list (list[Tensor]): Target map of each level.
- - neg_map_list (list[Tensor]): Negative map of each level.
- """
- num_imgs = len(anchor_list)
-
- # anchor number of multi levels
- num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]]
-
- results = multi_apply(self._get_targets_single, anchor_list,
- responsible_flag_list, batch_gt_instances)
-
- all_target_maps, all_neg_maps = results
- assert num_imgs == len(all_target_maps) == len(all_neg_maps)
- target_maps_list = images_to_levels(all_target_maps, num_level_anchors)
- neg_maps_list = images_to_levels(all_neg_maps, num_level_anchors)
-
- return target_maps_list, neg_maps_list
-
- def _get_targets_single(self, anchors: List[Tensor],
- responsible_flags: List[Tensor],
- gt_instances: InstanceData) -> tuple:
- """Generate matching bounding box prior and converted GT.
-
- Args:
- anchors (List[Tensor]): Multi-level anchors of the image.
- responsible_flags (List[Tensor]): Multi-level responsible flags of
- anchors
- gt_instances (:obj:`InstanceData`): Ground truth of instance
-            annotations. It should include ``bboxes`` and ``labels``
- attributes.
-
- Returns:
- tuple:
-                target_map (Tensor): Prediction target map of each
- scale level, shape (num_total_anchors,
- 5+num_classes)
- neg_map (Tensor): Negative map of each scale level,
- shape (num_total_anchors,)
- """
- gt_bboxes = gt_instances.bboxes
- gt_labels = gt_instances.labels
- anchor_strides = []
- for i in range(len(anchors)):
- anchor_strides.append(
- torch.tensor(self.featmap_strides[i],
- device=gt_bboxes.device).repeat(len(anchors[i])))
- concat_anchors = torch.cat(anchors)
- concat_responsible_flags = torch.cat(responsible_flags)
-
- anchor_strides = torch.cat(anchor_strides)
- assert len(anchor_strides) == len(concat_anchors) == \
- len(concat_responsible_flags)
- pred_instances = InstanceData(
- priors=concat_anchors, responsible_flags=concat_responsible_flags)
-
- assign_result = self.assigner.assign(pred_instances, gt_instances)
- sampling_result = self.sampler.sample(assign_result, pred_instances,
- gt_instances)
-
- target_map = concat_anchors.new_zeros(
- concat_anchors.size(0), self.num_attrib)
-
- target_map[sampling_result.pos_inds, :4] = self.bbox_coder.encode(
- sampling_result.pos_priors, sampling_result.pos_gt_bboxes,
- anchor_strides[sampling_result.pos_inds])
-
- target_map[sampling_result.pos_inds, 4] = 1
-
- gt_labels_one_hot = F.one_hot(
- gt_labels, num_classes=self.num_classes).float()
- if self.one_hot_smoother != 0: # label smooth
- gt_labels_one_hot = gt_labels_one_hot * (
- 1 - self.one_hot_smoother
- ) + self.one_hot_smoother / self.num_classes
- target_map[sampling_result.pos_inds, 5:] = gt_labels_one_hot[
- sampling_result.pos_assigned_gt_inds]
-
- neg_map = concat_anchors.new_zeros(
- concat_anchors.size(0), dtype=torch.uint8)
- neg_map[sampling_result.neg_inds] = 1
-
- return target_map, neg_map
-
- def responsible_flags(self, featmap_sizes: List[tuple], gt_bboxes: Tensor,
- device: str) -> List[Tensor]:
- """Generate responsible anchor flags of grid cells in multiple scales.
-
- Args:
- featmap_sizes (List[tuple]): List of feature map sizes in multiple
- feature levels.
- gt_bboxes (Tensor): Ground truth boxes, shape (n, 4).
- device (str): Device where the anchors will be put on.
-
-        Returns:
-            List[Tensor]: responsible flags of anchors in multiple levels.
- """
- assert self.num_levels == len(featmap_sizes)
- multi_level_responsible_flags = []
- for i in range(self.num_levels):
- anchor_stride = self.prior_generator.strides[i]
- feat_h, feat_w = featmap_sizes[i]
- gt_cx = ((gt_bboxes[:, 0] + gt_bboxes[:, 2]) * 0.5).to(device)
- gt_cy = ((gt_bboxes[:, 1] + gt_bboxes[:, 3]) * 0.5).to(device)
- gt_grid_x = torch.floor(gt_cx / anchor_stride[0]).long()
- gt_grid_y = torch.floor(gt_cy / anchor_stride[1]).long()
- # row major indexing
- gt_bboxes_grid_idx = gt_grid_y * feat_w + gt_grid_x
-
- responsible_grid = torch.zeros(
- feat_h * feat_w, dtype=torch.uint8, device=device)
- responsible_grid[gt_bboxes_grid_idx] = 1
-
- responsible_grid = responsible_grid[:, None].expand(
- responsible_grid.size(0),
- self.prior_generator.num_base_priors[i]).contiguous().view(-1)
-
- multi_level_responsible_flags.append(responsible_grid)
- return multi_level_responsible_flags
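
`responsible_flags` above assigns each ground-truth box to the grid cell containing its centre and then flags every anchor in that cell. Below is a minimal sketch of that centre-to-cell index arithmetic on one feature level; the sizes, strides, and boxes are made up for illustration.

```python
import torch

# One feature level: a 13x13 grid with a (w, h) stride of 32 and 3 anchors per cell
feat_h, feat_w = 13, 13
stride = (32, 32)
num_base_priors = 3

gt_bboxes = torch.tensor([[100., 60., 180., 140.],     # x1, y1, x2, y2
                          [ 10., 10.,  50.,  40.]])

# Box centres -> grid cell coordinates -> row-major flat index
gt_cx = (gt_bboxes[:, 0] + gt_bboxes[:, 2]) * 0.5
gt_cy = (gt_bboxes[:, 1] + gt_bboxes[:, 3]) * 0.5
gt_grid_x = torch.floor(gt_cx / stride[0]).long()
gt_grid_y = torch.floor(gt_cy / stride[1]).long()
flat_idx = gt_grid_y * feat_w + gt_grid_x

responsible = torch.zeros(feat_h * feat_w, dtype=torch.uint8)
responsible[flat_idx] = 1
# expand so that every anchor in a responsible cell is flagged
responsible = responsible[:, None].expand(
    feat_h * feat_w, num_base_priors).contiguous().view(-1)

print(flat_idx)            # tensor([43, 0]) for the boxes above
print(responsible.sum())   # 6 = 2 cells * 3 anchors
```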
diff --git a/spaces/Laihiujin/OneFormer/oneformer/utils/misc.py b/spaces/Laihiujin/OneFormer/oneformer/utils/misc.py
deleted file mode 100644
index f2bca7733278c3a4b1f145bd7e5da23683b74961..0000000000000000000000000000000000000000
--- a/spaces/Laihiujin/OneFormer/oneformer/utils/misc.py
+++ /dev/null
@@ -1,197 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# Modified by Bowen Cheng from https://github.com/facebookresearch/detr/blob/master/util/misc.py
-"""
-Misc functions, including distributed helpers.
-
-Mostly copy-paste from torchvision references.
-"""
-from typing import List, Optional
-
-import torch
-import torch.distributed as dist
-import torchvision
-from torch import Tensor
-import warnings
-import torch.nn.functional as F
-import math
-
-def inverse_sigmoid(x, eps=1e-3):
- x = x.clamp(min=0, max=1)
- x1 = x.clamp(min=eps)
- x2 = (1 - x).clamp(min=eps)
- return torch.log(x1/x2)
-
-def _no_grad_trunc_normal_(tensor, mean, std, a, b):
- # Cut & paste from PyTorch official master until it's in a few official releases - RW
- # Method based on https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf
- def norm_cdf(x):
- # Computes standard normal cumulative distribution function
- return (1. + math.erf(x / math.sqrt(2.))) / 2.
-
- if (mean < a - 2 * std) or (mean > b + 2 * std):
- warnings.warn("mean is more than 2 std from [a, b] in nn.init.trunc_normal_. "
- "The distribution of values may be incorrect.",
- stacklevel=2)
-
- with torch.no_grad():
- # Values are generated by using a truncated uniform distribution and
- # then using the inverse CDF for the normal distribution.
- # Get upper and lower cdf values
- l = norm_cdf((a - mean) / std)
- u = norm_cdf((b - mean) / std)
-
- # Uniformly fill tensor with values from [l, u], then translate to
- # [2l-1, 2u-1].
- tensor.uniform_(2 * l - 1, 2 * u - 1)
-
- # Use inverse cdf transform for normal distribution to get truncated
- # standard normal
- tensor.erfinv_()
-
- # Transform to proper mean, std
- tensor.mul_(std * math.sqrt(2.))
- tensor.add_(mean)
-
- # Clamp to ensure it's in the proper range
- tensor.clamp_(min=a, max=b)
- return tensor
-
-def trunc_normal_(tensor, mean=0., std=1., a=-2., b=2.):
- # type: (Tensor, float, float, float, float) -> Tensor
- r"""Fills the input Tensor with values drawn from a truncated
- normal distribution. The values are effectively drawn from the
- normal distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)`
- with values outside :math:`[a, b]` redrawn until they are within
- the bounds. The method used for generating the random values works
- best when :math:`a \leq \text{mean} \leq b`.
- Args:
- tensor: an n-dimensional `torch.Tensor`
- mean: the mean of the normal distribution
- std: the standard deviation of the normal distribution
- a: the minimum cutoff value
- b: the maximum cutoff value
- Examples:
- >>> w = torch.empty(3, 5)
- >>> nn.init.trunc_normal_(w)
- """
- return _no_grad_trunc_normal_(tensor, mean, std, a, b)
-
-def resize(input,
- size=None,
- scale_factor=None,
- mode='nearest',
- align_corners=None,
- warning=True):
- if warning:
- if size is not None and align_corners:
- input_h, input_w = tuple(int(x) for x in input.shape[2:])
- output_h, output_w = tuple(int(x) for x in size)
-            if output_h > input_h or output_w > input_w:
- if ((output_h > 1 and output_w > 1 and input_h > 1
- and input_w > 1) and (output_h - 1) % (input_h - 1)
- and (output_w - 1) % (input_w - 1)):
- warnings.warn(
- f'When align_corners={align_corners}, '
-                        'the output would be more aligned if '
- f'input size {(input_h, input_w)} is `x+1` and '
- f'out size {(output_h, output_w)} is `nx+1`')
- if isinstance(size, torch.Size):
- size = tuple(int(x) for x in size)
- return F.interpolate(input, size, scale_factor, mode, align_corners)
-
-def _max_by_axis(the_list):
- # type: (List[List[int]]) -> List[int]
- maxes = the_list[0]
- for sublist in the_list[1:]:
- for index, item in enumerate(sublist):
- maxes[index] = max(maxes[index], item)
- return maxes
-
-
-class NestedTensor(object):
- def __init__(self, tensors, mask: Optional[Tensor]):
- self.tensors = tensors
- self.mask = mask
-
- def to(self, device):
- # type: (Device) -> NestedTensor # noqa
- cast_tensor = self.tensors.to(device)
- mask = self.mask
- if mask is not None:
- assert mask is not None
- cast_mask = mask.to(device)
- else:
- cast_mask = None
- return NestedTensor(cast_tensor, cast_mask)
-
- def decompose(self):
- return self.tensors, self.mask
-
- def __repr__(self):
- return str(self.tensors)
-
-
-def nested_tensor_from_tensor_list(tensor_list: List[Tensor]):
- # TODO make this more general
- if tensor_list[0].ndim == 3:
- if torchvision._is_tracing():
- # nested_tensor_from_tensor_list() does not export well to ONNX
- # call _onnx_nested_tensor_from_tensor_list() instead
- return _onnx_nested_tensor_from_tensor_list(tensor_list)
-
- # TODO make it support different-sized images
- max_size = _max_by_axis([list(img.shape) for img in tensor_list])
- # min_size = tuple(min(s) for s in zip(*[img.shape for img in tensor_list]))
- batch_shape = [len(tensor_list)] + max_size
- b, c, h, w = batch_shape
- dtype = tensor_list[0].dtype
- device = tensor_list[0].device
- tensor = torch.zeros(batch_shape, dtype=dtype, device=device)
- mask = torch.ones((b, h, w), dtype=torch.bool, device=device)
- for img, pad_img, m in zip(tensor_list, tensor, mask):
- pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img)
- m[: img.shape[1], : img.shape[2]] = False
- else:
- raise ValueError("not supported")
- return NestedTensor(tensor, mask)
-
-
-# _onnx_nested_tensor_from_tensor_list() is an implementation of
-# nested_tensor_from_tensor_list() that is supported by ONNX tracing.
-@torch.jit.unused
-def _onnx_nested_tensor_from_tensor_list(tensor_list: List[Tensor]) -> NestedTensor:
- max_size = []
- for i in range(tensor_list[0].dim()):
- max_size_i = torch.max(
- torch.stack([img.shape[i] for img in tensor_list]).to(torch.float32)
- ).to(torch.int64)
- max_size.append(max_size_i)
- max_size = tuple(max_size)
-
- # work around for
- # pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img)
- # m[: img.shape[1], :img.shape[2]] = False
- # which is not yet supported in onnx
- padded_imgs = []
- padded_masks = []
- for img in tensor_list:
- padding = [(s1 - s2) for s1, s2 in zip(max_size, tuple(img.shape))]
- padded_img = torch.nn.functional.pad(img, (0, padding[2], 0, padding[1], 0, padding[0]))
- padded_imgs.append(padded_img)
-
- m = torch.zeros_like(img[0], dtype=torch.int, device=img.device)
- padded_mask = torch.nn.functional.pad(m, (0, padding[2], 0, padding[1]), "constant", 1)
- padded_masks.append(padded_mask.to(torch.bool))
-
- tensor = torch.stack(padded_imgs)
- mask = torch.stack(padded_masks)
-
- return NestedTensor(tensor, mask=mask)
-
-
-def is_dist_avail_and_initialized():
- if not dist.is_available():
- return False
- if not dist.is_initialized():
- return False
- return True
diff --git a/spaces/MCkernick/Image_Restoration_Colorization/Face_Enhancement/models/networks/sync_batchnorm/replicate.py b/spaces/MCkernick/Image_Restoration_Colorization/Face_Enhancement/models/networks/sync_batchnorm/replicate.py
deleted file mode 100644
index b71c7b8ed51a1d6c55b1f753bdd8d90bad79bd06..0000000000000000000000000000000000000000
--- a/spaces/MCkernick/Image_Restoration_Colorization/Face_Enhancement/models/networks/sync_batchnorm/replicate.py
+++ /dev/null
@@ -1,94 +0,0 @@
-# -*- coding: utf-8 -*-
-# File : replicate.py
-# Author : Jiayuan Mao
-# Email : maojiayuan@gmail.com
-# Date : 27/01/2018
-#
-# This file is part of Synchronized-BatchNorm-PyTorch.
-# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
-# Distributed under MIT License.
-
-import functools
-
-from torch.nn.parallel.data_parallel import DataParallel
-
-__all__ = [
- 'CallbackContext',
- 'execute_replication_callbacks',
- 'DataParallelWithCallback',
- 'patch_replication_callback'
-]
-
-
-class CallbackContext(object):
- pass
-
-
-def execute_replication_callbacks(modules):
- """
- Execute a replication callback `__data_parallel_replicate__` on each module created by the original replication.
-
- The callback will be invoked with arguments `__data_parallel_replicate__(ctx, copy_id)`
-
- Note that, as all modules are isomorphic, we assign each sub-module a context
- (shared among multiple copies of this module on different devices).
- Through this context, different copies can share some information.
-
- We guarantee that the callback on the master copy (the first copy) will be invoked before the callbacks
- of any slave copies.
- """
- master_copy = modules[0]
- nr_modules = len(list(master_copy.modules()))
- ctxs = [CallbackContext() for _ in range(nr_modules)]
-
- for i, module in enumerate(modules):
- for j, m in enumerate(module.modules()):
- if hasattr(m, '__data_parallel_replicate__'):
- m.__data_parallel_replicate__(ctxs[j], i)
-
-
-class DataParallelWithCallback(DataParallel):
- """
- Data Parallel with a replication callback.
-
- A replication callback `__data_parallel_replicate__` will be invoked on each module after it is created by the
- original `replicate` function.
- The callback will be invoked with arguments `__data_parallel_replicate__(ctx, copy_id)`
-
- Examples:
- > sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False)
- > sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1])
- # sync_bn.__data_parallel_replicate__ will be invoked.
- """
-
- def replicate(self, module, device_ids):
- modules = super(DataParallelWithCallback, self).replicate(module, device_ids)
- execute_replication_callbacks(modules)
- return modules
-
-
-def patch_replication_callback(data_parallel):
- """
- Monkey-patch an existing `DataParallel` object. Add the replication callback.
- Useful when you have a customized `DataParallel` implementation.
-
- Examples:
- > sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False)
- > sync_bn = DataParallel(sync_bn, device_ids=[0, 1])
- > patch_replication_callback(sync_bn)
- # this is equivalent to
- > sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False)
- > sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1])
- """
-
- assert isinstance(data_parallel, DataParallel)
-
- old_replicate = data_parallel.replicate
-
- @functools.wraps(old_replicate)
- def new_replicate(module, device_ids):
- modules = old_replicate(module, device_ids)
- execute_replication_callbacks(modules)
- return modules
-
- data_parallel.replicate = new_replicate
diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/GroundingDINO/groundingdino/__init__.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/GroundingDINO/groundingdino/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Makiing/coolb-in-gtest/src/components/user-menu.tsx b/spaces/Makiing/coolb-in-gtest/src/components/user-menu.tsx
deleted file mode 100644
index 9bd1edc9cf9f39b63629b021f0c1186b1a7c1341..0000000000000000000000000000000000000000
--- a/spaces/Makiing/coolb-in-gtest/src/components/user-menu.tsx
+++ /dev/null
@@ -1,113 +0,0 @@
-'use client'
-
-import { useEffect, useState } from 'react'
-import Image from 'next/image'
-import { toast } from 'react-hot-toast'
-import { Button } from '@/components/ui/button'
-import pkg from '../../package.json'
-import {
- DropdownMenu,
- DropdownMenuContent,
- DropdownMenuItem,
- DropdownMenuSeparator,
- DropdownMenuTrigger
-} from '@/components/ui/dropdown-menu'
-import { IconCopy, IconExternalLink, IconGitHub } from '@/components/ui/icons'
-import SettingIcon from '@/assets/images/settings.svg'
-import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard'
-
-export function UserMenu() {
- const [host, setHost] = useState('')
- const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 })
- useEffect(() => {
- setHost(location.host)
- }, [])
-
- useEffect(() => {
- if (isCopied) {
- toast.success('复制成功')
- }
- }, [isCopied])
- return (
-
- A library to train large neural networks across the internet. Imagine training one huge transformer on thousands of computers from universities, companies, and volunteers.
-
-
- With transfer learning, these large models can harness nearly unlimited raw data to improve performance on academic benchmarks and to solve new, unexpected tasks.
-
-
-
- That said, training large neural networks isn't cheap. The hardware used for the previous largest language model costs over $25 million. A single training run for GPT-3 will set you back at least $4.6M in cloud GPUs. As a result, researchers can't contribute to state-of-the-art deep learning models and practitioners can't build applications without being supported by a megacorporation. If we want the future of AI to be bright, it can't be private.
-
-
-
-
-
-
-
- What is hivemind?
-
-
- Hivemind is a library for decentralized training of large neural networks. In a nutshell, you want to train a neural network, but all you have is a bunch of enthusiasts with unreliable computers that communicate over the internet. Any peer may fail or leave at any time, but the training must go on. To meet this objective, hivemind models use a specialized layer type: the Decentralized Mixture of Experts (DMoE). Here's how it works:
-
-
-
-
-
-
-
-
- In a hivemind experiment, all peers:
-
-
- host one or more experts depending on their hardware;
-
- run asynchronous training, calling experts from other peers,
-
- form a Distributed Hash Table to discover each other's experts
-
- - the same type of protocol that powers BitTorrent file sharing.
-
-
-
- Hivemind uses a Kademlia-based DHT that can scale to tens of thousands of peers with logarithmic search complexity.
-
-
-
-
-
-
-
- On each forward pass, a peer first determines what "speciality" of experts is needed to process the current inputs using a small "gating function" module. Then it finds k (e.g. 4) most suitable experts from other peers in the network using the DHT protocol. Finally, it sends forward pass requests to the selected experts, collects their outputs and averages them for the final prediction. Compared to traditional architectures, the Mixture-of-Experts needs much less bandwidth as every input is only sent to a small fraction of all experts.
-
-
-
-
-
- More importantly, the decentralized Mixture-of-Experts layers are inherently fault-tolerant: if some of the chosen experts fail to respond, the model will simply average the remaining ones and call that dropout. In the event that all k experts fail simultaneously, a peer will backtrack and find another k experts across the DHT. Finally, since every input is likely to be processed by different experts, hivemind peers run several asynchronous training batches to better utilize their hardware.
-
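To make the forward pass described above concrete, here is a minimal, runnable sketch of a DMoE-style layer. It is not the actual hivemind API: `Expert`, `ToyDHT`, `dmoe_forward` and `make_toy_expert` are hypothetical stand-ins, the DHT is a plain local dictionary, and the "remote" experts are ordinary callables. It only illustrates the steps from the two paragraphs above: gating, top-k expert lookup, dropping peers that fail to respond, and averaging the surviving outputs.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

import torch


@dataclass
class Expert:
    """A stand-in for a remote expert reachable over the network; call() may fail."""
    uid: str
    call: Callable[[torch.Tensor], Optional[torch.Tensor]]


class ToyDHT:
    """Stand-in for the Kademlia-based DHT: maps expert uids to handles."""
    def __init__(self, experts: Dict[str, Expert]):
        self.experts = experts

    def find(self, uids: List[str]) -> List[Expert]:
        return [self.experts[uid] for uid in uids if uid in self.experts]


def make_toy_expert(uid: str, scale: float, fail: bool = False) -> Expert:
    """Create a local stand-in for a remote expert; fail=True simulates a dropped peer."""
    return Expert(uid, lambda t: None if fail else t * scale)


def dmoe_forward(x: torch.Tensor, gate: torch.nn.Linear, dht: ToyDHT, k: int = 4) -> torch.Tensor:
    # 1) a small gating function scores every expert "speciality"
    scores = gate(x).softmax(dim=-1)
    top_ids = scores.topk(k).indices.tolist()          # k most suitable experts
    # 2) discover the chosen experts through the DHT
    chosen = dht.find([f"expert.{i}" for i in top_ids])
    # 3) call them, silently dropping peers that failed to respond
    outputs = [out for e in chosen if (out := e.call(x)) is not None]
    if not outputs:
        raise RuntimeError("all selected experts failed; a real peer would retry via the DHT")
    # 4) average the surviving responses for the final prediction
    return torch.stack(outputs).mean(dim=0)


if __name__ == "__main__":
    torch.manual_seed(0)
    num_experts, dim = 8, 16
    experts = {
        f"expert.{i}": make_toy_expert(f"expert.{i}", i + 1.0, fail=(i == 3))
        for i in range(num_experts)
    }
    out = dmoe_forward(torch.randn(dim), torch.nn.Linear(dim, num_experts), ToyDHT(experts))
    print(out.shape)  # torch.Size([16])
```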
-
-
-
-
-
-
-
- What is hivemind for?
-
-
-
-
- Hivemind is designed for you to:
-
-
- run crowdsourced deep learning using compute from volunteers or decentralized participants;
-
- train neural networks on multiple servers with varying compute, bandwidth and reliability;
-
- [to be announced] join a worldwide open deep learning experiment.
-
-
-
- Conversely, here's what it isn't for:
-
-
- splitting your model between 2-3 servers that you fully control: use torch.distributed.rpc;
-
- distributed training for a reliable, uniform and highly connected cluster: use DeepSpeed;
-
- training small models (more specifically, models that fit into a single worker's memory) with a dynamically allocated pool of in-house workers: use torch.elastic.
-
-
-
-
-
- Hivemind v0.8 is in the early alpha stage: the core functionality to train
- decentralized models is there, but the interface is still in active development.
- If you want to try hivemind for yourself or contribute to its development,
- take a look at the quickstart tutorial.
- Feel free to contact us on github with any questions, feedback and issues.
-
-
-
-
diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/tortoise/diffusion_decoder.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/tortoise/diffusion_decoder.py
deleted file mode 100644
index 0d3cf7698a7334b4cfc8d9bdd0f5f6ee3059189d..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/tortoise/diffusion_decoder.py
+++ /dev/null
@@ -1,415 +0,0 @@
-import math
-import random
-from abc import abstractmethod
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch import autocast
-
-from TTS.tts.layers.tortoise.arch_utils import AttentionBlock, normalization
-
-
-def is_latent(t):
- return t.dtype == torch.float
-
-
-def is_sequence(t):
- return t.dtype == torch.long
-
-
-def timestep_embedding(timesteps, dim, max_period=10000):
- """
- Create sinusoidal timestep embeddings.
-
- :param timesteps: a 1-D Tensor of N indices, one per batch element.
- These may be fractional.
- :param dim: the dimension of the output.
- :param max_period: controls the minimum frequency of the embeddings.
- :return: an [N x dim] Tensor of positional embeddings.
- """
- half = dim // 2
- freqs = torch.exp(-math.log(max_period) * torch.arange(start=0, end=half, dtype=torch.float32) / half).to(
- device=timesteps.device
- )
- args = timesteps[:, None].float() * freqs[None]
- embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1)
- if dim % 2:
- embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :1])], dim=-1)
- return embedding
-
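For reference, a minimal usage sketch of the helper above (the timestep values and dimension are illustrative, not taken from the original training setup):

```python
import torch

ts = torch.tensor([10, 500])            # 1-D batch of timestep indices
emb = timestep_embedding(ts, dim=256)   # sinusoidal embeddings, shape [2, 256]
print(emb.shape)
```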
-
-class TimestepBlock(nn.Module):
- @abstractmethod
- def forward(self, x, emb):
- """
- Apply the module to `x` given `emb` timestep embeddings.
- """
-
-
-class TimestepEmbedSequential(nn.Sequential, TimestepBlock):
- def forward(self, x, emb):
- for layer in self:
- if isinstance(layer, TimestepBlock):
- x = layer(x, emb)
- else:
- x = layer(x)
- return x
-
-
-class ResBlock(TimestepBlock):
- def __init__(
- self,
- channels,
- emb_channels,
- dropout,
- out_channels=None,
- dims=2,
- kernel_size=3,
- efficient_config=True,
- use_scale_shift_norm=False,
- ):
- super().__init__()
- self.channels = channels
- self.emb_channels = emb_channels
- self.dropout = dropout
- self.out_channels = out_channels or channels
- self.use_scale_shift_norm = use_scale_shift_norm
- padding = {1: 0, 3: 1, 5: 2}[kernel_size]
- eff_kernel = 1 if efficient_config else 3
- eff_padding = 0 if efficient_config else 1
-
- self.in_layers = nn.Sequential(
- normalization(channels),
- nn.SiLU(),
- nn.Conv1d(channels, self.out_channels, eff_kernel, padding=eff_padding),
- )
-
- self.emb_layers = nn.Sequential(
- nn.SiLU(),
- nn.Linear(
- emb_channels,
- 2 * self.out_channels if use_scale_shift_norm else self.out_channels,
- ),
- )
- self.out_layers = nn.Sequential(
- normalization(self.out_channels),
- nn.SiLU(),
- nn.Dropout(p=dropout),
- nn.Conv1d(self.out_channels, self.out_channels, kernel_size, padding=padding),
- )
-
- if self.out_channels == channels:
- self.skip_connection = nn.Identity()
- else:
- self.skip_connection = nn.Conv1d(channels, self.out_channels, eff_kernel, padding=eff_padding)
-
- def forward(self, x, emb):
- h = self.in_layers(x)
- emb_out = self.emb_layers(emb).type(h.dtype)
- while len(emb_out.shape) < len(h.shape):
- emb_out = emb_out[..., None]
- if self.use_scale_shift_norm:
- out_norm, out_rest = self.out_layers[0], self.out_layers[1:]
- scale, shift = torch.chunk(emb_out, 2, dim=1)
- h = out_norm(h) * (1 + scale) + shift
- h = out_rest(h)
- else:
- h = h + emb_out
- h = self.out_layers(h)
- return self.skip_connection(x) + h
-
-
-class DiffusionLayer(TimestepBlock):
- def __init__(self, model_channels, dropout, num_heads):
- super().__init__()
- self.resblk = ResBlock(
- model_channels,
- model_channels,
- dropout,
- model_channels,
- dims=1,
- use_scale_shift_norm=True,
- )
- self.attn = AttentionBlock(model_channels, num_heads, relative_pos_embeddings=True)
-
- def forward(self, x, time_emb):
- y = self.resblk(x, time_emb)
- return self.attn(y)
-
-
-class DiffusionTts(nn.Module):
- def __init__(
- self,
- model_channels=512,
- num_layers=8,
- in_channels=100,
- in_latent_channels=512,
- in_tokens=8193,
- out_channels=200, # mean and variance
- dropout=0,
- use_fp16=False,
- num_heads=16,
- # Parameters for regularization.
- layer_drop=0.1,
- unconditioned_percentage=0.1, # This implements a mechanism similar to what is used in classifier-free training.
- ):
- super().__init__()
-
- self.in_channels = in_channels
- self.model_channels = model_channels
- self.out_channels = out_channels
- self.dropout = dropout
- self.num_heads = num_heads
- self.unconditioned_percentage = unconditioned_percentage
- self.enable_fp16 = use_fp16
- self.layer_drop = layer_drop
-
- self.inp_block = nn.Conv1d(in_channels, model_channels, 3, 1, 1)
- self.time_embed = nn.Sequential(
- nn.Linear(model_channels, model_channels),
- nn.SiLU(),
- nn.Linear(model_channels, model_channels),
- )
-
- # Either code_converter or latent_converter is used, depending on what type of conditioning data is fed.
- # This model is meant to be able to be trained on both for efficiency purposes - it is far less computationally
- # complex to generate tokens, while generating latents will normally mean propagating through a deep autoregressive
- # transformer network.
- self.code_embedding = nn.Embedding(in_tokens, model_channels)
- self.code_converter = nn.Sequential(
- AttentionBlock(model_channels, num_heads, relative_pos_embeddings=True),
- AttentionBlock(model_channels, num_heads, relative_pos_embeddings=True),
- AttentionBlock(model_channels, num_heads, relative_pos_embeddings=True),
- )
- self.code_norm = normalization(model_channels)
- self.latent_conditioner = nn.Sequential(
- nn.Conv1d(in_latent_channels, model_channels, 3, padding=1),
- AttentionBlock(model_channels, num_heads, relative_pos_embeddings=True),
- AttentionBlock(model_channels, num_heads, relative_pos_embeddings=True),
- AttentionBlock(model_channels, num_heads, relative_pos_embeddings=True),
- AttentionBlock(model_channels, num_heads, relative_pos_embeddings=True),
- )
- self.contextual_embedder = nn.Sequential(
- nn.Conv1d(in_channels, model_channels, 3, padding=1, stride=2),
- nn.Conv1d(model_channels, model_channels * 2, 3, padding=1, stride=2),
- AttentionBlock(
- model_channels * 2,
- num_heads,
- relative_pos_embeddings=True,
- do_checkpoint=False,
- ),
- AttentionBlock(
- model_channels * 2,
- num_heads,
- relative_pos_embeddings=True,
- do_checkpoint=False,
- ),
- AttentionBlock(
- model_channels * 2,
- num_heads,
- relative_pos_embeddings=True,
- do_checkpoint=False,
- ),
- AttentionBlock(
- model_channels * 2,
- num_heads,
- relative_pos_embeddings=True,
- do_checkpoint=False,
- ),
- AttentionBlock(
- model_channels * 2,
- num_heads,
- relative_pos_embeddings=True,
- do_checkpoint=False,
- ),
- )
- self.unconditioned_embedding = nn.Parameter(torch.randn(1, model_channels, 1))
- self.conditioning_timestep_integrator = TimestepEmbedSequential(
- DiffusionLayer(model_channels, dropout, num_heads),
- DiffusionLayer(model_channels, dropout, num_heads),
- DiffusionLayer(model_channels, dropout, num_heads),
- )
-
- self.integrating_conv = nn.Conv1d(model_channels * 2, model_channels, kernel_size=1)
- self.mel_head = nn.Conv1d(model_channels, in_channels, kernel_size=3, padding=1)
-
- self.layers = nn.ModuleList(
- [DiffusionLayer(model_channels, dropout, num_heads) for _ in range(num_layers)]
- + [
- ResBlock(
- model_channels,
- model_channels,
- dropout,
- dims=1,
- use_scale_shift_norm=True,
- )
- for _ in range(3)
- ]
- )
-
- self.out = nn.Sequential(
- normalization(model_channels),
- nn.SiLU(),
- nn.Conv1d(model_channels, out_channels, 3, padding=1),
- )
-
- def get_grad_norm_parameter_groups(self):
- groups = {
- "minicoder": list(self.contextual_embedder.parameters()),
- "layers": list(self.layers.parameters()),
- "code_converters": list(self.code_embedding.parameters())
- + list(self.code_converter.parameters())
- + list(self.latent_conditioner.parameters()),
- "timestep_integrator": list(self.conditioning_timestep_integrator.parameters())
- + list(self.integrating_conv.parameters()),
- "time_embed": list(self.time_embed.parameters()),
- }
- return groups
-
- def get_conditioning(self, conditioning_input):
- speech_conditioning_input = (
- conditioning_input.unsqueeze(1) if len(conditioning_input.shape) == 3 else conditioning_input
- )
- conds = []
- for j in range(speech_conditioning_input.shape[1]):
- conds.append(self.contextual_embedder(speech_conditioning_input[:, j]))
- conds = torch.cat(conds, dim=-1)
- conds = conds.mean(dim=-1)
- return conds
-
- def timestep_independent(
- self,
- aligned_conditioning,
- conditioning_latent,
- expected_seq_len,
- return_code_pred,
- ):
- # Shuffle aligned_latent to BxCxS format
- if is_latent(aligned_conditioning):
- aligned_conditioning = aligned_conditioning.permute(0, 2, 1)
-
- cond_scale, cond_shift = torch.chunk(conditioning_latent, 2, dim=1)
- if is_latent(aligned_conditioning):
- code_emb = self.latent_conditioner(aligned_conditioning)
- else:
- code_emb = self.code_embedding(aligned_conditioning).permute(0, 2, 1)
- code_emb = self.code_converter(code_emb)
- code_emb = self.code_norm(code_emb) * (1 + cond_scale.unsqueeze(-1)) + cond_shift.unsqueeze(-1)
-
- unconditioned_batches = torch.zeros((code_emb.shape[0], 1, 1), device=code_emb.device)
- # Mask out the conditioning branch for whole batch elements, implementing something similar to classifier-free guidance.
- if self.training and self.unconditioned_percentage > 0:
- unconditioned_batches = (
- torch.rand((code_emb.shape[0], 1, 1), device=code_emb.device) < self.unconditioned_percentage
- )
- code_emb = torch.where(
- unconditioned_batches,
- self.unconditioned_embedding.repeat(aligned_conditioning.shape[0], 1, 1),
- code_emb,
- )
- expanded_code_emb = F.interpolate(code_emb, size=expected_seq_len, mode="nearest")
-
- if not return_code_pred:
- return expanded_code_emb
- else:
- mel_pred = self.mel_head(expanded_code_emb)
- # Multiply mel_pred by !unconditioned_branches, which drops the gradient on unconditioned branches. This is because we don't want that gradient being used to train parameters through the codes_embedder as it unbalances contributions to that network from the MSE loss.
- mel_pred = mel_pred * unconditioned_batches.logical_not()
- return expanded_code_emb, mel_pred
-
- def forward(
- self,
- x,
- timesteps,
- aligned_conditioning=None,
- conditioning_latent=None,
- precomputed_aligned_embeddings=None,
- conditioning_free=False,
- return_code_pred=False,
- ):
- """
- Apply the model to an input batch.
-
- :param x: an [N x C x ...] Tensor of inputs.
- :param timesteps: a 1-D batch of timesteps.
- :param aligned_conditioning: an aligned latent or sequence of tokens providing useful data about the sample to be produced.
- :param conditioning_latent: a pre-computed conditioning latent; see get_conditioning().
- :param precomputed_aligned_embeddings: Embeddings returned from self.timestep_independent()
- :param conditioning_free: When set, all conditioning inputs (including tokens and conditioning_input) will not be considered.
- :return: an [N x C x ...] Tensor of outputs.
- """
- assert precomputed_aligned_embeddings is not None or (
- aligned_conditioning is not None and conditioning_latent is not None
- )
- assert not (
- return_code_pred and precomputed_aligned_embeddings is not None
- ) # These two are mutually exclusive.
-
- unused_params = []
- if conditioning_free:
- code_emb = self.unconditioned_embedding.repeat(x.shape[0], 1, x.shape[-1])
- unused_params.extend(list(self.code_converter.parameters()) + list(self.code_embedding.parameters()))
- unused_params.extend(list(self.latent_conditioner.parameters()))
- else:
- if precomputed_aligned_embeddings is not None:
- code_emb = precomputed_aligned_embeddings
- else:
- code_emb, mel_pred = self.timestep_independent(
- aligned_conditioning, conditioning_latent, x.shape[-1], True
- )
- if is_latent(aligned_conditioning):
- unused_params.extend(
- list(self.code_converter.parameters()) + list(self.code_embedding.parameters())
- )
- else:
- unused_params.extend(list(self.latent_conditioner.parameters()))
-
- unused_params.append(self.unconditioned_embedding)
-
- time_emb = self.time_embed(timestep_embedding(timesteps, self.model_channels))
- code_emb = self.conditioning_timestep_integrator(code_emb, time_emb)
- x = self.inp_block(x)
- x = torch.cat([x, code_emb], dim=1)
- x = self.integrating_conv(x)
- for i, lyr in enumerate(self.layers):
- # Do layer drop where applicable. Do not drop first and last layers.
- if (
- self.training
- and self.layer_drop > 0
- and i != 0
- and i != (len(self.layers) - 1)
- and random.random() < self.layer_drop
- ):
- unused_params.extend(list(lyr.parameters()))
- else:
- # First and last blocks will have autocast disabled for improved precision.
- with autocast(x.device.type, enabled=self.enable_fp16 and i != 0):
- x = lyr(x, time_emb)
-
- x = x.float()
- out = self.out(x)
-
- # Involve probabilistic or possibly unused parameters in loss so we don't get DDP errors.
- extraneous_addition = 0
- for p in unused_params:
- extraneous_addition = extraneous_addition + p.mean()
- out = out + extraneous_addition * 0
-
- if return_code_pred:
- return out, mel_pred
- return out
-
-
-if __name__ == "__main__":
- clip = torch.randn(2, 100, 400)
- aligned_latent = torch.randn(2, 388, 512)
- aligned_sequence = torch.randint(0, 8192, (2, 100))
- cond = torch.randn(2, 100, 400)
- ts = torch.LongTensor([600, 600])
- model = DiffusionTts(512, layer_drop=0.3, unconditioned_percentage=0.5)
- # Test with latent aligned conditioning
- # o = model(clip, ts, aligned_latent, cond)
- # Test with sequence aligned conditioning
- o = model(clip, ts, aligned_sequence, cond)
diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/utils/downloaders.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/utils/downloaders.py
deleted file mode 100644
index 104dc7b94e17b1d7f828103d2396d6c5115b628a..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/TTS/TTS/utils/downloaders.py
+++ /dev/null
@@ -1,126 +0,0 @@
-import os
-from typing import Optional
-
-from TTS.utils.download import download_kaggle_dataset, download_url, extract_archive
-
-
-def download_ljspeech(path: str):
- """Download and extract LJSpeech dataset
-
- Args:
- path (str): path to the directory where the dataset will be stored.
- """
- os.makedirs(path, exist_ok=True)
- url = "https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2"
- download_url(url, path)
- basename = os.path.basename(url)
- archive = os.path.join(path, basename)
- print(" > Extracting archive file...")
- extract_archive(archive)
-
-
-def download_vctk(path: str, use_kaggle: Optional[bool] = False):
- """Download and extract VCTK dataset.
-
- Args:
- path (str): path to the directory where the dataset will be stored.
-
- use_kaggle (bool, optional): Downloads vctk dataset from kaggle. Is generally faster. Defaults to False.
- """
- if use_kaggle:
- download_kaggle_dataset("mfekadu/english-multispeaker-corpus-for-voice-cloning", "VCTK", path)
- else:
- os.makedirs(path, exist_ok=True)
- url = "https://datashare.ed.ac.uk/bitstream/handle/10283/3443/VCTK-Corpus-0.92.zip"
- download_url(url, path)
- basename = os.path.basename(url)
- archive = os.path.join(path, basename)
- print(" > Extracting archive file...")
- extract_archive(archive)
-
-
-def download_tweb(path: str):
- """Download and extract Tweb dataset
-
- Args:
- path (str): Path to the directory where the dataset will be stored.
- """
- download_kaggle_dataset("bryanpark/the-world-english-bible-speech-dataset", "TWEB", path)
-
-
-def download_libri_tts(path: str, subset: Optional[str] = "all"):
- """Download and extract libri tts dataset.
-
- Args:
- path (str): Path to the directory where the dataset will be stored.
-
- subset (str, optional): Name of the subset to download. If you only want to download a certain
- portion specify it here. Defaults to 'all'.
- """
-
- subset_dict = {
- "libri-tts-clean-100": "http://www.openslr.org/resources/60/train-clean-100.tar.gz",
- "libri-tts-clean-360": "http://www.openslr.org/resources/60/train-clean-360.tar.gz",
- "libri-tts-other-500": "http://www.openslr.org/resources/60/train-other-500.tar.gz",
- "libri-tts-dev-clean": "http://www.openslr.org/resources/60/dev-clean.tar.gz",
- "libri-tts-dev-other": "http://www.openslr.org/resources/60/dev-other.tar.gz",
- "libri-tts-test-clean": "http://www.openslr.org/resources/60/test-clean.tar.gz",
- "libri-tts-test-other": "http://www.openslr.org/resources/60/test-other.tar.gz",
- }
-
- os.makedirs(path, exist_ok=True)
- if subset == "all":
- for sub, val in subset_dict.items():
- print(f" > Downloading {sub}...")
- download_url(val, path)
- basename = os.path.basename(val)
- archive = os.path.join(path, basename)
- print(" > Extracting archive file...")
- extract_archive(archive)
- print(" > All subsets downloaded")
- else:
- url = subset_dict[subset]
- download_url(url, path)
- basename = os.path.basename(url)
- archive = os.path.join(path, basename)
- print(" > Extracting archive file...")
- extract_archive(archive)
-
-
-def download_thorsten_de(path: str):
- """Download and extract Thorsten german male voice dataset.
-
- Args:
- path (str): Path to the directory where the dataset will be stored.
- """
- os.makedirs(path, exist_ok=True)
- url = "https://www.openslr.org/resources/95/thorsten-de_v02.tgz"
- download_url(url, path)
- basename = os.path.basename(url)
- archive = os.path.join(path, basename)
- print(" > Extracting archive file...")
- extract_archive(archive)
-
-
-def download_mailabs(path: str, language: str = "english"):
- """Download and extract Mailabs dataset.
-
- Args:
- path (str): Path to the directory where the dataset will be stored.
-
- language (str): Language subset to download. Defaults to english.
- """
- language_dict = {
- "english": "https://data.solak.de/data/Training/stt_tts/en_US.tgz",
- "german": "https://data.solak.de/data/Training/stt_tts/de_DE.tgz",
- "french": "https://data.solak.de/data/Training/stt_tts/fr_FR.tgz",
- "italian": "https://data.solak.de/data/Training/stt_tts/it_IT.tgz",
- "spanish": "https://data.solak.de/data/Training/stt_tts/es_ES.tgz",
- }
- os.makedirs(path, exist_ok=True)
- url = language_dict[language]
- download_url(url, path)
- basename = os.path.basename(url)
- archive = os.path.join(path, basename)
- print(" > Extracting archive file...")
- extract_archive(archive)
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/adodbapi/test/is64bit.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/adodbapi/test/is64bit.py
deleted file mode 100644
index 39834540d908c2413e33c0a07caf103f1dca3ac7..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/adodbapi/test/is64bit.py
+++ /dev/null
@@ -1,41 +0,0 @@
-"""is64bit.Python() --> boolean value of detected Python word size. is64bit.os() --> os build version"""
-import sys
-
-
-def Python():
- if sys.platform == "cli": # IronPython
- import System
-
- return System.IntPtr.Size == 8
- else:
- try:
- return sys.maxsize > 2147483647
- except AttributeError:
- return sys.maxint > 2147483647
-
-
-def os():
- import platform
-
- pm = platform.machine()
- if pm != ".." and pm.endswith("64"): # recent Python (not Iron)
- return True
- else:
- import os
-
- if "PROCESSOR_ARCHITEW6432" in os.environ:
- return True # 32 bit program running on 64 bit Windows
- try:
- return os.environ["PROCESSOR_ARCHITECTURE"].endswith(
- "64"
- ) # 64 bit Windows 64 bit program
- except KeyError:
- pass # not Windows
- try:
- return "64" in platform.architecture()[0] # this often works in Linux
- except:
- return False # is an older version of Python, assume also an older os (best we can guess)
-
-
-if __name__ == "__main__":
- print("is64bit.Python() =", Python(), "is64bit.os() =", os())
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/cffi/parse_c_type.h b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/cffi/parse_c_type.h
deleted file mode 100644
index 84e4ef85659eb63e6453d8af9f024f1866182342..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/cffi/parse_c_type.h
+++ /dev/null
@@ -1,181 +0,0 @@
-
-/* This part is from file 'cffi/parse_c_type.h'. It is copied at the
- beginning of C sources generated by CFFI's ffi.set_source(). */
-
-typedef void *_cffi_opcode_t;
-
-#define _CFFI_OP(opcode, arg) (_cffi_opcode_t)(opcode | (((uintptr_t)(arg)) << 8))
-#define _CFFI_GETOP(cffi_opcode) ((unsigned char)(uintptr_t)cffi_opcode)
-#define _CFFI_GETARG(cffi_opcode) (((intptr_t)cffi_opcode) >> 8)
-
-#define _CFFI_OP_PRIMITIVE 1
-#define _CFFI_OP_POINTER 3
-#define _CFFI_OP_ARRAY 5
-#define _CFFI_OP_OPEN_ARRAY 7
-#define _CFFI_OP_STRUCT_UNION 9
-#define _CFFI_OP_ENUM 11
-#define _CFFI_OP_FUNCTION 13
-#define _CFFI_OP_FUNCTION_END 15
-#define _CFFI_OP_NOOP 17
-#define _CFFI_OP_BITFIELD 19
-#define _CFFI_OP_TYPENAME 21
-#define _CFFI_OP_CPYTHON_BLTN_V 23 // varargs
-#define _CFFI_OP_CPYTHON_BLTN_N 25 // noargs
-#define _CFFI_OP_CPYTHON_BLTN_O 27 // O (i.e. a single arg)
-#define _CFFI_OP_CONSTANT 29
-#define _CFFI_OP_CONSTANT_INT 31
-#define _CFFI_OP_GLOBAL_VAR 33
-#define _CFFI_OP_DLOPEN_FUNC 35
-#define _CFFI_OP_DLOPEN_CONST 37
-#define _CFFI_OP_GLOBAL_VAR_F 39
-#define _CFFI_OP_EXTERN_PYTHON 41
-
-#define _CFFI_PRIM_VOID 0
-#define _CFFI_PRIM_BOOL 1
-#define _CFFI_PRIM_CHAR 2
-#define _CFFI_PRIM_SCHAR 3
-#define _CFFI_PRIM_UCHAR 4
-#define _CFFI_PRIM_SHORT 5
-#define _CFFI_PRIM_USHORT 6
-#define _CFFI_PRIM_INT 7
-#define _CFFI_PRIM_UINT 8
-#define _CFFI_PRIM_LONG 9
-#define _CFFI_PRIM_ULONG 10
-#define _CFFI_PRIM_LONGLONG 11
-#define _CFFI_PRIM_ULONGLONG 12
-#define _CFFI_PRIM_FLOAT 13
-#define _CFFI_PRIM_DOUBLE 14
-#define _CFFI_PRIM_LONGDOUBLE 15
-
-#define _CFFI_PRIM_WCHAR 16
-#define _CFFI_PRIM_INT8 17
-#define _CFFI_PRIM_UINT8 18
-#define _CFFI_PRIM_INT16 19
-#define _CFFI_PRIM_UINT16 20
-#define _CFFI_PRIM_INT32 21
-#define _CFFI_PRIM_UINT32 22
-#define _CFFI_PRIM_INT64 23
-#define _CFFI_PRIM_UINT64 24
-#define _CFFI_PRIM_INTPTR 25
-#define _CFFI_PRIM_UINTPTR 26
-#define _CFFI_PRIM_PTRDIFF 27
-#define _CFFI_PRIM_SIZE 28
-#define _CFFI_PRIM_SSIZE 29
-#define _CFFI_PRIM_INT_LEAST8 30
-#define _CFFI_PRIM_UINT_LEAST8 31
-#define _CFFI_PRIM_INT_LEAST16 32
-#define _CFFI_PRIM_UINT_LEAST16 33
-#define _CFFI_PRIM_INT_LEAST32 34
-#define _CFFI_PRIM_UINT_LEAST32 35
-#define _CFFI_PRIM_INT_LEAST64 36
-#define _CFFI_PRIM_UINT_LEAST64 37
-#define _CFFI_PRIM_INT_FAST8 38
-#define _CFFI_PRIM_UINT_FAST8 39
-#define _CFFI_PRIM_INT_FAST16 40
-#define _CFFI_PRIM_UINT_FAST16 41
-#define _CFFI_PRIM_INT_FAST32 42
-#define _CFFI_PRIM_UINT_FAST32 43
-#define _CFFI_PRIM_INT_FAST64 44
-#define _CFFI_PRIM_UINT_FAST64 45
-#define _CFFI_PRIM_INTMAX 46
-#define _CFFI_PRIM_UINTMAX 47
-#define _CFFI_PRIM_FLOATCOMPLEX 48
-#define _CFFI_PRIM_DOUBLECOMPLEX 49
-#define _CFFI_PRIM_CHAR16 50
-#define _CFFI_PRIM_CHAR32 51
-
-#define _CFFI__NUM_PRIM 52
-#define _CFFI__UNKNOWN_PRIM (-1)
-#define _CFFI__UNKNOWN_FLOAT_PRIM (-2)
-#define _CFFI__UNKNOWN_LONG_DOUBLE (-3)
-
-#define _CFFI__IO_FILE_STRUCT (-1)
-
-
-struct _cffi_global_s {
- const char *name;
- void *address;
- _cffi_opcode_t type_op;
- void *size_or_direct_fn; // OP_GLOBAL_VAR: size, or 0 if unknown
- // OP_CPYTHON_BLTN_*: addr of direct function
-};
-
-struct _cffi_getconst_s {
- unsigned long long value;
- const struct _cffi_type_context_s *ctx;
- int gindex;
-};
-
-struct _cffi_struct_union_s {
- const char *name;
- int type_index; // -> _cffi_types, on a OP_STRUCT_UNION
- int flags; // _CFFI_F_* flags below
- size_t size;
- int alignment;
- int first_field_index; // -> _cffi_fields array
- int num_fields;
-};
-#define _CFFI_F_UNION 0x01 // is a union, not a struct
-#define _CFFI_F_CHECK_FIELDS 0x02 // complain if fields are not in the
- // "standard layout" or if some are missing
-#define _CFFI_F_PACKED 0x04 // for CHECK_FIELDS, assume a packed struct
-#define _CFFI_F_EXTERNAL 0x08 // in some other ffi.include()
-#define _CFFI_F_OPAQUE 0x10 // opaque
-
-struct _cffi_field_s {
- const char *name;
- size_t field_offset;
- size_t field_size;
- _cffi_opcode_t field_type_op;
-};
-
-struct _cffi_enum_s {
- const char *name;
- int type_index; // -> _cffi_types, on a OP_ENUM
- int type_prim; // _CFFI_PRIM_xxx
- const char *enumerators; // comma-delimited string
-};
-
-struct _cffi_typename_s {
- const char *name;
- int type_index; /* if opaque, points to a possibly artificial
- OP_STRUCT which is itself opaque */
-};
-
-struct _cffi_type_context_s {
- _cffi_opcode_t *types;
- const struct _cffi_global_s *globals;
- const struct _cffi_field_s *fields;
- const struct _cffi_struct_union_s *struct_unions;
- const struct _cffi_enum_s *enums;
- const struct _cffi_typename_s *typenames;
- int num_globals;
- int num_struct_unions;
- int num_enums;
- int num_typenames;
- const char *const *includes;
- int num_types;
- int flags; /* future extension */
-};
-
-struct _cffi_parse_info_s {
- const struct _cffi_type_context_s *ctx;
- _cffi_opcode_t *output;
- unsigned int output_size;
- size_t error_location;
- const char *error_message;
-};
-
-struct _cffi_externpy_s {
- const char *name;
- size_t size_of_result;
- void *reserved1, *reserved2;
-};
-
-#ifdef _CFFI_INTERNAL
-static int parse_c_type(struct _cffi_parse_info_s *info, const char *input);
-static int search_in_globals(const struct _cffi_type_context_s *ctx,
- const char *search, size_t search_len);
-static int search_in_struct_unions(const struct _cffi_type_context_s *ctx,
- const char *search, size_t search_len);
-#endif
diff --git a/spaces/asd123Xiao/kafuu_chino_sovits4.0/modules/modules.py b/spaces/asd123Xiao/kafuu_chino_sovits4.0/modules/modules.py
deleted file mode 100644
index 54290fd207b25e93831bd21005990ea137e6b50e..0000000000000000000000000000000000000000
--- a/spaces/asd123Xiao/kafuu_chino_sovits4.0/modules/modules.py
+++ /dev/null
@@ -1,342 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import modules.commons as commons
-from modules.commons import init_weights, get_padding
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
- assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers-1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
- Dialted and Depth-Separable Convolution
- """
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size ** i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
- groups=channels, dilation=dilation, padding=padding
- ))
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
- super(WN, self).__init__()
- assert(kernel_size % 2 == 1)
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(
- x_in,
- g_l,
- n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:,:self.hidden_channels,:]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:,self.hidden_channels:,:]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels,1))
- self.logs = nn.Parameter(torch.zeros(channels,1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1,2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
diff --git a/spaces/ashercn97/AsherTesting/api-examples/api-example-chat-stream.py b/spaces/ashercn97/AsherTesting/api-examples/api-example-chat-stream.py
deleted file mode 100644
index 14f6f9d66e0b2a35ed213933d3faa75bc80ae620..0000000000000000000000000000000000000000
--- a/spaces/ashercn97/AsherTesting/api-examples/api-example-chat-stream.py
+++ /dev/null
@@ -1,101 +0,0 @@
-import asyncio
-import json
-import sys
-
-try:
- import websockets
-except ImportError:
- print("Websockets package not found. Make sure it's installed.")
-
-# For local streaming, the websockets are hosted without ssl - ws://
-HOST = 'localhost:5005'
-URI = f'ws://{HOST}/api/v1/chat-stream'
-
-# For reverse-proxied streaming, the remote will likely host with ssl - wss://
-# URI = 'wss://your-uri-here.trycloudflare.com/api/v1/stream'
-
-
-async def run(user_input, history):
- # Note: the selected defaults change from time to time.
- request = {
- 'user_input': user_input,
- 'max_new_tokens': 250,
- 'history': history,
- 'mode': 'instruct', # Valid options: 'chat', 'chat-instruct', 'instruct'
- 'character': 'Example',
- 'instruction_template': 'Vicuna-v1.1', # Will get autodetected if unset
- # 'context_instruct': '', # Optional
- 'your_name': 'You',
-
- 'regenerate': False,
- '_continue': False,
- 'stop_at_newline': False,
- 'chat_generation_attempts': 1,
- 'chat-instruct_command': 'Continue the chat dialogue below. Write a single reply for the character "<|character|>".\n\n<|prompt|>',
-
- # Generation params. If 'preset' is set to different than 'None', the values
- # in presets/preset-name.yaml are used instead of the individual numbers.
- 'preset': 'None',
- 'do_sample': True,
- 'temperature': 0.7,
- 'top_p': 0.1,
- 'typical_p': 1,
- 'epsilon_cutoff': 0, # In units of 1e-4
- 'eta_cutoff': 0, # In units of 1e-4
- 'tfs': 1,
- 'top_a': 0,
- 'repetition_penalty': 1.18,
- 'repetition_penalty_range': 0,
- 'top_k': 40,
- 'min_length': 0,
- 'no_repeat_ngram_size': 0,
- 'num_beams': 1,
- 'penalty_alpha': 0,
- 'length_penalty': 1,
- 'early_stopping': False,
- 'mirostat_mode': 0,
- 'mirostat_tau': 5,
- 'mirostat_eta': 0.1,
-
- 'seed': -1,
- 'add_bos_token': True,
- 'truncation_length': 2048,
- 'ban_eos_token': False,
- 'skip_special_tokens': True,
- 'stopping_strings': []
- }
-
- async with websockets.connect(URI, ping_interval=None) as websocket:
- await websocket.send(json.dumps(request))
-
- while True:
- incoming_data = await websocket.recv()
- incoming_data = json.loads(incoming_data)
-
- match incoming_data['event']:
- case 'text_stream':
- yield incoming_data['history']
- case 'stream_end':
- return
-
-
-async def print_response_stream(user_input, history):
- cur_len = 0
- async for new_history in run(user_input, history):
- cur_message = new_history['visible'][-1][1][cur_len:]
- cur_len += len(cur_message)
- print(cur_message, end='')
- sys.stdout.flush() # If we don't flush, we won't see tokens in realtime.
-
-
-if __name__ == '__main__':
- user_input = "Please give me a step-by-step guide on how to plant a tree in my backyard."
-
- # Basic example
- history = {'internal': [], 'visible': []}
-
- # "Continue" example. Make sure to set '_continue' to True above
- # arr = [user_input, 'Surely, here is']
- # history = {'internal': [arr], 'visible': [arr]}
-
- asyncio.run(print_response_stream(user_input, history))
diff --git a/spaces/awaawawawa/iurf7irfuyytruyyugb/launch.py b/spaces/awaawawawa/iurf7irfuyytruyyugb/launch.py
deleted file mode 100644
index d4028b5c57649b1840a0bfefca6c2384afd500b9..0000000000000000000000000000000000000000
--- a/spaces/awaawawawa/iurf7irfuyytruyyugb/launch.py
+++ /dev/null
@@ -1,181 +0,0 @@
-# this scripts installs necessary requirements and launches main program in webui.py
-import subprocess
-import os
-import sys
-import importlib.util
-import shlex
-import platform
-
-dir_repos = "repositories"
-python = sys.executable
-git = os.environ.get('GIT', "git")
-index_url = os.environ.get('INDEX_URL', "")
-
-
-def extract_arg(args, name):
- return [x for x in args if x != name], name in args
-
-
-def run(command, desc=None, errdesc=None):
- if desc is not None:
- print(desc)
-
- result = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
-
- if result.returncode != 0:
-
- message = f"""{errdesc or 'Error running command'}.
-Command: {command}
-Error code: {result.returncode}
-stdout: {result.stdout.decode(encoding="utf8", errors="ignore") if len(result.stdout)>0 else ''}
-stderr: {result.stderr.decode(encoding="utf8", errors="ignore") if len(result.stderr)>0 else ''}
-"""
- raise RuntimeError(message)
-
- return result.stdout.decode(encoding="utf8", errors="ignore")
-
-
-def check_run(command):
- result = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
- return result.returncode == 0
-
-
-def is_installed(package):
- try:
- spec = importlib.util.find_spec(package)
- except ModuleNotFoundError:
- return False
-
- return spec is not None
-
-
-def repo_dir(name):
- return os.path.join(dir_repos, name)
-
-
-def run_python(code, desc=None, errdesc=None):
- return run(f'"{python}" -c "{code}"', desc, errdesc)
-
-
-def run_pip(args, desc=None):
- index_url_line = f' --index-url {index_url}' if index_url != '' else ''
- return run(f'"{python}" -m pip {args} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}")
-
-
-def check_run_python(code):
- return check_run(f'"{python}" -c "{code}"')
-
-
-def git_clone(url, dir, name, commithash=None):
- # TODO clone into temporary dir and move if successful
-
- if os.path.exists(dir):
- if commithash is None:
- return
-
- current_hash = run(f'"{git}" -C {dir} rev-parse HEAD', None, f"Couldn't determine {name}'s hash: {commithash}").strip()
- if current_hash == commithash:
- return
-
- run(f'"{git}" -C {dir} fetch', f"Fetching updates for {name}...", f"Couldn't fetch {name}")
- run(f'"{git}" -C {dir} checkout {commithash}', f"Checking out commit for {name} with hash: {commithash}...", f"Couldn't checkout commit {commithash} for {name}")
- return
-
- run(f'"{git}" clone "{url}" "{dir}"', f"Cloning {name} into {dir}...", f"Couldn't clone {name}")
-
- if commithash is not None:
- run(f'"{git}" -C {dir} checkout {commithash}', None, "Couldn't checkout {name}'s hash: {commithash}")
-
-
-def prepare_environment():
- torch_command = os.environ.get('TORCH_COMMAND', "pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113")
- requirements_file = os.environ.get('REQS_FILE', "requirements_versions.txt")
- commandline_args = os.environ.get('COMMANDLINE_ARGS', "")
-
- gfpgan_package = os.environ.get('GFPGAN_PACKAGE', "git+https://github.com/TencentARC/GFPGAN.git@8d2447a2d918f8eba5a4a01463fd48e45126a379")
- clip_package = os.environ.get('CLIP_PACKAGE', "git+https://github.com/openai/CLIP.git@d50d76daa670286dd6cacf3bcd80b5e4823fc8e1")
- deepdanbooru_package = os.environ.get('DEEPDANBOORU_PACKAGE', "git+https://github.com/KichangKim/DeepDanbooru.git@edf73df4cdaeea2cf00e9ac08bd8a9026b7a7b26")
-
- xformers_windows_package = os.environ.get('XFORMERS_WINDOWS_PACKAGE', 'https://github.com/C43H66N12O12S2/stable-diffusion-webui/releases/download/f/xformers-0.0.14.dev0-cp310-cp310-win_amd64.whl')
-
- stable_diffusion_repo = os.environ.get('STABLE_DIFFUSION_REPO', "https://github.com/CompVis/stable-diffusion.git")
- taming_transformers_repo = os.environ.get('TAMING_TRANSFORMERS_REPO', "https://github.com/CompVis/taming-transformers.git")
- k_diffusion_repo = os.environ.get('K_DIFFUSION_REPO', 'https://github.com/crowsonkb/k-diffusion.git')
- codeformer_repo = os.environ.get('CODEFORMER_REPO', 'https://github.com/sczhou/CodeFormer.git')
- blip_repo = os.environ.get('BLIP_REPO', 'https://github.com/salesforce/BLIP.git')
-
- stable_diffusion_commit_hash = os.environ.get('STABLE_DIFFUSION_COMMIT_HASH', "69ae4b35e0a0f6ee1af8bb9a5d0016ccb27e36dc")
- taming_transformers_commit_hash = os.environ.get('TAMING_TRANSFORMERS_COMMIT_HASH', "24268930bf1dce879235a7fddd0b2355b84d7ea6")
- k_diffusion_commit_hash = os.environ.get('K_DIFFUSION_COMMIT_HASH', "f4e99857772fc3a126ba886aadf795a332774878")
- codeformer_commit_hash = os.environ.get('CODEFORMER_COMMIT_HASH', "c5b4593074ba6214284d6acd5f1719b6c5d739af")
- blip_commit_hash = os.environ.get('BLIP_COMMIT_HASH', "48211a1594f1321b00f14c9f7a5b4813144b2fb9")
-
- args = shlex.split(commandline_args)
-
- args, skip_torch_cuda_test = extract_arg(args, '--skip-torch-cuda-test')
- args, reinstall_xformers = extract_arg(args, '--reinstall-xformers')
- xformers = '--xformers' in args
- deepdanbooru = '--deepdanbooru' in args
- ngrok = '--ngrok' in args
-
- try:
- commit = run(f"{git} rev-parse HEAD").strip()
- except Exception:
- commit = ""
-
- print(f"Python {sys.version}")
- print(f"Commit hash: {commit}")
-
- if not is_installed("torch") or not is_installed("torchvision"):
- run(f'"{python}" -m {torch_command}', "Installing torch and torchvision", "Couldn't install torch")
-
- if not skip_torch_cuda_test:
- run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'")
-
- if not is_installed("gfpgan"):
- run_pip(f"install {gfpgan_package}", "gfpgan")
-
- if not is_installed("clip"):
- run_pip(f"install {clip_package}", "clip")
-
- if (not is_installed("xformers") or reinstall_xformers) and xformers and platform.python_version().startswith("3.10"):
- if platform.system() == "Windows":
- run_pip(f"install -U -I --no-deps {xformers_windows_package}", "xformers")
- elif platform.system() == "Linux":
- run_pip("install xformers", "xformers")
-
- if not is_installed("deepdanbooru") and deepdanbooru:
- run_pip(f"install {deepdanbooru_package}#egg=deepdanbooru[tensorflow] tensorflow==2.10.0 tensorflow-io==0.27.0", "deepdanbooru")
-
- if not is_installed("pyngrok") and ngrok:
- run_pip("install pyngrok", "ngrok")
-
- os.makedirs(dir_repos, exist_ok=True)
-
- git_clone(stable_diffusion_repo, repo_dir('stable-diffusion'), "Stable Diffusion", stable_diffusion_commit_hash)
- git_clone(taming_transformers_repo, repo_dir('taming-transformers'), "Taming Transformers", taming_transformers_commit_hash)
- git_clone(k_diffusion_repo, repo_dir('k-diffusion'), "K-diffusion", k_diffusion_commit_hash)
- git_clone(codeformer_repo, repo_dir('CodeFormer'), "CodeFormer", codeformer_commit_hash)
- git_clone(blip_repo, repo_dir('BLIP'), "BLIP", blip_commit_hash)
-
- if not is_installed("lpips"):
- run_pip(f"install -r {os.path.join(repo_dir('CodeFormer'), 'requirements.txt')}", "requirements for CodeFormer")
-
- run_pip(f"install -r {requirements_file}", "requirements for Web UI")
-
- sys.argv += args
-
- if "--exit" in args:
- print("Exiting because of --exit argument")
- exit(0)
-
-
-def start_webui():
- print(f"Launching Web UI with arguments: {' '.join(sys.argv[1:])}")
- import webui
- webui.webui()
-
-
-if __name__ == "__main__":
- prepare_enviroment()
- start_webui()
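
The launcher above pins each upstream dependency with an environment-variable override plus a fixed commit hash before cloning. A minimal sketch of that pattern follows; the `run`/`git_clone` helpers here are simplified stand-ins for the launcher's own versions (whose definitions sit outside this excerpt), not the real implementations.

```python
# Minimal sketch of the "env-var override + pinned commit" clone pattern used above.
# run() and git_clone() are simplified stand-ins for the launcher's own helpers.
import os
import subprocess

def run(command: str) -> str:
    """Run a shell command and return its stdout (raises on failure)."""
    return subprocess.run(command, shell=True, check=True,
                          capture_output=True, text=True).stdout

def git_clone(url: str, target_dir: str, name: str, commit_hash: str | None = None) -> None:
    """Clone a repository once, then check out the pinned commit if one is given."""
    if not os.path.isdir(os.path.join(target_dir, ".git")):
        run(f'git clone "{url}" "{target_dir}"')
    if commit_hash is not None:
        run(f'git -C "{target_dir}" fetch')
        run(f'git -C "{target_dir}" checkout {commit_hash}')
        print(f"{name} pinned to {commit_hash}")

if __name__ == "__main__":
    repo = os.environ.get("K_DIFFUSION_REPO", "https://github.com/crowsonkb/k-diffusion.git")
    commit = os.environ.get("K_DIFFUSION_COMMIT_HASH", "f4e99857772fc3a126ba886aadf795a332774878")
    git_clone(repo, "repositories/k-diffusion", "K-diffusion", commit)
```
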
diff --git a/spaces/awacke1/Bloom.QA.Translation.LLM.AI/app.py b/spaces/awacke1/Bloom.QA.Translation.LLM.AI/app.py
deleted file mode 100644
index d2d9c78c4e68b4ed216d7e864835d863a3d0aecf..0000000000000000000000000000000000000000
--- a/spaces/awacke1/Bloom.QA.Translation.LLM.AI/app.py
+++ /dev/null
@@ -1,95 +0,0 @@
-# -*- coding: utf-8 -*-
-
-import gradio as gr
-import requests
-import os
-import json #
-
-##Bloom
-API_URL = "https://api-inference.huggingface.co/models/bigscience/bloom"
-# HF_TOKEN = os.environ["HF_TOKEN"]
-# headers = {"Authorization": f"Bearer {HF_TOKEN}"}
-
-def translate(prompt_ , from_lang, to_lang, input_prompt = "translate this", seed = 42):
-
- prompt = f"Instruction : Given an {from_lang} input sentence translate it into {to_lang} sentence. \n input : \"{prompt_}\" \n {to_lang} : "
- if len(prompt_) == 0:
- prompt = input_prompt
-
- json_ = {
- "inputs": prompt,
- "parameters": {
- "top_p": 0.9,
- "temperature": 1.1,
- "max_new_tokens": 250,
- "return_full_text": False,
- "do_sample": False,
- "seed": seed,
- "early_stopping": False,
- "length_penalty": 0.0,
- "eos_token_id": None,
- },
- "options": {
- "use_cache": True,
- "wait_for_model": True,
- },
- }
- response = requests.request("POST", API_URL, json=json_) # headers=headers
- # output = response.json()
- output = json.loads(response.content.decode("utf-8"))
- output_tmp = output[0]['generated_text']
- solution = output_tmp.split(f"\n{to_lang}:")[0]
-
-
- if '\n\n' in solution:
- final_solution = solution.split("\n\n")[0]
- else:
- final_solution = solution
- return final_solution
-
-demo = gr.Blocks()
-
-with demo:
- gr.Markdown("
Translate with Bloom
")
- gr.Markdown('''
-## Model Details
-BLOOM is an autoregressive Large Language Model (LLM), trained to continue text
-from a prompt on vast amounts of text data using industrial-scale computational
-resources. As such, it is able to output coherent text in 46 languages and 13
-programming languages that is hardly distinguishable from text written by humans.
-BLOOM can also be instructed to perform text tasks it hasn't been explicitly trained
-for, by casting them as text generation tasks.
-
-## Project Details
-In this project we are going to explore the translation capabilities of "BLOOM".
-
-## How to use
-At the moment this space can only translate between English, Spanish, and Hindi.
-The "from" language is the language of the sentence you type in the text box, and the "to" language is the language you want it translated into.
-Select the "from" language from the dropdown.
-Select the "to" language from the dropdown.
-
-People are encouraged to improve this space by contributing.
-
-This space was created by [Kishore](https://www.linkedin.com/in/kishore-kunisetty-925a3919a/) in order to participate in [EuroPython22](https://huggingface.co/EuroPython2022).
-Please like the project to support my contribution to EuroPython22. 😊
-''')
- with gr.Row():
- from_lang = gr.Dropdown(['English', 'Spanish', 'Hindi' , 'Bangla'],
- value='English',
- label='select From language : ')
- to_lang = gr.Dropdown(['English', 'Spanish', 'Hindi'],
- value='Hindi',
- label= 'select to Language : ')
-
- input_prompt = gr.Textbox(label="Enter the sentence : ",
- value=f"Instruction: ... \ninput: \"from sentence\" \n{to_lang} :",
- lines=6)
-
- generated_txt = gr.Textbox(lines=3)
-
- b1 = gr.Button("translate")
- b1.click(translate,inputs=[ input_prompt, from_lang, to_lang], outputs=generated_txt)
-
-demo.launch(enable_queue=True, debug=True)
-
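
The app above works by wrapping the user's sentence in an instruction-style prompt and POSTing it to the hosted BLOOM inference endpoint, then trimming the generated continuation. A stripped-down sketch of that request/response round trip is shown below; it uses the same public endpoint without a token, so availability and rate limits are not guaranteed, and the post-processing is a simplification of the app's.

```python
# Minimal sketch of prompting BLOOM through the Hugging Face Inference API for translation.
# Same public endpoint as the app above; responses may be rate limited without an auth token.
import json
import requests

API_URL = "https://api-inference.huggingface.co/models/bigscience/bloom"

def bloom_translate(sentence: str, from_lang: str, to_lang: str, seed: int = 42) -> str:
    prompt = (f'Instruction : Given an {from_lang} input sentence translate it into '
              f'{to_lang} sentence. \n input : "{sentence}" \n {to_lang} : ')
    payload = {
        "inputs": prompt,
        "parameters": {"max_new_tokens": 250, "return_full_text": False,
                       "do_sample": False, "seed": seed},
        "options": {"use_cache": True, "wait_for_model": True},
    }
    response = requests.post(API_URL, json=payload)
    response.raise_for_status()
    generated = json.loads(response.content.decode("utf-8"))[0]["generated_text"]
    # Keep only the first generated block, mirroring the trimming done in the app.
    return generated.split("\n\n")[0].strip()

if __name__ == "__main__":
    print(bloom_translate("How are you today?", "English", "Hindi"))
```
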
diff --git a/spaces/awacke1/CardEvolution-PlayingBoard/README.md b/spaces/awacke1/CardEvolution-PlayingBoard/README.md
deleted file mode 100644
index 8dcce5d45d000d49d8ad1b751a1a4890d993ad66..0000000000000000000000000000000000000000
--- a/spaces/awacke1/CardEvolution-PlayingBoard/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: CardEvolution PlayingBoard
-emoji: 🏢
-colorFrom: indigo
-colorTo: pink
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/awacke1/ClinicalTerminologyNER-Refactored/app.py b/spaces/awacke1/ClinicalTerminologyNER-Refactored/app.py
deleted file mode 100644
index fd97bf2a8592b219ba1c2d4c94187d984e63d114..0000000000000000000000000000000000000000
--- a/spaces/awacke1/ClinicalTerminologyNER-Refactored/app.py
+++ /dev/null
@@ -1,268 +0,0 @@
-import gradio as gr
-import pandas as pd
-import json
-from collections import defaultdict
-from traceback import format_tb  # needed by the error handlers below
-
-# Create tokenizer for biomed model
-from transformers import pipeline, AutoTokenizer, AutoModelForTokenClassification
-tokenizer = AutoTokenizer.from_pretrained("d4data/biomedical-ner-all") # https://huggingface.co/d4data/biomedical-ner-all?text=asthma
-model = AutoModelForTokenClassification.from_pretrained("d4data/biomedical-ner-all")
-pipe = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
-
-# Matplotlib for entity graph
-import matplotlib.pyplot as plt
-plt.switch_backend("Agg")
-
-# Load examples from JSON
-import os
-
-# Load terminology datasets:
-basedir = os.path.dirname(__file__)
-#dataLOINC = pd.read_csv(basedir + "\\" + f'LoincTableCore.csv')
-#dataPanels = pd.read_csv(basedir + "\\" + f'PanelsAndForms-ACW1208Labeled.csv')
-#dataSNOMED = pd.read_csv(basedir + "\\" + f'sct2_TextDefinition_Full-en_US1000124_20220901.txt',sep='\t')
-#dataOMS = pd.read_csv(basedir + "\\" + f'SnomedOMS.csv')
-#dataICD10 = pd.read_csv(basedir + "\\" + f'ICD10Diagnosis.csv')
-
-dataLOINC = pd.read_csv(f'LoincTableCore.csv')
-dataPanels = pd.read_csv(f'PanelsAndForms-ACW1208Labeled.csv')
-dataSNOMED = pd.read_csv(f'sct2_TextDefinition_Full-en_US1000124_20220901.txt',sep='\t')
-dataOMS = pd.read_csv(f'SnomedOMS.csv')
-dataICD10 = pd.read_csv(f'ICD10Diagnosis.csv')
-
-dir_path = os.path.dirname(os.path.realpath(__file__))
-EXAMPLES = {}
-#with open(dir_path + "\\" + "examples.json", "r") as f:
-with open("examples.json", "r") as f:
- example_json = json.load(f)
- EXAMPLES = {x["text"]: x["label"] for x in example_json}
-
-def MatchLOINC(name):
- #basedir = os.path.dirname(__file__)
- pd.set_option("display.max_rows", None)
- #data = pd.read_csv(basedir + "\\" + f'LoincTableCore.csv')
- data = dataLOINC
- swith=data.loc[data['COMPONENT'].str.contains(name, case=False, na=False)]
- return swith
-
-def MatchLOINCPanelsandForms(name):
- #basedir = os.path.dirname(__file__)
- #data = pd.read_csv(basedir + "\\" + f'PanelsAndForms-ACW1208Labeled.csv')
- data = dataPanels
- # Assessment Name:
- #swith=data.loc[data['ParentName'].str.contains(name, case=False, na=False)]
- # Assessment Question:
- swith=data.loc[data['LoincName'].str.contains(name, case=False, na=False)]
- return swith
-
-def MatchSNOMED(name):
- #basedir = os.path.dirname(__file__)
- #data = pd.read_csv(basedir + "\\" + f'sct2_TextDefinition_Full-en_US1000124_20220901.txt',sep='\t')
- data = dataSNOMED
- swith=data.loc[data['term'].str.contains(name, case=False, na=False)]
- return swith
-
-def MatchOMS(name):
- #basedir = os.path.dirname(__file__)
- #data = pd.read_csv(basedir + "\\" + f'SnomedOMS.csv')
- data = dataOMS
- swith=data.loc[data['SNOMED CT'].str.contains(name, case=False, na=False)]
- return swith
-
-def MatchICD10(name):
- #basedir = os.path.dirname(__file__)
- #data = pd.read_csv(basedir + "\\" + f'ICD10Diagnosis.csv')
- data = dataICD10
- swith=data.loc[data['Description'].str.contains(name, case=False, na=False)]
- return swith
-
-def SaveResult(text, outputfileName):
- #try:
- basedir = os.path.dirname(__file__)
- savePath = outputfileName
- print("Saving: " + text + " to " + savePath)
- from os.path import exists
- file_exists = exists(savePath)
- if file_exists:
- with open(outputfileName, "a") as f: #append
- #for line in text:
- f.write(str(text.replace("\n"," ")))
- f.write('\n')
- else:
- with open(outputfileName, "w") as f: #write
- #for line in text:
- f.write(str(text.replace("\n"," ")))
- f.write('\n')
- #except ValueError as err:
- # raise ValueError("File Save Error in SaveResult \n" + format_tb(err.__traceback__)[0] + err.args[0] + "\nEnd of error message.") from None
-
- return
-
-def loadFile(filename):
- try:
- basedir = os.path.dirname(__file__)
- loadPath = basedir + "\\" + filename
-
- print("Loading: " + loadPath)
-
- from os.path import exists
- file_exists = exists(loadPath)
-
- if file_exists:
- with open(loadPath, "r") as f: #read
- contents = f.read()
- print(contents)
- return contents
-
- except ValueError as err:
- raise ValueError("File Save Error in SaveResult \n" + format_tb(err.__traceback__)[0] + err.args[0] + "\nEnd of error message.") from None
-
- return ""
-
-def get_today_filename():
- from datetime import datetime
- date = datetime.now().strftime("%Y_%m_%d-%I.%M.%S.%p")
- #print(f"filename_{date}") 'filename_2023_01_12-03-29-22_AM'
- return f"MedNER_{date}.csv"
-
-def get_base(filename):
- basedir = os.path.dirname(__file__)
- loadPath = basedir + "\\" + filename
- #print("Loading: " + loadPath)
- return loadPath
-
-def group_by_entity(raw):
- outputFile = get_base(get_today_filename())
- out = defaultdict(int)
-
- for ent in raw:
- out[ent["entity_group"]] += 1
- myEntityGroup = ent["entity_group"]
- print("Found entity group type: " + myEntityGroup)
-
- if (myEntityGroup in ['Sign_symptom', 'Detailed_description', 'History', 'Activity', 'Medication' ]):
- eterm = ent["word"].replace('#','')
- minlength = 3
- if len(eterm) > minlength:
- print("Found eterm: " + eterm)
- eterm.replace("#","")
- g1=MatchLOINC(eterm)
- g2=MatchLOINCPanelsandForms(eterm)
- g3=MatchSNOMED(eterm)
- g4=MatchOMS(eterm)
- g5=MatchICD10(eterm)
- sAll = ""
-
- print("Saving to output file " + outputFile)
- # Create harmonisation output format of input to output code, name, Text
-
- try: # 18 fields, output to labeled CSV dataset for results teaching on scored regret changes to action plan with data inputs
- col = " 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19"
-
- #LOINC
- g11 = g1['LOINC_NUM'].to_string().replace(","," ").replace("\n"," ")
- g12 = g1['COMPONENT'].to_string().replace(","," ").replace("\n"," ")
- s1 = ("LOINC," + myEntityGroup + "," + eterm + ",questions of ," + g12 + "," + g11 + ", Label,Value, Label,Value, Label,Value ")
- if g11 != 'Series([] )': SaveResult(s1, outputFile)
-
- #LOINC Panels
- g21 = g2['Loinc'].to_string().replace(","," ").replace("\n"," ")
- g22 = g2['LoincName'].to_string().replace(","," ").replace("\n"," ")
- g23 = g2['ParentLoinc'].to_string().replace(","," ").replace("\n"," ")
- g24 = g2['ParentName'].to_string().replace(","," ").replace("\n"," ")
- # s2 = ("LOINC Panel," + myEntityGroup + "," + eterm + ",name of ," + g22 + "," + g21 + ", and Parent codes of ," + g23 + ", with Parent names of ," + g24 + ", Label,Value ")
- s2 = ("LOINC Panel," + myEntityGroup + "," + eterm + ",name of ," + g22 + "," + g21 + "," + g24 + ", and Parent codes of ," + g23 + "," + ", Label,Value ")
- if g21 != 'Series([] )': SaveResult(s2, outputFile)
-
- #SNOMED
- g31 = g3['conceptId'].to_string().replace(","," ").replace("\n"," ").replace("\l"," ").replace("\r"," ")
- g32 = g3['term'].to_string().replace(","," ").replace("\n"," ").replace("\l"," ").replace("\r"," ")
- s3 = ("SNOMED Concept," + myEntityGroup + "," + eterm + ",terms of ," + g32 + "," + g31 + ", Label,Value, Label,Value, Label,Value ")
- if g31 != 'Series([] )': SaveResult(s3, outputFile)
-
- #OMS
- g41 = g4['Omaha Code'].to_string().replace(","," ").replace("\n"," ")
- g42 = g4['SNOMED CT concept ID'].to_string().replace(","," ").replace("\n"," ")
- g43 = g4['SNOMED CT'].to_string().replace(","," ").replace("\n"," ")
- g44 = g4['PR'].to_string().replace(","," ").replace("\n"," ")
- g45 = g4['S&S'].to_string().replace(","," ").replace("\n"," ")
- s4 = ("OMS," + myEntityGroup + "," + eterm + ",concepts of ," + g44 + "," + g45 + ", and SNOMED codes of ," + g43 + ", and OMS problem of ," + g42 + ", and OMS Sign Symptom of ," + g41)
- if g41 != 'Series([] )': SaveResult(s4, outputFile)
-
- #ICD10
- g51 = g5['Code'].to_string().replace(","," ").replace("\n"," ")
- g52 = g5['Description'].to_string().replace(","," ").replace("\n"," ")
- s5 = ("ICD10," + myEntityGroup + "," + eterm + ",descriptions of ," + g52 + "," + g51 + ", Label,Value, Label,Value, Label,Value ")
- if g51 != 'Series([] )': SaveResult(s5, outputFile)
-
- except ValueError as err:
- raise ValueError("Error in group by entity \n" + format_tb(err.__traceback__)[0] + err.args[0] + "\nEnd of error message.") from None
-
- return outputFile
-
-
-def plot_to_figure(grouped):
- fig = plt.figure()
- plt.bar(x=list(grouped.keys()), height=list(grouped.values()))
- plt.margins(0.2)
- plt.subplots_adjust(bottom=0.4)
- plt.xticks(rotation=90)
- return fig
-
-
-def ner(text):
- raw = pipe(text)
- ner_content = {
- "text": text,
- "entities": [
- {
- "entity": x["entity_group"],
- "word": x["word"],
- "score": x["score"],
- "start": x["start"],
- "end": x["end"],
- }
- for x in raw
- ],
- }
-
- outputFile = group_by_entity(raw)
- label = EXAMPLES.get(text, "Unknown")
- outputDataframe = pd.read_csv(outputFile)
- return (ner_content, outputDataframe, outputFile)
-
-demo = gr.Blocks()
-with demo:
- gr.Markdown(
- """
- # 🩺⚕️NLP Clinical Ontology Biomedical NER
- """
- )
- input = gr.Textbox(label="Note text", value="")
-
- with gr.Tab("Biomedical Entity Recognition"):
- output=[
- gr.HighlightedText(label="NER", combine_adjacent=True),
- #gr.JSON(label="Entity Counts"),
- #gr.Label(label="Rating"),
- #gr.Plot(label="Bar"),
- gr.Dataframe(label="Dataframe"),
- gr.File(label="File"),
- ]
- examples=list(EXAMPLES.keys())
- gr.Examples(examples, inputs=input)
- input.change(fn=ner, inputs=input, outputs=output)
-
- with gr.Tab("Clinical Terminology Resolution"):
- with gr.Row(variant="compact"):
- btnLOINC = gr.Button("LOINC")
- btnPanels = gr.Button("Panels")
- btnSNOMED = gr.Button("SNOMED")
- btnOMS = gr.Button("OMS")
- btnICD10 = gr.Button("ICD10")
-
- examples=list(EXAMPLES.keys())
- gr.Examples(examples, inputs=input)
- input.change(fn=ner, inputs=input, outputs=output)
-#layout="vertical"
-demo.launch(debug=True)
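
The NER step above relies on the transformers pipeline with `aggregation_strategy="simple"`, which merges word-piece tokens into whole entity spans; the harmonisation step then iterates over those spans by `entity_group`. A small sketch of just that part, independent of the terminology CSV lookups, is below (the sample sentence is illustrative only).

```python
# Minimal sketch of the NER + entity-group counting used above, without the terminology lookups.
from collections import Counter
from transformers import pipeline

# aggregation_strategy="simple" merges sub-word tokens into whole entity spans.
ner = pipeline("ner", model="d4data/biomedical-ner-all", aggregation_strategy="simple")

text = "Patient reports wheezing and shortness of breath after exercise."
entities = ner(text)

counts = Counter(ent["entity_group"] for ent in entities)
for ent in entities:
    print(f'{ent["entity_group"]:>22}: {ent["word"]} (score={ent["score"]:.2f})')
print(dict(counts))
```
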
diff --git a/spaces/awacke1/Graph.Model.Feedback/README.md b/spaces/awacke1/Graph.Model.Feedback/README.md
deleted file mode 100644
index 2f22ee6e6962fe875afba58a82e8783c8abf282c..0000000000000000000000000000000000000000
--- a/spaces/awacke1/Graph.Model.Feedback/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Graph.Model.Feedback
-emoji: 📊
-colorFrom: gray
-colorTo: pink
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/awacke1/Streamlit.Graphviz.Stories.JSONL/README.md b/spaces/awacke1/Streamlit.Graphviz.Stories.JSONL/README.md
deleted file mode 100644
index 5d255252c6816408ab70f2eca07b6e31962ce32d..0000000000000000000000000000000000000000
--- a/spaces/awacke1/Streamlit.Graphviz.Stories.JSONL/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Streamlit.Graphviz.Stories.JSONL
-emoji: 🚀
-colorFrom: blue
-colorTo: purple
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/bankholdup/stylegan_petbreeder/e4e/scripts/calc_losses_on_images.py b/spaces/bankholdup/stylegan_petbreeder/e4e/scripts/calc_losses_on_images.py
deleted file mode 100644
index 32b6bcee854da7ae357daf82bd986f30db9fb72c..0000000000000000000000000000000000000000
--- a/spaces/bankholdup/stylegan_petbreeder/e4e/scripts/calc_losses_on_images.py
+++ /dev/null
@@ -1,87 +0,0 @@
-from argparse import ArgumentParser
-import os
-import json
-import sys
-from tqdm import tqdm
-import numpy as np
-import torch
-from torch.utils.data import DataLoader
-import torchvision.transforms as transforms
-
-sys.path.append(".")
-sys.path.append("..")
-
-from criteria.lpips.lpips import LPIPS
-from datasets.gt_res_dataset import GTResDataset
-
-
-def parse_args():
- parser = ArgumentParser(add_help=False)
- parser.add_argument('--mode', type=str, default='lpips', choices=['lpips', 'l2'])
- parser.add_argument('--data_path', type=str, default='results')
- parser.add_argument('--gt_path', type=str, default='gt_images')
- parser.add_argument('--workers', type=int, default=4)
- parser.add_argument('--batch_size', type=int, default=4)
- parser.add_argument('--is_cars', action='store_true')
- args = parser.parse_args()
- return args
-
-
-def run(args):
- resize_dims = (256, 256)
- if args.is_cars:
- resize_dims = (192, 256)
- transform = transforms.Compose([transforms.Resize(resize_dims),
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])
-
- print('Loading dataset')
- dataset = GTResDataset(root_path=args.data_path,
- gt_dir=args.gt_path,
- transform=transform)
-
- dataloader = DataLoader(dataset,
- batch_size=args.batch_size,
- shuffle=False,
- num_workers=int(args.workers),
- drop_last=True)
-
- if args.mode == 'lpips':
- loss_func = LPIPS(net_type='alex')
- elif args.mode == 'l2':
- loss_func = torch.nn.MSELoss()
- else:
- raise Exception('Not a valid mode!')
- loss_func.cuda()
-
- global_i = 0
- scores_dict = {}
- all_scores = []
- for result_batch, gt_batch in tqdm(dataloader):
- for i in range(args.batch_size):
- loss = float(loss_func(result_batch[i:i + 1].cuda(), gt_batch[i:i + 1].cuda()))
- all_scores.append(loss)
- im_path = dataset.pairs[global_i][0]
- scores_dict[os.path.basename(im_path)] = loss
- global_i += 1
-
- all_scores = list(scores_dict.values())
- mean = np.mean(all_scores)
- std = np.std(all_scores)
- result_str = 'Average loss is {:.2f}+-{:.2f}'.format(mean, std)
- print('Finished with ', args.data_path)
- print(result_str)
-
- out_path = os.path.join(os.path.dirname(args.data_path), 'inference_metrics')
- if not os.path.exists(out_path):
- os.makedirs(out_path)
-
- with open(os.path.join(out_path, 'stat_{}.txt'.format(args.mode)), 'w') as f:
- f.write(result_str)
- with open(os.path.join(out_path, 'scores_{}.json'.format(args.mode)), 'w') as f:
- json.dump(scores_dict, f)
-
-
-if __name__ == '__main__':
- args = parse_args()
- run(args)
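
The script above measures perceptual distance with LPIPS over (result, ground-truth) pairs using the repository's own `criteria.lpips` wrapper. As a rough stand-alone equivalent, the sketch below uses the standalone `lpips` pip package instead (an assumption, not the repo's class), and random tensors in place of real image batches.

```python
# Minimal sketch of an LPIPS comparison between a result image and its ground truth.
# Uses the standalone `lpips` pip package rather than the repository's criteria.lpips wrapper.
import lpips
import torch

loss_fn = lpips.LPIPS(net="alex")  # AlexNet backbone, matching the 'alex' choice in the script

# LPIPS expects NCHW tensors scaled to [-1, 1]; random data stands in for real images here.
result = torch.rand(1, 3, 256, 256) * 2 - 1
ground_truth = torch.rand(1, 3, 256, 256) * 2 - 1

with torch.no_grad():
    score = loss_fn(result, ground_truth).item()
print(f"LPIPS distance: {score:.4f}")
```
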
diff --git a/spaces/bigcode/santa-explains-code/README.md b/spaces/bigcode/santa-explains-code/README.md
deleted file mode 100644
index b2ae80477880cb4a6ccfbd23db9de497e7d1dc27..0000000000000000000000000000000000000000
--- a/spaces/bigcode/santa-explains-code/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Santa Explains Code
-emoji: 🎅
-colorFrom: red
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.0.24
-app_file: app.py
-pinned: false
-license: apache-2.0
-duplicated_from: codeparrot/code-explainer
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/bigscience-data/process-pipeline-visualizer/README.md b/spaces/bigscience-data/process-pipeline-visualizer/README.md
deleted file mode 100644
index 0e3fa2fa6b93598aec08342d14bd885059797f72..0000000000000000000000000000000000000000
--- a/spaces/bigscience-data/process-pipeline-visualizer/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Process Pipeline Visualizer
-emoji: 👁
-colorFrom: pink
-colorTo: indigo
-sdk: streamlit
-sdk_version: 1.10.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/bioriAsaeru/text-to-voice/Get Crysis 3 Dx10 Patch Rar and Boost Your Gaming Performance.md b/spaces/bioriAsaeru/text-to-voice/Get Crysis 3 Dx10 Patch Rar and Boost Your Gaming Performance.md
deleted file mode 100644
index 5cba0cf570d0449c1a961ba2ac045d696c8d9729..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Get Crysis 3 Dx10 Patch Rar and Boost Your Gaming Performance.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Earth, 2019. A team of US scientists makes a frightening discovery on an island in the South China Sea, and all contact with them is lost when the North Korean government quickly seals off the area. The United States responds by dispatching an elite team of nanotech-suited Delta Force operatives, and tensions quickly rise to boiling point. But when a massive alien ship reveals itself in the middle of the island, generating an immense force sphere that freezes a vast portion of the area, the two warring factions must unite, fighting epic battles through tropical jungle, flash-frozen wastes, and finally in the heart of the alien ship itself for the ultimate zero-G showdown.
-THIS DOES NOT WORK WITH ANY OTHER VERSION OF CRYSIS THAN 1.1!! You heard right, this mini-mod unlocks all DX10 graphical features even if you have DX9 hardware (ATI 1950, NVIDIA 7900...) and a DX9 OS (Windows XP...). All you have to do is SWITCH EVERY OPTION IN THE ADVANCED GRAPHICS MENU TO HIGH! That's all!
-
-
\ No newline at end of file
diff --git a/spaces/birdortyedi/cifr-pytorch/layers/blocks.py b/spaces/birdortyedi/cifr-pytorch/layers/blocks.py
deleted file mode 100644
index d22f56799f533e50db21ab12c4d83adc966161ce..0000000000000000000000000000000000000000
--- a/spaces/birdortyedi/cifr-pytorch/layers/blocks.py
+++ /dev/null
@@ -1,93 +0,0 @@
-from torch import nn
-
-from layers.normalization import AdaIN
-
-
-class DestyleResBlock(nn.Module):
- def __init__(self, channels_out, kernel_size, channels_in=None, stride=1, dilation=1, padding=1, use_dropout=False):
- super(DestyleResBlock, self).__init__()
-
- # uses 1x1 convolutions for downsampling
- if not channels_in or channels_in == channels_out:
- channels_in = channels_out
- self.projection = None
- else:
- self.projection = nn.Conv2d(channels_in, channels_out, kernel_size=1, stride=stride, dilation=1)
- self.use_dropout = use_dropout
-
- self.conv1 = nn.Conv2d(channels_in, channels_out, kernel_size=kernel_size, stride=stride, padding=padding, dilation=dilation)
- self.lrelu1 = nn.LeakyReLU(negative_slope=0.2, inplace=True)
- self.conv2 = nn.Conv2d(channels_out, channels_out, kernel_size=kernel_size, stride=1, padding=padding, dilation=dilation)
- self.adain = AdaIN()
- if self.use_dropout:
- self.dropout = nn.Dropout()
- self.lrelu2 = nn.LeakyReLU(negative_slope=0.2, inplace=True)
-
- def forward(self, x, feat):
- residual = x
- out = self.conv1(x)
- out = self.lrelu1(out)
- out = self.conv2(out)
- _, _, h, w = out.size()
- out = self.adain(out, feat)
- if self.use_dropout:
- out = self.dropout(out)
- if self.projection:
- residual = self.projection(x)
- out = out + residual
- out = self.lrelu2(out)
- return out
-
-
-class ResBlock(nn.Module):
- def __init__(self, channels_out, kernel_size, channels_in=None, stride=1, dilation=1, padding=1, use_dropout=False):
- super(ResBlock, self).__init__()
-
- # uses 1x1 convolutions for downsampling
- if not channels_in or channels_in == channels_out:
- channels_in = channels_out
- self.projection = None
- else:
- self.projection = nn.Conv2d(channels_in, channels_out, kernel_size=1, stride=stride, dilation=1)
- self.use_dropout = use_dropout
-
- self.conv1 = nn.Conv2d(channels_in, channels_out, kernel_size=kernel_size, stride=stride, padding=padding, dilation=dilation)
- self.lrelu1 = nn.LeakyReLU(negative_slope=0.2, inplace=True)
- self.conv2 = nn.Conv2d(channels_out, channels_out, kernel_size=kernel_size, stride=1, padding=padding, dilation=dilation)
- self.n2 = nn.BatchNorm2d(channels_out)
- if self.use_dropout:
- self.dropout = nn.Dropout()
- self.lrelu2 = nn.LeakyReLU(negative_slope=0.2, inplace=True)
-
- def forward(self, x):
- residual = x
- out = self.conv1(x)
- out = self.lrelu1(out)
- out = self.conv2(out)
- # out = self.n2(out)
- if self.use_dropout:
- out = self.dropout(out)
- if self.projection:
- residual = self.projection(x)
- out = out + residual
- out = self.lrelu2(out)
- return out
-
-
-class Destyler(nn.Module):
- def __init__(self, in_features, num_features):
- super(Destyler, self).__init__()
- self.fc1 = nn.Linear(in_features, num_features)
- self.fc2 = nn.Linear(num_features, num_features)
- self.fc3 = nn.Linear(num_features, num_features)
- self.fc4 = nn.Linear(num_features, num_features)
- self.fc5 = nn.Linear(num_features, num_features)
-
- def forward(self, x):
- x = self.fc1(x)
- x = self.fc2(x)
- x = self.fc3(x)
- x = self.fc4(x)
- x = self.fc5(x)
- return x
-
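
ResBlock above only projects the residual through a 1x1 convolution when the channel count or stride changes, and DestyleResBlock additionally modulates its second convolution's output with AdaIN using an external style feature. A short shape-check sketch for the plain ResBlock follows; it assumes the module above is importable as `layers.blocks`.

```python
# Minimal shape-check sketch for ResBlock, assuming the module above is importable as layers.blocks.
import torch
from layers.blocks import ResBlock

# Channel count changes (32 -> 64) and stride is 2, so the 1x1 projection path is exercised.
block = ResBlock(channels_out=64, kernel_size=3, channels_in=32, stride=2, padding=1)

x = torch.randn(2, 32, 64, 64)
y = block(x)
print(y.shape)  # expected: torch.Size([2, 64, 32, 32])
```
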
diff --git a/spaces/birsardar/stable-diffusion-mat-outpainting-primer/datasets/dataset_512.py b/spaces/birsardar/stable-diffusion-mat-outpainting-primer/datasets/dataset_512.py
deleted file mode 100644
index 27fc1ce862f1b00e427670d393d70bec56d063da..0000000000000000000000000000000000000000
--- a/spaces/birsardar/stable-diffusion-mat-outpainting-primer/datasets/dataset_512.py
+++ /dev/null
@@ -1,286 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-import cv2
-import os
-import numpy as np
-import zipfile
-import PIL.Image
-import json
-import torch
-import dnnlib
-import random
-
-try:
- import pyspng
-except ImportError:
- pyspng = None
-
-from datasets.mask_generator_512 import RandomMask
-
-#----------------------------------------------------------------------------
-
-class Dataset(torch.utils.data.Dataset):
- def __init__(self,
- name, # Name of the dataset.
- raw_shape, # Shape of the raw image data (NCHW).
- max_size = None, # Artificially limit the size of the dataset. None = no limit. Applied before xflip.
- use_labels = False, # Enable conditioning labels? False = label dimension is zero.
- xflip = False, # Artificially double the size of the dataset via x-flips. Applied after max_size.
- random_seed = 0, # Random seed to use when applying max_size.
- ):
- self._name = name
- self._raw_shape = list(raw_shape)
- self._use_labels = use_labels
- self._raw_labels = None
- self._label_shape = None
-
- # Apply max_size.
- self._raw_idx = np.arange(self._raw_shape[0], dtype=np.int64)
- if (max_size is not None) and (self._raw_idx.size > max_size):
- np.random.RandomState(random_seed).shuffle(self._raw_idx)
- self._raw_idx = np.sort(self._raw_idx[:max_size])
-
- # Apply xflip.
- self._xflip = np.zeros(self._raw_idx.size, dtype=np.uint8)
- if xflip:
- self._raw_idx = np.tile(self._raw_idx, 2)
- self._xflip = np.concatenate([self._xflip, np.ones_like(self._xflip)])
-
- def _get_raw_labels(self):
- if self._raw_labels is None:
- self._raw_labels = self._load_raw_labels() if self._use_labels else None
- if self._raw_labels is None:
- self._raw_labels = np.zeros([self._raw_shape[0], 0], dtype=np.float32)
- assert isinstance(self._raw_labels, np.ndarray)
- assert self._raw_labels.shape[0] == self._raw_shape[0]
- assert self._raw_labels.dtype in [np.float32, np.int64]
- if self._raw_labels.dtype == np.int64:
- assert self._raw_labels.ndim == 1
- assert np.all(self._raw_labels >= 0)
- return self._raw_labels
-
- def close(self): # to be overridden by subclass
- pass
-
- def _load_raw_image(self, raw_idx): # to be overridden by subclass
- raise NotImplementedError
-
- def _load_raw_labels(self): # to be overridden by subclass
- raise NotImplementedError
-
- def __getstate__(self):
- return dict(self.__dict__, _raw_labels=None)
-
- def __del__(self):
- try:
- self.close()
- except:
- pass
-
- def __len__(self):
- return self._raw_idx.size
-
- def __getitem__(self, idx):
- image = self._load_raw_image(self._raw_idx[idx])
- assert isinstance(image, np.ndarray)
- assert list(image.shape) == self.image_shape
- assert image.dtype == np.uint8
- if self._xflip[idx]:
- assert image.ndim == 3 # CHW
- image = image[:, :, ::-1]
- return image.copy(), self.get_label(idx)
-
- def get_label(self, idx):
- label = self._get_raw_labels()[self._raw_idx[idx]]
- if label.dtype == np.int64:
- onehot = np.zeros(self.label_shape, dtype=np.float32)
- onehot[label] = 1
- label = onehot
- return label.copy()
-
- def get_details(self, idx):
- d = dnnlib.EasyDict()
- d.raw_idx = int(self._raw_idx[idx])
- d.xflip = (int(self._xflip[idx]) != 0)
- d.raw_label = self._get_raw_labels()[d.raw_idx].copy()
- return d
-
- @property
- def name(self):
- return self._name
-
- @property
- def image_shape(self):
- return list(self._raw_shape[1:])
-
- @property
- def num_channels(self):
- assert len(self.image_shape) == 3 # CHW
- return self.image_shape[0]
-
- @property
- def resolution(self):
- assert len(self.image_shape) == 3 # CHW
- assert self.image_shape[1] == self.image_shape[2]
- return self.image_shape[1]
-
- @property
- def label_shape(self):
- if self._label_shape is None:
- raw_labels = self._get_raw_labels()
- if raw_labels.dtype == np.int64:
- self._label_shape = [int(np.max(raw_labels)) + 1]
- else:
- self._label_shape = raw_labels.shape[1:]
- return list(self._label_shape)
-
- @property
- def label_dim(self):
- assert len(self.label_shape) == 1
- return self.label_shape[0]
-
- @property
- def has_labels(self):
- return any(x != 0 for x in self.label_shape)
-
- @property
- def has_onehot_labels(self):
- return self._get_raw_labels().dtype == np.int64
-
-
-#----------------------------------------------------------------------------
-
-
-class ImageFolderMaskDataset(Dataset):
- def __init__(self,
- path, # Path to directory or zip.
- resolution = None, # Ensure specific resolution, None = highest available.
- hole_range=[0,1],
- **super_kwargs, # Additional arguments for the Dataset base class.
- ):
- self._path = path
- self._zipfile = None
- self._hole_range = hole_range
-
- if os.path.isdir(self._path):
- self._type = 'dir'
- self._all_fnames = {os.path.relpath(os.path.join(root, fname), start=self._path) for root, _dirs, files in os.walk(self._path) for fname in files}
- elif self._file_ext(self._path) == '.zip':
- self._type = 'zip'
- self._all_fnames = set(self._get_zipfile().namelist())
- else:
- raise IOError('Path must point to a directory or zip')
-
- PIL.Image.init()
- self._image_fnames = sorted(fname for fname in self._all_fnames if self._file_ext(fname) in PIL.Image.EXTENSION)
- if len(self._image_fnames) == 0:
- raise IOError('No image files found in the specified path')
-
- name = os.path.splitext(os.path.basename(self._path))[0]
- raw_shape = [len(self._image_fnames)] + list(self._load_raw_image(0).shape)
- if resolution is not None and (raw_shape[2] != resolution or raw_shape[3] != resolution):
- raise IOError('Image files do not match the specified resolution')
- super().__init__(name=name, raw_shape=raw_shape, **super_kwargs)
-
- @staticmethod
- def _file_ext(fname):
- return os.path.splitext(fname)[1].lower()
-
- def _get_zipfile(self):
- assert self._type == 'zip'
- if self._zipfile is None:
- self._zipfile = zipfile.ZipFile(self._path)
- return self._zipfile
-
- def _open_file(self, fname):
- if self._type == 'dir':
- return open(os.path.join(self._path, fname), 'rb')
- if self._type == 'zip':
- return self._get_zipfile().open(fname, 'r')
- return None
-
- def close(self):
- try:
- if self._zipfile is not None:
- self._zipfile.close()
- finally:
- self._zipfile = None
-
- def __getstate__(self):
- return dict(super().__getstate__(), _zipfile=None)
-
- def _load_raw_image(self, raw_idx):
- fname = self._image_fnames[raw_idx]
- with self._open_file(fname) as f:
- if pyspng is not None and self._file_ext(fname) == '.png':
- image = pyspng.load(f.read())
- else:
- image = np.array(PIL.Image.open(f))
- if image.ndim == 2:
- image = image[:, :, np.newaxis] # HW => HWC
-
- # for grayscale image
- if image.shape[2] == 1:
- image = np.repeat(image, 3, axis=2)
-
- # restricted to 512x512
- res = 512
- H, W, C = image.shape
- if H < res or W < res:
- top = 0
- bottom = max(0, res - H)
- left = 0
- right = max(0, res - W)
- image = cv2.copyMakeBorder(image, top, bottom, left, right, cv2.BORDER_REFLECT)
- H, W, C = image.shape
- h = random.randint(0, H - res)
- w = random.randint(0, W - res)
- image = image[h:h+res, w:w+res, :]
-
- image = np.ascontiguousarray(image.transpose(2, 0, 1)) # HWC => CHW
-
- return image
-
- def _load_raw_labels(self):
- fname = 'labels.json'
- if fname not in self._all_fnames:
- return None
- with self._open_file(fname) as f:
- labels = json.load(f)['labels']
- if labels is None:
- return None
- labels = dict(labels)
- labels = [labels[fname.replace('\\', '/')] for fname in self._image_fnames]
- labels = np.array(labels)
- labels = labels.astype({1: np.int64, 2: np.float32}[labels.ndim])
- return labels
-
- def __getitem__(self, idx):
- image = self._load_raw_image(self._raw_idx[idx])
-
- assert isinstance(image, np.ndarray)
- assert list(image.shape) == self.image_shape
- assert image.dtype == np.uint8
- if self._xflip[idx]:
- assert image.ndim == 3 # CHW
- image = image[:, :, ::-1]
- mask = RandomMask(image.shape[-1], hole_range=self._hole_range) # hole as 0, reserved as 1
- return image.copy(), mask, self.get_label(idx)
-
-
-if __name__ == '__main__':
- res = 512
- dpath = '/data/liwenbo/datasets/Places365/standard/val_large'
- D = ImageFolderMaskDataset(path=dpath)
- print(D.__len__())
- for i in range(D.__len__()):
- print(i)
- a, b, c = D.__getitem__(i)
- if a.shape != (3, 512, 512):
- print(i, a.shape)
diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/tests/modules/test_seanet.py b/spaces/brainblow/AudioCreator_Music-Audio_Generation/tests/modules/test_seanet.py
deleted file mode 100644
index e5c51b340a2f94fb2828b14daf83d5fad645073d..0000000000000000000000000000000000000000
--- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/tests/modules/test_seanet.py
+++ /dev/null
@@ -1,115 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from itertools import product
-
-import pytest
-import torch
-
-from audiocraft.modules.seanet import SEANetEncoder, SEANetDecoder, SEANetResnetBlock
-from audiocraft.modules import StreamableConv1d, StreamableConvTranspose1d
-
-
-class TestSEANetModel:
-
- def test_base(self):
- encoder = SEANetEncoder()
- decoder = SEANetDecoder()
-
- x = torch.randn(1, 1, 24000)
- z = encoder(x)
- assert list(z.shape) == [1, 128, 75], z.shape
- y = decoder(z)
- assert y.shape == x.shape, (x.shape, y.shape)
-
- def test_causal(self):
- encoder = SEANetEncoder(causal=True)
- decoder = SEANetDecoder(causal=True)
- x = torch.randn(1, 1, 24000)
-
- z = encoder(x)
- assert list(z.shape) == [1, 128, 75], z.shape
- y = decoder(z)
- assert y.shape == x.shape, (x.shape, y.shape)
-
- def test_conv_skip_connection(self):
- encoder = SEANetEncoder(true_skip=False)
- decoder = SEANetDecoder(true_skip=False)
-
- x = torch.randn(1, 1, 24000)
- z = encoder(x)
- assert list(z.shape) == [1, 128, 75], z.shape
- y = decoder(z)
- assert y.shape == x.shape, (x.shape, y.shape)
-
- def test_seanet_encoder_decoder_final_act(self):
- encoder = SEANetEncoder(true_skip=False)
- decoder = SEANetDecoder(true_skip=False, final_activation='Tanh')
-
- x = torch.randn(1, 1, 24000)
- z = encoder(x)
- assert list(z.shape) == [1, 128, 75], z.shape
- y = decoder(z)
- assert y.shape == x.shape, (x.shape, y.shape)
-
- def _check_encoder_blocks_norm(self, encoder: SEANetEncoder, n_disable_blocks: int, norm: str):
- n_blocks = 0
- for layer in encoder.model:
- if isinstance(layer, StreamableConv1d):
- n_blocks += 1
- assert layer.conv.norm_type == ('none' if n_blocks <= n_disable_blocks else norm)
- elif isinstance(layer, SEANetResnetBlock):
- for resnet_layer in layer.block:
- if isinstance(resnet_layer, StreamableConv1d):
- # here we add + 1 to n_blocks as we increment n_blocks just after the block
- assert resnet_layer.conv.norm_type == ('none' if (n_blocks + 1) <= n_disable_blocks else norm)
-
- def test_encoder_disable_norm(self):
- n_residuals = [0, 1, 3]
- disable_blocks = [0, 1, 2, 3, 4, 5, 6]
- norms = ['weight_norm', 'none']
- for n_res, disable_blocks, norm in product(n_residuals, disable_blocks, norms):
- encoder = SEANetEncoder(n_residual_layers=n_res, norm=norm,
- disable_norm_outer_blocks=disable_blocks)
- self._check_encoder_blocks_norm(encoder, disable_blocks, norm)
-
- def _check_decoder_blocks_norm(self, decoder: SEANetDecoder, n_disable_blocks: int, norm: str):
- n_blocks = 0
- for layer in decoder.model:
- if isinstance(layer, StreamableConv1d):
- n_blocks += 1
- assert layer.conv.norm_type == ('none' if (decoder.n_blocks - n_blocks) < n_disable_blocks else norm)
- elif isinstance(layer, StreamableConvTranspose1d):
- n_blocks += 1
- assert layer.convtr.norm_type == ('none' if (decoder.n_blocks - n_blocks) < n_disable_blocks else norm)
- elif isinstance(layer, SEANetResnetBlock):
- for resnet_layer in layer.block:
- if isinstance(resnet_layer, StreamableConv1d):
- assert resnet_layer.conv.norm_type == \
- ('none' if (decoder.n_blocks - n_blocks) < n_disable_blocks else norm)
-
- def test_decoder_disable_norm(self):
- n_residuals = [0, 1, 3]
- disable_blocks = [0, 1, 2, 3, 4, 5, 6]
- norms = ['weight_norm', 'none']
- for n_res, disable_blocks, norm in product(n_residuals, disable_blocks, norms):
- decoder = SEANetDecoder(n_residual_layers=n_res, norm=norm,
- disable_norm_outer_blocks=disable_blocks)
- self._check_decoder_blocks_norm(decoder, disable_blocks, norm)
-
- def test_disable_norm_raises_exception(self):
- # Invalid disable_norm_outer_blocks values raise exceptions
- with pytest.raises(AssertionError):
- SEANetEncoder(disable_norm_outer_blocks=-1)
-
- with pytest.raises(AssertionError):
- SEANetEncoder(ratios=[1, 1, 2, 2], disable_norm_outer_blocks=7)
-
- with pytest.raises(AssertionError):
- SEANetDecoder(disable_norm_outer_blocks=-1)
-
- with pytest.raises(AssertionError):
- SEANetDecoder(ratios=[1, 1, 2, 2], disable_norm_outer_blocks=7)
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/doc/RELEASE_2020_04.md b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/doc/RELEASE_2020_04.md
deleted file mode 100644
index 2fab6ae78e887c630ad94e71aa6e946115c61593..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/doc/RELEASE_2020_04.md
+++ /dev/null
@@ -1,6 +0,0 @@
-# DensePose Confidence Estimation and Model Zoo Improvements
-
-* [DensePose models with confidence estimation](doc/DENSEPOSE_IUV.md#ModelZooConfidence)
-* [Panoptic FPN and DeepLabV3 head implementation](doc/DENSEPOSE_IUV.md#ModelZooDeepLabV3)
-* Test time augmentations for DensePose
-* New evaluation metric (GPSm) that yields more reliable scores
diff --git a/spaces/camel-ai/camel-agents/README.md b/spaces/camel-ai/camel-agents/README.md
deleted file mode 100644
index dacaca7a3c4c5168021412d333bf4e659c55f686..0000000000000000000000000000000000000000
--- a/spaces/camel-ai/camel-agents/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Camel Agents
-emoji: ⚡
-colorFrom: blue
-colorTo: purple
-sdk: gradio
-sdk_version: 3.27.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/caslabs/midi-autocompletion/musicautobot/__init__.py b/spaces/caslabs/midi-autocompletion/musicautobot/__init__.py
deleted file mode 100644
index 0c86b2a866cddca4d5fdfe123d31ddc724907695..0000000000000000000000000000000000000000
--- a/spaces/caslabs/midi-autocompletion/musicautobot/__init__.py
+++ /dev/null
@@ -1,3 +0,0 @@
-from .utils.setup_musescore import setup_musescore
-
-setup_musescore()
\ No newline at end of file
diff --git a/spaces/cc38300/ConstructionGPT-SL/search.py b/spaces/cc38300/ConstructionGPT-SL/search.py
deleted file mode 100644
index 05fadb622f08d12fd2d5c55e3c31908efc1f1778..0000000000000000000000000000000000000000
--- a/spaces/cc38300/ConstructionGPT-SL/search.py
+++ /dev/null
@@ -1,39 +0,0 @@
-import streamlit as st
-import openai
-
-from database import get_redis_connection, get_redis_results
-from config import INDEX_NAME, COMPLETIONS_MODEL
-
-# initialise Redis connection
-
-client = get_redis_connection()
-
-### SEARCH APP
-
-st.set_page_config(
- page_title="Streamlit Search - Demo",
- page_icon=":robot:"
-)
-
-st.title('Formula 1 Search')
-st.subheader("Search for any Formula 1 rule questions you have")
-
-prompt = st.text_input("Enter your search here","", key="input")
-
-if st.button('Submit', key='generationSubmit'):
- result_df = get_redis_results(client,prompt,INDEX_NAME)
-
- # Build a prompt to provide the original query, the result and ask to summarise for the user
- summary_prompt = '''Summarise this result in a bulleted list to answer the search query a customer has sent.
- Search query: SEARCH_QUERY_HERE
- Search result: SEARCH_RESULT_HERE
- Summary:
- '''
- summary_prepped = summary_prompt.replace('SEARCH_QUERY_HERE',prompt).replace('SEARCH_RESULT_HERE',result_df['result'][0])
- summary = openai.Completion.create(engine=COMPLETIONS_MODEL,prompt=summary_prepped,max_tokens=500)
-
- # Response provided by GPT-3
- st.write(summary['choices'][0]['text'])
-
- # Option to display raw table instead of summary from GPT-3
- #st.table(result_df)
\ No newline at end of file
diff --git a/spaces/cccc-c/bingo/src/components/external-link.tsx b/spaces/cccc-c/bingo/src/components/external-link.tsx
deleted file mode 100644
index 011265f364d5a64a770f4c7e9c65c5ade21d623a..0000000000000000000000000000000000000000
--- a/spaces/cccc-c/bingo/src/components/external-link.tsx
+++ /dev/null
@@ -1,30 +0,0 @@
-export function ExternalLink({
- href,
- children
-}: {
- href: string
- children: React.ReactNode
-}) {
- return (
- // minimal reconstruction: the original anchor markup (class names, inline icon) was stripped from this excerpt
- <a href={href} target="_blank" rel="noreferrer">
- {children}
- </a>
- )
-}
diff --git a/spaces/chendl/compositional_test/transformers/src/transformers/generation_utils.py b/spaces/chendl/compositional_test/transformers/src/transformers/generation_utils.py
deleted file mode 100644
index 31cff9749463d941fded3390ef48a998bcdc3158..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/src/transformers/generation_utils.py
+++ /dev/null
@@ -1,28 +0,0 @@
-# coding=utf-8
-# Copyright 2020 The Google AI Language Team Authors, Facebook AI Research authors and The HuggingFace Inc. team.
-# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import warnings
-
-from .generation import GenerationMixin
-
-
-class GenerationMixin(GenerationMixin):
- # warning at import time
- warnings.warn(
- "Importing `GenerationMixin` from `src/transformers/generation_utils.py` is deprecated and will "
- "be removed in Transformers v5. Import as `from transformers import GenerationMixin` instead.",
- FutureWarning,
- )
diff --git a/spaces/chriscelaya/streaming_chat_gpt-3.5-turbo_langchain/README.md b/spaces/chriscelaya/streaming_chat_gpt-3.5-turbo_langchain/README.md
deleted file mode 100644
index f58491d77eb5841f8b7910a887c1de379c3542e7..0000000000000000000000000000000000000000
--- a/spaces/chriscelaya/streaming_chat_gpt-3.5-turbo_langchain/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Streaming Chat With Gpt-3.5-turbo using Langchain
-emoji: 🚀
-colorFrom: white
-colorTo: purple
-sdk: gradio
-sdk_version: 3.19.1
-app_file: app.py
-pinned: false
-license: mit
-duplicated_from: lukestanley/streaming_chat_with_gpt-3.5-turbo_using_langchain_sorta
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/altair/utils/display.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/altair/utils/display.py
deleted file mode 100644
index 730ca65347ad348964b7aa8c78aa16517c63bd4a..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/altair/utils/display.py
+++ /dev/null
@@ -1,186 +0,0 @@
-import json
-import pkgutil
-import textwrap
-from typing import Callable, Dict, Optional
-import uuid
-
-from .plugin_registry import PluginRegistry
-from .mimebundle import spec_to_mimebundle
-from .schemapi import validate_jsonschema
-
-
-# ==============================================================================
-# Renderer registry
-# ==============================================================================
-MimeBundleType = Dict[str, object]
-RendererType = Callable[..., MimeBundleType]
-
-
-class RendererRegistry(PluginRegistry[RendererType]):
- entrypoint_err_messages = {
- "notebook": textwrap.dedent(
- """
- To use the 'notebook' renderer, you must install the vega package
- and the associated Jupyter extension.
- See https://altair-viz.github.io/getting_started/installation.html
- for more information.
- """
- ),
- "altair_viewer": textwrap.dedent(
- """
- To use the 'altair_viewer' renderer, you must install the altair_viewer
- package; see http://github.com/altair-viz/altair_viewer/
- for more information.
- """
- ),
- }
-
- def set_embed_options(
- self,
- defaultStyle=None,
- renderer=None,
- width=None,
- height=None,
- padding=None,
- scaleFactor=None,
- actions=None,
- **kwargs,
- ):
- """Set options for embeddings of Vega & Vega-Lite charts.
-
- Options are fully documented at https://github.com/vega/vega-embed.
- Similar to the `enable()` method, this can be used as either
- a persistent global switch, or as a temporary local setting using
- a context manager (i.e. a `with` statement).
-
- Parameters
- ----------
- defaultStyle : bool or string
- Specify a default stylesheet for embed actions.
- renderer : string
- The renderer to use for the view. One of "canvas" (default) or "svg"
- width : integer
- The view width in pixels
- height : integer
- The view height in pixels
- padding : integer
- The view padding in pixels
- scaleFactor : number
- The number by which to multiply the width and height (default 1)
- of an exported PNG or SVG image.
- actions : bool or dict
- Determines if action links ("Export as PNG/SVG", "View Source",
- "View Vega" (only for Vega-Lite), "Open in Vega Editor") are
- included with the embedded view. If the value is true, all action
- links will be shown and none if the value is false. This property
- can take a key-value mapping object that maps keys (export, source,
- compiled, editor) to boolean values for determining if
- each action link should be shown.
- **kwargs :
- Additional options are passed directly to embed options.
- """
- options = {
- "defaultStyle": defaultStyle,
- "renderer": renderer,
- "width": width,
- "height": height,
- "padding": padding,
- "scaleFactor": scaleFactor,
- "actions": actions,
- }
- kwargs.update({key: val for key, val in options.items() if val is not None})
- return self.enable(None, embed_options=kwargs)
-
-
-# ==============================================================================
-# VegaLite v1/v2 renderer logic
-# ==============================================================================
-
-
-class Displayable:
- """A base display class for VegaLite v1/v2.
-
- This class takes a VegaLite v1/v2 spec and does the following:
-
- 1. Optionally validates the spec against a schema.
- 2. Uses the RendererPlugin to grab a renderer and call it when the
- IPython/Jupyter display method (_repr_mimebundle_) is called.
-
- The spec passed to this class must be fully schema compliant and already
- have the data portion of the spec fully processed and ready to serialize.
- In practice, this means, the data portion of the spec should have been passed
- through appropriate data model transformers.
- """
-
- renderers: Optional[RendererRegistry] = None
- schema_path = ("altair", "")
-
- def __init__(self, spec, validate=False):
- # type: (dict, bool) -> None
- self.spec = spec
- self.validate = validate
- self._validate()
-
- def _validate(self):
- # type: () -> None
- """Validate the spec against the schema."""
- data = pkgutil.get_data(*self.schema_path)
- assert data is not None
- schema_dict = json.loads(data.decode("utf-8"))
- validate_jsonschema(
- self.spec,
- schema_dict,
- )
-
- def _repr_mimebundle_(self, include=None, exclude=None):
- """Return a MIME bundle for display in Jupyter frontends."""
- if self.renderers is not None:
- return self.renderers.get()(self.spec)
- else:
- return {}
-
-
-def default_renderer_base(spec, mime_type, str_repr, **options):
- """A default renderer for Vega or VegaLite that works for modern frontends.
-
- This renderer works with modern frontends (JupyterLab, nteract) that know
- how to render the custom VegaLite MIME type listed above.
- """
- assert isinstance(spec, dict)
- bundle = {}
- metadata = {}
-
- bundle[mime_type] = spec
- bundle["text/plain"] = str_repr
- if options:
- metadata[mime_type] = options
- return bundle, metadata
-
-
-def json_renderer_base(spec, str_repr, **options):
- """A renderer that returns a MIME type of application/json.
-
- In JupyterLab/nteract this is rendered as a nice JSON tree.
- """
- return default_renderer_base(
- spec, mime_type="application/json", str_repr=str_repr, **options
- )
-
-
-class HTMLRenderer:
- """Object to render charts as HTML, with a unique output div each time"""
-
- def __init__(self, output_div="altair-viz-{}", **kwargs):
- self._output_div = output_div
- self.kwargs = kwargs
-
- @property
- def output_div(self):
- return self._output_div.format(uuid.uuid4().hex)
-
- def __call__(self, spec, **metadata):
- kwargs = self.kwargs.copy()
- kwargs.update(metadata)
- return spec_to_mimebundle(
- spec, format="html", output_div=self.output_div, **kwargs
- )
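
`set_embed_options` above simply forwards vega-embed options through `enable(None, embed_options=...)`, so in user code it is reached via Altair's renderer registry. A small usage sketch follows, assuming a standard Altair install that exposes this registry as `alt.renderers`; the sample data is illustrative.

```python
# Minimal sketch of driving the renderer registry's embed options from user code.
# Assumes a standard Altair install where this registry is exposed as alt.renderers.
import altair as alt
import pandas as pd

# Persistently hide the "..." action menu and use the SVG renderer for embedded charts.
alt.renderers.set_embed_options(actions=False, renderer="svg")

source = pd.DataFrame({"x": [1, 2, 3], "y": [4, 1, 6]})
chart = alt.Chart(source).mark_line().encode(x="x", y="y")
chart  # when rendered in a Jupyter frontend, the embed options above are applied
```
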
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fastapi/concurrency.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fastapi/concurrency.py
deleted file mode 100644
index 31b878d5df8a5d1c9687a8fdbf2f377e844d7f48..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fastapi/concurrency.py
+++ /dev/null
@@ -1,40 +0,0 @@
-from contextlib import AsyncExitStack as AsyncExitStack # noqa
-from contextlib import asynccontextmanager as asynccontextmanager
-from typing import AsyncGenerator, ContextManager, TypeVar
-
-import anyio
-from anyio import CapacityLimiter
-from starlette.concurrency import iterate_in_threadpool as iterate_in_threadpool # noqa
-from starlette.concurrency import run_in_threadpool as run_in_threadpool # noqa
-from starlette.concurrency import ( # noqa
- run_until_first_complete as run_until_first_complete,
-)
-
-_T = TypeVar("_T")
-
-
-@asynccontextmanager
-async def contextmanager_in_threadpool(
- cm: ContextManager[_T],
-) -> AsyncGenerator[_T, None]:
- # blocking __exit__ from running waiting on a free thread
- # can create race conditions/deadlocks if the context manager itself
- # has its own internal pool (e.g. a database connection pool)
- # to avoid this we let __exit__ run without a capacity limit
- # since we're creating a new limiter for each call, any non-zero limit
- # works (1 is arbitrary)
- exit_limiter = CapacityLimiter(1)
- try:
- yield await run_in_threadpool(cm.__enter__)
- except Exception as e:
- ok = bool(
- await anyio.to_thread.run_sync(
- cm.__exit__, type(e), e, None, limiter=exit_limiter
- )
- )
- if not ok:
- raise e
- else:
- await anyio.to_thread.run_sync(
- cm.__exit__, None, None, None, limiter=exit_limiter
- )
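
`contextmanager_in_threadpool` lets async code enter and exit a blocking (synchronous) context manager without stalling the event loop, with `__exit__` given its own single-slot limiter so cleanup cannot be starved. A small usage sketch follows; the file name and the anyio-based runner are illustrative, not part of the module above.

```python
# Minimal usage sketch: wrapping a blocking context manager from async code.
# The config.txt file is a throwaway example; contextmanager_in_threadpool is the helper above.
import anyio
from fastapi.concurrency import contextmanager_in_threadpool

async def read_config(path: str) -> str:
    # open() blocks, so both __enter__ and __exit__ are run in the threadpool.
    async with contextmanager_in_threadpool(open(path, "r")) as f:
        # f.read() also blocks; offload it explicitly as well.
        return await anyio.to_thread.run_sync(f.read)

if __name__ == "__main__":
    with open("config.txt", "w") as f:  # create a throwaway file so the sketch runs end to end
        f.write("debug = true\n")
    print(anyio.run(read_config, "config.txt"))
```
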
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/misc/etree.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/misc/etree.py
deleted file mode 100644
index 9d4a65c36014c8381306968c69432f50f0c0b886..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/misc/etree.py
+++ /dev/null
@@ -1,478 +0,0 @@
-"""Shim module exporting the same ElementTree API for lxml and
-xml.etree backends.
-
-When lxml is installed, it is automatically preferred over the built-in
-xml.etree module.
-On Python 2.7, the cElementTree module is preferred over the pure-python
-ElementTree module.
-
-Besides exporting a unified interface, this also defines extra functions
-or subclasses built-in ElementTree classes to add features that are
-only availble in lxml, like OrderedDict for attributes, pretty_print and
-iterwalk.
-"""
-from fontTools.misc.textTools import tostr
-
-
-XML_DECLARATION = """"""
-
-__all__ = [
- # public symbols
- "Comment",
- "dump",
- "Element",
- "ElementTree",
- "fromstring",
- "fromstringlist",
- "iselement",
- "iterparse",
- "parse",
- "ParseError",
- "PI",
- "ProcessingInstruction",
- "QName",
- "SubElement",
- "tostring",
- "tostringlist",
- "TreeBuilder",
- "XML",
- "XMLParser",
- "register_namespace",
-]
-
-try:
- from lxml.etree import *
-
- _have_lxml = True
-except ImportError:
- try:
- from xml.etree.cElementTree import *
-
- # the cElementTree version of XML function doesn't support
- # the optional 'parser' keyword argument
- from xml.etree.ElementTree import XML
- except ImportError: # pragma: no cover
- from xml.etree.ElementTree import *
- _have_lxml = False
-
- import sys
-
- # dict is always ordered in python >= 3.6 and on pypy
- PY36 = sys.version_info >= (3, 6)
- try:
- import __pypy__
- except ImportError:
- __pypy__ = None
- _dict_is_ordered = bool(PY36 or __pypy__)
- del PY36, __pypy__
-
- if _dict_is_ordered:
- _Attrib = dict
- else:
- from collections import OrderedDict as _Attrib
-
- if isinstance(Element, type):
- _Element = Element
- else:
- # in py27, cElementTree.Element cannot be subclassed, so
- # we need to import the pure-python class
- from xml.etree.ElementTree import Element as _Element
-
- class Element(_Element):
- """Element subclass that keeps the order of attributes."""
-
- def __init__(self, tag, attrib=_Attrib(), **extra):
- super(Element, self).__init__(tag)
- self.attrib = _Attrib()
- if attrib:
- self.attrib.update(attrib)
- if extra:
- self.attrib.update(extra)
-
- def SubElement(parent, tag, attrib=_Attrib(), **extra):
- """Must override SubElement as well otherwise _elementtree.SubElement
- fails if 'parent' is a subclass of Element object.
- """
- element = parent.__class__(tag, attrib, **extra)
- parent.append(element)
- return element
-
- def _iterwalk(element, events, tag):
- include = tag is None or element.tag == tag
- if include and "start" in events:
- yield ("start", element)
- for e in element:
- for item in _iterwalk(e, events, tag):
- yield item
- if include:
- yield ("end", element)
-
- def iterwalk(element_or_tree, events=("end",), tag=None):
- """A tree walker that generates events from an existing tree as
- if it was parsing XML data with iterparse().
- Drop-in replacement for lxml.etree.iterwalk.
- """
- if iselement(element_or_tree):
- element = element_or_tree
- else:
- element = element_or_tree.getroot()
- if tag == "*":
- tag = None
- for item in _iterwalk(element, events, tag):
- yield item
-
- _ElementTree = ElementTree
-
- class ElementTree(_ElementTree):
- """ElementTree subclass that adds 'pretty_print' and 'doctype'
- arguments to the 'write' method.
- Currently these are only supported for the default XML serialization
- 'method', and not also for "html" or "text", for these are delegated
- to the base class.
- """
-
- def write(
- self,
- file_or_filename,
- encoding=None,
- xml_declaration=False,
- method=None,
- doctype=None,
- pretty_print=False,
- ):
- if method and method != "xml":
- # delegate to super-class
- super(ElementTree, self).write(
- file_or_filename,
- encoding=encoding,
- xml_declaration=xml_declaration,
- method=method,
- )
- return
-
- if encoding is not None and encoding.lower() == "unicode":
- if xml_declaration:
- raise ValueError(
- "Serialisation to unicode must not request an XML declaration"
- )
- write_declaration = False
- encoding = "unicode"
- elif xml_declaration is None:
- # by default, write an XML declaration only for non-standard encodings
- write_declaration = encoding is not None and encoding.upper() not in (
- "ASCII",
- "UTF-8",
- "UTF8",
- "US-ASCII",
- )
- else:
- write_declaration = xml_declaration
-
- if encoding is None:
- encoding = "ASCII"
-
- if pretty_print:
- # NOTE this will modify the tree in-place
- _indent(self._root)
-
- with _get_writer(file_or_filename, encoding) as write:
- if write_declaration:
- write(XML_DECLARATION % encoding.upper())
- if pretty_print:
- write("\n")
- if doctype:
- write(_tounicode(doctype))
- if pretty_print:
- write("\n")
-
- qnames, namespaces = _namespaces(self._root)
- _serialize_xml(write, self._root, qnames, namespaces)
-
- import io
-
- def tostring(
- element,
- encoding=None,
- xml_declaration=None,
- method=None,
- doctype=None,
- pretty_print=False,
- ):
- """Custom 'tostring' function that uses our ElementTree subclass, with
- pretty_print support.
- """
- stream = io.StringIO() if encoding == "unicode" else io.BytesIO()
- ElementTree(element).write(
- stream,
- encoding=encoding,
- xml_declaration=xml_declaration,
- method=method,
- doctype=doctype,
- pretty_print=pretty_print,
- )
- return stream.getvalue()
-
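- # A minimal usage sketch (added for illustration, not present in the original
- # source); the element names and DOCTYPE string below are placeholders. It
- # shows the extra 'doctype' and 'pretty_print' arguments that the standard
- # xml.etree tostring() lacks:
- #
- #     root = Element("root", {"id": "1"})
- #     SubElement(root, "child").text = "value"
- #     tostring(root, encoding="UTF-8", doctype="<!DOCTYPE root>", pretty_print=True)
- #     # -> b'<!DOCTYPE root>\n<root id="1">\n  <child>value</child>\n</root>\n'
-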
- # serialization support
-
- import re
-
- # Valid XML strings can include any Unicode character, excluding control
- # characters, the surrogate blocks, FFFE, and FFFF:
- # Char ::= #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF]
- # Here we reversed the pattern to match only the invalid characters.
- # For the 'narrow' python builds supporting only UCS-2, which represent
- # characters beyond BMP as UTF-16 surrogate pairs, we need to pass through
- # the surrogate block. I haven't found a more elegant solution...
- UCS2 = sys.maxunicode < 0x10FFFF
- if UCS2:
- _invalid_xml_string = re.compile(
- "[\u0000-\u0008\u000B-\u000C\u000E-\u001F\uFFFE-\uFFFF]"
- )
- else:
- _invalid_xml_string = re.compile(
- "[\u0000-\u0008\u000B-\u000C\u000E-\u001F\uD800-\uDFFF\uFFFE-\uFFFF]"
- )
-
- def _tounicode(s):
- """Test if a string is valid user input and decode it to unicode string
- using ASCII encoding if it's a bytes string.
- Reject all bytes/unicode input that contains non-XML characters.
- Reject all bytes input that contains non-ASCII characters.
- """
- try:
- s = tostr(s, encoding="ascii", errors="strict")
- except UnicodeDecodeError:
- raise ValueError(
- "Bytes strings can only contain ASCII characters. "
- "Use unicode strings for non-ASCII characters."
- )
- except AttributeError:
- _raise_serialization_error(s)
- if s and _invalid_xml_string.search(s):
- raise ValueError(
- "All strings must be XML compatible: Unicode or ASCII, "
- "no NULL bytes or control characters"
- )
- return s
-
- import contextlib
-
- @contextlib.contextmanager
- def _get_writer(file_or_filename, encoding):
- # returns a text write method and releases all resources after use
- try:
- write = file_or_filename.write
- except AttributeError:
- # file_or_filename is a file name
- f = open(
- file_or_filename,
- "w",
- encoding="utf-8" if encoding == "unicode" else encoding,
- errors="xmlcharrefreplace",
- )
- with f:
- yield f.write
- else:
- # file_or_filename is a file-like object
- # encoding determines if it is a text or binary writer
- if encoding == "unicode":
- # use a text writer as is
- yield write
- else:
- # wrap a binary writer with TextIOWrapper
- detach_buffer = False
- if isinstance(file_or_filename, io.BufferedIOBase):
- buf = file_or_filename
- elif isinstance(file_or_filename, io.RawIOBase):
- buf = io.BufferedWriter(file_or_filename)
- detach_buffer = True
- else:
- # This is to handle passed objects that aren't in the
- # IOBase hierarchy, but just have a write method
- buf = io.BufferedIOBase()
- buf.writable = lambda: True
- buf.write = write
- try:
- # TextIOWrapper uses these methods to determine
- # if a BOM (for UTF-16, etc.) should be added
- buf.seekable = file_or_filename.seekable
- buf.tell = file_or_filename.tell
- except AttributeError:
- pass
- wrapper = io.TextIOWrapper(
- buf,
- encoding=encoding,
- errors="xmlcharrefreplace",
- newline="\n",
- )
- try:
- yield wrapper.write
- finally:
- # Keep the original file open when the TextIOWrapper and
- # the BufferedWriter are destroyed
- wrapper.detach()
- if detach_buffer:
- buf.detach()
-
- from xml.etree.ElementTree import _namespace_map
-
- def _namespaces(elem):
- # identify namespaces used in this tree
-
- # maps qnames to *encoded* prefix:local names
- qnames = {None: None}
-
- # maps uri:s to prefixes
- namespaces = {}
-
- def add_qname(qname):
- # calculate serialized qname representation
- try:
- qname = _tounicode(qname)
- if qname[:1] == "{":
- uri, tag = qname[1:].rsplit("}", 1)
- prefix = namespaces.get(uri)
- if prefix is None:
- prefix = _namespace_map.get(uri)
- if prefix is None:
- prefix = "ns%d" % len(namespaces)
- else:
- prefix = _tounicode(prefix)
- if prefix != "xml":
- namespaces[uri] = prefix
- if prefix:
- qnames[qname] = "%s:%s" % (prefix, tag)
- else:
- qnames[qname] = tag # default element
- else:
- qnames[qname] = qname
- except TypeError:
- _raise_serialization_error(qname)
-
- # populate qname and namespaces table
- for elem in elem.iter():
- tag = elem.tag
- if isinstance(tag, QName):
- if tag.text not in qnames:
- add_qname(tag.text)
- elif isinstance(tag, str):
- if tag not in qnames:
- add_qname(tag)
- elif tag is not None and tag is not Comment and tag is not PI:
- _raise_serialization_error(tag)
- for key, value in elem.items():
- if isinstance(key, QName):
- key = key.text
- if key not in qnames:
- add_qname(key)
- if isinstance(value, QName) and value.text not in qnames:
- add_qname(value.text)
- text = elem.text
- if isinstance(text, QName) and text.text not in qnames:
- add_qname(text.text)
- return qnames, namespaces
-
- def _serialize_xml(write, elem, qnames, namespaces, **kwargs):
- tag = elem.tag
- text = elem.text
- if tag is Comment:
- write("" % _tounicode(text))
- elif tag is ProcessingInstruction:
- write("%s?>" % _tounicode(text))
- else:
- tag = qnames[_tounicode(tag) if tag is not None else None]
- if tag is None:
- if text:
- write(_escape_cdata(text))
- for e in elem:
- _serialize_xml(write, e, qnames, None)
- else:
- write("<" + tag)
- if namespaces:
- for uri, prefix in sorted(
- namespaces.items(), key=lambda x: x[1]
- ): # sort on prefix
- if prefix:
- prefix = ":" + prefix
- write(' xmlns%s="%s"' % (prefix, _escape_attrib(uri)))
- attrs = elem.attrib
- if attrs:
- # try to keep existing attrib order
- if len(attrs) <= 1 or type(attrs) is _Attrib:
- items = attrs.items()
- else:
- # if plain dict, use lexical order
- items = sorted(attrs.items())
- for k, v in items:
- if isinstance(k, QName):
- k = _tounicode(k.text)
- else:
- k = _tounicode(k)
- if isinstance(v, QName):
- v = qnames[_tounicode(v.text)]
- else:
- v = _escape_attrib(v)
- write(' %s="%s"' % (qnames[k], v))
- if text is not None or len(elem):
- write(">")
- if text:
- write(_escape_cdata(text))
- for e in elem:
- _serialize_xml(write, e, qnames, None)
- write("" + tag + ">")
- else:
- write("/>")
- if elem.tail:
- write(_escape_cdata(elem.tail))
-
- def _raise_serialization_error(text):
- raise TypeError("cannot serialize %r (type %s)" % (text, type(text).__name__))
-
- def _escape_cdata(text):
- # escape character data
- try:
- text = _tounicode(text)
- # it's worth avoiding do-nothing calls for short strings
- if "&" in text:
- text = text.replace("&", "&")
- if "<" in text:
- text = text.replace("<", "<")
- if ">" in text:
- text = text.replace(">", ">")
- return text
- except (TypeError, AttributeError):
- _raise_serialization_error(text)
-
- def _escape_attrib(text):
- # escape attribute value
- try:
- text = _tounicode(text)
- if "&" in text:
- text = text.replace("&", "&")
- if "<" in text:
- text = text.replace("<", "<")
- if ">" in text:
- text = text.replace(">", ">")
- if '"' in text:
- text = text.replace('"', """)
- if "\n" in text:
- text = text.replace("\n", "
")
- return text
- except (TypeError, AttributeError):
- _raise_serialization_error(text)
-
- def _indent(elem, level=0):
- # From http://effbot.org/zone/element-lib.htm#prettyprint
- i = "\n" + level * " "
- if len(elem):
- if not elem.text or not elem.text.strip():
- elem.text = i + " "
- if not elem.tail or not elem.tail.strip():
- elem.tail = i
- for elem in elem:
- _indent(elem, level + 1)
- if not elem.tail or not elem.tail.strip():
- elem.tail = i
- else:
- if level and (not elem.tail or not elem.tail.strip()):
- elem.tail = i
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fsspec/implementations/jupyter.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fsspec/implementations/jupyter.py
deleted file mode 100644
index 782fa86399d0ae7e4abaf5bad590f6a67f1a4f08..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fsspec/implementations/jupyter.py
+++ /dev/null
@@ -1,124 +0,0 @@
-import base64
-import io
-import re
-
-import requests
-
-import fsspec
-
-
-class JupyterFileSystem(fsspec.AbstractFileSystem):
- """View of the files as seen by a Jupyter server (notebook or lab)"""
-
- protocol = ("jupyter", "jlab")
-
- def __init__(self, url, tok=None, **kwargs):
- """
-
- Parameters
- ----------
- url : str
- Base URL of the server, like "http://127.0.0.1:8888". May include
- token in the string, which is given by the process when starting up
- tok : str
- If the token is obtained separately, can be given here
- kwargs
- """
- if "?" in url:
- if tok is None:
- try:
- tok = re.findall("token=([a-z0-9]+)", url)[0]
- except IndexError as e:
- raise ValueError("Could not determine token") from e
- url = url.split("?", 1)[0]
- self.url = url.rstrip("/") + "/api/contents"
- self.session = requests.Session()
- if tok:
- self.session.headers["Authorization"] = f"token {tok}"
-
- super().__init__(**kwargs)
-
- def ls(self, path, detail=True, **kwargs):
- path = self._strip_protocol(path)
- r = self.session.get(self.url + "/" + path)
- if r.status_code == 404:
- raise FileNotFoundError(path)
- r.raise_for_status()
- out = r.json()
-
- if out["type"] == "directory":
- out = out["content"]
- else:
- out = [out]
- for o in out:
- o["name"] = o.pop("path")
- o.pop("content")
- if o["type"] == "notebook":
- o["type"] = "file"
- if detail:
- return out
- return [o["name"] for o in out]
-
- def cat_file(self, path, start=None, end=None, **kwargs):
- path = self._strip_protocol(path)
- r = self.session.get(self.url + "/" + path)
- if r.status_code == 404:
- return FileNotFoundError(path)
- r.raise_for_status()
- out = r.json()
- if out["format"] == "text":
- # data should be binary
- b = out["content"].encode()
- else:
- b = base64.b64decode(out["content"])
- return b[start:end]
-
- def pipe_file(self, path, value, **_):
- path = self._strip_protocol(path)
- json = {
- "name": path.rsplit("/", 1)[-1],
- "path": path,
- "size": len(value),
- "content": base64.b64encode(value).decode(),
- "format": "base64",
- "type": "file",
- }
- self.session.put(self.url + "/" + path, json=json)
-
- def mkdir(self, path, create_parents=True, **kwargs):
- path = self._strip_protocol(path)
- if create_parents and "/" in path:
- self.mkdir(path.rsplit("/", 1)[0], True)
- json = {
- "name": path.rsplit("/", 1)[-1],
- "path": path,
- "size": None,
- "content": None,
- "type": "directory",
- }
- self.session.put(self.url + "/" + path, json=json)
-
- def _rm(self, path):
- path = self._strip_protocol(path)
- self.session.delete(self.url + "/" + path)
-
- def _open(self, path, mode="rb", **kwargs):
- path = self._strip_protocol(path)
- if mode == "rb":
- data = self.cat_file(path)
- return io.BytesIO(data)
- else:
- return SimpleFileWriter(self, path, mode="wb")
-
-
-class SimpleFileWriter(fsspec.spec.AbstractBufferedFile):
- def _upload_chunk(self, final=False):
- """Never uploads a chunk until file is done
-
- Not suitable for large files
- """
- if final is False:
- return False
- self.buffer.seek(0)
- data = self.buffer.read()
- self.fs.pipe_file(self.path, data)
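-
-
-# A hedged usage sketch (not part of the original file); the server URL and
-# token are placeholders for a locally running Jupyter server, and it assumes
-# "jupyter" is registered as a protocol in this fsspec version:
-#
-#     fs = fsspec.filesystem("jupyter", url="http://127.0.0.1:8888?token=abc123")
-#     fs.ls("", detail=False)            # names at the server's root directory
-#     fs.pipe_file("hello.txt", b"hi")   # create a small file via the contents API
-#     fs.cat_file("hello.txt")           # -> b"hi"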
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-c40f2837.js b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-c40f2837.js
deleted file mode 100644
index ff0533c344872d6cbafcd65192628b39399ca135..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-c40f2837.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import{S as q,e as R,s as y,J as D,K as _,p as L,M as b,n as A,A as M,N as B,P as S,O as C,U as V,L as Z,R as z,B as T,G as F,m as U,V as X,Q as x,k as E,o as N,z as h,v as p,x as j,E as $,ae as ee,q as le,r as te,u as G,y as I,F as ne}from"./index-f877dfd5.js";import{B as se}from"./Button-11a87b79.js";import{B as ae}from"./BlockLabel-7929e88d.js";import{E as ie}from"./Empty-2159e5e9.js";function ce(s){let e,t;return{c(){e=D("svg"),t=D("path"),_(t,"fill","currentColor"),_(t,"d","M4 2H2v26a2 2 0 0 0 2 2h26v-2H4v-3h22v-8H4v-4h14V5H4Zm20 17v4H4v-4ZM16 7v4H4V7Z"),_(e,"xmlns","http://www.w3.org/2000/svg"),_(e,"xmlns:xlink","http://www.w3.org/1999/xlink"),_(e,"aria-hidden","true"),_(e,"role","img"),_(e,"class","iconify iconify--carbon"),_(e,"width","100%"),_(e,"height","100%"),_(e,"preserveAspectRatio","xMidYMid meet"),_(e,"viewBox","0 0 32 32")},m(l,n){L(l,e,n),b(e,t)},p:A,i:A,o:A,d(l){l&&M(e)}}}class Y extends q{constructor(e){super(),R(this,e,null,ce,y,{})}}function J(s,e,t){const l=s.slice();return l[5]=e[t],l[7]=t,l}function K(s){let e,t=F(s[0].confidences),l=[];for(let n=0;n{n("select",{index:f,value:g.label})};return s.$$set=f=>{"value"in f&&t(0,l=f.value),"color"in f&&t(1,a=f.color),"selectable"in f&&t(2,i=f.selectable)},[l,a,i,n,o]}class re extends q{constructor(e){super(),R(this,e,fe,oe,y,{value:0,color:1,selectable:2})}}function Q(s){let e,t;return e=new ae({props:{Icon:Y,label:s[5],disable:s[6]===!1}}),{c(){E(e.$$.fragment)},m(l,n){N(e,l,n),t=!0},p(l,n){const a={};n&32&&(a.label=l[5]),n&64&&(a.disable=l[6]===!1),e.$set(a)},i(l){t||(h(e.$$.fragment,l),t=!0)},o(l){p(e.$$.fragment,l),t=!1},d(l){j(e,l)}}}function ue(s){let e,t;return e=new ie({props:{unpadded_box:!0,$$slots:{default:[de]},$$scope:{ctx:s}}}),{c(){E(e.$$.fragment)},m(l,n){N(e,l,n),t=!0},p(l,n){const a={};n&65536&&(a.$$scope={dirty:n,ctx:l}),e.$set(a)},i(l){t||(h(e.$$.fragment,l),t=!0)},o(l){p(e.$$.fragment,l),t=!1},d(l){j(e,l)}}}function _e(s){let e,t;return e=new re({props:{selectable:s[11],value:s[4],color:s[3]}}),e.$on("select",s[14]),{c(){E(e.$$.fragment)},m(l,n){N(e,l,n),t=!0},p(l,n){const a={};n&2048&&(a.selectable=l[11]),n&16&&(a.value=l[4]),n&8&&(a.color=l[3]),e.$set(a)},i(l){t||(h(e.$$.fragment,l),t=!0)},o(l){p(e.$$.fragment,l),t=!1},d(l){j(e,l)}}}function de(s){let e,t;return e=new Y({}),{c(){E(e.$$.fragment)},m(l,n){N(e,l,n),t=!0},i(l){t||(h(e.$$.fragment,l),t=!0)},o(l){p(e.$$.fragment,l),t=!1},d(l){j(e,l)}}}function me(s){let e,t,l,n,a,i,o;const f=[s[9]];let g={};for(let c=0;c{u=null}),I());let v=n;n=H(c),n===v?m[n].p(c,d):(G(),p(m[v],1,1,()=>{m[v]=null}),I(),a=m[n],a?a.p(c,d):(a=m[n]=k[n](c),a.c()),h(a,1),a.m(i.parentNode,i))},i(c){o||(h(e.$$.fragment,c),h(u),h(a),o=!0)},o(c){p(e.$$.fragment,c),p(u),p(a),o=!1},d(c){c&&(M(t),M(l),M(i)),j(e,c),u&&u.d(c),m[n].d(c)}}}function be(s){let e,t;return e=new se({props:{test_id:"label",visible:s[2],elem_id:s[0],elem_classes:s[1],container:s[6],scale:s[7],min_width:s[8],padding:!1,$$slots:{default:[me]},$$scope:{ctx:s}}}),{c(){E(e.$$.fragment)},m(l,n){N(e,l,n),t=!0},p(l,[n]){const a={};n&4&&(a.visible=l[2]),n&1&&(a.elem_id=l[0]),n&2&&(a.elem_classes=l[1]),n&64&&(a.container=l[6]),n&128&&(a.scale=l[7]),n&256&&(a.min_width=l[8]),n&73336&&(a.$$scope={dirty:n,ctx:l}),e.$set(a)},i(l){t||(h(e.$$.fragment,l),t=!0)},o(l){p(e.$$.fragment,l),t=!1},d(l){j(e,l)}}}function ge(s,e,t){let l,n,{elem_id:a=""}=e,{elem_classes:i=[]}=e,{visible:o=!0}=e,{color:f=void 0}=e,{value:g={}}=e,{label:u="Label"}=e,{container:k=!1}=e,{scale:m=null}=e,{min_width:H=void 
0}=e,{loading_status:c}=e,{show_label:d=!0}=e,{selectable:w=!1}=e;const v=T();function W(r){ne.call(this,s,r)}return s.$$set=r=>{"elem_id"in r&&t(0,a=r.elem_id),"elem_classes"in r&&t(1,i=r.elem_classes),"visible"in r&&t(2,o=r.visible),"color"in r&&t(3,f=r.color),"value"in r&&t(4,g=r.value),"label"in r&&t(5,u=r.label),"container"in r&&t(6,k=r.container),"scale"in r&&t(7,m=r.scale),"min_width"in r&&t(8,H=r.min_width),"loading_status"in r&&t(9,c=r.loading_status),"show_label"in r&&t(10,d=r.show_label),"selectable"in r&&t(11,w=r.selectable)},s.$$.update=()=>{s.$$.dirty&16&&t(13,{confidences:l,label:n}=g,l,(t(12,n),t(4,g))),s.$$.dirty&12288&&v("change")},[a,i,o,f,g,u,k,m,H,c,d,w,n,l,W]}class ve extends q{constructor(e){super(),R(this,e,ge,be,y,{elem_id:0,elem_classes:1,visible:2,color:3,value:4,label:5,container:6,scale:7,min_width:8,loading_status:9,show_label:10,selectable:11})}}const Le=ve,Me=["static"],Be=s=>({type:{payload:"{ label: string; confidences?: Array<{ label: string; confidence: number }>"},description:{payload:"output label and optional set of confidences per label"}});export{Le as Component,Be as document,Me as modes};
-//# sourceMappingURL=index-c40f2837.js.map
diff --git a/spaces/cihyFjudo/fairness-paper-search/Awkward Silence Cricket Sound Effect 19 Download and Use for Free.md b/spaces/cihyFjudo/fairness-paper-search/Awkward Silence Cricket Sound Effect 19 Download and Use for Free.md
deleted file mode 100644
index cb3431075fe3faa5fd1f2a459c5ad5c6ed897303..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Awkward Silence Cricket Sound Effect 19 Download and Use for Free.md
+++ /dev/null
@@ -1,13 +0,0 @@
-
-
The wings lie flat on the body and are very variable in size between species, being reduced in size in some crickets and missing in others. The fore wings are elytra made of tough chitin, acting as a protective shield for the soft parts of the body and in males, bear the stridulatory organs for the production of sound. The hind pair is membranous, folding fan-wise under the fore wings. In many species, the wings are not adapted for flight.[1]
Most male crickets make a loud chirping sound by stridulation (scraping two specially textured body parts together). The stridulatory organ is located on the tegmen, or fore wing, which is leathery in texture. A large vein runs along the centre of each tegmen, with comb-like serrations on its edge forming a file-like structure, and at the rear edge of the tegmen is a scraper. The tegmina are held at an angle to the body and rhythmically raised and lowered which causes the scraper on one wing to rasp on the file on the other. The central part of the tegmen contains the "harp", an area of thick, sclerotized membrane which resonates and amplifies the volume of sound, as does the pocket of air between the tegmina and the body wall. Most female crickets lack the necessary adaptations to stridulate, so make no sound.[7]
-
In 1975, Dr. William H. Cade discovered that the parasitic tachinid fly Ormia ochracea is attracted to the song of the cricket, and uses it to locate the male to deposit her larvae on him. It was the first known example of a natural enemy that locates its host or prey using the mating signal.[10] Since then, many species of crickets have been found to be carrying the same parasitic fly, or related species. In response to this selective pressure, a mutation leaving males unable to chirp was observed amongst a population of Teleogryllus oceanicus on the Hawaiian island of Kauai, enabling these crickets to elude their parasitoid predators.[11] A different mutation with the same effect was also discovered on the neighboring island of Oahu (ca. 100 miles (160 km) away).[12] Recently, new "purring" males of the same species in Hawaii are able to produce a novel auditory sexual signal that can be used to attract females while greatly reducing the likelihood of parasitoid attack from the fly.[13]
-
Some species of cricket are polyandrous. In Gryllus bimaculatus, the females select and mate with multiple viable sperm donors, preferring novel mates.[23] Female Teleogryllus oceanicus crickets from natural populations similarly mate and store sperm from multiple males.[24] Female crickets exert a postcopulatory fertilization bias in favour of unrelated males to avoid the genetic consequences of inbreeding. Fertilization bias depends on the control of sperm transport to the sperm storage organs. The inhibition of sperm storage by female crickets can act as a form of cryptic female choice to avoid the severe negative effects of inbreeding.[25] Controlled-breeding experiments with the cricket Gryllus firmus demonstrated inbreeding depression, as nymphal weight and early fecundity declined substantially over the generations'[26] this was caused as expected by an increased frequency of homozygous combinations of deleterious recessive alleles.[26][27]
-
By the end of the 20th century the sound of chirping crickets came to represent quietude in literature, theatre and film. From this sentiment arose expressions equating "crickets" with silence altogether, particularly when a group of assembled people makes no noise. These expressions have grown from the more descriptive, "so quiet that you can hear crickets," to simply saying, "crickets" as shorthand for "complete silence."[62]
-
Cricket characters feature in the Walt Disney animated movies Pinocchio (1940), where Jiminy Cricket becomes the title character's conscience, and in Mulan (1998), where Cri-Kee is carried in a cage as a symbol of luck, in the Asian manner. The Crickets was the name of Buddy Holly's rock and roll band;[63] Holly's home town baseball team in the 1990s was called the Lubbock Crickets.[64] Cricket is the name of a US children's literary magazine founded in 1973; it uses a cast of insect characters.[65] The sound of crickets is often used in media to emphasize silence, often for comic effect after an awkward joke, in a similar manner to tumbleweed.
-
-
Enter: the sound machine. I had heard so many great things about white noise, and I figured trying to block out the sounds and awkward silence that were keeping me awake was worth a shot. After some research, I settled on a LectroFan sound machine, which had gotten good reviews on Amazon.
-
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cihyFjudo/fairness-paper-search/Fisica moderna Vol. I Harvey E. White Ph.D. Sc.D. Master Modern Physics with the Help of Editorial Limusa and Grupo Noriega Editores.md b/spaces/cihyFjudo/fairness-paper-search/Fisica moderna Vol. I Harvey E. White Ph.D. Sc.D. Master Modern Physics with the Help of Editorial Limusa and Grupo Noriega Editores.md
deleted file mode 100644
index 495665a61e95bb16f9644ee4e03708fd965aba31..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Fisica moderna Vol. I Harvey E. White Ph.D. Sc.D. Master Modern Physics with the Help of Editorial Limusa and Grupo Noriega Editores.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Fisica moderna, Vol. I, Harvey E. White, Ph.D. Sc.D., Editorial Limusa, Grupo Noriega Editores, 39
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/HerunterladenAutoCAD Inventor LT Suite 2016 Aktivierungscode 64 Bits DE Die Vorteile der Kombination von AutoCAD LT und Inventor LT.md b/spaces/cihyFjudo/fairness-paper-search/HerunterladenAutoCAD Inventor LT Suite 2016 Aktivierungscode 64 Bits DE Die Vorteile der Kombination von AutoCAD LT und Inventor LT.md
deleted file mode 100644
index 3b77fecc9819cadfabf0e3a5074ae1f62622d078..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/HerunterladenAutoCAD Inventor LT Suite 2016 Aktivierungscode 64 Bits DE Die Vorteile der Kombination von AutoCAD LT und Inventor LT.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-
ASCENT: Crash Landing Download Litel FULL HSMWorks 2017 Helius PFA 2017 Scaricare Codice Di Attivazione 32 Bits Italiano Hostel 2 Movie In Hindi Download Hirens Boot Cd 10.1 Iso Free Download 56l Xforce Keygen 32bits Or 64bits Version Alias Concept 2007 Keygen Cute Gay Boys Xxxl Hyderabad Blues Full Movie Hindi Dubbed Download Moviesl Xforce Keygen Instructables 2016 32 Bit.zip Free download electronics books in pdf The
-
HerunterladenAutoCAD Inventor LT Suite 2016 Aktivierungscode 64 Bits DE
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cihyFjudo/fairness-paper-search/I Tre Moschettieri Full Movie 720p Download Free Extra Quality.md b/spaces/cihyFjudo/fairness-paper-search/I Tre Moschettieri Full Movie 720p Download Free Extra Quality.md
deleted file mode 100644
index a0a605abf229e08e067860d2ce7f9eb8470aa659..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/I Tre Moschettieri Full Movie 720p Download Free Extra Quality.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/Saazni by Shekhar Ravjiani A Musical Treat for Marathi Lovers Free Download MP3 Song.md b/spaces/cihyFjudo/fairness-paper-search/Saazni by Shekhar Ravjiani A Musical Treat for Marathi Lovers Free Download MP3 Song.md
deleted file mode 100644
index 250aa9a4ce473d5e6bd095d9d36268be0734b3f2..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Saazni by Shekhar Ravjiani A Musical Treat for Marathi Lovers Free Download MP3 Song.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/TEF Test DEvaluation De Francais 250 Activites (Livre Audio) - Repost. How to Ace the French Evaluation Test with Audio Activities.md b/spaces/cihyFjudo/fairness-paper-search/TEF Test DEvaluation De Francais 250 Activites (Livre Audio) - Repost. How to Ace the French Evaluation Test with Audio Activities.md
deleted file mode 100644
index 31d9e1facc3a6e554ac9f51fc1c2aeed35ada5d7..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/TEF Test DEvaluation De Francais 250 Activites (Livre Audio) - Repost. How to Ace the French Evaluation Test with Audio Activities.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
TEF: Test D'Evaluation De Francais: 250 Activites (Livre Audio) - Repost