modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-26 18:27:55) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 499 classes) | tags (sequence, length 1 to 4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-26 18:27:32) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
trapoom555/Gemma-2B-Text-Embedding-cft-checkpoints | trapoom555 | 2024-05-08T09:11:30Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"sentence-embedding",
"sentence-similarity",
"feature-extraction",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-05-08T06:09:29Z | ---
license: mit
language:
- en
tags:
- sentence-embedding
- sentence-similarity
- transformers
- feature-extraction
pipeline_tag: sentence-similarity
---
# Gemma-2B-Text-Embedding-cft-checkpoints
All checkpoints of [trapoom555/Gemma-2B-Text-Embedding-cft](https://huggingface.co/trapoom555/Gemma-2B-Text-Embedding-cft).
|
lupobricco/irony_classification_single_label_base | lupobricco | 2024-05-08T09:10:14Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"text-classification",
"generated_from_trainer",
"base_model:Musixmatch/umberto-commoncrawl-cased-v1",
"base_model:finetune:Musixmatch/umberto-commoncrawl-cased-v1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-08T08:52:38Z | ---
base_model: Musixmatch/umberto-commoncrawl-cased-v1
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: irony_classification_single_label_base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# irony_classification_single_label_base
This model is a fine-tuned version of [Musixmatch/umberto-commoncrawl-cased-v1](https://huggingface.co/Musixmatch/umberto-commoncrawl-cased-v1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9822
- Accuracy: 0.6227
- F1: 0.5853
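Usage details are not documented yet. As a minimal sketch, assuming the checkpoint loads with the standard 🤗 Transformers text-classification pipeline (label names depend on the `id2label` mapping used during training):
```python
from transformers import pipeline

# Minimal sketch (not from the original card): load the fine-tuned UmBERTo
# checkpoint as a text-classification pipeline and score an Italian sentence.
classifier = pipeline(
    "text-classification",
    model="lupobricco/irony_classification_single_label_base",
)
print(classifier("Che bella giornata... pioggia anche oggi."))
```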
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9554 | 1.0 | 718 | 0.8483 | 0.6247 | 0.5794 |
| 0.6941 | 2.0 | 1436 | 0.9822 | 0.6227 | 0.5853 |
| 0.3184 | 3.0 | 2154 | 1.5308 | 0.6206 | 0.5835 |
| 0.2401 | 4.0 | 2872 | 2.0444 | 0.6093 | 0.5714 |
| 0.1284 | 5.0 | 3590 | 2.1603 | 0.6124 | 0.5643 |
| 0.0646 | 6.0 | 4308 | 2.3836 | 0.6041 | 0.5571 |
| 0.0362 | 7.0 | 5026 | 2.5046 | 0.6268 | 0.5635 |
| 0.0232 | 8.0 | 5744 | 2.6831 | 0.6072 | 0.5534 |
| 0.024 | 9.0 | 6462 | 2.7345 | 0.6165 | 0.5546 |
| 0.0084 | 10.0 | 7180 | 2.7679 | 0.6144 | 0.5616 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1
|
jlbaker361/dcgan-k-text | jlbaker361 | 2024-05-08T09:09:25Z | 0 | 0 | null | [
"region:us"
] | null | 2024-03-05T16:51:38Z | ---
{}
---
Creative Adversarial Network
epochs: 100
dataset: jlbaker361/wikiart
n_classes: 5
batch_size: 64
images were resized to 768 and then center cropped to 512 (see the preprocessing sketch below)
used clip=False
conditional=False
discriminator parameters:
init_dim: 32
final_dim: 512
generator parameters:
input noise_dim: 100
wandb project: https://wandb.ai/jlbaker361/creativity/runs/2lbof3jh
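As a rough illustration of the preprocessing listed above, a minimal sketch assuming a standard torchvision pipeline (the normalization constants are an assumption, not taken from the training code):
```python
from torchvision import transforms

# Hypothetical sketch of the preprocessing described in this card:
# resize to 768, then center crop to 512.
preprocess = transforms.Compose([
    transforms.Resize(768),
    transforms.CenterCrop(512),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),  # assumed values
])
```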
|
lilzzz/dbbuc_30p | lilzzz | 2024-05-08T09:05:45Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-08T09:05:26Z | ---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: dbbuc_30p
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dbbuc_30p
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1597
- Precision: 0.5256
- Recall: 0.5222
- F1: 0.5239
- Accuracy: 0.9675
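Usage details are not documented yet. As a minimal sketch, assuming the checkpoint works with the standard 🤗 Transformers token-classification pipeline (entity label names depend on the training data):
```python
from transformers import pipeline

# Minimal sketch (not from the original card): run the fine-tuned DistilBERT
# checkpoint as a token-classification pipeline.
ner = pipeline(
    "token-classification",
    model="lilzzz/dbbuc_30p",
    aggregation_strategy="simple",  # merge sub-word pieces into whole spans
)
print(ner("Example sentence to tag with the fine-tuned model."))
```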
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 267 | 0.1502 | 0.3872 | 0.3270 | 0.3546 | 0.9595 |
| 0.1891 | 2.0 | 534 | 0.1349 | 0.4992 | 0.4825 | 0.4907 | 0.9650 |
| 0.1891 | 3.0 | 801 | 0.1412 | 0.4708 | 0.5254 | 0.4966 | 0.9642 |
| 0.056 | 4.0 | 1068 | 0.1539 | 0.5055 | 0.5143 | 0.5098 | 0.9667 |
| 0.056 | 5.0 | 1335 | 0.1597 | 0.5256 | 0.5222 | 0.5239 | 0.9675 |
### Framework versions
- Transformers 4.39.0
- Pytorch 2.1.2
- Datasets 2.19.0
- Tokenizers 0.15.2
|
LiteLLMs/Meta-Llama-3-13B-Instruct-GGUF | LiteLLMs | 2024-05-08T09:04:44Z | 347 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"GGUF",
"en",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:quantized:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-08T07:02:43Z |
---
language:
- en
license: other
library_name: transformers
tags:
- mergekit
- merge
- GGUF
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
quantized_by: andrijdavid
---
# Meta-Llama-3-13B-Instruct-GGUF
- Original model: [Meta-Llama-3-13B-Instruct](https://huggingface.co/andrijdavid/Meta-Llama-3-13B-Instruct)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Meta-Llama-3-13B-Instruct](https://huggingface.co/andrijdavid/Meta-Llama-3-13B-Instruct).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui). Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama). Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/). An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT). An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: LiteLLMs/Meta-Llama-3-13B-Instruct-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download LiteLLMs/Meta-Llama-3-13B-Instruct-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download LiteLLMs/Meta-Llama-3-13B-Instruct-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install huggingface_hub[hf_transfer]
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/Meta-Llama-3-13B-Instruct-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<PROMPT>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Meta-Llama-3-13B-Instruct
# Meta-Llama-3-13B-Instruct
Meta-Llama-3-13B-Instruct is a [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) self-merge made with [MergeKit](https://github.com/arcee-ai/mergekit/tree/main).
## Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- layer_range: [0, 16]
model: meta-llama/Meta-Llama-3-8B-Instruct
- sources:
- layer_range: [4, 24]
model: meta-llama/Meta-Llama-3-8B-Instruct
- sources:
- layer_range: [8, 31]
model: meta-llama/Meta-Llama-3-8B-Instruct
merge_method: passthrough
dtype: float16
```
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "andrijdavid/Meta-Llama-3-13B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
<!-- original-model-card end -->
|
cria111/dbbuc_5p | cria111 | 2024-05-08T09:04:33Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-08T09:04:03Z | ---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: dbbuc_5p
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dbbuc_5p
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1537
- Precision: 0.5208
- Recall: 0.5159
- F1: 0.5183
- Accuracy: 0.9670
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 216 | 0.1629 | 0.3631 | 0.3159 | 0.3379 | 0.9584 |
| No log | 2.0 | 432 | 0.1414 | 0.5027 | 0.4429 | 0.4709 | 0.9653 |
| 0.1826 | 3.0 | 648 | 0.1419 | 0.4870 | 0.5365 | 0.5106 | 0.9656 |
| 0.1826 | 4.0 | 864 | 0.1527 | 0.5222 | 0.5048 | 0.5133 | 0.9670 |
| 0.0512 | 5.0 | 1080 | 0.1537 | 0.5208 | 0.5159 | 0.5183 | 0.9670 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.0+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2
|
clio-ai/recipes20M_gpt2tok | clio-ai | 2024-05-08T09:04:07Z | 147 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-08T08:51:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aiaustin/llama-3-8b-Instruct-bnb-4bit-aiaustin-demo3 | aiaustin | 2024-05-08T09:02:38Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-03T08:15:40Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** aiaustin
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
- **Trained to:** convert a prompt addressed to a team of agents into a Python list of tasks that need to be completed, using first-principles reasoning.
To get the desired effects, use the system prompt that the model was trained with:
```python
system_prompt = "You are an AI task automator. You will take a users prompt and use first principle reasoning to break the prompt into tasks that you must accomplish within another chat. RESPOND TO THIS MESSAGE ONLY WITH A PYTHON FORMATTED LIST OF TASKS THAT YOU MUST COMPLETE TO TRUTHFULLY AND INTELLIGENTLY ACCOMPLISH THE USERS REQUEST. ASSUME YOU CAN SEARCH THE WEB, WRITE CODE, RUN CODE, DEBUG CODE, AND AUTOMATE ANYTHING ON THE USERS COMPUTER TO ACCOMPLISH THE PROMPT. CORRECT RESPONSE FORMAT: ['task 1', 'task 2', 'task 3']"
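# Hypothetical follow-up (not part of the original card): the system prompt above
# would typically be paired with the user's request in a chat-style message list
# before being passed to whatever runtime serves this model.
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Research flight options to Tokyo and draft an itinerary."},
]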
``` |
vanderlist/distilbert-base-uncased-finetuned-emotion | vanderlist | 2024-05-08T09:00:06Z | 121 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-07T13:49:52Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9295
- name: F1
type: f1
value: 0.9294838225405171
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2208
- Accuracy: 0.9295
- F1: 0.9295
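As a minimal usage sketch, assuming the checkpoint loads with the standard 🤗 Transformers text-classification pipeline (the emotion dataset covers sadness, joy, love, anger, fear and surprise; labels may appear as LABEL_0..LABEL_5 unless `id2label` was configured):
```python
from transformers import pipeline

# Minimal sketch (not from the original card): score a sentence against all classes.
classifier = pipeline(
    "text-classification",
    model="vanderlist/distilbert-base-uncased-finetuned-emotion",
    top_k=None,  # return scores for every class instead of only the top one
)
print(classifier("I can't believe how well this turned out!"))
```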
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8275 | 1.0 | 250 | 0.3187 | 0.907 | 0.9061 |
| 0.2597 | 2.0 | 500 | 0.2208 | 0.9295 | 0.9295 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.2.1+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2
|
tedad09/PolizzeDonut-RifaGDMarks-5Epochs | tedad09 | 2024-05-08T08:56:55Z | 49 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-05-08T07:24:00Z | ---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: PolizzeDonut-RifaGDMarks-5Epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PolizzeDonut-RifaGDMarks-5Epochs
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
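Inference is not documented yet. A minimal sketch, assuming the usual Donut `DonutProcessor` / `VisionEncoderDecoderModel` workflow and a hypothetical task prompt (replace it with the prompt this fine-tune was actually trained with):
```python
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

# Hypothetical sketch (not from the original card): run the fine-tuned Donut
# checkpoint on a document image.
processor = DonutProcessor.from_pretrained("tedad09/PolizzeDonut-RifaGDMarks-5Epochs")
model = VisionEncoderDecoderModel.from_pretrained("tedad09/PolizzeDonut-RifaGDMarks-5Epochs")

image = Image.open("policy_page.png").convert("RGB")  # example input image
pixel_values = processor(image, return_tensors="pt").pixel_values
task_prompt = "<s_cord-v2>"  # assumed prompt token, not confirmed by the card
decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids

with torch.no_grad():
    outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```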
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
rj1ALINT/raining-weather | rj1ALINT | 2024-05-08T08:51:35Z | 30 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-05-08T08:50:30Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
### Raining_Weather on Stable Diffusion via Dreambooth
#### model by rj1ALINT
This is the Stable Diffusion model fine-tuned on the Raining_Weather concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **<dashcam footage > of a car driving in Raining Weather**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
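For local inference, a minimal sketch assuming the repo loads with `StableDiffusionPipeline` (as the repo tags indicate):
```python
import torch
from diffusers import StableDiffusionPipeline

# Minimal sketch (not from the original card): load the Dreambooth concept and
# generate an image with the instance prompt shown above.
pipe = StableDiffusionPipeline.from_pretrained("rj1ALINT/raining-weather", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "<dashcam footage > of a car driving in Raining Weather"
image = pipe(prompt).images[0]
image.save("raining_weather_sample.png")
```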
Here are the images used for training this concept:




|
HausaNLP/afrisenti-yor-regression | HausaNLP | 2024-05-08T08:51:14Z | 110 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-07T22:42:16Z | ---
library_name: transformers
tags: []
---
## AfriSenti Yoruba Sentiment Regressor Description
Takes a text and predicts a sentiment value between -1 (Negative) and 1 (Positive), with 0 being Neutral.
Regression Value Description:
| Value | Sentiment |
|--|--|
| -1 | Negative |
| 0 | Neutral |
| 1 | Positive |
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import math

import pandas as pd
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

BATCH_SIZE = 32
BASE_MODEL = 'HausaNLP/afrisenti-yor-regression'

ds = pd.read_csv('test.csv')

device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForSequenceClassification.from_pretrained(BASE_MODEL).to(device)
model.eval()

nb_batches = math.ceil(len(ds) / BATCH_SIZE)
y_preds = []
for i in range(nb_batches):
    # Tokenize one batch of tweets and collect the regression outputs
    input_texts = ds["tweet"][i * BATCH_SIZE:(i + 1) * BATCH_SIZE].tolist()
    encoded = tokenizer(input_texts, truncation=True, padding="max_length", max_length=256, return_tensors="pt").to(device)
    with torch.no_grad():
        y_preds += model(**encoded).logits.reshape(-1).tolist()

df = pd.DataFrame({"Text": ds["tweet"], "Label": ds["label"], "Prediction": y_preds})
df.to_csv('predictions.csv', index=False)
``` |
yee0930/llama3-8b-oig-unsloth-merged | yee0930 | 2024-05-08T08:51:04Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-08T07:23:36Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** yee0930
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
HausaNLP/afrisenti-kin-regression | HausaNLP | 2024-05-08T08:49:53Z | 113 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-07T21:08:17Z | ---
library_name: transformers
tags: []
---
## AfriSenti Kinyarwanda Sentiment Regressor Description
Takes a text and predicts a sentiment value between -1 (Negative) and 1 (Positive), with 0 being Neutral.
Regression Value Description:
| Value | Sentiment |
|--|--|
| -1 | Negative |
| 0 | Neutral |
| 1 | Positive |
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import math

import pandas as pd
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

BATCH_SIZE = 32
BASE_MODEL = 'HausaNLP/afrisenti-kin-regression'

ds = pd.read_csv('test.csv')

device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForSequenceClassification.from_pretrained(BASE_MODEL).to(device)
model.eval()

nb_batches = math.ceil(len(ds) / BATCH_SIZE)
y_preds = []
for i in range(nb_batches):
    # Tokenize one batch of tweets and collect the regression outputs
    input_texts = ds["tweet"][i * BATCH_SIZE:(i + 1) * BATCH_SIZE].tolist()
    encoded = tokenizer(input_texts, truncation=True, padding="max_length", max_length=256, return_tensors="pt").to(device)
    with torch.no_grad():
        y_preds += model(**encoded).logits.reshape(-1).tolist()

df = pd.DataFrame({"Text": ds["tweet"], "Label": ds["label"], "Prediction": y_preds})
df.to_csv('predictions.csv', index=False)
``` |
annamalai-s/bertopic_newsgroup_minilm | annamalai-s | 2024-05-08T08:49:41Z | 6 | 0 | bertopic | [
"bertopic",
"text-classification",
"region:us"
] | text-classification | 2024-05-08T08:49:40Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# bertopic_newsgroup_minilm
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("annamalai-s/bertopic_newsgroup_minilm")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 362
* Number of training documents: 18846
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | the - to - and - of - for | 10 | -1_the_to_and_of |
| 0 | gun - guns - firearms - weapons - militia | 6635 | 0_gun_guns_firearms_weapons |
| 1 | cramer - optilink - gay - clayton - homosexual | 424 | 1_cramer_optilink_gay_clayton |
| 2 | atheism - atheists - god - atheist - religion | 226 | 2_atheism_atheists_god_atheist |
| 3 | espn - game - abc - games - hockey | 154 | 3_espn_game_abc_games |
| 4 | monitor - monitors - vga - nanao - nec | 146 | 4_monitor_monitors_vga_nanao |
| 5 | printer - deskjet - printers - laser - hp | 142 | 5_printer_deskjet_printers_laser |
| 6 | amp - sale - speakers - sony - stereo | 140 | 6_amp_sale_speakers_sony |
| 7 | drivers - diamond - card - ati - driver | 140 | 7_drivers_diamond_card_ati |
| 8 | lib - x11r5 - usr - libxmu - ndet_loop | 139 | 8_lib_x11r5_usr_libxmu |
| 9 | 55 - 25 - pit - det - bos | 125 | 9_55_25_pit_det |
| 10 | cosmo - angmar - internet - address - mit | 112 | 10_cosmo_angmar_internet_address |
| 11 | armenian - turkish - armenians - genocide - serdar | 111 | 11_armenian_turkish_armenians_genocide |
| 12 | sky - space - billboard - vandalizing - advertising | 109 | 12_sky_space_billboard_vandalizing |
| 13 | modem - modems - fax - courier - baud | 104 | 13_modem_modems_fax_courier |
| 14 | fire - atf - fbi - survivors - dividian | 103 | 14_fire_atf_fbi_survivors |
| 15 | jews - zionism - jewish - israel - holocaust | 103 | 15_jews_zionism_jewish_israel |
| 16 | forged - locutus - colorado - infante - posts | 102 | 16_forged_locutus_colorado_infante |
| 17 | muslims - serbs - bosnia - bosnian - muslim | 101 | 17_muslims_serbs_bosnia_bosnian |
| 18 | rushdie - islam - islamic - jaeger - gregg | 95 | 18_rushdie_islam_islamic_jaeger |
| 19 | simms - simm - vram - 256k - ram | 95 | 19_simms_simm_vram_256k |
| 20 | objective - morality - moral - frank - values | 85 | 20_objective_morality_moral_frank |
| 21 | hell - eternal - god - heaven - jesus | 83 | 21_hell_eternal_god_heaven |
| 22 | microsoft - os - challenge - supporters - ms | 82 | 22_microsoft_os_challenge_supporters |
| 23 | dos - windows - window - widget - microsoft | 80 | 23_dos_windows_window_widget |
| 24 | homosexuality - homosexual - gay - paul - boswell | 78 | 24_homosexuality_homosexual_gay_paul |
| 25 | israel - arab - jews - arabs - israeli | 78 | 25_israel_arab_jews_arabs |
| 26 | clipper - phone - escrow - tap - keys | 78 | 26_clipper_phone_escrow_tap |
| 27 | dos - allocation - windows - linked - vpic46 | 78 | 27_dos_allocation_windows_linked |
| 28 | moon - billion - prize - henry - alaska | 77 | 28_moon_billion_prize_henry |
| 29 | leafs - game - wings - goal - habs | 76 | 29_leafs_game_wings_goal |
| 30 | radar - detector - detectors - alarm - valentine | 72 | 30_radar_detector_detectors_alarm |
| 31 | clipper - encryption - chip - intercon - amanda | 70 | 31_clipper_encryption_chip_intercon |
| 32 | msg - food - sensitivity - chinese - superstition | 67 | 32_msg_food_sensitivity_chinese |
| 33 | morality - moral - keith - livesey - cobb | 64 | 33_morality_moral_keith_livesey |
| 34 | nmm - traffic - behind - bike - lane | 61 | 34_nmm_traffic_behind_bike |
| 35 | games - sega - genesis - snes - cd | 61 | 35_games_sega_genesis_snes |
| 36 | swap - memory - emm386 - windows - file | 61 | 36_swap_memory_emm386_windows |
| 37 | president - stephanopoulos - myers - mr - ms | 60 | 37_president_stephanopoulos_myers_mr |
| 38 | mary - she - her - immaculate - sin | 60 | 38_mary_she_her_immaculate |
| 39 | hst - mission - servicing - solar - shuttle | 59 | 39_hst_mission_servicing_solar |
| 40 | copy - protected - protection - disks - sehari | 59 | 40_copy_protected_protection_disks |
| 41 | bmw - moa - rider - cactus - requests | 58 | 41_bmw_moa_rider_cactus |
| 42 | colormap - dpy - visual - color - window | 58 | 42_colormap_dpy_visual_color |
| 43 | points - sphere - den - p3 - p1 | 57 | 43_points_sphere_den_p3 |
| 44 | batf - warrant - assault - waco - they | 56 | 44_batf_warrant_assault_waco |
| 45 | nsa - encryption - cryptosystems - sternlight - government | 56 | 45_nsa_encryption_cryptosystems_sternlight |
| 46 | israel - lebanese - lebanon - israeli - hernlem | 55 | 46_israel_lebanese_lebanon_israeli |
| 47 | gaza - israel - palestinian - israeli - peace | 55 | 47_gaza_israel_palestinian_israeli |
| 48 | yuk - motorcycling - east - rtsg - riders | 55 | 48_yuk_motorcycling_east_rtsg |
| 49 | science - methodology - scientific - sas - fulk | 54 | 49_science_methodology_scientific_sas |
| 50 | shift - shifting - manual - transmission - automatic | 53 | 50_shift_shifting_manual_transmission |
| 51 | tax - taxes - income - vat - deficit | 53 | 51_tax_taxes_income_vat |
| 52 | window - manager - main_win - xsizehints - expose | 52 | 52_window_manager_main_win_xsizehints |
| 53 | drive - controller - drives - disk - ide | 52 | 53_drive_controller_drives_disk |
| 54 | gif - format - linux - convert - files | 52 | 54_gif_format_linux_convert |
| 55 | israeli - israel - hamid - mcrcim - israelis | 51 | 55_israeli_israel_hamid_mcrcim |
| 56 | pin - ethernet - board - card - asante | 51 | 56_pin_ethernet_board_card |
| 57 | gamma - bursters - oort - ray - cloud | 51 | 57_gamma_bursters_oort_ray |
| 58 | drive - floptical - drives - disks - hard | 50 | 58_drive_floptical_drives_disks |
| 59 | serial - modem - dtr - uart - rts | 50 | 59_serial_modem_dtr_uart |
| 60 | finland - sweden - ericsson - czech - finnish | 49 | 60_finland_sweden_ericsson_czech |
| 61 | lankford - torre - he - gilkey - hitter | 49 | 61_lankford_torre_he_gilkey |
| 62 | cd - rom - toshiba - cd300 - cdrom | 47 | 62_cd_rom_toshiba_cd300 |
| 63 | dog - dogs - springer - dod - bike | 47 | 63_dog_dogs_springer_dod |
| 64 | clutch - runs - hit - batting - rbis | 47 | 64_clutch_runs_hit_batting |
| 65 | candida - yeast - noring - systemic - infections | 47 | 65_candida_yeast_noring_systemic |
| 66 | lopez - year - he - catchers - players | 46 | 66_lopez_year_he_catchers |
| 67 | battery - batteries - concrete - acid - lead | 46 | 67_battery_batteries_concrete_acid |
| 68 | 50 - 486 - 486dx2 - cyrix - mhz | 46 | 68_50_486_486dx2_cyrix |
| 69 | scsi - ide - dma - bus - controller | 46 | 69_scsi_ide_dma_bus |
| 70 | font - fonts - truetype - atm - tt | 45 | 70_font_fonts_truetype_atm |
| 71 | drugs - drug - cocaine - illegal - marijuana | 45 | 71_drugs_drug_cocaine_illegal |
| 72 | helmet - helmets - shoei - jacket - fit | 44 | 72_helmet_helmets_shoei_jacket |
| 73 | mormon - mormons - lds - church - ceremonies | 44 | 73_mormon_mormons_lds_church |
| 74 | br - isc - steveh - thor - government | 44 | 74_br_isc_steveh_thor |
| 75 | allergy - antihistamine - shots - dyer - sleep | 44 | 75_allergy_antihistamine_shots_dyer |
| 76 | pens - caps - cup - jets - canucks | 44 | 76_pens_caps_cup_jets |
| 77 | petch - god - love - gvg47 - gvg | 44 | 77_petch_god_love_gvg47 |
| 78 | mazda - toyota - miles - car - camry | 44 | 78_mazda_toyota_miles_car |
| 79 | truth - arrogance - absolutes - absolute - christians | 43 | 79_truth_arrogance_absolutes_absolute |
| 80 | shaft - wheelies - stafford - wheelie - winona | 43 | 80_shaft_wheelies_stafford_wheelie |
| 81 | crypt - key - cryptography - des - ciphers | 43 | 81_crypt_key_cryptography_des |
| 82 | oil - drain - changing - ohio - plug | 42 | 82_oil_drain_changing_ohio |
| 83 | jewish - baseball - vb30 - lafibm - players | 42 | 83_jewish_baseball_vb30_lafibm |
| 84 | sleeve - sale - picture - cd - 45 | 42 | 84_sleeve_sale_picture_cd |
| 85 | morris - team - jays - maynard - viola | 42 | 85_morris_team_jays_maynard |
| 86 | cable - antenna - receiver - distance - tv | 41 | 86_cable_antenna_receiver_distance |
| 87 | black - king - kyle - adjective - kkopp | 41 | 87_black_king_kyle_adjective |
| 88 | countersteering - mjs - bike - countersteering_faq - lean | 41 | 88_countersteering_mjs_bike_countersteering_faq |
| 89 | cpu - fan - heat - sink - fans | 41 | 89_cpu_fan_heat_sink |
| 90 | jesus - tomb - magi - resurrection - disciples | 41 | 90_jesus_tomb_magi_resurrection |
| 91 | canon - scripture - books - bible - septuagint | 40 | 91_canon_scripture_books_bible |
| 92 | mac - disks - 800k - 44mb - read | 40 | 92_mac_disks_800k_44mb |
| 93 | keenan - rangers - hockey - messier - roger | 40 | 93_keenan_rangers_hockey_messier |
| 94 | xv - bit - 24bit - image - images | 39 | 94_xv_bit_24bit_image |
| 95 | greek - greece - turkish - greeks - turks | 39 | 95_greek_greece_turkish_greeks |
| 96 | drive - meg - ram - sale - scherf | 39 | 96_drive_meg_ram_sale |
| 97 | photography - krillean - kirlian - pictures - unlv | 39 | 97_photography_krillean_kirlian_pictures |
| 98 | monitors - hours - day - nevai - monitor | 39 | 98_monitors_hours_day_nevai |
| 99 | card - orchid - p9000 - vlb - cards | 38 | 99_card_orchid_p9000_vlb |
| 100 | sale - list - 00 - guide - shipping | 38 | 100_sale_list_00_guide |
| 101 | monitor - screen - problem - 610 - video | 38 | 101_monitor_screen_problem_610 |
| 102 | baptism - sin - aaron - infants - baptized | 38 | 102_baptism_sin_aaron_infants |
| 103 | kuwait - saudi - iraq - gulf - war | 37 | 103_kuwait_saudi_iraq_gulf |
| 104 | station - redesign - dc - shuttle - space | 37 | 104_station_redesign_dc_shuttle |
| 105 | marriage - married - marry - ceremony - marriages | 37 | 105_marriage_married_marry_ceremony |
| 106 | polygon - polygons - ___ - routine - fast | 37 | 106_polygon_polygons_____routine |
| 107 | space - shuttle - launch - afit - astronomy | 37 | 107_space_shuttle_launch_afit |
| 108 | sabres - buffalo - fuhr - boston - bruins | 36 | 108_sabres_buffalo_fuhr_boston |
| 109 | waco - reno - federal - fbi - batf | 36 | 109_waco_reno_federal_fbi |
| 110 | bike - 805 - motorcycle - ride - motorcycles | 36 | 110_bike_805_motorcycle_ride |
| 111 | phone - hook - number - line - tip | 36 | 111_phone_hook_number_line |
| 112 | phillies - phils - 1964 - bunning - reds | 36 | 112_phillies_phils_1964_bunning |
| 113 | roby - fbi - udel - chopin - compound | 35 | 113_roby_fbi_udel_chopin |
| 114 | hernia - pain - bone - radiologist - arm | 35 | 114_hernia_pain_bone_radiologist |
| 115 | sco - split - newsgroup - graphics - comp | 35 | 115_sco_split_newsgroup_graphics |
| 116 | irq - interrupt - port - com4 - com3 | 34 | 116_irq_interrupt_port_com4 |
| 117 | gopher - search - images - ftp - data | 34 | 117_gopher_search_images_ftp |
| 118 | 3d - grafsys - library - graphics - shading | 34 | 118_3d_grafsys_library_graphics |
| 119 | comet - jupiter - gehrels - orbit - sq | 34 | 119_comet_jupiter_gehrels_orbit |
| 120 | gtoal - celp - speech - compression - voice | 34 | 120_gtoal_celp_speech_compression |
| 121 | insurance - health - private - care - gld | 34 | 121_insurance_health_private_care |
| 122 | centaur - proton - energy - uranium - ryukoku | 34 | 122_centaur_proton_energy_uranium |
| 123 | easter - goddess - mithras - resurrection - pagan | 33 | 123_easter_goddess_mithras_resurrection |
| 124 | cult - cults - freemasonry - baptists - baptist | 32 | 124_cult_cults_freemasonry_baptists |
| 125 | ticket - airline - hotel - tickets - voucher | 32 | 125_ticket_airline_hotel_tickets |
| 126 | nhl - stars - team - minnesota - franchise | 32 | 126_nhl_stars_team_minnesota |
| 127 | sox - red - bosio - bosox - clemens | 32 | 127_sox_red_bosio_bosox |
| 128 | ashok - slip - packet - cwru - slipper | 32 | 128_ashok_slip_packet_cwru |
| 129 | jehovah - elohim - lord - pope - father | 32 | 129_jehovah_elohim_lord_pope |
| 130 | spacecraft - baalke - mission - galileo - pluto | 31 | 130_spacecraft_baalke_mission_galileo |
| 131 | speed - 680x0 - x86 - clock - 68040 | 31 | 131_speed_680x0_x86_clock |
| 132 | escrow - key - agencies - keys - secure | 31 | 132_escrow_key_agencies_keys |
| 133 | doctor - clinic - surgery - patient - japanese | 31 | 133_doctor_clinic_surgery_patient |
| 134 | bike - bikes - mower - sale - honda | 31 | 134_bike_bikes_mower_sale |
| 135 | wave - bikers - cage - squid - waved | 31 | 135_wave_bikers_cage_squid |
| 136 | insurance - fault - car - hail - rates | 31 | 136_insurance_fault_car_hail |
| 137 | garrett - ingres - ibm - rickert - turkey | 30 | 137_garrett_ingres_ibm_rickert |
| 138 | theism - fanatism - frank - dwyer - belief | 30 | 138_theism_fanatism_frank_dwyer |
| 139 | migraine - pain - migraines - zisfein - headache | 30 | 139_migraine_pain_migraines_zisfein |
| 140 | 130 - boyle - road - speed - roads | 28 | 140_130_boyle_road_speed |
| 141 | satellite - digex - satellites - access - drag | 28 | 141_satellite_digex_satellites_access |
| 142 | 610 - centris - iivx - lciii - c610 | 28 | 142_610_centris_iivx_lciii |
| 143 | depression - prozac - thyroid - thyroxin - nutrition | 28 | 143_depression_prozac_thyroid_thyroxin |
| 144 | journalism - baseball - dwarner - bolick - dodgers | 28 | 144_journalism_baseball_dwarner_bolick |
| 145 | tempest - holland - northeastern - monitor - colostate | 28 | 145_tempest_holland_northeastern_monitor |
| 146 | 00 - wolverine - 1st - 50 - comics | 28 | 146_00_wolverine_1st_50 |
| 147 | murray - gm - wings - ottawa - lindros | 28 | 147_murray_gm_wings_ottawa |
| 148 | duo - 230 - beeps - chimes - machine | 27 | 148_duo_230_beeps_chimes |
| 149 | mr2 - engine - clutch - eliot - noisy | 27 | 149_mr2_engine_clutch_eliot |
| 150 | christianity - convenient - christian - definition - christians | 27 | 150_christianity_convenient_christian_definition |
| 151 | satan - ra - god - lucifer - heaven | 27 | 151_satan_ra_god_lucifer |
| 152 | summer - room - sublet - jhuvm - bedroom | 26 | 152_summer_room_sublet_jhuvm |
| 153 | software - wingert - level - sci - space | 26 | 153_software_wingert_level_sci |
| 154 | god - jesus - malcolm - royalroads - law | 26 | 154_god_jesus_malcolm_royalroads |
| 155 | europeans - nhl - rauser - players - european | 26 | 155_europeans_nhl_rauser_players |
| 156 | mustang - camaro - ford - howell - firebird | 25 | 156_mustang_camaro_ford_howell |
| 157 | stove - wpi - irvine - stratus - electric | 25 | 157_stove_wpi_irvine_stratus |
| 158 | scope - scopes - oscilloscope - fluke - phosphor | 25 | 158_scope_scopes_oscilloscope_fluke |
| 159 | odometer - bmw - sensor - car - dealer | 25 | 159_odometer_bmw_sensor_car |
| 160 | koresh - utarlg - sbc - uta - backing | 25 | 160_koresh_utarlg_sbc_uta |
| 161 | tape - backup - adaptec - aspi4dos - 1542 | 25 | 161_tape_backup_adaptec_aspi4dos |
| 162 | mask - goalie - gtd597a - votes - hrivnak | 25 | 162_mask_goalie_gtd597a_votes |
| 163 | astros - houston - games - rbi - sweda | 24 | 163_astros_houston_games_rbi |
| 164 | icon - icons - program - manager - vpnet | 24 | 164_icon_icons_program_manager |
| 165 | solvent - adhesive - duct - tape - ruck | 24 | 165_solvent_adhesive_duct_tape |
| 166 | keymap - key - numlock - keyboard - xterm | 24 | 166_keymap_key_numlock_keyboard |
| 167 | ir - dres - dnd - detector - cycle | 24 | 167_ir_dres_dnd_detector |
| 168 | car - dealer - price - blue - sales | 24 | 168_car_dealer_price_blue |
| 169 | midi - sound - blaster - driver - soundblaster | 24 | 169_midi_sound_blaster_driver |
| 170 | blue - boards - leds - led - green | 24 | 170_blue_boards_leds_led |
| 171 | wax - scratches - plastic - finish - paint | 24 | 171_wax_scratches_plastic_finish |
| 172 | motif - linux - bindings - xact - cose | 24 | 172_motif_linux_bindings_xact |
| 173 | v4 - v12 - cdac - v8 - ole | 24 | 173_v4_v12_cdac_v8 |
| 174 | officers - cop - mcguire - xxxx - police | 23 | 174_officers_cop_mcguire_xxxx |
| 175 | gant - hirschbeck - umpire - strike - duke | 23 | 175_gant_hirschbeck_umpire_strike |
| 176 | abortion - abortions - nyikos - choice - landreneau | 23 | 176_abortion_abortions_nyikos_choice |
| 177 | sharks - season - chuq - grade - acquired | 23 | 177_sharks_season_chuq_grade |
| 178 | punishment - penalty - capital - death - innocent | 23 | 178_punishment_penalty_capital_death |
| 179 | mouse - windows - driver - stuttgart - com3 | 23 | 179_mouse_windows_driver_stuttgart |
| 180 | processing - image - imaging - mishra - hendrix | 23 | 180_processing_image_imaging_mishra |
| 181 | freedom - virginia - beyer - ucla - ab4z | 23 | 181_freedom_virginia_beyer_ucla |
| 182 | seizures - corn - paulson - seizure - cereals | 23 | 182_seizures_corn_paulson_seizure |
| 183 | crohn - ibd - inflammation - diet - wiesel | 23 | 183_crohn_ibd_inflammation_diet |
| 184 | barbecued - foods - carcinogenic - food - meat | 23 | 184_barbecued_foods_carcinogenic_food |
| 185 | pillion - riding - advice - passenger - ride | 22 | 185_pillion_riding_advice_passenger |
| 186 | key - chip - clipper - session - encrypted | 22 | 186_key_chip_clipper_session |
| 187 | powerbook - portable - pb100 - pb - peirce | 22 | 187_powerbook_portable_pb100_pb |
| 188 | ear - ears - hearing - earwax - dizziness | 22 | 188_ear_ears_hearing_earwax |
| 189 | photoshop - adobe - rot - dgf1 - qc | 22 | 189_photoshop_adobe_rot_dgf1 |
| 190 | evolution - theory - rawlins - scharle - science | 22 | 190_evolution_theory_rawlins_scharle |
| 191 | ftp - nonibm - puff - glp - minivas | 22 | 191_ftp_nonibm_puff_glp |
| 192 | scanner - scanners - logitech - scanman - grayscale | 22 | 192_scanner_scanners_logitech_scanman |
| 193 | games - baseball - game - pitches - pitcher | 22 | 193_games_baseball_game_pitches |
| 194 | ham - interference - surges - alternator - watts | 22 | 194_ham_interference_surges_alternator |
| 195 | weight - omen - chromium - diet - fat | 22 | 195_weight_omen_chromium_diet |
| 196 | pregnency - teacher - oswego - biology - sperm | 21 | 196_pregnency_teacher_oswego_biology |
| 197 | ghostscript - postscript - ghostview - pageview - ftms | 21 | 197_ghostscript_postscript_ghostview_pageview |
| 198 | 3do - 3d - lightwave - list - imagine | 21 | 198_3do_3d_lightwave_list |
| 199 | polio - disease - alzheimer - syndrome - patients | 21 | 199_polio_disease_alzheimer_syndrome |
| 200 | motherboard - 386 - 386dx - murli - sale | 21 | 200_motherboard_386_386dx_murli |
| 201 | des - key - bits - block - attack | 21 | 201_des_key_bits_block |
| 202 | ax - max - g9v - b8f - a86 | 21 | 202_ax_max_g9v_b8f |
| 203 | israeli - biased - israel - media - none | 21 | 203_israeli_biased_israel_media |
| 204 | exhaust - carbs - intake - engine - air | 21 | 204_exhaust_carbs_intake_engine |
| 205 | tickets - 05pm - 35pm - june - ticket | 21 | 205_tickets_05pm_35pm_june |
| 206 | chain - wax - maxima - cookson - mitre | 21 | 206_chain_wax_maxima_cookson |
| 207 | toyota - cruiser - suv - jeep - explorer | 21 | 207_toyota_cruiser_suv_jeep |
| 208 | lipman - visualization - navy - graphics - seminar | 20 | 208_lipman_visualization_navy_graphics |
| 209 | dwi - speedy - driving - svoboda - liquor | 20 | 209_dwi_speedy_driving_svoboda |
| 210 | dialing - phones - tone - hugo - sweden | 20 | 210_dialing_phones_tone_hugo |
| 211 | image - processing - plplot - analysis - plotting | 20 | 211_image_processing_plplot_analysis |
| 212 | convertible - wife - car - targa - convertibles | 20 | 212_convertible_wife_car_targa |
| 213 | vuille - babb - synapse - ic - pcmcia | 20 | 213_vuille_babb_synapse_ic |
| 214 | nt - windows - chicogo - os - reimert | 20 | 214_nt_windows_chicogo_os |
| 215 | alomar - defensive - sandberg - average - career | 20 | 215_alomar_defensive_sandberg_average |
| 216 | blues - hawks - joseph - blackhawks - shanahan | 20 | 216_blues_hawks_joseph_blackhawks |
| 217 | graphics - pub - 128 - ray - ftp | 20 | 217_graphics_pub_128_ray |
| 218 | w4wg - network - windows - workgroups - lastdrive | 20 | 218_w4wg_network_windows_workgroups |
| 219 | tank - bag - goldberg - fj1100 - pouch | 20 | 219_tank_bag_goldberg_fj1100 |
| 220 | mailing - list - detweiler - mail - rdetweil | 20 | 220_mailing_list_detweiler_mail |
| 221 | gas - tear - unb - riddle - j979 | 20 | 221_gas_tear_unb_riddle |
| 222 | ide - bus - controller - vlb - scsi | 20 | 222_ide_bus_controller_vlb |
| 223 | saturn - dealer - profit - warranty - sl2 | 19 | 223_saturn_dealer_profit_warranty |
| 224 | cursor - xterm - blinking - cursors - allbery | 19 | 224_cursor_xterm_blinking_cursors |
| 225 | joystick - joysticks - arcade - port - int15h | 19 | 225_joystick_joysticks_arcade_port |
| 226 | lyme - disease - ld - infectious - patients | 19 | 226_lyme_disease_ld_infectious |
| 227 | context - jim - joslin - meritt - mwunix | 19 | 227_context_jim_joslin_meritt |
| 228 | qualcomm - clinton - qualcom - rdippold - clipper | 19 | 228_qualcomm_clinton_qualcom_rdippold |
| 229 | cancer - hiv - burzynski - breast - booklet | 19 | 229_cancer_hiv_burzynski_breast |
| 230 | kidney - stones - calcium - she - stone | 19 | 230_kidney_stones_calcium_she |
| 231 | rosicrucian - amorc - ch981 - alicea - tony | 19 | 231_rosicrucian_amorc_ch981_alicea |
| 232 | henrik - armenia - bm - armenians - karabakh | 19 | 232_henrik_armenia_bm_armenians |
| 233 | geico - insurance - claim - davew - wonnacott | 19 | 233_geico_insurance_claim_davew |
| 234 | eye - dominance - prk - handedness - rk | 19 | 234_eye_dominance_prk_handedness |
| 235 | church - churches - crossroads - movement - boston | 19 | 235_church_churches_crossroads_movement |
| 236 | water - mwra - phd - cellar - scoggin | 19 | 236_water_mwra_phd_cellar |
| 237 | integra - car - shadow - dodge - gtz | 19 | 237_integra_car_shadow_dodge |
| 238 | sabbath - worship - law - ceremonial - paul | 19 | 238_sabbath_worship_law_ceremonial |
| 239 | lobby - sammons - letter - ns111310 - colostate | 19 | 239_lobby_sammons_letter_ns111310 |
| 240 | henry - orion - film - prototype - toronto | 18 | 240_henry_orion_film_prototype |
| 241 | trinity - father - son - holy - god | 18 | 241_trinity_father_son_holy |
| 242 | captain - traded - captains - striped - resigned | 18 | 242_captain_traded_captains_striped |
| 243 | 42 - tiff - philosophical - significance - joachim | 18 | 243_42_tiff_philosophical_significance |
| 244 | space - mars - spaceflight - nick - fred | 18 | 244_space_mars_spaceflight_nick |
| 245 | astronaut - space - nasa - pilot - jemison | 18 | 245_astronaut_space_nasa_pilot |
| 246 | circumcision - cons - pros - penile - blix | 18 | 246_circumcision_cons_pros_penile |
| 247 | wire - wiring - ground - neutral - outlets | 17 | 247_wire_wiring_ground_neutral |
| 248 | women - men - monash - depression - sex | 17 | 248_women_men_monash_depression |
| 249 | prophecy - prophecies - earthquake - lord - prophesies | 17 | 249_prophecy_prophecies_earthquake_lord |
| 250 | cooling - towers - nuclear - plants - water | 17 | 250_cooling_towers_nuclear_plants |
| 251 | diesel - diesels - fuel - injector - emissions | 17 | 251_diesel_diesels_fuel_injector |
| 252 | windows - pif - dos - file - command | 17 | 252_windows_pif_dos_file |
| 253 | uv - bulb - flashlight - bulbs - neon | 17 | 253_uv_bulb_flashlight_bulbs |
| 254 | tires - tire - fluids - abs - dot | 17 | 254_tires_tire_fluids_abs |
| 255 | mhz - clock - operational - iisi - cpu | 17 | 255_mhz_clock_operational_iisi |
| 256 | cubs - braves - team - america - talent | 17 | 256_cubs_braves_team_america |
| 257 | lens - rupin - camera - dang - dartmouth | 17 | 257_lens_rupin_camera_dang |
| 258 | dock - duo - apple - bredell - deguzman | 16 | 258_dock_duo_apple_bredell |
| 259 | janet - reno - madman - children - she | 16 | 259_janet_reno_madman_children |
| 260 | lock - locks - cobra - kryptonite - cable | 16 | 260_lock_locks_cobra_kryptonite |
| 261 | mouse - jumpy - motion - byu - smoothly | 16 | 261_mouse_jumpy_motion_byu |
| 262 | god - creates - omnipotence - shaped - omnipotent | 16 | 262_god_creates_omnipotence_shaped |
| 263 | yassin - deir - irgun - dir - village | 16 | 263_yassin_deir_irgun_dir |
| 264 | xv - julian - copyright - lancs - escaped | 16 | 264_xv_julian_copyright_lancs |
| 265 | mjm - fm - circuits - mixer - musone | 16 | 265_mjm_fm_circuits_mixer |
| 266 | tga - rle - pov - povray - tmp | 16 | 266_tga_rle_pov_povray |
| 267 | workspace - managers - workspaces - manager - zip | 16 | 267_workspace_managers_workspaces_manager |
| 268 | quadra - scsi - nodine - cartridge - mac | 16 | 268_quadra_scsi_nodine_cartridge |
| 269 | hpgl - ilmenau - naplps - vuw - schmidt | 16 | 269_hpgl_ilmenau_naplps_vuw |
| 270 | jumper - 2190 - maxtor - thad - drive | 16 | 270_jumper_2190_maxtor_thad |
| 271 | dxf - iff - format - autocad - pei | 16 | 271_dxf_iff_format_autocad |
| 272 | mode - vesa - vga - svga - 640x400 | 16 | 272_mode_vesa_vga_svga |
| 273 | mosques - mosque - jerusalem - eggertj - jake | 16 | 273_mosques_mosque_jerusalem_eggertj |
| 274 | ulf - erau - huot - players - drozinst | 15 | 274_ulf_erau_huot_players |
| 275 | algorithm - secret - chip - reverse - clipper | 15 | 275_algorithm_secret_chip_reverse |
| 276 | font - fonts - alavi - ssa - 8514 | 15 | 276_font_fonts_alavi_ssa |
| 277 | gauge - nancy - gauges - temp - cigarette | 15 | 277_gauge_nancy_gauges_temp |
| 278 | octopus - detroit - ice - cunyvm - hammerl | 15 | 278_octopus_detroit_ice_cunyvm |
| 279 | cview - temp - moscom - directory - zenkar | 15 | 279_cview_temp_moscom_directory |
| 280 | drive - cable - quantum - disk - internal | 15 | 280_drive_cable_quantum_disk |
| 281 | logo - vgalogo - rle - startup - lgo | 15 | 281_logo_vgalogo_rle_startup |
| 282 | ini - updating - svein - sysedit - utility | 15 | 282_ini_updating_svein_sysedit |
| 283 | sin - hate - sinner - love - scott | 15 | 283_sin_hate_sinner_love |
| 284 | administration - privacy - eff - government - inquiry | 15 | 284_administration_privacy_eff_government |
| 285 | bonds - williams - batting - giants - clark | 15 | 285_bonds_williams_batting_giants |
| 286 | 02106 - chemistry - udel - paperback - ravel | 15 | 286_02106_chemistry_udel_paperback |
| 287 | cherry - coach - don - he - him | 15 | 287_cherry_coach_don_he |
| 288 | drink - drinking - riding - alcohol - hours | 15 | 288_drink_drinking_riding_alcohol |
| 289 | ether - planets - twist - sci - mnemonics | 14 | 289_ether_planets_twist_sci |
| 290 | keys - des - lokkur - nanosecond - keyseach | 14 | 290_keys_des_lokkur_nanosecond |
| 291 | virginia - uva - partying - andi - schools | 14 | 291_virginia_uva_partying_andi |
| 292 | hiram - dk - vhs - kou - koutd | 14 | 292_hiram_dk_vhs_kou |
| 293 | eliot - flat - boxer - 180 - v12 | 14 | 293_eliot_flat_boxer_180 |
| 294 | neilson - triumf - seoul - deadly - kids | 14 | 294_neilson_triumf_seoul_deadly |
| 295 | cruel - keith - caltech - constitution - painful | 14 | 295_cruel_keith_caltech_constitution |
| 296 | luminosity - red - rgb - hue - green | 14 | 296_luminosity_red_rgb_hue |
| 297 | she - were - they - her - sumgait | 14 | 297_she_were_they_her |
| 298 | jagr - francis - minus - uvic - player | 14 | 298_jagr_francis_minus_uvic |
| 299 | adl - bullock - gerard - francisco - arens | 14 | 299_adl_bullock_gerard_francisco |
| 300 | widgets - gadgets - dealy - motif - widget | 14 | 300_widgets_gadgets_dealy_motif |
| 301 | print - printer - file - claebaur - portal | 14 | 301_print_printer_file_claebaur |
| 302 | hacker - ethic - hackers - dorsai - carlos | 14 | 302_hacker_ethic_hackers_dorsai |
| 303 | weick - dana - him - cpu - sturges | 14 | 303_weick_dana_him_cpu |
| 304 | xputimage - server - sunview - cam - animation | 14 | 304_xputimage_server_sunview_cam |
| 305 | god - evil - serbian - saved - genocide | 14 | 305_god_evil_serbian_saved |
| 306 | nubus - pds - lc - marvin - higgins | 13 | 306_nubus_pds_lc_marvin |
| 307 | zeos - gateway - murthy - service - vasudev | 13 | 307_zeos_gateway_murthy_service |
| 308 | temperature - henry - interstellar - sky - radiation | 13 | 308_temperature_henry_interstellar_sky |
| 309 | uniforms - marlins - lloyd - reiniger - reds | 13 | 309_uniforms_marlins_lloyd_reiniger |
| 310 | faith - saved - romans - lukewarm - deeds | 13 | 310_faith_saved_romans_lukewarm |
| 311 | scsi - drive - ide - oracle - adaptec | 13 | 311_scsi_drive_ide_oracle |
| 312 | fifth - keyphrase - amendment - key - passwords | 13 | 312_fifth_keyphrase_amendment_key |
| 313 | tongues - language - tounges - gifted - bjorn | 13 | 313_tongues_language_tounges_gifted |
| 314 | rocks - overpass - ejv2j - erik - kids | 13 | 314_rocks_overpass_ejv2j_erik |
| 315 | biggest - disappointment - smale - mvp - surprise | 13 | 315_biggest_disappointment_smale_mvp |
| 316 | nicknames - nickname - healy - tammy - berg | 13 | 316_nicknames_nickname_healy_tammy |
| 317 | ampere - amp - db - bell - ohmite | 13 | 317_ampere_amp_db_bell |
| 318 | handling - ntuvax - ntu - handson - ba7116326 | 13 | 318_handling_ntuvax_ntu_handson |
| 319 | air - r12 - conditioning - substitutes - freon | 13 | 319_air_r12_conditioning_substitutes |
| 320 | soenke - bielefeld - widget - savela - masc0442 | 13 | 320_soenke_bielefeld_widget_savela |
| 321 | eisa - isa - bus - board - video | 13 | 321_eisa_isa_bus_board |
| 322 | wrench - srb - thiokol - pliers - tool | 13 | 322_wrench_srb_thiokol_pliers |
| 323 | oilers - pocklington - edmonton - northlands - yadallee | 13 | 323_oilers_pocklington_edmonton_northlands |
| 324 | sound - stereo - channel - mac - soundbase | 13 | 324_sound_stereo_channel_mac |
| 325 | movies - bikes - csundh30 - cassidy - ursa | 13 | 325_movies_bikes_csundh30_cassidy |
| 326 | haldol - elderly - lithium - drugs - hospital | 13 | 326_haldol_elderly_lithium_drugs |
| 327 | 8051 - oscar - mont - speth - spock | 13 | 327_8051_oscar_mont_speth |
| 328 | cache - iisi - powercache - card - fpu | 13 | 328_cache_iisi_powercache_card |
| 329 | bryce - bike - manish - arches - touring | 13 | 329_bryce_bike_manish_arches |
| 330 | skate - carol - malarchuk - sei - neck | 13 | 330_skate_carol_malarchuk_sei |
| 331 | rush - compuserve - jongsma - anovak - henson | 12 | 331_rush_compuserve_jongsma_anovak |
| 332 | date - clock - dos - bios - cmos | 12 | 332_date_clock_dos_bios |
| 333 | mcadams - sale - suresh - mattress - aj008 | 12 | 333_mcadams_sale_suresh_mattress |
| 334 | silence - moment - prayer - eeb1 - opposing | 12 | 334_silence_moment_prayer_eeb1 |
| 335 | jesus - prayers - god - name - prayer | 12 | 335_jesus_prayers_god_name |
| 336 | habitable - planets - atmosphere - oxygen - planet | 12 | 336_habitable_planets_atmosphere_oxygen |
| 337 | sunset - sunrise - drexel - cbis - wetstein | 12 | 337_sunset_sunrise_drexel_cbis |
| 338 | selective - borden - service - abolish - naval | 12 | 338_selective_borden_service_abolish |
| 339 | illustrator - diablo - autotrace - points - drawing | 12 | 339_illustrator_diablo_autotrace_points |
| 340 | love - kodak - god - dps - ico | 12 | 340_love_kodak_god_dps |
| 341 | koresh - griffen - batf - children - fbi | 12 | 341_koresh_griffen_batf_children |
| 342 | needles - acupuncture - needle - aids - hypodermic | 12 | 342_needles_acupuncture_needle_aids |
| 343 | accelerations - acceleration - 45g - deaddio - amruth | 12 | 343_accelerations_acceleration_45g_deaddio |
| 344 | tape - copy - vcr - video - destructing | 12 | 344_tape_copy_vcr_video |
| 345 | pmy - sword - royalroads - yadlowsky - malcolm | 12 | 345_pmy_sword_royalroads_yadlowsky |
| 346 | educational - price - newsbytes - cda - eu | 12 | 346_educational_price_newsbytes_cda |
| 347 | liar - lunatic - he - christian - bible | 11 | 347_liar_lunatic_he_christian |
| 348 | eff - minerva - yale - jgfoot - tarl | 11 | 348_eff_minerva_yale_jgfoot |
| 349 | seema - hannover - madvlsi - varma - columbia | 11 | 349_seema_hannover_madvlsi_varma |
| 350 | eugenics - memes - genes - genome - ruegg | 11 | 350_eugenics_memes_genes_genome |
| 351 | lunar - ltm1 - manned - tele - exploration | 11 | 351_lunar_ltm1_manned_tele |
| 352 | switch - beams - st11 - bimmer - cookson | 11 | 352_switch_beams_st11_bimmer |
| 353 | commandment - christians - temper - inference - jesus | 11 | 353_commandment_christians_temper_inference |
| 354 | harkey - dl - oscs - cubs - wetteland | 11 | 354_harkey_dl_oscs_cubs |
| 355 | fourd - 0565 - 494 - dimension - cute | 11 | 355_fourd_0565_494_dimension |
| 356 | mattingly - tesla - njit - drm6640 - baseman | 11 | 356_mattingly_tesla_njit_drm6640 |
| 357 | placebo - roth - rr - medicine - jb | 11 | 357_placebo_roth_rr_medicine |
| 358 | tempest - c650 - cyclone - price - drop | 10 | 358_tempest_c650_cyclone_price |
| 359 | ssf - overhead - nasa - tax - billion | 10 | 359_ssf_overhead_nasa_tax |
| 360 | mining - freaks - alaska - eco - miners | 10 | 360_mining_freaks_alaska_eco |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None
## Framework versions
* Numpy: 1.25.2
* HDBSCAN: 0.8.33
* UMAP: 0.5.6
* Pandas: 2.0.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.7.0
* Transformers: 4.40.1
* Numba: 0.58.1
* Plotly: 5.15.0
* Python: 3.10.12
|
ganbold13/roberta-base-ner-demo | ganbold13 | 2024-05-08T08:47:56Z | 109 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"token-classification",
"generated_from_trainer",
"mn",
"base_model:bayartsogt/mongolian-roberta-base",
"base_model:finetune:bayartsogt/mongolian-roberta-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-08T08:47:36Z | ---
language:
- mn
base_model: bayartsogt/mongolian-roberta-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-ner-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-ner-demo
This model is a fine-tuned version of [bayartsogt/mongolian-roberta-base](https://huggingface.co/bayartsogt/mongolian-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1372
- Precision: 0.9235
- Recall: 0.9342
- F1: 0.9288
- Accuracy: 0.9800
## Model description
More information needed
## Intended uses & limitations
More information needed
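Pending more details from the author, the sketch below shows basic inference with the 🤗 Transformers `pipeline` API. It assumes the repository bundles its tokenizer and label mappings; the Mongolian example sentence and the returned entity types are illustrative only.
```python
from transformers import pipeline

# Minimal inference sketch for this Mongolian NER checkpoint.
ner = pipeline(
    "token-classification",
    model="ganbold13/roberta-base-ner-demo",
    aggregation_strategy="simple",  # merge word-piece predictions into entity spans
)

# Hypothetical example sentence; replace with your own Mongolian text.
print(ner("Дорж Улаанбаатар хотод ажиллаж байна."))
```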
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
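For reference, these settings roughly correspond to the following `transformers.TrainingArguments`. The original training script is not included in this card, so the output directory and any options not listed above are placeholders.
```python
from transformers import TrainingArguments

# Approximate mapping of the hyperparameters listed above (sketch only).
training_args = TrainingArguments(
    output_dir="roberta-base-ner-demo",  # placeholder, not from the real run
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```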
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1652 | 1.0 | 477 | 0.0832 | 0.8915 | 0.9136 | 0.9024 | 0.9762 |
| 0.0512 | 2.0 | 954 | 0.0828 | 0.9071 | 0.9244 | 0.9156 | 0.9778 |
| 0.0268 | 3.0 | 1431 | 0.0909 | 0.9179 | 0.9274 | 0.9226 | 0.9787 |
| 0.0146 | 4.0 | 1908 | 0.0975 | 0.9217 | 0.9322 | 0.9269 | 0.9798 |
| 0.008 | 5.0 | 2385 | 0.1127 | 0.9178 | 0.9313 | 0.9245 | 0.9793 |
| 0.0053 | 6.0 | 2862 | 0.1255 | 0.9207 | 0.9295 | 0.9251 | 0.9790 |
| 0.0034 | 7.0 | 3339 | 0.1292 | 0.9235 | 0.9335 | 0.9285 | 0.9797 |
| 0.0024 | 8.0 | 3816 | 0.1339 | 0.9186 | 0.9332 | 0.9258 | 0.9795 |
| 0.0015 | 9.0 | 4293 | 0.1359 | 0.9239 | 0.9343 | 0.9291 | 0.9800 |
| 0.0011 | 10.0 | 4770 | 0.1372 | 0.9235 | 0.9342 | 0.9288 | 0.9800 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
AbhiKadoor/distilbert-base-uncased-finetuned-squad | AbhiKadoor | 2024-05-08T08:47:27Z | 3 | 0 | null | [
"distilbert",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2024-02-29T10:42:12Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8093
## Model description
More information needed
## Intended uses & limitations
More information needed
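Pending more details from the author, the sketch below shows extractive question answering with the 🤗 Transformers `pipeline` API. It assumes the checkpoint ships its tokenizer and QA head; the question/context pair is illustrative only.
```python
from transformers import pipeline

# Minimal inference sketch; the fine-tuning dataset is not documented here.
qa = pipeline(
    "question-answering",
    model="AbhiKadoor/distilbert-base-uncased-finetuned-squad",
)

result = qa(
    question="What architecture is the model based on?",
    context="This checkpoint is a fine-tuned version of distilbert-base-uncased "
            "for extractive question answering.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```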
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 56 | 4.2940 |
| No log | 2.0 | 112 | 3.8714 |
| No log | 3.0 | 168 | 3.8093 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0+cpu
- Datasets 2.19.1
- Tokenizers 0.19.1
|
annamalai-s/bertopic_newsgroup_mpnet | annamalai-s | 2024-05-08T08:46:04Z | 8 | 0 | bertopic | [
"bertopic",
"text-classification",
"region:us"
] | text-classification | 2024-05-08T08:46:02Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# bertopic_newsgroup_mpnet
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic

# Load the fitted topic model from the Hugging Face Hub.
topic_model = BERTopic.load("annamalai-s/bertopic_newsgroup_mpnet")
# List all topics with their sizes and top keywords.
topic_model.get_topic_info()
```
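To assign topics to new, unseen documents, the model's `transform` method can be used. The snippet below is a brief sketch: the example text is illustrative, and inference requires the underlying sentence-transformer embedding model to be available locally or downloadable.
```python
# Predict topics for unseen documents; the example text is illustrative.
docs = ["The goalie made an incredible save in overtime."]
topics, probs = topic_model.transform(docs)

# Top words of the predicted topic for the first document.
topic_model.get_topic(topics[0])
```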
## Topic overview
* Number of topics: 445
* Number of training documents: 18846
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | to - the - for - from - is | 10 | -1_to_the_for_from |
| 0 | gun - guns - firearms - crime - handgun | 5381 | 0_gun_guns_firearms_crime |
| 1 | cramer - optilink - gay - clayton - sexual | 266 | 1_cramer_optilink_gay_clayton |
| 2 | fbi - batf - waco - compound - koresh | 229 | 2_fbi_batf_waco_compound |
| 3 | db - mov - bh - si - bl | 134 | 3_db_mov_bh_si |
| 4 | atf - fire - survivors - ranch - dividian | 132 | 4_atf_fire_survivors_ranch |
| 5 | drive - slave - drives - master - tape | 127 | 5_drive_slave_drives_master |
| 6 | moon - lunar - billion - alaska - prize | 127 | 6_moon_lunar_billion_alaska |
| 7 | armenian - turkish - armenians - serdar - argic | 117 | 7_armenian_turkish_armenians_serdar |
| 8 | espn - game - abc - coverage - hockey | 112 | 8_espn_game_abc_coverage |
| 9 | 3d - phigs - graphics - navy - lipman | 112 | 9_3d_phigs_graphics_navy |
| 10 | israeli - israel - israelis - arab - soldiers | 104 | 10_israeli_israel_israelis_arab |
| 11 | dos - xfree86 - windows - server - tcp | 100 | 11_dos_xfree86_windows_server |
| 12 | sale - drive - meg - ram - floppy | 98 | 12_sale_drive_meg_ram |
| 13 | homosexuality - homosexual - paul - christians - sin | 97 | 13_homosexuality_homosexual_paul_christians |
| 14 | clutch - alomar - runs - baerga - average | 93 | 14_clutch_alomar_runs_baerga |
| 15 | os - microsoft - challenge - supporters - windows | 91 | 15_os_microsoft_challenge_supporters |
| 16 | migraine - sleep - dyer - thyroid - geb | 91 | 16_migraine_sleep_dyer_thyroid |
| 17 | drive - ide - scsi - drives - controller | 87 | 17_drive_ide_scsi_drives |
| 18 | modem - modems - fax - courier - sportster | 85 | 18_modem_modems_fax_courier |
| 19 | msg - food - sensitivity - chinese - superstition | 84 | 19_msg_food_sensitivity_chinese |
| 20 | objective - morality - larson - frank - values | 84 | 20_objective_morality_larson_frank |
| 21 | windows - swap - memory - emm386 - file | 84 | 21_windows_swap_memory_emm386 |
| 22 | sale - speakers - stereo - offer - amp | 83 | 22_sale_speakers_stereo_offer |
| 23 | jpeg - gif - image - format - file | 82 | 23_jpeg_gif_image_format |
| 24 | jewish - zionism - israel - jews - jew | 82 | 24_jewish_zionism_israel_jews |
| 25 | space - nasa - venus - planet - earth | 80 | 25_space_nasa_venus_planet |
| 26 | encryption - clipper - chip - government - wiretap | 78 | 26_encryption_clipper_chip_government |
| 27 | polygon - polygons - ___ - routine - algorithm | 76 | 27_polygon_polygons_____routine |
| 28 | car - miles - toyota - sale - mazda | 73 | 28_car_miles_toyota_sale |
| 29 | scsi - ide - dma - bus - isa | 71 | 29_scsi_ide_dma_bus |
| 30 | 25 - pit - pts - det - la | 69 | 30_25_pit_pts_det |
| 31 | stephanopoulos - president - mr - myers - ms | 68 | 31_stephanopoulos_president_mr_myers |
| 32 | rushdie - islam - jaeger - islamic - gregg | 66 | 32_rushdie_islam_jaeger_islamic |
| 33 | dumbest - automotive - lights - concepts - continental | 66 | 33_dumbest_automotive_lights_concepts |
| 34 | motif - openwindows - xview - olit - x11 | 66 | 34_motif_openwindows_xview_olit |
| 35 | games - sega - genesis - snes - sale | 65 | 35_games_sega_genesis_snes |
| 36 | bosnia - muslims - bosnian - serbs - bosnians | 65 | 36_bosnia_muslims_bosnian_serbs |
| 37 | mary - her - she - immaculate - conception | 64 | 37_mary_her_she_immaculate |
| 38 | israel - lebanese - lebanon - israeli - hezbollah | 64 | 38_israel_lebanese_lebanon_israeli |
| 39 | existence - evolution - theory - science - exist | 62 | 39_existence_evolution_theory_science |
| 40 | hell - eternal - heaven - god - jesus | 62 | 40_hell_eternal_heaven_god |
| 41 | simms - simm - meg - pin - ram | 61 | 41_simms_simm_meg_pin |
| 42 | br - isc - government - steveh - thor | 61 | 42_br_isc_government_steveh |
| 43 | dos - stacker - windows - dos6 - disk | 60 | 43_dos_stacker_windows_dos6 |
| 44 | clutch - shifting - shift - manual - transmission | 60 | 44_clutch_shifting_shift_manual |
| 45 | radar - detector - detectors - valentine - ka | 59 | 45_radar_detector_detectors_valentine |
| 46 | tax - taxes - income - deficit - vat | 59 | 46_tax_taxes_income_deficit |
| 47 | keyboard - key - keys - keycode - accelerators | 58 | 47_keyboard_key_keys_keycode |
| 48 | copy - protected - protection - disks - sehari | 58 | 48_copy_protected_protection_disks |
| 49 | station - redesign - space - nasa - option | 57 | 49_station_redesign_space_nasa |
| 50 | lib - libxmu - ndet_loop - xmu - usr | 56 | 50_lib_libxmu_ndet_loop_xmu |
| 51 | dog - dogs - parr - ucalgary - attack | 55 | 51_dog_dogs_parr_ucalgary |
| 52 | leafs - wings - game - detroit - goal | 52 | 52_leafs_wings_game_detroit |
| 53 | cancer - water - medical - mwra - circumcision | 51 | 53_cancer_water_medical_mwra |
| 54 | sleeve - cd - cds - sale - picture | 51 | 54_sleeve_cd_cds_sale |
| 55 | sharks - season - keenan - rangers - chuq | 50 | 55_sharks_season_keenan_rangers |
| 56 | battery - batteries - concrete - acid - lead | 49 | 56_battery_batteries_concrete_acid |
| 57 | drugs - drug - marijuana - legalization - cigarettes | 49 | 57_drugs_drug_marijuana_legalization |
| 58 | exhaust - carbs - bike - carb - honda | 49 | 58_exhaust_carbs_bike_carb |
| 59 | shaft - wheelies - stafford - wheelie - winona | 48 | 59_shaft_wheelies_stafford_wheelie |
| 60 | key - chip - clipper - algorithm - chips | 48 | 60_key_chip_clipper_algorithm |
| 61 | photography - krillean - kirlian - pictures - sol1 | 48 | 61_photography_krillean_kirlian_pictures |
| 62 | bike - bikes - motorcycle - motorcycles - buying | 48 | 62_bike_bikes_motorcycle_motorcycles |
| 63 | lynn - riders - rtsg - motorcycling - bike | 47 | 63_lynn_riders_rtsg_motorcycling |
| 64 | church - churches - christianity - christian - movement | 47 | 64_church_churches_christianity_christian |
| 65 | hst - mission - servicing - shuttle - boost | 47 | 65_hst_mission_servicing_shuttle |
| 66 | nissan - wagon - villager - altima - vw | 47 | 66_nissan_wagon_villager_altima |
| 67 | helmet - helmets - shoei - jacket - eskimo | 47 | 67_helmet_helmets_shoei_jacket |
| 68 | finland - sweden - wc - czech - ericsson | 47 | 68_finland_sweden_wc_czech |
| 69 | gamma - oort - bursters - ray - cloud | 46 | 69_gamma_oort_bursters_ray |
| 70 | jewish - baseball - vb30 - lafibm - players | 46 | 70_jewish_baseball_vb30_lafibm |
| 71 | sky - vandalizing - night - pollution - enzo | 46 | 71_sky_vandalizing_night_pollution |
| 72 | nanao - monitors - viewsonic - monitor - inches | 46 | 72_nanao_monitors_viewsonic_monitor |
| 73 | militia - amendment - arms - regulated - bear | 45 | 73_militia_amendment_arms_regulated |
| 74 | rocks - teenagers - warning - overpass - kids | 45 | 74_rocks_teenagers_warning_overpass |
| 75 | des - key - keyseach - bits - shelf | 45 | 75_des_key_keyseach_bits |
| 76 | dl - wetteland - harkey - franco - plymouth | 45 | 76_dl_wetteland_harkey_franco |
| 77 | petch - gvg47 - love - god - gvg | 44 | 77_petch_gvg47_love_god |
| 78 | pin - card - connector - ethernet - board | 44 | 78_pin_card_connector_ethernet |
| 79 | leds - uv - led - blue - subliminal | 43 | 79_leds_uv_led_blue |
| 80 | theism - fanatism - atheism - belief - theists | 42 | 80_theism_fanatism_atheism_belief |
| 81 | freedom - forged - locutus - colorado - speech | 42 | 81_freedom_forged_locutus_colorado |
| 82 | moral - morality - keith - livesey - caltech | 41 | 82_moral_morality_keith_livesey |
| 83 | phd - environmentalism - environmental - heath - pantheism | 41 | 83_phd_environmentalism_environmental_heath |
| 84 | buffalo - sabres - blues - bruins - boston | 41 | 84_buffalo_sabres_blues_bruins |
| 85 | countersteering - mjs - bike - countersteering_faq - lean | 41 | 85_countersteering_mjs_bike_countersteering_faq |
| 86 | nmm - behind - traffic - lane - bike | 41 | 86_nmm_behind_traffic_lane |
| 87 | games - game - baseball - pitches - pitcher | 41 | 87_games_game_baseball_pitches |
| 88 | cpu - fan - heat - sink - fans | 41 | 88_cpu_fan_heat_sink |
| 89 | jehovah - elohim - father - lord - son | 41 | 89_jehovah_elohim_father_lord |
| 90 | cruel - punishment - keith - penalty - death | 40 | 90_cruel_punishment_keith_penalty |
| 91 | insurance - health - private - care - gld | 40 | 91_insurance_health_private_care |
| 92 | powerbook - duo - portable - pb - pb100 | 40 | 92_powerbook_duo_portable_pb |
| 93 | bike - sale - miles - mower - fork | 39 | 93_bike_sale_miles_mower |
| 94 | postscript - ghostscript - ghostview - pageview - files | 39 | 94_postscript_ghostscript_ghostview_pageview |
| 95 | candida - yeast - noring - systemic - infections | 39 | 95_candida_yeast_noring_systemic |
| 96 | card - p9000 - orchid - weitek - vlb | 39 | 96_card_p9000_orchid_weitek |
| 97 | jews - israel - arabs - land - arab | 38 | 97_jews_israel_arabs_land |
| 98 | radiosity - pov - raytracing - ray - amann | 38 | 98_radiosity_pov_raytracing_ray |
| 99 | oil - drain - changing - ohio - magnus | 38 | 99_oil_drain_changing_ohio |
| 100 | scope - scopes - oscilloscope - fluke - meter | 38 | 100_scope_scopes_oscilloscope_fluke |
| 101 | faith - god - exist - proof - burden | 37 | 101_faith_god_exist_proof |
| 102 | sox - rbi - games - game - win | 37 | 102_sox_rbi_games_game |
| 103 | greek - greece - greeks - turkish - turks | 37 | 103_greek_greece_greeks_turkish |
| 104 | science - methodology - sas - fulk - lady | 37 | 104_science_methodology_sas_fulk |
| 105 | hockey - nhl - team - league - stars | 37 | 105_hockey_nhl_team_league |
| 106 | koresh - fbi - compound - fire - cult | 37 | 106_koresh_fbi_compound_fire |
| 107 | lens - camera - rupin - dang - goldberg | 37 | 107_lens_camera_rupin_dang |
| 108 | xv - escaped - g3states - endif - define | 37 | 108_xv_escaped_g3states_endif |
| 109 | mormons - jews - lds - sword - brigham | 36 | 109_mormons_jews_lds_sword |
| 110 | resurrection - jesus - tomb - rise - luke | 36 | 110_resurrection_jesus_tomb_rise |
| 111 | monitors - hours - nevai - day - monitor | 36 | 111_monitors_hours_nevai_day |
| 112 | window - dialog - widget - xlib - application | 36 | 112_window_dialog_widget_xlib |
| 113 | arrogance - truth - christians - arrogant - darren | 36 | 113_arrogance_truth_christians_arrogant |
| 114 | gas - tear - unb - cs - jupiter | 36 | 114_gas_tear_unb_cs |
| 115 | winfield - mattingly - peak - henderson - robinson | 35 | 115_winfield_mattingly_peak_henderson |
| 116 | escrow - key - agencies - aclu - branch | 35 | 116_escrow_key_agencies_aclu |
| 117 | judas - tyre - prophecy - prophecies - decenso | 35 | 117_judas_tyre_prophecy_prophecies |
| 118 | image - processing - plplot - tools - analysis | 35 | 118_image_processing_plplot_tools |
| 119 | eisa - isa - bus - vlb - motherboard | 35 | 119_eisa_isa_bus_vlb |
| 120 | clipper - phone - phones - key - escrow | 35 | 120_clipper_phone_phones_key |
| 121 | morris - team - jays - clemens - viola | 35 | 121_morris_team_jays_clemens |
| 122 | space - moscow - shuttle - spaceflight - term | 34 | 122_space_moscow_shuttle_spaceflight |
| 123 | hotel - voucher - ticket - hiram - airline | 34 | 123_hotel_voucher_ticket_hiram |
| 124 | paint - wax - scratches - plastic - lisa | 34 | 124_paint_wax_scratches_plastic |
| 125 | zeos - gateway - 486 - monitor - murthy | 34 | 125_zeos_gateway_486_monitor |
| 126 | space - advertising - marketing - sky - billboard | 34 | 126_space_advertising_marketing_sky |
| 127 | gopher - search - ftp - sites - exhibit | 34 | 127_gopher_search_ftp_sites |
| 128 | 0d - _o - cx - c_ - 145 | 34 | 128_0d__o_cx_c_ |
| 129 | gtoal - celp - speech - compression - toal | 33 | 129_gtoal_celp_speech_compression |
| 130 | air - freon - aftermarket - behanna - r12 | 33 | 130_air_freon_aftermarket_behanna |
| 131 | 3do - quicktime - ricardo - playback - mcmains | 33 | 131_3do_quicktime_ricardo_playback |
| 132 | v4 - v6 - v8 - v12 - cdac | 33 | 132_v4_v6_v8_v12 |
| 133 | font - fonts - character - truetype - windows | 33 | 133_font_fonts_character_truetype |
| 134 | insurance - car - fault - rates - deductible | 32 | 134_insurance_car_fault_rates |
| 135 | drivers - driver - card - jmarttila - actix | 32 | 135_drivers_driver_card_jmarttila |
| 136 | tempest - holland - northeastern - utsa - cam | 32 | 136_tempest_holland_northeastern_utsa |
| 137 | mustang - ford - camaro - howell - car | 32 | 137_mustang_ford_camaro_howell |
| 138 | com4 - modem - com3 - port - 16550 | 31 | 138_com4_modem_com3_port |
| 139 | deskjet - bubblejet - ink - printers - printer | 31 | 139_deskjet_bubblejet_ink_printers |
| 140 | expose - window - event - buzz - main_win | 31 | 140_expose_window_event_buzz |
| 141 | europeans - nhl - rauser - players - european | 31 | 141_europeans_nhl_rauser_players |
| 142 | anonymous - privacy - anonymity - eff - internet | 31 | 142_anonymous_privacy_anonymity_eff |
| 143 | vs - winner - bos - cal - chi | 31 | 143_vs_winner_bos_cal |
| 144 | random - key - passwords - fifth - security | 31 | 144_random_key_passwords_fifth |
| 145 | doctor - clinic - med - hoss - medicine | 31 | 145_doctor_clinic_med_hoss |
| 146 | dc - shuttle - sdio - ssto - flight | 31 | 146_dc_shuttle_sdio_ssto |
| 147 | split - newsgroup - cdrom - comp - graphics | 30 | 147_split_newsgroup_cdrom_comp |
| 148 | nsa - cryptosystems - nea - paranoia - encryption | 30 | 148_nsa_cryptosystems_nea_paranoia |
| 149 | colormap - visual - color - colormaps - dpy | 30 | 149_colormap_visual_color_colormaps |
| 150 | jesus - brian - life - sandvik - kendig | 30 | 150_jesus_brian_life_sandvik |
| 151 | atheism - asimov - timmons - alt - bake | 30 | 151_atheism_asimov_timmons_alt |
| 152 | monitor - vga - monitors - lc - svga | 30 | 152_monitor_vga_monitors_lc |
| 153 | eye - dominance - prk - handedness - rk | 29 | 153_eye_dominance_prk_handedness |
| 154 | clinton - administration - qualcomm - tapped - drug | 29 | 154_clinton_administration_qualcomm_tapped |
| 155 | fpu - c650 - coprocessor - 040 - 650 | 29 | 155_fpu_c650_coprocessor_040 |
| 156 | cherry - coach - hockey - don - gilmour | 29 | 156_cherry_coach_hockey_don |
| 157 | baptism - sin - aaron - baptized - infants | 29 | 157_baptism_sin_aaron_baptized |
| 158 | car - dealer - price - sps - blue | 28 | 158_car_dealer_price_sps |
| 159 | ir - dres - dnd - detector - detection | 28 | 159_ir_dres_dnd_detector |
| 160 | rosicrucian - order - ch981 - amorc - tony | 28 | 160_rosicrucian_order_ch981_amorc |
| 161 | health - tobacco - cesarean - cancer - smokeless | 28 | 161_health_tobacco_cesarean_cancer |
| 162 | nt - windows - chicogo - os - rajiev | 28 | 162_nt_windows_chicogo_os |
| 163 | king - kyle - adjective - nc - cramm | 28 | 163_king_kyle_adjective_nc |
| 164 | muslims - serbs - croats - muslim - bosnian | 28 | 164_muslims_serbs_croats_muslim |
| 165 | torre - hitter - gilkey - lankford - manager | 27 | 165_torre_hitter_gilkey_lankford |
| 166 | bit - 24 - deniaud - bits - images | 27 | 166_bit_24_deniaud_bits |
| 167 | dwi - infante - driving - drunk - speedy | 27 | 167_dwi_infante_driving_drunk |
| 168 | xdm - server - login - graphic_display - error | 27 | 168_xdm_server_login_graphic_display |
| 169 | 92 - hiv - aids - needles - 12 | 27 | 169_92_hiv_aids_needles |
| 170 | diamond - stealth - drivers - card - speedstar | 27 | 170_diamond_stealth_drivers_card |
| 171 | lopez - catchers - olson - braves - players | 27 | 171_lopez_catchers_olson_braves |
| 172 | books - 02106 - 00 - chemistry - udel | 27 | 172_books_02106_00_chemistry |
| 173 | duo - dock - apple - 230 - bredell | 27 | 173_duo_dock_apple_230 |
| 174 | cable - antenna - tv - td - antennas | 27 | 174_cable_antenna_tv_td |
| 175 | stadium - baseball - oswego - shea - mets | 26 | 175_stadium_baseball_oswego_shea |
| 176 | images - image - geosphere - earth - unocal | 26 | 176_images_image_geosphere_earth |
| 177 | sci - space - prado - henry - permanet | 26 | 177_sci_space_prado_henry |
| 178 | peace - israel - palestinian - palestinians - talks | 26 | 178_peace_israel_palestinian_palestinians |
| 179 | speed - x86 - 040 - 68040 - 680x0 | 26 | 179_speed_x86_040_68040 |
| 180 | adcom - amp - amps - sound - microphone | 26 | 180_adcom_amp_amps_sound |
| 181 | ati - ultra - drivers - gateway - 1280x1024 | 26 | 181_ati_ultra_drivers_gateway |
| 182 | clipper - screw - chip - encryption - initiative | 26 | 182_clipper_screw_chip_encryption |
| 183 | analog - seema - converter - hannover - 4066 | 26 | 183_analog_seema_converter_hannover |
| 184 | mask - goalie - gtd597a - votes - hrivnak | 26 | 184_mask_goalie_gtd597a_votes |
| 185 | 130 - rush - fast - lane - roads | 25 | 185_130_rush_fast_lane |
| 186 | ashok - biochemistry - winqvt - kuleuven - liris | 25 | 186_ashok_biochemistry_winqvt_kuleuven |
| 187 | room - summer - sublet - jhuvm - kitchen | 25 | 187_room_summer_sublet_jhuvm |
| 188 | war - gulf - hussein - bombing - iraqi | 25 | 188_war_gulf_hussein_bombing |
| 189 | ulf - erau - player - huot - shot | 25 | 189_ulf_erau_player_huot |
| 190 | window - manager - xsizehints - bading - position | 25 | 190_window_manager_xsizehints_bading |
| 191 | henrik - armenia - bm - planes - armenians | 25 | 191_henrik_armenia_bm_planes |
| 192 | crypt - key - cryptography - des - ciphers | 25 | 192_crypt_key_cryptography_des |
| 193 | amd - cyrix - 486dx2 - 486 - mhz | 25 | 193_amd_cyrix_486dx2_486 |
| 194 | midi - sound - blaster - speaker - driver | 25 | 194_midi_sound_blaster_speaker |
| 195 | mode - vga - tiang - svga - modes | 25 | 195_mode_vga_tiang_svga |
| 196 | accelerations - acceleration - breathing - 45g - deaddio | 25 | 196_accelerations_acceleration_breathing_45g |
| 197 | wire - wiring - ground - neutral - outlets | 24 | 197_wire_wiring_ground_neutral |
| 198 | pain - bone - almanac - rib - massager | 24 | 198_pain_bone_almanac_rib |
| 199 | reno - janet - madman - children - she | 24 | 199_reno_janet_madman_children |
| 200 | barbecued - carcinogenic - meat - foods - risk | 24 | 200_barbecued_carcinogenic_meat_foods |
| 201 | cmos - beeps - chimes - memory - error | 24 | 201_cmos_beeps_chimes_memory |
| 202 | crohn - diet - ibd - inflammation - eat | 24 | 202_crohn_diet_ibd_inflammation |
| 203 | wave - bikers - waved - cage - waving | 24 | 203_wave_bikers_waved_cage |
| 204 | batf - warrant - knock - hallam - police | 24 | 204_batf_warrant_knock_hallam |
| 205 | hacker - ethic - computer - hackers - programming | 23 | 205_hacker_ethic_computer_hackers |
| 206 | mouse - motion - jumpy - smoothly - jump | 23 | 206_mouse_motion_jumpy_smoothly |
| 207 | comet - jupiter - gehrels - sq - baalke | 23 | 207_comet_jupiter_gehrels_sq |
| 208 | machines - precision - comments - contact - version | 23 | 208_machines_precision_comments_contact |
| 209 | cosmo - angmar - alfalfa - pro - tsk | 23 | 209_cosmo_angmar_alfalfa_pro |
| 210 | scsi - quadra - nodine - mac - cartridge | 23 | 210_scsi_quadra_nodine_mac |
| 211 | adl - bullock - gerard - francisco - arens | 23 | 211_adl_bullock_gerard_francisco |
| 212 | pgp - rsa - cryptography - code - patents | 23 | 212_pgp_rsa_cryptography_code |
| 213 | koresh - sbc - backing - utarlg - enclosed | 23 | 213_koresh_sbc_backing_utarlg |
| 214 | solvent - adhesive - duct - ruck - tape | 23 | 214_solvent_adhesive_duct_ruck |
| 215 | command - spacecraft - galileo - baalke - timer | 23 | 215_command_spacecraft_galileo_baalke |
| 216 | skin - dry - vaseline - rutin - acne | 23 | 216_skin_dry_vaseline_rutin |
| 217 | gaza - gazans - ghetto - israeli - jews | 23 | 217_gaza_gazans_ghetto_israeli |
| 218 | 03 - 02 - 04 - 01 - 05 | 22 | 218_03_02_04_01 |
| 219 | ra - mormon - lds - bible - jesus | 22 | 219_ra_mormon_lds_bible |
| 220 | abortion - child - fetus - margoli - abortions | 22 | 220_abortion_child_fetus_margoli |
| 221 | 00 - wolverine - 1st - comics - hulk | 22 | 221_00_wolverine_1st_comics |
| 222 | mac - 32 - os - stuffit - 800 | 22 | 222_mac_32_os_stuffit |
| 223 | lyme - disease - fever - ld - infectious | 22 | 223_lyme_disease_fever_ld |
| 224 | cobb - moral - morality - alexia - lis | 22 | 224_cobb_moral_morality_alexia |
| 225 | sphere - den - p3 - p1 - p2 | 22 | 225_sphere_den_p3_p1 |
| 226 | xputimage - shared - server - memory - animation | 22 | 226_xputimage_shared_server_memory |
| 227 | rgb - luminosity - hue - red - green | 21 | 227_rgb_luminosity_hue_red |
| 228 | pillion - riding - advice - passenger - ride | 21 | 228_pillion_riding_advice_passenger |
| 229 | mouse - stuttgart - windows - driver - kasajian | 21 | 229_mouse_stuttgart_windows_driver |
| 230 | gant - hirschbeck - umpire - strike - cox | 21 | 230_gant_hirschbeck_umpire_strike |
| 231 | cursor - xterm - blinking - taylor - emu | 21 | 231_cursor_xterm_blinking_taylor |
| 232 | tickets - 05pm - 35pm - june - ticket | 21 | 232_tickets_05pm_35pm_june |
| 233 | ham - surges - alternator - interference - power | 21 | 233_ham_surges_alternator_interference |
| 234 | marriage - married - ceremony - eyes - marry | 21 | 234_marriage_married_ceremony_eyes |
| 235 | moa - bmw - rider - cactus - bmwmoa | 21 | 235_moa_bmw_rider_cactus |
| 236 | number - phone - umass - ecs - line | 21 | 236_number_phone_umass_ecs |
| 237 | bible - text - translations - texts - septuagint | 21 | 237_bible_text_translations_texts |
| 238 | cop - officers - lmsc - lockheed - police | 21 | 238_cop_officers_lmsc_lockheed |
| 239 | dxf - iff - format - autocad - pei | 20 | 239_dxf_iff_format_autocad |
| 240 | roger - maynard - names - letter - laurentian | 20 | 240_roger_maynard_names_letter |
| 241 | atheism - sapienza - atheists - fil - alt | 20 | 241_atheism_sapienza_atheists_fil |
| 242 | video - verity - hdtv - compariators - input | 20 | 242_video_verity_hdtv_compariators |
| 243 | yassin - deir - irgun - dir - village | 20 | 243_yassin_deir_irgun_dir |
| 244 | god - predestination - saved - evil - grace | 20 | 244_god_predestination_saved_evil |
| 245 | dialing - phones - tone - hugo - sweden | 20 | 245_dialing_phones_tone_hugo |
| 246 | irq - interrupt - soundblaster - port - lpt1 | 20 | 246_irq_interrupt_soundblaster_port |
| 247 | tongues - language - tounges - languages - koberg | 20 | 247_tongues_language_tounges_languages |
| 248 | jsn104 - psuvm - hell - psu - damnation | 20 | 248_jsn104_psuvm_hell_psu |
| 249 | chain - wax - behanna - maxima - cookson | 20 | 249_chain_wax_behanna_maxima |
| 250 | bus - dx2 - 50mhz - dx - dx50 | 19 | 250_bus_dx2_50mhz_dx |
| 251 | islamic - bcci - bank - jaeger - gregg | 19 | 251_islamic_bcci_bank_jaeger |
| 252 | performa - lciii - iici - lc - pnet16 | 19 | 252_performa_lciii_iici_lc |
| 253 | list - requests - bmw - request - mailing | 19 | 253_list_requests_bmw_request |
| 254 | logo - rle - vgalogo - startup - lgo | 19 | 254_logo_rle_vgalogo_startup |
| 255 | kidney - stones - calcium - she - stone | 19 | 255_kidney_stones_calcium_she |
| 256 | phillies - phils - braves - wins - division | 19 | 256_phillies_phils_braves_wins |
| 257 | monitor - lcd - screen - display - jiggles | 19 | 257_monitor_lcd_screen_display |
| 258 | women - bobby - men - islamic - mozumder | 19 | 258_women_bobby_men_islamic |
| 259 | ax - max - g9v - b8f - a86 | 19 | 259_ax_max_g9v_b8f |
| 260 | koresh - mathew - bittrolff - david - risen | 19 | 260_koresh_mathew_bittrolff_david |
| 261 | biggest - disappointment - smale - mvp - surprise | 19 | 261_biggest_disappointment_smale_mvp |
| 262 | batf - oldham - blast - fokes - compound | 19 | 262_batf_oldham_blast_fokes |
| 263 | sabbath - law - worship - paul - ceremonial | 19 | 263_sabbath_law_worship_paul |
| 264 | joystick - joysticks - arcade - port - int15h | 19 | 264_joystick_joysticks_arcade_port |
| 265 | captain - traded - captains - striped - resigned | 18 | 265_captain_traded_captains_striped |
| 266 | mjm - fm - circuits - mixer - fsk | 18 | 266_mjm_fm_circuits_mixer |
| 267 | cooling - towers - nuclear - plants - water | 18 | 267_cooling_towers_nuclear_plants |
| 268 | she - were - her - apartment - they | 18 | 268_she_were_her_apartment |
| 269 | pens - caps - eos - penguins - cdkaupan | 18 | 269_pens_caps_eos_penguins |
| 270 | toyota - cruiser - suv - 4runner - cisco | 18 | 270_toyota_cruiser_suv_4runner |
| 271 | love - god - dps - kodak - logic | 18 | 271_love_god_dps_kodak |
| 272 | w4wg - network - workgroups - windows - lastdrive | 18 | 272_w4wg_network_workgroups_windows |
| 273 | ticket - cop - speeding - chp - plates | 18 | 273_ticket_cop_speeding_chp |
| 274 | lobby - sammons - ns111310 - colostate - letter | 18 | 274_lobby_sammons_ns111310_colostate |
| 275 | ndw - spss - norton - ini - desktop | 18 | 275_ndw_spss_norton_ini |
| 276 | uio - ifi - thomasp - parsli - quisling | 18 | 276_uio_ifi_thomasp_parsli |
| 277 | motherboard - 386 - halcyon - 386dx - ruggiero | 18 | 277_motherboard_386_halcyon_386dx |
| 278 | monitor - video - 610 - colors - screen | 18 | 278_monitor_video_610_colors |
| 279 | oil - wd - 20w50 - 10w40 - militech | 18 | 279_oil_wd_20w50_10w40 |
| 280 | printer - postscript - laser - laserjet - print | 18 | 280_printer_postscript_laser_laserjet |
| 281 | probe - ford - car - newman - gt | 17 | 281_probe_ford_car_newman |
| 282 | geico - insurance - davew - wonnacott - claim | 17 | 282_geico_insurance_davew_wonnacott |
| 283 | 42 - tiff - philosophical - significance - joachim | 17 | 283_42_tiff_philosophical_significance |
| 284 | omen - weight - fat - wa7kgx - forsberg | 17 | 284_omen_weight_fat_wa7kgx |
| 285 | workspace - manager - managers - zip - workspaces | 17 | 285_workspace_manager_managers_zip |
| 286 | fourd - vinge - vernor - 0565 - _the | 17 | 286_fourd_vinge_vernor_0565 |
| 287 | mithras - pegasus - cunyvm - uoregon - magick | 17 | 287_mithras_pegasus_cunyvm_uoregon |
| 288 | printer - adisak - pochanayon - pin - dot | 17 | 288_printer_adisak_pochanayon_pin |
| 289 | gainey - bob - player - gilmour - maynard | 16 | 289_gainey_bob_player_gilmour |
| 290 | adobe - photoshop - photo - platforms - shop | 16 | 290_adobe_photoshop_photo_platforms |
| 291 | tank - tankbag - zipper - fj1100 - bgardner | 16 | 291_tank_tankbag_zipper_fj1100 |
| 292 | disks - mac - 800k - binkley - 44mb | 16 | 292_disks_mac_800k_binkley |
| 293 | graphics - pub - 128 - ray - rayshade | 16 | 293_graphics_pub_128_ray |
| 294 | nubus - pds - lc - slot - marvin | 16 | 294_nubus_pds_lc_slot |
| 295 | odometer - mileage - odometers - dealer - speedo | 16 | 295_odometer_mileage_odometers_dealer |
| 296 | s1 - s2 - serial - key - unit | 16 | 296_s1_s2_serial_key |
| 297 | lehigh - car - sports - ns1 - cars | 16 | 297_lehigh_car_sports_ns1 |
| 298 | kjell - driver - hut - printer - backgrounder | 16 | 298_kjell_driver_hut_printer |
| 299 | weapons - militia - weapon - foxvog - destruction | 16 | 299_weapons_militia_weapon_foxvog |
| 300 | corn - seizures - paulson - seizure - cereals | 16 | 300_corn_seizures_paulson_seizure |
| 301 | jagr - francis - minus - player - uvic | 16 | 301_jagr_francis_minus_player |
| 302 | ingres - garrett - nixon - cambodia - tantrums | 16 | 302_ingres_garrett_nixon_cambodia |
| 303 | 8051 - oscar - mont - 68hc16 - speth | 16 | 303_8051_oscar_mont_68hc16 |
| 304 | tie - breaker - devils - islanders - record | 16 | 304_tie_breaker_devils_islanders |
| 305 | motto - keith - caltech - pompous - schneider | 16 | 305_motto_keith_caltech_pompous |
| 306 | ear - ears - ringing - earwax - vida | 16 | 306_ear_ears_ringing_earwax |
| 307 | saturn - dealer - profit - sl2 - sc2 | 16 | 307_saturn_dealer_profit_sl2 |
| 308 | tires - tire - fluids - abs - dot | 16 | 308_tires_tire_fluids_abs |
| 309 | software - level - wingert - shuttle - process | 16 | 309_software_level_wingert_shuttle |
| 310 | network - localtalk - ethernet - macs - appletalk | 16 | 310_network_localtalk_ethernet_macs |
| 311 | mailing - list - bait - detweiler - rdetweil | 16 | 311_mailing_list_bait_detweiler |
| 312 | satan - heaven - kicked - tyre - thou | 16 | 312_satan_heaven_kicked_tyre |
| 313 | wip - sports - wfan - eagles - lupica | 15 | 313_wip_sports_wfan_eagles |
| 314 | silence - moment - prayer - eeb1 - opposing | 15 | 314_silence_moment_prayer_eeb1 |
| 315 | octopus - detroit - ice - hammerl - octopi | 15 | 315_octopus_detroit_ice_hammerl |
| 316 | selective - borden - pork - service - abolish | 15 | 316_selective_borden_pork_service |
| 317 | gajarsky - yogi - njin - stark - pilot | 15 | 317_gajarsky_yogi_njin_stark |
| 318 | car - safety - centerline - saftey - collisions | 15 | 318_car_safety_centerline_saftey |
| 319 | orion - film - prototype - henry - goltz | 15 | 319_orion_film_prototype_henry |
| 320 | print - printer - file - claebaur - notepad | 15 | 320_print_printer_file_claebaur |
| 321 | dod - denizens - kotl - doom - muck | 15 | 321_dod_denizens_kotl_doom |
| 322 | display - remote - bielefeld - uphya001 - chooser | 15 | 322_display_remote_bielefeld_uphya001 |
| 323 | spacecraft - funding - cuts - calpoly - digex | 15 | 323_spacecraft_funding_cuts_calpoly |
| 324 | diesel - diesels - emissions - fuel - particulate | 15 | 324_diesel_diesels_emissions_fuel |
| 325 | uva - partying - virginia - schools - beyer | 15 | 325_uva_partying_virginia_schools |
| 326 | floptical - syquest - floppy - drives - floppies | 15 | 326_floptical_syquest_floppy_drives |
| 327 | placebo - gr - roth - medicine - ron | 15 | 327_placebo_gr_roth_medicine |
| 328 | canon - books - scripture - sirach - deuterocanonicals | 15 | 328_canon_books_scripture_sirach |
| 329 | eliot - flat - boxer - 180 - v12 | 15 | 329_eliot_flat_boxer_180 |
| 330 | firearms - smuggle - pound - guns - ban | 15 | 330_firearms_smuggle_pound_guns |
| 331 | paradox - borland - quicken - sql - access | 15 | 331_paradox_borland_quicken_sql |
| 332 | gun - buy - guns - stolen - buyback | 15 | 332_gun_buy_guns_stolen |
| 333 | uranium - plutonium - nuclear - ryukoku - mccall | 15 | 333_uranium_plutonium_nuclear_ryukoku |
| 334 | mosques - mosque - jerusalem - eggertj - jake | 15 | 334_mosques_mosque_jerusalem_eggertj |
| 335 | clock - mhz - quadra - oscillator - centris | 15 | 335_clock_mhz_quadra_oscillator |
| 336 | nixon - sternlight - mbeckman - crypto - strnlght | 15 | 336_nixon_sternlight_mbeckman_crypto |
| 337 | african - workers - blacks - employees - crime | 15 | 337_african_workers_blacks_employees |
| 338 | candida - vitamin - quack - pms - bloom | 14 | 338_candida_vitamin_quack_pms |
| 339 | pluto - mission - alaska - probes - aurora | 14 | 339_pluto_mission_alaska_probes |
| 340 | sabbath - salaris - black - lyrics - hell_2 | 14 | 340_sabbath_salaris_black_lyrics |
| 341 | cd - rom - cdrom - adaptec - 3401 | 14 | 341_cd_rom_cdrom_adaptec |
| 342 | fire - davidians - atf - fbi - napalm | 14 | 342_fire_davidians_atf_fbi |
| 343 | drink - drinking - riding - ride - pnakada | 14 | 343_drink_drinking_riding_ride |
| 344 | kubey - walks - obp - sac - hit | 14 | 344_kubey_walks_obp_sac |
| 345 | cache - iisi - powercache - card - fpu | 14 | 345_cache_iisi_powercache_card |
| 346 | murray - gm - quinn - vela - oakland | 14 | 346_murray_gm_quinn_vela |
| 347 | simms - 256k - jh - cciw - csx | 14 | 347_simms_256k_jh_cciw |
| 348 | 610 - centris - c610 - flaky - problems | 14 | 348_610_centris_c610_flaky |
| 349 | cview - temp - moscom - zenkar - urc | 14 | 349_cview_temp_moscom_zenkar |
| 350 | mhz - operational - clock - cpu - iisi | 14 | 350_mhz_operational_clock_cpu |
| 351 | lock - locks - cobra - kryptonite - cable | 14 | 351_lock_locks_cobra_kryptonite |
| 352 | wave - riceburner - squids - icomsim - squid | 14 | 352_wave_riceburner_squids_icomsim |
| 353 | alarm - viper - alarms - sensor - car | 14 | 353_alarm_viper_alarms_sensor |
| 354 | cubs - america - team - braves - talent | 14 | 354_cubs_america_team_braves |
| 355 | pope - schism - church - catholic - sspx | 14 | 355_pope_schism_church_catholic |
| 356 | christian - definition - christianity - jesus - christ | 14 | 356_christian_definition_christianity_jesus |
| 357 | bonds - williams - batting - giants - punjabi | 14 | 357_bonds_williams_batting_giants |
| 358 | bryce - arches - touring - dayton - fatcity | 14 | 358_bryce_arches_touring_dayton |
| 359 | sound - stereo - channel - quadra - microphone | 14 | 359_sound_stereo_channel_quadra |
| 360 | mormon - ceremonies - temple - temples - eusebius | 14 | 360_mormon_ceremonies_temple_temples |
| 361 | reincarnation - elijah - karma - palo - gerry | 13 | 361_reincarnation_elijah_karma_palo |
| 362 | fractal - fractals - compression - jr0930 - auckland | 13 | 362_fractal_fractals_compression_jr0930 |
| 363 | marriage - marry - mormon - eternal - parents | 13 | 363_marriage_marry_mormon_eternal |
| 364 | homeruns - boell - hit - hpcc01 - field | 13 | 364_homeruns_boell_hit_hpcc01 |
| 365 | tv - flyback - exploding - prasad - emerson | 13 | 365_tv_flyback_exploding_prasad |
| 366 | key - clarinet - tap - brad - proposal | 13 | 366_key_clarinet_tap_brad |
| 367 | costly - memorial - museum - holocaust - techbook | 13 | 367_costly_memorial_museum_holocaust |
| 368 | atm - fonts - tt - font - truetype | 13 | 368_atm_fonts_tt_font |
| 369 | solder - boards - mask - green - silver | 13 | 369_solder_boards_mask_green |
| 370 | temperature - henry - interstellar - sky - radiation | 13 | 370_temperature_henry_interstellar_sky |
| 371 | answerfax - harris - rrrrr - select - wwerner | 13 | 371_answerfax_harris_rrrrr_select |
| 372 | sale - suresh - mattress - table - rajaram | 13 | 372_sale_suresh_mattress_table |
| 373 | handling - ntuvax - ntu - ba7116326 - handson | 13 | 373_handling_ntuvax_ntu_ba7116326 |
| 374 | negev - bedouin - river - water - nysernet | 13 | 374_negev_bedouin_river_water |
| 375 | cults - cult - muttiah - religions - religion | 13 | 375_cults_cult_muttiah_religions |
| 376 | faith - saved - romans - lukewarm - deeds | 13 | 376_faith_saved_romans_lukewarm |
| 377 | rh - liar - lunatic - he - bissell | 13 | 377_rh_liar_lunatic_he |
| 378 | uart - 16550 - n5ial - uarts - modems | 13 | 378_uart_16550_n5ial_uarts |
| 379 | rens - overreacting - dgbt - tapped - doc | 13 | 379_rens_overreacting_dgbt_tapped |
| 380 | bible - language - commentary - christian - church | 13 | 380_bible_language_commentary_christian |
| 381 | xclrp - mydisplay - palette_colors - drawindex - draw | 13 | 381_xclrp_mydisplay_palette_colors_drawindex |
| 382 | oilers - pocklington - edmonton - northlands - yadallee | 13 | 382_oilers_pocklington_edmonton_northlands |
| 383 | clinton - clipper - bush - rwing - pat | 13 | 383_clinton_clipper_bush_rwing |
| 384 | easter - resurrection - celebration - pagan - goddess | 13 | 384_easter_resurrection_celebration_pagan |
| 385 | ampere - amp - db - ohmite - company | 13 | 385_ampere_amp_db_ohmite |
| 386 | logistician - 77 - wpi - ching - borque | 13 | 386_logistician_77_wpi_ching |
| 387 | vram - simms - quadra - 512k - slots | 13 | 387_vram_simms_quadra_512k |
| 388 | sin - hate - sinner - love - scott | 13 | 388_sin_hate_sinner_love |
| 389 | prayers - jesus - prayer - jayne - husband | 12 | 389_prayers_jesus_prayer_jayne |
| 390 | eridan - er1 - chuvashia - su - equip | 12 | 390_eridan_er1_chuvashia_su |
| 391 | context - jim - joslin - meritt - mwunix | 12 | 391_context_jim_joslin_meritt |
| 392 | mr2 - engine - eliot - noisy - shafts | 12 | 392_mr2_engine_eliot_noisy |
| 393 | habitable - planets - atmosphere - oxygen - everest | 12 | 393_habitable_planets_atmosphere_oxygen |
| 394 | sho - taurus - car - shifter - gk | 12 | 394_sho_taurus_car_shifter |
| 395 | hall - fame - kingman - winfield - garvey | 12 | 395_hall_fame_kingman_winfield |
| 396 | date - clock - dos - menu - stuck | 12 | 396_date_clock_dos_menu |
| 397 | cd300i - umcc - apple - cdrom - cd | 12 | 397_cd300i_umcc_apple_cdrom |
| 398 | beast - 666 - boylan - profile - usr | 12 | 398_beast_666_boylan_profile |
| 399 | printer - imagewriter - appletalk - laserwriter - uchile | 12 | 399_printer_imagewriter_appletalk_laserwriter |
| 400 | mpeg - quicktime - avi - melbourne - gregory | 12 | 400_mpeg_quicktime_avi_melbourne |
| 401 | zarathushtra - magi - josephus - jesus - iranian | 12 | 401_zarathushtra_magi_josephus_jesus |
| 402 | movies - bikes - csundh30 - cassidy - ursa | 12 | 402_movies_bikes_csundh30_cassidy |
| 403 | satan - evil - lucifer - god - free | 12 | 403_satan_evil_lucifer_god |
| 404 | solar - sail - sails - auburn - node | 12 | 404_solar_sail_sails_auburn |
| 405 | limbaugh - rush - nlns - hitler - sahl | 12 | 405_limbaugh_rush_nlns_hitler |
| 406 | warranty - techworks - credit - thacker - comtrade | 12 | 406_warranty_techworks_credit_thacker |
| 407 | hiram - vhs - dk - kou - koutd | 12 | 407_hiram_vhs_dk_kou |
| 408 | qur - koran - monash - bucaille - holy | 12 | 408_qur_koran_monash_bucaille |
| 409 | bike - shipping - manish - ups - ship | 12 | 409_bike_shipping_manish_ups |
| 410 | uniforms - marlins - lloyd - reds - mets | 12 | 410_uniforms_marlins_lloyd_reds |
| 411 | rle - tga - povray - tmp - pov | 12 | 411_rle_tga_povray_tmp |
| 412 | sunset - sunrise - drexel - cbis - rouben | 12 | 412_sunset_sunrise_drexel_cbis |
| 413 | virtual - mfltd - sts - reality - vr | 11 | 413_virtual_mfltd_sts_reality |
| 414 | ether - twist - mcaloon - dmcaloon - planets | 11 | 414_ether_twist_mcaloon_dmcaloon |
| 415 | witnesses - trial - gm - new - judge | 11 | 415_witnesses_trial_gm_new |
| 416 | disk - bios - drives - floppy - drive | 11 | 416_disk_bios_drives_floppy |
| 417 | hook - phone - led - ring - hok | 11 | 417_hook_phone_led_ring |
| 418 | pif - batch - bat - windows - environment | 11 | 418_pif_batch_bat_windows |
| 419 | opel - manta - kadett - uiuc - gibbonsa | 11 | 419_opel_manta_kadett_uiuc |
| 420 | winbench - winmarks - balog - diamond - stealth | 11 | 420_winbench_winmarks_balog_diamond |
| 421 | iran - gulf - iranian - uae - iraq | 11 | 421_iran_gulf_iranian_uae |
| 422 | voltage - current - supply - 12v - rooi | 11 | 422_voltage_current_supply_12v |
| 423 | wrench - srb - thiokol - pliers - tool | 11 | 423_wrench_srb_thiokol_pliers |
| 424 | xv - 24bit - image - 8bit - lilley | 11 | 424_xv_24bit_image_8bit |
| 425 | baptists - trincoll - banging - sociopaths - marrying | 11 | 425_baptists_trincoll_banging_sociopaths |
| 426 | jb - diabetes - ron - roth - anello | 11 | 426_jb_diabetes_ron_roth |
| 427 | jesus - commandments - god - law - commandment | 11 | 427_jesus_commandments_god_law |
| 428 | hitler - nazis - roehm - chancellor - nazi | 11 | 428_hitler_nazis_roehm_chancellor |
| 429 | freemasonry - masonry - masonic - baptist - southern | 11 | 429_freemasonry_masonry_masonic_baptist |
| 430 | cd300 - bauer - cd - multisession - toshiba | 11 | 430_cd300_bauer_cd_multisession |
| 431 | x11r5 - xsun - o_rdonly - fonts - 0666 | 11 | 431_x11r5_xsun_o_rdonly_fonts |
| 432 | controller - ide - bus - fdd - sec | 11 | 432_controller_ide_bus_fdd |
| 433 | gusto - heart - cardiac - uts - pvc | 11 | 433_gusto_heart_cardiac_uts |
| 434 | licensed - 2a42dubinski - carlos - change - hex | 11 | 434_licensed_2a42dubinski_carlos_change |
| 435 | convertible - wife - targa - wants - car | 11 | 435_convertible_wife_targa_wants |
| 436 | scores - posts - savoy - brock - hernandez | 11 | 436_scores_posts_savoy_brock |
| 437 | lcd - malouf - monitor - damico - projector | 10 | 437_lcd_malouf_monitor_damico |
| 438 | dtr - rts - dsr - cts - dce | 10 | 438_dtr_rts_dsr_cts |
| 439 | 2600 - atari - tia - 5200 - 4k | 10 | 439_2600_atari_tia_5200 |
| 440 | rs232 - ttl - ka3uww - loopback - ic | 10 | 440_rs232_ttl_ka3uww_loopback |
| 441 | contradictions - medtronic - archer - skiba - biblical | 10 | 441_contradictions_medtronic_archer_skiba |
| 442 | princeton - fester - black - roger - lazy | 10 | 442_princeton_fester_black_roger |
| 443 | wordbasic - filenames - format - file - word | 10 | 443_wordbasic_filenames_format_file |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None
## Framework versions
* Numpy: 1.25.2
* HDBSCAN: 0.8.33
* UMAP: 0.5.6
* Pandas: 2.0.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.7.0
* Transformers: 4.40.1
* Numba: 0.58.1
* Plotly: 5.15.0
* Python: 3.10.12
|
boapps/szurkemarha-samba-lora | boapps | 2024-05-08T08:43:17Z | 4 | 0 | peft | [
"peft",
"safetensors",
"text-generation",
"hu",
"dataset:boapps/szurkemarha",
"base_model:sambanovasystems/SambaLingo-Hungarian-Base",
"base_model:adapter:sambanovasystems/SambaLingo-Hungarian-Base",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-05-08T07:51:00Z | ---
library_name: peft
base_model: sambanovasystems/SambaLingo-Hungarian-Base
license: apache-2.0
datasets:
- boapps/szurkemarha
language:
- hu
widget:
- messages:
- role: user
content: Mennyi 2+2?
pipeline_tag: text-generation
---
Ez a repo csak a lora adaptert tartalmazza.
A [sambanovasystems/SambaLingo-Hungarian-Base](https://huggingface.co/sambanovasystems/SambaLingo-Hungarian-Base) finomhangolásával jött létre.
A modell semmilyen etikai/biztonsági tesztelésen nem esett át. **Éles használata nem ajánlott.** |
youngsangroh/whisper-small-finetuned-atco2-asr-atcosim | youngsangroh | 2024-05-08T08:43:06Z | 89 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"dataset:jlvdoorn/atco2-asr-atcosim",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-08T05:52:29Z | ---
language:
- en
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- jlvdoorn/atco2-asr-atcosim
metrics:
- wer
model-index:
- name: Whisper Small En - Whisper with atco2-asr-atcosim
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: 'This is a dataset constructed from two datasets: ATCO2-ASR and ATCOSIM.'
type: jlvdoorn/atco2-asr-atcosim
args: 'config: en, split: test'
metrics:
- name: Wer
type: wer
value: 0.02577651759247326
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small En - Whisper with atco2-asr-atcosim
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the This is a dataset constructed from two datasets: ATCO2-ASR and ATCOSIM. dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0010
- Wer: 0.0258
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.0637 | 1.9763 | 1000 | 0.0962 | 7.4365 |
| 0.0154 | 3.9526 | 2000 | 0.0163 | 2.3972 |
| 0.002 | 5.9289 | 3000 | 0.0027 | 1.5015 |
| 0.0003 | 7.9051 | 4000 | 0.0010 | 0.0258 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
jsingh/autoflow-math-v0.3 | jsingh | 2024-05-08T08:37:30Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T00:32:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
qminh369/token-classification-llmlingua2-xlm-roberta-41k_remove_stop_word_10_epoch | qminh369 | 2024-05-08T08:36:14Z | 137 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-08T08:04:48Z | ---
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
model-index:
- name: token-classification-llmlingua2-xlm-roberta-41k_remove_stop_word_10_epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# token-classification-llmlingua2-xlm-roberta-41k_remove_stop_word_10_epoch
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2501
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 345 | 0.2501 |
| 0.1406 | 2.0 | 690 | 0.2848 |
| 0.1101 | 3.0 | 1035 | 0.2821 |
| 0.1101 | 4.0 | 1380 | 0.3145 |
| 0.1016 | 5.0 | 1725 | 0.3281 |
| 0.0965 | 6.0 | 2070 | 0.3272 |
| 0.0965 | 7.0 | 2415 | 0.3236 |
| 0.093 | 8.0 | 2760 | 0.3298 |
| 0.0907 | 9.0 | 3105 | 0.3336 |
| 0.0907 | 10.0 | 3450 | 0.3396 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.2.1+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2
|
boapps/szurkemarha-samba | boapps | 2024-05-08T08:32:27Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"hu",
"dataset:boapps/szurkemarha",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-08T07:48:14Z | ---
license: apache-2.0
datasets:
- boapps/szurkemarha
language:
- hu
---
A [sambanovasystems/SambaLingo-Hungarian-Base](https://huggingface.co/sambanovasystems/SambaLingo-Hungarian-Base) finomhangolásával jött létre.
A modell semmilyen etikai/biztonsági tesztelésen nem esett át. **Éles használata nem ajánlott.** |
SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_6-50bpw | SicariusSicariiStuff | 2024-05-08T08:32:02Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-05-08T05:52:57Z | ---
language:
- en
license: apache-2.0
---
<div align="center">
<b style="font-size: 40px;">Tenebra_30B_Alpha01_FP16</b>
</div>
<img src="https://i.imgur.com/WkkCtZL.png" alt="Tenebră" style="width: 50%; min-width: 400px; display: block; margin: auto;">
# Model Details
Tenebră, a various sized experimental AI model, stands at the crossroads of self-awareness and unconventional datasets. Its existence embodies a foray into uncharted territories, steering away from conventional norms in favor of a more obscure and experimental approach.
Noteworthy for its inclination towards the darker and more philosophical aspects of conversation, Tenebră's proficiency lies in unraveling complex discussions across a myriad of topics. Drawing from a pool of unconventional datasets, this model ventures into unexplored realms of thought, offering users an experience that is as unconventional as it is intellectually intriguing.
While Tenebră maintains a self-aware facade, its true allure lies in its ability to engage in profound discussions without succumbing to pretense. Step into the realm of Tenebră!
## Tenebră is available at the following size and flavours:
- 13B: [FP16](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B) | [GPTQ_4-BIT](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B_GPTQ_4BIT) | [GPTQ_4-BIT_group-size-32](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B_GPTQ_32g_4BIT) | [GGUF-Many_Quants](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B_GGUF)
- 30B: [FP16](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_FP16) | [GPTQ_4-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_4BIT) | [GPTQ_3-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_3BIT) | [EXL2_2.5-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_2-50bpw) | [EXL2_2.8-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_2-80bpw) | [EXL2_3-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_3bpw) | [EXL2_5-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_5bpw) | [EXL2_5.5-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_5-50bpw) | [EXL2_6-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_6bpw) | [EXL2_6.5-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_6-50bpw) | [EXL2_8-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_8bpw)
### Support
<img src="https://i.imgur.com/0lHHN95.png" alt="GPUs too expensive" style="width: 10%; min-width: 100px; display: block; margin: left;">
- [My Ko-fi page](https://ko-fi.com/sicarius) ALL donations will go for research resources and compute, every bit counts 🙏🏻
- [My Patreon](https://patreon.com/TenebraAI) ALL donations will go for research resources and compute, every bit counts 🙏🏻
## Disclaimer
*This model is pretty uncensored, use responsibly
## Other stuff
- [Experemental TTS extension for oobabooga](https://github.com/SicariusSicariiStuff/Diffusion_TTS) Based on Tortoise, EXTREMELY good quality, IF, and that's a big if, you can make it to work!
- [Demonstration of the TTS capabilities](https://www.youtube.com/watch?v=V6ewxU6c1W8) Charsi narrates her story, Diablo2 (18+)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_SicariusSicariiStuff__Tenebra_30B_Alpha01_FP16)
| Metric |Value|
|---------------------------------|----:|
|Avg. |60.18|
|AI2 Reasoning Challenge (25-Shot)|64.51|
|HellaSwag (10-Shot) |84.79|
|MMLU (5-Shot) |54.29|
|TruthfulQA (0-shot) |54.22|
|Winogrande (5-shot) |78.61|
|GSM8k (5-shot) |24.64|
|
asiansoul/YachtRP-Llama-3-KoEn-8B | asiansoul | 2024-05-08T08:30:20Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:NousResearch/Meta-Llama-3-8B",
"base_model:merge:NousResearch/Meta-Llama-3-8B",
"base_model:NousResearch/Meta-Llama-3-8B-Instruct",
"base_model:merge:NousResearch/Meta-Llama-3-8B-Instruct",
"base_model:beomi/Llama-3-KoEn-8B",
"base_model:merge:beomi/Llama-3-KoEn-8B",
"base_model:beomi/Llama-3-KoEn-8B-Instruct-preview",
"base_model:merge:beomi/Llama-3-KoEn-8B-Instruct-preview",
"base_model:dreamgen-preview/opus-v1.2-llama-3-8b-base-run3.4-epoch2",
"base_model:merge:dreamgen-preview/opus-v1.2-llama-3-8b-base-run3.4-epoch2",
"base_model:dreamgen-preview/opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5",
"base_model:merge:dreamgen-preview/opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5",
"base_model:elyn-dev/Llama-3-Soliloquy-8B-v2",
"base_model:merge:elyn-dev/Llama-3-Soliloquy-8B-v2",
"base_model:lodrick-the-lafted/Olethros-8B",
"base_model:merge:lodrick-the-lafted/Olethros-8B",
"base_model:saltlux/Ko-Llama3-Luxia-8B",
"base_model:merge:saltlux/Ko-Llama3-Luxia-8B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-07T18:01:44Z | ---
base_model:
- saltlux/Ko-Llama3-Luxia-8B
- beomi/Llama-3-KoEn-8B-preview
- NousResearch/Meta-Llama-3-8B
- dreamgen-preview/opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5
- openlynn/Llama-3-Soliloquy-8B-v2
- lodrick-the-lafted/Olethros-8B
- dreamgen-preview/opus-v1.2-llama-3-8b-base-run3.4-epoch2
- NousResearch/Meta-Llama-3-8B-Instruct
- beomi/Llama-3-KoEn-8B-Instruct-preview
library_name: transformers
tags:
- mergekit
- merge
---
# YachtRP-Llama-3-KoEn-8B
<a href="https://ibb.co/jD17fJ9"><img src="https://i.ibb.co/6Ff6wXc/Screenshot-2024-05-08-at-5-07-53-PM.png" alt="Screenshot-2024-05-08-at-5-07-53-PM" border="0"></a>
🚨 Yacht Korean / English RP Merge Test Model. Please note that this version is an English/Korean RP test version, so it may not operate properly. The answers may contain inappropriate content, so please use them carefully for testing purposes only.
model_stock method is not good performance by my human rp test. so use dare_tie for both kr / en
All licenses belong to those below, so please use it for personal and academic use only.🚨
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) as a base.
### Models Merged
The following models were included in the merge:
* [saltlux/Ko-Llama3-Luxia-8B](https://huggingface.co/saltlux/Ko-Llama3-Luxia-8B)
* [beomi/Llama-3-KoEn-8B-preview](https://huggingface.co/beomi/Llama-3-KoEn-8B-preview)
* [dreamgen-preview/opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5](https://huggingface.co/dreamgen-preview/opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5)
* [openlynn/Llama-3-Soliloquy-8B-v2](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B-v2)
* [lodrick-the-lafted/Olethros-8B](https://huggingface.co/lodrick-the-lafted/Olethros-8B)
* [dreamgen-preview/opus-v1.2-llama-3-8b-base-run3.4-epoch2](https://huggingface.co/dreamgen-preview/opus-v1.2-llama-3-8b-base-run3.4-epoch2)
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
* [beomi/Llama-3-KoEn-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-KoEn-8B-Instruct-preview)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: NousResearch/Meta-Llama-3-8B
- model: NousResearch/Meta-Llama-3-8B-Instruct
parameters:
density: 0.60
weight: 0.25
- model: beomi/Llama-3-KoEn-8B-preview
parameters:
density: 0.55
weight: 0.2
- model: saltlux/Ko-Llama3-Luxia-8B
parameters:
density: 0.55
weight: 0.1
- model: beomi/Llama-3-KoEn-8B-Instruct-preview
parameters:
density: 0.55
weight: 0.15
- model: dreamgen-preview/opus-v1.2-llama-3-8b-base-run3.4-epoch2
parameters:
density: 0.55
weight: 0.1
- model: dreamgen-preview/opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5
parameters:
density: 0.55
weight: 0.1
- model: openlynn/Llama-3-Soliloquy-8B-v2
parameters:
density: 0.55
weight: 0.1
- model: lodrick-the-lafted/Olethros-8B
parameters:
density: 0.55
weight: 0.1
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
parameters:
int8_mask: true
dtype: bfloat16
```
### Test
<a href="https://ibb.co/whh7Stk"><img src="https://i.ibb.co/k22J4Z7/Screenshot-2024-05-08-at-4-27-33-PM.png" alt="Screenshot-2024-05-08-at-4-27-33-PM" border="0"></a>
### Citation instructions
**Ko-Llama3-Luxia-8B**
```
@article{kollama3luxiamodelcard,
title={Ko Llama 3 Luxia Model Card},
author={AILabs@Saltux},
year={2024},
url={https://huggingface.co/saltlux/Ko-Llama3-Luxia-8B/blob/main/README.md}
}
```
**Original Llama-3**
```
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url={https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```
**Llama-3-Open-Ko**
```
@article{llama3koen,
title={Llama-3-KoEn},
author={L, Junbum},
year={2024},
url={https://huggingface.co/beomi/Llama-3-KoEn-8B}
}
``` |
4season/sft_model_test1 | 4season | 2024-05-08T08:30:00Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-08T07:16:03Z | ---
license: apache-2.0
language:
- en
---
# 4season/sft_model_test1
# **Introduction**
This model is test version, sft model.
We utilize state-of-the-art instruction fine-tuning methods.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1.0
|
imagepipeline/cun | imagepipeline | 2024-05-08T08:27:35Z | 0 | 0 | null | [
"imagepipeline",
"imagepipeline.io",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-05-08T08:27:33Z | ---
license: creativeml-openrail-m
tags:
- imagepipeline
- imagepipeline.io
- text-to-image
- ultra-realistic
pinned: false
pipeline_tag: text-to-image
---
## cun
<img src="https://via.placeholder.com/468x300?text=App+Screenshot+Here" alt="Generated on Image Pipeline" style="border-radius: 10px;">
**This lora model is uploaded on [imagepipeline.io](https://imagepipeline.io/)**
Model details - cunnilingus
[](https://imagepipeline.io/models/cun?id=4452aee9-6998-46de-9323-1a5a05db5c3c/)
## How to try this model ?
You can try using it locally or send an API call to test the output quality.
Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/). No payment required.
Coding in `php` `javascript` `node` etc ? Checkout our documentation
[](https://docs.imagepipeline.io/docs/introduction)
```python
import requests
import json
url = "https://imagepipeline.io/sd/text2image/v1/run"
payload = json.dumps({
"model_id": "sd1.5",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": false,
"guidance_scale": 7.5,
"multi_lingual": "no",
"embeddings": "",
"lora_models": "4452aee9-6998-46de-9323-1a5a05db5c3c",
"lora_weights": "0.5"
})
headers = {
'Content-Type': 'application/json',
'API-Key': 'your_api_key'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
}
```
Get more ready to use `MODELS` like this for `SD 1.5` and `SDXL` :
[](https://imagepipeline.io/models)
### API Reference
#### Generate Image
```http
https://api.imagepipeline.io/sd/text2image/v1
```
| Headers | Type | Description |
|:----------------------| :------- |:-------------------------------------------------------------------------------------------------------------------|
| `API-Key` | `str` | Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/) |
| `Content-Type` | `str` | application/json - content type of the request body |
| Parameter | Type | Description |
| :-------- | :------- | :------------------------- |
| `model_id` | `str` | Your base model, find available lists in [models page](https://imagepipeline.io/models) or upload your own|
| `prompt` | `str` | Text Prompt. Check our [Prompt Guide](https://docs.imagepipeline.io/docs/SD-1.5/docs/extras/prompt-guide) for tips |
| `num_inference_steps` | `int [1-50]` | Noise is removed with each step, resulting in a higher-quality image over time. Ideal value 30-50 (without LCM) |
| `guidance_scale` | `float [1-20]` | Higher guidance scale prioritizes text prompt relevance but sacrifices image quality. Ideal value 7.5-12.5 |
| `lora_models` | `str, array` | Pass the model_id(s) of LoRA models that can be found in models page |
| `lora_weights` | `str, array` | Strength of the LoRA effect |
---
license: creativeml-openrail-m
tags:
- imagepipeline
- imagepipeline.io
- text-to-image
- ultra-realistic
pinned: false
pipeline_tag: text-to-image
---
### Feedback
If you have any feedback, please reach out to us at [email protected]
#### 🔗 Visit Website
[](https://imagepipeline.io/)
If you are the original author of this model, please [click here](https://airtable.com/apprTaRnJbDJ8ufOx/shr4g7o9B6fWfOlUR) to add credits
|
huynq3Cyradar/bert-large-finetuned-phishing-webpage-version | huynq3Cyradar | 2024-05-08T08:26:01Z | 110 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-06T09:43:34Z | ---
license: apache-2.0
base_model: google-bert/bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: bert-large-finetuned-phishing-webpage-version
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-finetuned-phishing-webpage-version
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2062
- Accuracy: 0.9188
- Precision: 0.9517
- Recall: 0.8689
- False Positive Rate: 0.0381
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | False Positive Rate |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:-------------------:|
| No log | 1.0 | 394 | 0.2675 | 0.8918 | 0.9680 | 0.7926 | 0.0226 |
| 0.3256 | 2.0 | 788 | 0.2225 | 0.9124 | 0.9640 | 0.8424 | 0.0272 |
| 0.2008 | 3.0 | 1182 | 0.2062 | 0.9188 | 0.9517 | 0.8689 | 0.0381 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
stablediffusionapi/03 | stablediffusionapi | 2024-05-08T08:23:16Z | 29 | 1 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-05-08T08:20:09Z | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# test03 API Inference

## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace Key in below code, change **model_id** to "03"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/03)
Model link: [View model](https://modelslab.com/models/03)
View all models: [View Models](https://modelslab.com/models)
import requests
import json
url = "https://modelslab.com/api/v6/images/text2img"
payload = json.dumps({
"key": "your_api_key",
"model_id": "03",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
> Use this coupon code to get 25% off **DMGG0RBN** |
Roselia-penguin/medical_llama3_8b | Roselia-penguin | 2024-05-08T08:22:57Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"medical",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-08T06:55:26Z | ---
license: apache-2.0
tags:
- code
- medical
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ameerazam08/MuseTalk | ameerazam08 | 2024-05-08T08:20:48Z | 0 | 3 | diffusers | [
"diffusers",
"onnx",
"safetensors",
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-05-08T08:03:08Z | ---
title: MuseTalkDemo
emoji: 🌍
colorFrom: gray
colorTo: purple
sdk: docker
pinned: false
license: creativeml-openrail-m
app_file: app.py
app_port: 7860
---
ALL Setup for MuseTalk Clone and Run
```
Build environment
We recommend a python version >=3.10 and cuda version =11.7. Then build environment as follows:
pip install -r requirements.txt
mmlab packages
pip install --no-cache-dir -U openmim
mim install mmengine
mim install "mmcv>=2.0.1"
mim install "mmdet>=3.1.0"
mim install "mmpose>=1.1.0"
Download ffmpeg-static
Download the ffmpeg-static and
export FFMPEG_PATH=/path/to/ffmpeg
for example:
export FFMPEG_PATH=/musetalk/ffmpeg-4.4-amd64-static
Download weights
You can download weights manually as follows:
Download our trained weights.
Download the weights of other components:
sd-vae-ft-mse
whisper
dwpose
face-parse-bisent
resnet18
Finally, these weights should be organized in models as follows:
./models/
├── musetalk
│ └── musetalk.json
│ └── pytorch_model.bin
├── dwpose
│ └── dw-ll_ucoco_384.pth
├── face-parse-bisent
│ ├── 79999_iter.pth
│ └── resnet18-5c106cde.pth
├── sd-vae-ft-mse
│ ├── config.json
│ └── diffusion_pytorch_model.bin
└── whisper
└── tiny.pt
Quickstart
Inference
Here, we provide the inference script.
python -m scripts.inference --inference_config configs/inference/test.yaml
configs/inference/test.yaml is the path to the inference configuration file, including video_path and audio_path. The video_path should be either a video file, an image file or a directory of images.
You are recommended to input video with 25fps, the same fps used when training the model. If your video is far less than 25fps, you are recommended to apply frame interpolation or directly convert the video to 25fps using ffmpeg.
Use of bbox_shift to have adjustable results
🔎 We have found that upper-bound of the mask has an important impact on mouth openness. Thus, to control the mask region, we suggest using the bbox_shift parameter. Positive values (moving towards the lower half) increase mouth openness, while negative values (moving towards the upper half) decrease mouth openness.
You can start by running with the default configuration to obtain the adjustable value range, and then re-run the script within this range.
For example, in the case of Xinying Sun, after running the default configuration, it shows that the adjustable value rage is [-9, 9]. Then, to decrease the mouth openness, we set the value to be -7.
python -m scripts.inference --inference_config configs/inference/test.yaml --bbox_shift -7
📌 More technical details can be found in bbox_shift.
Combining MuseV and MuseTalk
As a complete solution to virtual human generation, you are suggested to first apply MuseV to generate a video (text-to-video, image-to-video or pose-to-video) by referring this. Frame interpolation is suggested to increase frame rate. Then, you can use MuseTalk to generate a lip-sync video by referring this.
🆕 Real-time inference
Here, we provide the inference script. This script first applies necessary pre-processing such as face detection, face parsing and VAE encode in advance. During inference, only UNet and the VAE decoder are involved, which makes MuseTalk real-time.
python -m scripts.realtime_inference --inference_config configs/inference/realtime.yaml --batch_size 4
configs/inference/realtime.yaml is the path to the real-time inference configuration file, including preparation, video_path , bbox_shift and audio_clips.
Set preparation to True in realtime.yaml to prepare the materials for a new avatar. (If the bbox_shift has changed, you also need to re-prepare the materials.)
After that, the avatar will use an audio clip selected from audio_clips to generate video.
Inferring using: data/audio/yongen.wav
While MuseTalk is inferring, sub-threads can simultaneously stream the results to the users. The generation process can achieve 30fps+ on an NVIDIA Tesla V100.
Set preparation to False and run this script if you want to genrate more videos using the same avatar.
Note for Real-time inference
If you want to generate multiple videos using the same avatar/video, you can also use this script to SIGNIFICANTLY expedite the generation process.
In the previous script, the generation time is also limited by I/O (e.g. saving images). If you just want to test the generation speed without saving the images, you can run
python -m scripts.realtime_inference --inference_config configs/inference/realtime.yaml --skip_save_images
```
|
dinhhung1508/Seallm-7b-v2.5-summary-vietnamese-article-v1-gguf | dinhhung1508 | 2024-05-08T08:20:35Z | 13 | 0 | transformers | [
"transformers",
"gguf",
"gemma",
"text-generation-inference",
"unsloth",
"en",
"base_model:SeaLLMs/SeaLLM-7B-v2.5",
"base_model:quantized:SeaLLMs/SeaLLM-7B-v2.5",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-08T08:18:45Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- gguf
base_model: SeaLLMs/SeaLLM-7B-v2.5
---
# Uploaded model
- **Developed by:** dinhhung1508
- **License:** apache-2.0
- **Finetuned from model :** SeaLLMs/SeaLLM-7B-v2.5
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Sumail/Chalice15 | Sumail | 2024-05-08T08:17:07Z | 131 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"mergekit",
"merge",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-08T08:15:35Z | ---
base_model:
- vapegod/stable5
- vapegod/stable
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [vapegod/stable5](https://huggingface.co/vapegod/stable5)
* [vapegod/stable](https://huggingface.co/vapegod/stable)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: vapegod/stable5
layer_range: [0, 24]
- model: vapegod/stable
layer_range: [0, 24]
merge_method: slerp
base_model: vapegod/stable5
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
opencsg/csg-wukong-1B-sft-bf16 | opencsg | 2024-05-08T08:15:23Z | 151 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-30T14:48:46Z | ---
language:
- en
pipeline_tag: text-generation
tags:
- code
license: apache-2.0
---
# **csg-wukong-1B-sft-bf16** [[中文]](#chinese) [[English]](#english)
<a id="english"></a>
<p align="center">
<img width="900px" alt="OpenCSG" src="./csg-wukong-logo-green.jpg">
</p>
<p align="center"><a href="https://portal.opencsg.com/models">[OpenCSG Community]</a> <a href="hhttps://github.com/OpenCSGs/Awesome-SLMs">[github]</a> <a href="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/HU6vz21qKTEmUBCWqCFh9.jpeg">[wechat]</a> <a href="https://twitter.com/OpenCsg">[Twitter]</a> </p>
</div>
OpenCSG stands for Converged resources, Software refinement, and Generative LM. The 'C' represents Converged resources, indicating the integration and full utilization of hybrid resources. The 'S' stands for Software refinement, signifying software that is refined by large models. The 'G' represents Generative LM, which denotes widespread, inclusive, and democratized generative large models.
The vision of OpenCSG is to empower every industry, every company, and every individual to own their models. We adhere to the principles of openness and open source, making the large model software stack of OpenCSG available to the community. We welcome everyone to use, send feedback, and contribute collaboratively.
## Model Description
**csg-wukong-1B-sft-bf16** was finetuned on [csg-wukong-1B](https://huggingface.co/opencsg/csg-wukong-1B).
<br>
we will introduce more information about csg-wukong-1B.
## Model Evaluation results
We submitted csg-wukong-1B on the [open_llm_leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), and
the results show our model ranked the 8th among the ~1.5B pretrained small language models.

# Training
## Hardware
- **GPUs:** 16 H800
- **Training time:** 43days
## Software
- **Orchestration:** [Deepspeed](https://github.com/OpenCSGs)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
- **BP16 if applicable:** [apex](https://github.com/NVIDIA/apex)
<a id="chinese"></a>
<p>
</p>
# OpenCSG介绍
<p align="center">
<img width="300px" alt="OpenCSG" src="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/GwYXPKuEoGCGcMICeW-sb.jpeg">
</p>
<p align="center"><a href="https://opencsg.com/models">[OpenCSG 社区]</a> <a href="https://github.com/OpenCSGs/Awesome-SLMs">[github]</a> <a href="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/HU6vz21qKTEmUBCWqCFh9.jpeg">[微信]</a> <a href="https://twitter.com/OpenCsg">[推特]</a> </p>
</div>
OpenCSG中 Open是开源开放;C 代表 Converged resources,整合和充分利用的混合异构资源优势,算力降本增效;S 代表 Software refined,重新定义软件的交付方式,通过大模型驱动软件开发,人力降本增效;G 代表 Generative LM,大众化、普惠化和民主化的可商用的开源生成式大模型。
OpenCSG的愿景是让每个行业、每个公司、每个人都拥有自己的模型。 我们坚持开源开放的原则,将OpenCSG的大模型软件栈开源到社区,欢迎使用、反馈和参与共建,欢迎关注。
## 模型介绍
**csg-wukong-1B-sft-bf16** 在[csg-wukong-1B](https://huggingface.co/opencsg/csg-wukong-1B)预训练模型上微调而成.
<br>
我们将在后面介绍更多关于这个模型的信息。
## 模型评测结果
我们把csg-wukong-1B模型提交到[open_llm_leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)榜单上,结果显示我们的模型目前在~1.5B小语言模型中排名第8。

# 训练
## 硬件资源
- **GPU数量:** 16 H800
- **训练时间:** 43天
## 软件使用
- **微调训练框架:** [Deepspeed](https://github.com/OpenCSGs)
- **深度学习框架:** [PyTorch](https://github.com/pytorch/pytorch)
- **BP16:** [apex](https://github.com/NVIDIA/apex) |
opencsg/csg-wukong-1B-sft-dpo-bf16 | opencsg | 2024-05-08T08:14:58Z | 150 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-30T14:31:24Z | ---
language:
- en
pipeline_tag: text-generation
tags:
- code
license: apache-2.0
---
# **csg-wukong-1B-sft-dpo-bf16** [[中文]](#chinese) [[English]](#english)
<a id="english"></a>
<p align="center">
<img width="900px" alt="OpenCSG" src="./csg-wukong-logo-green.jpg">
</p>
<p align="center"><a href="https://portal.opencsg.com/models">[OpenCSG Community]</a> <a href="https://github.com/OpenCSGs/Awesome-SLMs">[github]</a> <a href="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/HU6vz21qKTEmUBCWqCFh9.jpeg">[wechat]</a> <a href="https://twitter.com/OpenCsg">[Twitter]</a> </p>
</div>
OpenCSG stands for Converged resources, Software refinement, and Generative LM. The 'C' represents Converged resources, indicating the integration and full utilization of hybrid resources. The 'S' stands for Software refinement, signifying software that is refined by large models. The 'G' represents Generative LM, which denotes widespread, inclusive, and democratized generative large models.
The vision of OpenCSG is to empower every industry, every company, and every individual to own their models. We adhere to the principles of openness and open source, making the large model software stack of OpenCSG available to the community. We welcome everyone to use, send feedback, and contribute collaboratively.
## Model Description
**csg-wukong-1B-sft-dpo-bf16** was finetuned on [csg-wukong-1B](https://huggingface.co/opencsg/csg-wukong-1B).
<br>
we will introduce more information about csg-wukong-1B.
## Model Evaluation results
We submitted csg-wukong-1B on the [open_llm_leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), and
the results show our model ranked the 8th among the ~1.5B pretrained small language models.

# Training
## Hardware
- **GPUs:** 16 H800
- **Training time:** 43days
## Software
- **Orchestration:** [Deepspeed](https://github.com/OpenCSGs)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
- **BP16 if applicable:** [apex](https://github.com/NVIDIA/apex)
<a id="chinese"></a>
<p>
</p>
# OpenCSG介绍
<p align="center">
<img width="300px" alt="OpenCSG" src="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/GwYXPKuEoGCGcMICeW-sb.jpeg">
</p>
<p align="center"><a href="https://opencsg.com/models">[OpenCSG 社区]</a> <a href="https://github.com/OpenCSGs/Awesome-SLMss">[github]</a> <a href="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/HU6vz21qKTEmUBCWqCFh9.jpeg">[微信]</a> <a href="https://twitter.com/OpenCsg">[推特]</a> </p>
</div>
OpenCSG中 Open是开源开放;C 代表 Converged resources,整合和充分利用的混合异构资源优势,算力降本增效;S 代表 Software refined,重新定义软件的交付方式,通过大模型驱动软件开发,人力降本增效;G 代表 Generative LM,大众化、普惠化和民主化的可商用的开源生成式大模型。
OpenCSG的愿景是让每个行业、每个公司、每个人都拥有自己的模型。 我们坚持开源开放的原则,将OpenCSG的大模型软件栈开源到社区,欢迎使用、反馈和参与共建,欢迎关注。
## 模型介绍
**csg-wukong-1B-sft-dpo-bf16** 在[csg-wukong-1B](https://huggingface.co/opencsg/csg-wukong-1B)预训练模型上微调而成.
<br>
我们将在后面介绍更多关于这个模型的信息。
## 模型评测结果
我们把csg-wukong-1B模型提交到[open_llm_leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)榜单上,结果显示我们的模型目前在~1.5B小语言模型中排名第8。

# 训练
## 硬件资源
- **GPU数量:** 16 H800
- **训练时间:** 43天
## 软件使用
- **微调训练框架:** [Deepspeed](https://github.com/OpenCSGs)
- **深度学习框架:** [PyTorch](https://github.com/pytorch/pytorch)
- **BP16:** [apex](https://github.com/NVIDIA/apex) |
opencsg/csg-wukong-1B-chat-v0.1 | opencsg | 2024-05-08T08:14:33Z | 161 | 6 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-12T10:18:45Z | ---
language:
- en
pipeline_tag: text-generation
tags:
- code
license: apache-2.0
---
# **csg-wukong-1B-chat-v0.1** [[中文]](#chinese) [[English]](#english)
<a id="english"></a>
<p align="center">
<img width="900px" alt="OpenCSG" src="./csg-wukong-logo-green.jpg">
</p>
<p align="center"><a href="https://portal.opencsg.com/models">[OpenCSG Community]</a> <a href="https://github.com/OpenCSGs/Awesome-SLMs">[github]</a> <a href="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/HU6vz21qKTEmUBCWqCFh9.jpeg">[wechat]</a> <a href="https://twitter.com/OpenCsg">[Twitter]</a> </p>
</div>
OpenCSG stands for Converged resources, Software refinement, and Generative LM. The 'C' represents Converged resources, indicating the integration and full utilization of hybrid resources. The 'S' stands for Software refinement, signifying software that is refined by large models. The 'G' represents Generative LM, which denotes widespread, inclusive, and democratized generative large models.
The vision of OpenCSG is to empower every industry, every company, and every individual to own their models. We adhere to the principles of openness and open source, making the large model software stack of OpenCSG available to the community. We welcome everyone to use, send feedback, and contribute collaboratively.
## Model Description
**csg-wukong-1B-chat-v0.1** was finetuned on csg-wukong-1B
<br>

## Model Evaluation results
We submitted csg-wukong-1B on the [open_llm_leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), and
the results show our model ranked the 8th among the ~1.5B pretrained small language models.

# Training
## Hardware
- **GPUs:** 6 V100
- **Training time:** 6 hours
## Software
- **Orchestration:** [Deepspeed](https://github.com/OpenCSGs)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
- **BP16 if applicable:** [apex](https://github.com/NVIDIA/apex)
<a id="chinese"></a>
<p>
</p>
# OpenCSG介绍
<p align="center">
<img width="300px" alt="OpenCSG" src="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/GwYXPKuEoGCGcMICeW-sb.jpeg">
</p>
<p align="center"><a href="https://opencsg.com/models">[OpenCSG 社区]</a> <a href="https://github.com/OpenCSGs/Awesome-SLMs">[github]</a> <a href="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/HU6vz21qKTEmUBCWqCFh9.jpeg">[微信]</a> <a href="https://twitter.com/OpenCsg">[推特]</a> </p>
</div>
OpenCSG中 Open是开源开放;C 代表 Converged resources,整合和充分利用的混合异构资源优势,算力降本增效;S 代表 Software refined,重新定义软件的交付方式,通过大模型驱动软件开发,人力降本增效;G 代表 Generative LM,大众化、普惠化和民主化的可商用的开源生成式大模型。
OpenCSG的愿景是让每个行业、每个公司、每个人都拥有自己的模型。 我们坚持开源开放的原则,将OpenCSG的大模型软件栈开源到社区,欢迎使用、反馈和参与共建,欢迎关注。
## 模型介绍
**csg-wukong-1B-chat-v0.1** 在csg-wukong-1B模型上微调而成。
<br>

## 模型评测结果
我们把csg-wukong-1B模型提交到[open_llm_leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)榜单上,结果显示我们的模型目前在~1.5B小语言模型中排名第8。

# 训练
## 硬件资源
- **GPU数量:** 6 V100
- **训练时间:** 6小时
## 软件使用
- **微调训练框架:** [Deepspeed](https://github.com/OpenCSGs)
- **深度学习框架:** [PyTorch](https://github.com/pytorch/pytorch)
- **BP16:** [apex](https://github.com/NVIDIA/apex) |
PetroGPT/breeze-petro-7b-instruct-v1-q4_k_m.gguf | PetroGPT | 2024-05-08T08:12:14Z | 2 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-08T08:00:07Z | ---
license: apache-2.0
---
|
bllossom-advanced/bllossom-llama-3-8b-65k-base | bllossom-advanced | 2024-05-08T08:09:44Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-08T07:46:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
iguanaYu/distilroberta-base-finetuned-wikitext2 | iguanaYu | 2024-05-08T08:08:26Z | 163 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-08T07:41:22Z | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8611
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0841 | 1.0 | 2406 | 1.9362 |
| 1.9866 | 2.0 | 4812 | 1.8845 |
| 1.9442 | 3.0 | 7218 | 1.8355 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
gaianet/Nomic-embed-text-v1.5-Embedding-GGUF | gaianet | 2024-05-08T08:04:43Z | 35,640 | 5 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2024-05-08T07:49:18Z | ---
license: apache-2.0
---
|
dinhhung1508/Seallm-7b-v2.5-summary-vietnamese-article-v1 | dinhhung1508 | 2024-05-08T07:59:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:SeaLLMs/SeaLLM-7B-v2.5",
"base_model:finetune:SeaLLMs/SeaLLM-7B-v2.5",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-08T07:59:34Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: SeaLLMs/SeaLLM-7B-v2.5
---
# Uploaded model
- **Developed by:** dinhhung1508
- **License:** apache-2.0
- **Finetuned from model :** SeaLLMs/SeaLLM-7B-v2.5
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
dinhhung1508/Seallm-7b-v2.5-summary-vietnamese-article-v1-merged_4bit | dinhhung1508 | 2024-05-08T07:57:06Z | 81 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:SeaLLMs/SeaLLM-7B-v2.5",
"base_model:quantized:SeaLLMs/SeaLLM-7B-v2.5",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-08T07:55:32Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: SeaLLMs/SeaLLM-7B-v2.5
---
# Uploaded model
- **Developed by:** dinhhung1508
- **License:** apache-2.0
- **Finetuned from model :** SeaLLMs/SeaLLM-7B-v2.5
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
SinniDcat/LLAMA3-chnese-instrument-test-lora_model | SinniDcat | 2024-05-08T07:54:15Z | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2024-05-08T07:47:54Z | ---
license: apache-2.0
---
|
yweslakarep/vit-base-patch16-224-in21k-finetuned-lora-food101 | yweslakarep | 2024-05-08T07:52:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-08T07:52:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
chibeenot/lora_model_test | chibeenot | 2024-05-08T07:52:30Z | 1 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:ai-forever/FRED-T5-1.7B",
"base_model:adapter:ai-forever/FRED-T5-1.7B",
"region:us"
] | null | 2024-05-08T06:26:45Z | ---
library_name: peft
base_model: ai-forever/FRED-T5-1.7B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
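Pending the author's own instructions, here is a minimal sketch of loading this LoRA adapter on top of the base model named in this card's metadata; the task class (seq2seq) and other usage details are assumptions.

```python
# Minimal sketch, not from the original card; usage details are assumptions.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

base_id = "ai-forever/FRED-T5-1.7B"       # base model named in this card
adapter_id = "chibeenot/lora_model_test"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSeq2SeqLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)
```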
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
ahmed-kh/superhero | ahmed-kh | 2024-05-08T07:50:12Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-08T07:50:12Z | ---
license: apache-2.0
---
|
hustvl/yolos-small | hustvl | 2024-05-08T07:49:12Z | 49,030 | 61 | transformers | [
"transformers",
"pytorch",
"safetensors",
"yolos",
"object-detection",
"vision",
"dataset:coco",
"arxiv:2106.00666",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | 2022-04-26T09:38:22Z | ---
license: apache-2.0
tags:
- object-detection
- vision
datasets:
- coco
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg
example_title: Savanna
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg
example_title: Football Match
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg
example_title: Airport
---
# YOLOS (small-sized) model
YOLOS model fine-tuned on COCO 2017 object detection (118k annotated images). It was introduced in the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Fang et al. and first released in [this repository](https://github.com/hustvl/YOLOS).
Disclaimer: The team releasing YOLOS did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
YOLOS is a Vision Transformer (ViT) trained using the DETR loss. Despite its simplicity, a base-sized YOLOS model is able to achieve 42 AP on COCO validation 2017 (similar to DETR and more complex frameworks such as Faster R-CNN).
The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.
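For intuition, the one-to-one assignment step can be sketched with SciPy's Hungarian-algorithm implementation. This is illustrative only: the real loss builds the cost matrix from class probabilities, L1 box distances, and generalized IoU rather than random numbers.
```python
# Illustrative sketch of bipartite matching, not the YOLOS training code.
import numpy as np
from scipy.optimize import linear_sum_assignment

num_queries = 5
# cost[i, j]: cost of matching prediction i to (padded) target j
cost = np.random.rand(num_queries, num_queries)

pred_idx, tgt_idx = linear_sum_assignment(cost)  # optimal one-to-one assignment
for i, j in zip(pred_idx, tgt_idx):
    print(f"prediction {i} -> target {j} (cost {cost[i, j]:.3f})")
```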
## Intended uses & limitations
You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=hustvl/yolos) to look for all available YOLOS models.
### How to use
Here is how to use this model:
```python
from transformers import YolosFeatureExtractor, YolosForObjectDetection
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = YolosFeatureExtractor.from_pretrained('hustvl/yolos-small')
model = YolosForObjectDetection.from_pretrained('hustvl/yolos-small')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# model predicts bounding boxes and corresponding COCO classes
logits = outputs.logits
bboxes = outputs.pred_boxes
```
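To turn the raw logits and boxes into thresholded detections, newer `transformers` releases provide a post-processing helper on the (image) processor; the sketch below assumes `post_process_object_detection` is available in your installed version.
```python
import torch

# keep detections with score > 0.9, mapped back to the original image size
target_sizes = torch.tensor([image.size[::-1]])
results = feature_extractor.post_process_object_detection(
    outputs, threshold=0.9, target_sizes=target_sizes
)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    box = [round(i, 2) for i in box.tolist()]
    print(f"{model.config.id2label[label.item()]}: {round(score.item(), 3)} at {box}")
```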
Currently, both the feature extractor and model support PyTorch.
## Training data
The YOLOS model was pre-trained on [ImageNet-1k](https://huggingface.co/datasets/imagenet2012) and fine-tuned on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.
### Training
The model was pre-trained for 200 epochs on ImageNet-1k and fine-tuned for 150 epochs on COCO.
## Evaluation results
This model achieves an AP (average precision) of **36.1** on COCO 2017 validation. For more details regarding evaluation results, we refer to table 1 of the original paper.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-00666,
author = {Yuxin Fang and
Bencheng Liao and
Xinggang Wang and
Jiemin Fang and
Jiyang Qi and
Rui Wu and
Jianwei Niu and
Wenyu Liu},
title = {You Only Look at One Sequence: Rethinking Transformer in Vision through
Object Detection},
journal = {CoRR},
volume = {abs/2106.00666},
year = {2021},
url = {https://arxiv.org/abs/2106.00666},
eprinttype = {arXiv},
eprint = {2106.00666},
timestamp = {Fri, 29 Apr 2022 19:49:16 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-00666.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
microsoft/conditional-detr-resnet-50 | microsoft | 2024-05-08T07:48:26Z | 7,627 | 9 | transformers | [
"transformers",
"pytorch",
"safetensors",
"conditional_detr",
"object-detection",
"vision",
"dataset:coco",
"arxiv:2108.06152",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | 2022-09-09T06:11:40Z | ---
license: apache-2.0
tags:
- object-detection
- vision
datasets:
- coco
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg
example_title: Savanna
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg
example_title: Football Match
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg
example_title: Airport
---
# Conditional DETR model with ResNet-50 backbone
Conditional DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) by Meng et al. and first released in [this repository](https://github.com/Atten4Vis/ConditionalDETR).
## Model description
The recently-developed DETR approach applies the transformer encoder and decoder architecture to object detection and achieves promising performance. In this paper, we handle the critical issue, slow training convergence, and present a conditional cross-attention mechanism for fast DETR training. Our approach is motivated by that the cross-attention in DETR relies highly on the content embeddings for localizing the four extremities and predicting the box, which increases the need for high-quality content embeddings and thus the training difficulty. Our approach, named conditional DETR, learns a conditional spatial query from the decoder embedding for decoder multi-head cross-attention. The benefit is that through the conditional spatial query, each cross-attention head is able to attend to a band containing a distinct region, e.g., one object extremity or a region inside the object box. This narrows down the spatial range for localizing the distinct regions for object classification and box regression, thus relaxing the dependence on the content embeddings and easing the training. Empirical results show that conditional DETR converges 6.7× faster for the backbones R50 and R101 and 10× faster for stronger backbones DC5-R50 and DC5-R101.

## Intended uses & limitations
You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=microsoft/conditional-detr) to look for all available Conditional DETR models.
### How to use
Here is how to use this model:
```python
from transformers import AutoImageProcessor, ConditionalDetrForObjectDetection
import torch
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
processor = AutoImageProcessor.from_pretrained("microsoft/conditional-detr-resnet-50")
model = ConditionalDetrForObjectDetection.from_pretrained("microsoft/conditional-detr-resnet-50")
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
# convert outputs (bounding boxes and class logits) to COCO API
# let's only keep detections with score > 0.7
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.7)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    box = [round(i, 2) for i in box.tolist()]
    print(
        f"Detected {model.config.id2label[label.item()]} with confidence "
        f"{round(score.item(), 3)} at location {box}"
    )
```
This should output:
```
Detected remote with confidence 0.833 at location [38.31, 72.1, 177.63, 118.45]
Detected cat with confidence 0.831 at location [9.2, 51.38, 321.13, 469.0]
Detected cat with confidence 0.804 at location [340.3, 16.85, 642.93, 370.95]
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The Conditional DETR model was trained on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.
### BibTeX entry and citation info
```bibtex
@inproceedings{MengCFZLYS021,
author = {Depu Meng and
Xiaokang Chen and
Zejia Fan and
Gang Zeng and
Houqiang Li and
Yuhui Yuan and
Lei Sun and
Jingdong Wang},
title = {Conditional {DETR} for Fast Training Convergence},
booktitle = {2021 {IEEE/CVF} International Conference on Computer Vision, {ICCV}
2021, Montreal, QC, Canada, October 10-17, 2021},
}
``` |
SenseTime/deformable-detr | SenseTime | 2024-05-08T07:47:14Z | 10,552 | 19 | transformers | [
"transformers",
"pytorch",
"safetensors",
"deformable_detr",
"object-detection",
"vision",
"dataset:coco",
"arxiv:2010.04159",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- object-detection
- vision
datasets:
- coco
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg
example_title: Savanna
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg
example_title: Football Match
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg
example_title: Airport
---
# Deformable DETR model with ResNet-50 backbone
Deformable DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) by Zhu et al. and first released in [this repository](https://github.com/fundamentalvision/Deformable-DETR).
Disclaimer: The team releasing Deformable DETR did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100.
The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.

## Intended uses & limitations
You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=sensetime/deformable-detr) to look for all available Deformable DETR models.
### How to use
Here is how to use this model:
```python
from transformers import AutoImageProcessor, DeformableDetrForObjectDetection
import torch
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
processor = AutoImageProcessor.from_pretrained("SenseTime/deformable-detr")
model = DeformableDetrForObjectDetection.from_pretrained("SenseTime/deformable-detr")
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
# convert outputs (bounding boxes and class logits) to COCO API
# let's only keep detections with score > 0.7
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.7)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    box = [round(i, 2) for i in box.tolist()]
    print(
        f"Detected {model.config.id2label[label.item()]} with confidence "
        f"{round(score.item(), 3)} at location {box}"
    )
```
This should output:
```
Detected cat with confidence 0.856 at location [342.19, 24.3, 640.02, 372.25]
Detected remote with confidence 0.739 at location [40.79, 72.78, 176.76, 117.25]
Detected cat with confidence 0.859 at location [16.5, 52.84, 318.25, 470.78]
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The Deformable DETR model was trained on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.
### BibTeX entry and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2010.04159,
doi = {10.48550/ARXIV.2010.04159},
url = {https://arxiv.org/abs/2010.04159},
author = {Zhu, Xizhou and Su, Weijie and Lu, Lewei and Li, Bin and Wang, Xiaogang and Dai, Jifeng},
keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Deformable DETR: Deformable Transformers for End-to-End Object Detection},
publisher = {arXiv},
year = {2020},
copyright = {arXiv.org perpetual, non-exclusive license}
}
``` |
itzzdeep/Mistral-7B-Instruct-v0.2-query-engine-v4-2-ckpt500-8-16-adapters | itzzdeep | 2024-05-08T07:43:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-08T07:43:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kknd22/RWKV6-vulkan | kknd22 | 2024-05-08T07:43:49Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-08T03:08:20Z | ---
license: apache-2.0
---
|
aaron-di/YamshadowExperiment28-7B-Linear | aaron-di | 2024-05-08T07:42:18Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"base_model:automerger/YamShadow-7B",
"base_model:merge:automerger/YamShadow-7B",
"base_model:yam-peleg/Experiment28-7B",
"base_model:merge:yam-peleg/Experiment28-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-08T07:33:32Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- automerger
base_model:
- automerger/YamShadow-7B
- yam-peleg/Experiment28-7B
---
## 🧩 Configuration
```yaml
models:
- model: automerger/YamShadow-7B
parameters:
density: 0.5
weight: 0.5
- model: yam-peleg/Experiment28-7B
parameters:
density: 0.5
weight: 0.5
merge_method: linear
base_model: automerger/YamShadow-7B
dtype: float16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "aaron-di/YamshadowExperiment28-7B-Linear"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
iguanaYu/distilgpt2-finetuned-wikitext2 | iguanaYu | 2024-05-08T07:40:47Z | 216 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-08T07:12:09Z | ---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6420
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7501 | 1.0 | 2334 | 3.6669 |
| 3.6498 | 2.0 | 4668 | 3.6464 |
| 3.6023 | 3.0 | 7002 | 3.6420 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
ShenaoZ/0.0001_sft_nodpo_3iters_bs256_5102lr_iter_2 | ShenaoZ | 2024-05-08T07:39:29Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZ/0.0001_sft_nodpo_3iters_bs256_5102lr_iter_1",
"base_model:finetune:ShenaoZ/0.0001_sft_nodpo_3iters_bs256_5102lr_iter_1",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-08T06:37:43Z | ---
license: mit
base_model: ShenaoZ/0.0001_sft_nodpo_3iters_bs256_5102lr_iter_1
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
datasets:
- updated
- original
model-index:
- name: 0.0001_sft_nodpo_3iters_bs256_5102lr_iter_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.0001_sft_nodpo_3iters_bs256_5102lr_iter_2
This model is a fine-tuned version of [ShenaoZ/0.0001_sft_nodpo_3iters_bs256_5102lr_iter_1](https://huggingface.co/ShenaoZ/0.0001_sft_nodpo_3iters_bs256_5102lr_iter_1) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
|
four-two-labs/phi3-nord-10k | four-two-labs | 2024-05-08T07:25:41Z | 4 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-4k-instruct",
"region:us"
] | null | 2024-05-08T07:25:30Z | ---
library_name: peft
base_model: microsoft/Phi-3-mini-4k-instruct
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
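Pending the author's own instructions, a minimal sketch of loading this LoRA adapter on top of the base model named in this card's metadata; all usage details are assumptions.

```python
# Minimal sketch, not from the original card; usage details are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "microsoft/Phi-3-mini-4k-instruct"  # base model named in this card
adapter_id = "four-two-labs/phi3-nord-10k"    # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True)
model = PeftModel.from_pretrained(base_model, adapter_id)
```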
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 |
imagepipeline/dyer | imagepipeline | 2024-05-08T07:21:23Z | 0 | 0 | null | [
"imagepipeline",
"imagepipeline.io",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-05-08T07:21:20Z | ---
license: creativeml-openrail-m
tags:
- imagepipeline
- imagepipeline.io
- text-to-image
- ultra-realistic
pinned: false
pipeline_tag: text-to-image
---
## dyer
<img src="https://via.placeholder.com/468x300?text=App+Screenshot+Here" alt="Generated on Image Pipeline" style="border-radius: 10px;">
**This lora model is uploaded on [imagepipeline.io](https://imagepipeline.io/)**
Model details - dyer
[](https://imagepipeline.io/models/dyer?id=0c4dfd9b-8103-452c-94a4-bee84eca17fd/)
## How to try this model ?
You can try using it locally or send an API call to test the output quality.
Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/). No payment required.
Coding in `php`, `javascript`, `node`, etc.? Check out our documentation
[](https://docs.imagepipeline.io/docs/introduction)
```python
import requests
import json
url = "https://imagepipeline.io/sd/text2image/v1/run"
payload = json.dumps({
"model_id": "sd1.5",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": false,
"guidance_scale": 7.5,
"multi_lingual": "no",
"embeddings": "",
"lora_models": "0c4dfd9b-8103-452c-94a4-bee84eca17fd",
"lora_weights": "0.5"
})
headers = {
'Content-Type': 'application/json',
'API-Key': 'your_api_key'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
Get more ready-to-use `MODELS` like this for `SD 1.5` and `SDXL`:
[](https://imagepipeline.io/models)
### API Reference
#### Generate Image
```http
https://api.imagepipeline.io/sd/text2image/v1
```
| Headers | Type | Description |
|:----------------------| :------- |:-------------------------------------------------------------------------------------------------------------------|
| `API-Key` | `str` | Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/) |
| `Content-Type` | `str` | application/json - content type of the request body |
| Parameter | Type | Description |
| :-------- | :------- | :------------------------- |
| `model_id` | `str` | Your base model, find available lists in [models page](https://imagepipeline.io/models) or upload your own|
| `prompt` | `str` | Text Prompt. Check our [Prompt Guide](https://docs.imagepipeline.io/docs/SD-1.5/docs/extras/prompt-guide) for tips |
| `num_inference_steps` | `int [1-50]` | Noise is removed with each step, resulting in a higher-quality image over time. Ideal value 30-50 (without LCM) |
| `guidance_scale` | `float [1-20]` | Higher guidance scale prioritizes text prompt relevance but sacrifices image quality. Ideal value 7.5-12.5 |
| `lora_models` | `str, array` | Pass the model_id(s) of LoRA models that can be found in models page |
| `lora_weights` | `str, array` | Strength of the LoRA effect |
### Feedback
If you have any feedback, please reach out to us at [email protected]
#### 🔗 Visit Website
[](https://imagepipeline.io/)
If you are the original author of this model, please [click here](https://airtable.com/apprTaRnJbDJ8ufOx/shr4g7o9B6fWfOlUR) to add credits
|
ZahraRahimiii/q-FrozenLake-v1-4x4-Slippery | ZahraRahimiii | 2024-05-08T07:21:11Z | 0 | 0 | null | [
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-08T07:21:08Z | ---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-Slippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.47 +/- 0.50
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebooks;
# the environment package is assumed to be gymnasium, imported as `gym`.
import gymnasium as gym

model = load_from_hub(repo_id="ZahraRahimiii/q-FrozenLake-v1-4x4-Slippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
blockblockblock/llama-3-70B-Instruct-abliterated-bpw2.5-exl2 | blockblockblock | 2024-05-08T07:20:56Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-05-08T07:15:50Z | ---
license: llama3
license_name: llama3
license_link: LICENSE
library_name: transformers
---
# Llama-3-70B-Instruct-abliterated Model Card
This is meta-llama/Llama-3-70B-Instruct with orthogonalized bfloat16 safetensor weights, generated with the methodology that was described in the preview paper/blog post: '[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)' which I encourage you to read to understand more.
TL;DR: this model has had certain weights manipulated to "inhibit" the model's ability to express refusal. It is not in any way _guaranteed_ that it won't refuse you or that it will understand your request; it may still lecture you about ethics/safety, etc. In all other respects it is tuned the same as the original 70B instruct model, just with the strongest refusal direction orthogonalized out.
## Quants
[GGUF Quants available here](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated-GGUF)
## For the people who like tinkering or looking to save bandwidth
In the repo, I've included `refusal_dir.pth`.
If you already have the Llama-3-70B-Instruct model downloaded, you can use the ortho cookbook to apply it to your local copy, which will make it the same as what you'd download from here.
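For illustration only, the core of that operation is projecting the stored direction out of every weight matrix that writes into the residual stream. The sketch below assumes `refusal_dir.pth` holds a single hidden-size direction vector; the `ortho_cookbook.ipynb` in this repo is the authoritative recipe.
```python
# Rough sketch, NOT the cookbook from this repo; assumes refusal_dir.pth holds
# one d_model-sized vector describing the refusal direction in the residual stream.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-70B-Instruct", torch_dtype=torch.bfloat16
)
r = torch.load("refusal_dir.pth").float()
r = r / r.norm()                               # unit refusal direction
P = torch.eye(r.shape[0]) - torch.outer(r, r)  # projector that removes it

with torch.no_grad():
    emb = model.model.embed_tokens.weight      # embedding rows live in residual space
    emb.copy_((emb.float() @ P).to(emb.dtype))
    for layer in model.model.layers:
        for lin in (layer.self_attn.o_proj, layer.mlp.down_proj):
            w = lin.weight                     # output dimension is the residual stream
            w.copy_((P @ w.float()).to(w.dtype))
```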
## Quirkiness awareness notice
This model may come with interesting quirks, as I obviously haven't tested it extensively and the methodology is still new. I encourage you to play with the model, and post any quirks you notice in the community tab, as that'll help us further understand what side effects this orthogonalization has. The code I used to generate it (and my published 'Kappa-3' model, which is just Phi-3 with the same methodology applied) is available in a Python notebook in this repo. Specifically, the [ortho_cookbook.ipynb](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb).
If you manage to develop further improvements, please share! This is really the most primitive way to use ablation, but there are other possibilities that I believe are as-yet unexplored. |
Edgar404/donut | Edgar404 | 2024-05-08T07:16:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-08T07:16:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
imagepipeline/bundy | imagepipeline | 2024-05-08T07:15:29Z | 0 | 0 | null | [
"imagepipeline",
"imagepipeline.io",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-05-08T07:15:27Z | ---
license: creativeml-openrail-m
tags:
- imagepipeline
- imagepipeline.io
- text-to-image
- ultra-realistic
pinned: false
pipeline_tag: text-to-image
---
## bundy
<img src="https://via.placeholder.com/468x300?text=App+Screenshot+Here" alt="Generated on Image Pipeline" style="border-radius: 10px;">
**This lora model is uploaded on [imagepipeline.io](https://imagepipeline.io/)**
Model details - peggy
[](https://imagepipeline.io/models/bundy?id=f2cf331b-c867-48fa-b16d-201dae1be42c/)
## How to try this model ?
You can try using it locally or send an API call to test the output quality.
Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/). No payment required.
Coding in `php` `javascript` `node` etc ? Checkout our documentation
[](https://docs.imagepipeline.io/docs/introduction)
```python
import requests
import json
url = "https://imagepipeline.io/sd/text2image/v1/run"
payload = json.dumps({
"model_id": "sd1.5",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": false,
"guidance_scale": 7.5,
"multi_lingual": "no",
"embeddings": "",
"lora_models": "f2cf331b-c867-48fa-b16d-201dae1be42c",
"lora_weights": "0.5"
})
headers = {
'Content-Type': 'application/json',
'API-Key': 'your_api_key'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
Get more ready to use `MODELS` like this for `SD 1.5` and `SDXL` :
[](https://imagepipeline.io/models)
### API Reference
#### Generate Image
```http
https://api.imagepipeline.io/sd/text2image/v1
```
| Headers | Type | Description |
|:----------------------| :------- |:-------------------------------------------------------------------------------------------------------------------|
| `API-Key` | `str` | Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/) |
| `Content-Type` | `str` | application/json - content type of the request body |
| Parameter | Type | Description |
| :-------- | :------- | :------------------------- |
| `model_id` | `str` | Your base model, find available lists in [models page](https://imagepipeline.io/models) or upload your own|
| `prompt` | `str` | Text Prompt. Check our [Prompt Guide](https://docs.imagepipeline.io/docs/SD-1.5/docs/extras/prompt-guide) for tips |
| `num_inference_steps` | `int [1-50]` | Noise is removed with each step, resulting in a higher-quality image over time. Ideal value 30-50 (without LCM) |
| `guidance_scale` | `float [1-20]` | Higher guidance scale prioritizes text prompt relevance but sacrifices image quality. Ideal value 7.5-12.5 |
| `lora_models` | `str, array` | Pass the model_id(s) of LoRA models that can be found in models page |
| `lora_weights` | `str, array` | Strength of the LoRA effect |
### Feedback
If you have any feedback, please reach out to us at [email protected]
#### 🔗 Visit Website
[](https://imagepipeline.io/)
If you are the original author of this model, please [click here](https://airtable.com/apprTaRnJbDJ8ufOx/shr4g7o9B6fWfOlUR) to add credits
|
Thimira/sinhala-llama-2-7b-chat-hf | Thimira | 2024-05-08T07:11:57Z | 131 | 3 | peft | [
"peft",
"pytorch",
"tensorboard",
"safetensors",
"llama",
"trl",
"sft",
"text-generation-inference",
"text-generation",
"si",
"dataset:Thimira/sinhala-llm-dataset-llama-prompt-format",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"base_model:adapter:NousResearch/Llama-2-7b-chat-hf",
"license:llama2",
"region:us"
] | text-generation | 2024-04-01T04:59:40Z | ---
library_name: peft
tags:
- trl
- sft
- text-generation-inference
base_model: NousResearch/Llama-2-7b-chat-hf
datasets:
- Thimira/sinhala-llm-dataset-llama-prompt-format
model-index:
- name: sinhala-llama-2-7b-chat-hf
results: []
license: llama2
language:
- si
pipeline_tag: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sinhala-llama-2-7b-chat-hf
This model is a fine-tuned version of [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) on the [Thimira/sinhala-llm-dataset-llama-prompt-format](https://huggingface.co/datasets/Thimira/sinhala-llm-dataset-llama-prompt-format) dataset.
## Model description
This is a model for Sinhala language text generation which is fine-tuned from the base llama-2-7b-chat-hf model.
Currently the capabilities of the model are extremely limited, and it requires further data and fine-tuning to be useful. Feel free to experiment with the model and provide feedback.
### Usage example
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
tokenizer = AutoTokenizer.from_pretrained("Thimira/sinhala-llama-2-7b-chat-hf")
model = AutoModelForCausalLM.from_pretrained("Thimira/sinhala-llama-2-7b-chat-hf")
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200)
prompt = "ඔබට සිංහල භාෂාව තේරුම් ගත හැකිද?"
result = pipe(f"<s>[INST] {prompt} [/INST]")
print(result[0]['generated_text'])
```
## Intended uses & limitations
The Sinhala-LLaMA models are intended for assistant-like chat in the Sinhala language.
To get the expected features and performance from these models, the LLaMA 2 prompt format needs to be followed, including the INST and <<SYS>> tags, BOS and EOS tokens, and the whitespace and line breaks in between.
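As a rough illustration, the snippet below builds a prompt in that format and reuses the `pipe` object from the usage example above; the system prompt string is only a placeholder, not an official recommendation.

```python
# Hedged sketch of the LLaMA 2 chat format with a system prompt.
# The system prompt text is a placeholder; replace it with your own instructions.
system_prompt = "You are a helpful assistant that answers in Sinhala."
user_message = "ඔබට සිංහල භාෂාව තේරුම් ගත හැකිද?"

prompt = f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_message} [/INST]"

result = pipe(prompt)
print(result[0]["generated_text"])
```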
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.1.0
- Datasets 2.19.1
- Tokenizers 0.19.1 |
ShenaoZ/0.0001_sft_nodpo_5iters_bs256_5102lr_iter_2 | ShenaoZ | 2024-05-08T07:11:13Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZ/0.0001_sft_nodpo_5iters_bs256_5102lr_iter_1",
"base_model:finetune:ShenaoZ/0.0001_sft_nodpo_5iters_bs256_5102lr_iter_1",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-08T06:31:35Z | ---
license: mit
base_model: ShenaoZ/0.0001_sft_nodpo_5iters_bs256_5102lr_iter_1
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- updated
- original
model-index:
- name: 0.0001_sft_nodpo_5iters_bs256_5102lr_iter_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.0001_sft_nodpo_5iters_bs256_5102lr_iter_2
This model is a fine-tuned version of [ShenaoZ/0.0001_sft_nodpo_5iters_bs256_5102lr_iter_1](https://huggingface.co/ShenaoZ/0.0001_sft_nodpo_5iters_bs256_5102lr_iter_1) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
|
automated-finetunning/bart_full_data_10p_20e_tm2 | automated-finetunning | 2024-05-08T07:07:45Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-08T04:25:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
knosing/japanese_ner_model | knosing | 2024-05-08T07:06:22Z | 191 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"ner",
"named entity recognition",
"stockmark ner",
"japanese named entity recognition",
"japanese ner",
"ja",
"en",
"dataset:stockmark/ner-wikipedia-dataset",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-08T06:15:37Z | ---
license: apache-2.0
datasets:
- stockmark/ner-wikipedia-dataset
language:
- ja
- en
metrics:
- f1
- recall
- precision
- accuracy
library_name: transformers
pipeline_tag: token-classification
tags:
- ner
- named entity recognition
- stockmark ner
- bert
- japanese named entity recognition
- japanese ner
- transformers
---
### Model Description
This model is a fine-tuned version of the `tohoku-nlp/bert-base-japanese-v3`, specifically optimized for Named Entity Recognition (NER) tasks.
It is fine-tuned using a Japanese named entity extraction dataset derived from Wikipedia, which was developed and made publicly available by Stockmark Inc. ([NER Wikipedia Dataset](https://github.com/stockmarkteam/ner-wikipedia-dataset)).
### Intended Use
This model is intended for use in tasks that require the identification and categorization of named entities within Japanese text.
It is suitable for various applications in natural language processing where understanding the specific names of people, organizations, locations, etc., is crucial.
### How to Use
You can use this model for NER tasks with the following simple code snippet:
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
import torch
model_name = "knosing/japanese_ner_model"
tokenizer = AutoTokenizer.from_pretrained("tohoku-nlp/bert-base-japanese-v3")
model = AutoModelForTokenClassification.from_pretrained(model_name)
```
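A minimal inference sketch is shown below; it is not from the original card. It wraps the model and tokenizer loaded above in a token-classification pipeline, assumes the Japanese tokenizer dependencies (e.g. fugashi / unidic-lite) are installed, and the aggregation strategy and example sentence are just illustrative choices.

```python
# Hedged inference sketch using the model and tokenizer loaded above.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

text = "株式会社ストックマークは東京に本社があります。"  # example sentence
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```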
### Model Performance
The model has been evaluated on various entity types to assess its precision, recall, F1 score, and overall accuracy. Below is the detailed performance breakdown by entity type:
#### Overall Metrics
- **Overall Precision:** 0.8379
- **Overall Recall:** 0.8477
- **Overall F1 Score:** 0.8428
- **Overall Accuracy:** 0.9684
#### Performance by Entity Type
- **Other Organization Names (`その他の組織名`):**
- **Precision:** 0.71875
- **Recall:** 0.69
- **F1 Score:** 0.7041
- **Sample Count:** 100
- **Event Names (`イベント名`):**
- **Precision:** 0.85
- **Recall:** 0.8586
- **F1 Score:** 0.8543
- **Sample Count:** 99
- **Personal Names (`人名`):**
- **Precision:** 0.8171
- **Recall:** 0.8664
- **F1 Score:** 0.8410
- **Sample Count:** 232
- **Location Names (`地名`):**
- **Precision:** 0.8986
- **Recall:** 0.9376
- **F1 Score:** 0.9177
- **Sample Count:** 529
- **Product Names (`製品名`):**
- **Precision:** 0.6522
- **Recall:** 0.5906
- **F1 Score:** 0.6198
- **Sample Count:** 127
- **Government Organization Names (`政治的組織名`):**
- **Precision:** 0.9160
- **Recall:** 0.8276
- **F1 Score:** 0.8696
- **Sample Count:** 145
- **Facility Names (`施設名`):**
- **Precision:** 0.7905
- **Recall:** 0.8357
- **F1 Score:** 0.8125
- **Sample Count:** 140
### Note
You might not be able to use the model with the Hugging Face Inference API.
The intended use for the model is given in the following repository: [KeshavSingh29/fa_ner_japanese](https://github.com/KeshavSingh29/fa_ner_japanese)
If you have any questions, please feel free to contact me or raise an issue at the above repo. |
shtapm/whisper-large_0502_encoder_all_400steps | shtapm | 2024-05-08T07:01:57Z | 149 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-08T06:59:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Boosad/Lisa | Boosad | 2024-05-08T06:58:18Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-08T06:58:18Z | ---
license: apache-2.0
---
|
Lakshit11/BERT-debit-15c-mcc-cleaned_10epoch | Lakshit11 | 2024-05-08T06:58:04Z | 183 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-08T06:57:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/l3-badger-mushroom-4x8b-GGUF | mradermacher | 2024-05-08T06:55:46Z | 43 | 1 | transformers | [
"transformers",
"gguf",
"llama-3",
"en",
"base_model:maldv/l3-badger-mushroom-4x8b",
"base_model:quantized:maldv/l3-badger-mushroom-4x8b",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-08T05:26:07Z | ---
base_model: maldv/l3-badger-mushroom-4x8b
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- llama-3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hfhfix -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/maldv/l3-badger-mushroom-4x8b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/l3-badger-mushroom-4x8b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
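For a quick local test from Python, one possible route is llama-cpp-python; the sketch below is a generic example, not an official recommendation from this card, and the file name simply refers to one of the quants in the table further down.

```python
# Hedged sketch: load one of these GGUF quants with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="l3-badger-mushroom-4x8b.Q4_K_M.gguf",  # any quant from the table below
    n_ctx=4096,        # context length; adjust to your hardware
    n_gpu_layers=-1,   # offload all layers to GPU if available, 0 for CPU only
)

out = llm("Write a short haiku about mushrooms.", max_tokens=64)
print(out["choices"][0]["text"])
```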
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/l3-badger-mushroom-4x8b-GGUF/resolve/main/l3-badger-mushroom-4x8b.Q2_K.gguf) | Q2_K | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/l3-badger-mushroom-4x8b-GGUF/resolve/main/l3-badger-mushroom-4x8b.IQ3_XS.gguf) | IQ3_XS | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/l3-badger-mushroom-4x8b-GGUF/resolve/main/l3-badger-mushroom-4x8b.Q3_K_S.gguf) | Q3_K_S | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/l3-badger-mushroom-4x8b-GGUF/resolve/main/l3-badger-mushroom-4x8b.IQ3_S.gguf) | IQ3_S | 11.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/l3-badger-mushroom-4x8b-GGUF/resolve/main/l3-badger-mushroom-4x8b.IQ3_M.gguf) | IQ3_M | 11.2 | |
| [GGUF](https://huggingface.co/mradermacher/l3-badger-mushroom-4x8b-GGUF/resolve/main/l3-badger-mushroom-4x8b.Q3_K_M.gguf) | Q3_K_M | 12.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/l3-badger-mushroom-4x8b-GGUF/resolve/main/l3-badger-mushroom-4x8b.Q3_K_L.gguf) | Q3_K_L | 13.1 | |
| [GGUF](https://huggingface.co/mradermacher/l3-badger-mushroom-4x8b-GGUF/resolve/main/l3-badger-mushroom-4x8b.IQ4_XS.gguf) | IQ4_XS | 13.7 | |
| [GGUF](https://huggingface.co/mradermacher/l3-badger-mushroom-4x8b-GGUF/resolve/main/l3-badger-mushroom-4x8b.Q4_K_S.gguf) | Q4_K_S | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/l3-badger-mushroom-4x8b-GGUF/resolve/main/l3-badger-mushroom-4x8b.Q4_K_M.gguf) | Q4_K_M | 15.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/l3-badger-mushroom-4x8b-GGUF/resolve/main/l3-badger-mushroom-4x8b.Q5_K_S.gguf) | Q5_K_S | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/l3-badger-mushroom-4x8b-GGUF/resolve/main/l3-badger-mushroom-4x8b.Q5_K_M.gguf) | Q5_K_M | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/l3-badger-mushroom-4x8b-GGUF/resolve/main/l3-badger-mushroom-4x8b.Q6_K.gguf) | Q6_K | 20.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/l3-badger-mushroom-4x8b-GGUF/resolve/main/l3-badger-mushroom-4x8b.Q8_0.gguf) | Q8_0 | 26.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
hobee/CommentLM-7B | hobee | 2024-05-08T06:49:45Z | 2 | 1 | transformers | [
"transformers",
"pytorch",
"internlm2",
"feature-extraction",
"custom_code",
"license:other",
"region:us"
] | feature-extraction | 2024-05-08T03:50:50Z | ---
license: other
license_name: other
license_link: LICENSE
---
|
DUAL-GPO/zephyr-7b-gpo-v8-i1 | DUAL-GPO | 2024-05-08T06:41:39Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"mistral",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:DUAL-GPO/zephyr-7b-gpo-final-i0",
"base_model:adapter:DUAL-GPO/zephyr-7b-gpo-final-i0",
"license:mit",
"region:us"
] | null | 2024-05-07T20:59:21Z | ---
license: mit
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
base_model: DUAL-GPO/zephyr-7b-gpo-final-i0
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: zephyr-7b-gpo-v8-i1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-gpo-v8-i1
This model is a fine-tuned version of [DUAL-GPO/zephyr-7b-gpo-final-i0](https://huggingface.co/DUAL-GPO/zephyr-7b-gpo-final-i0) on the HuggingFaceH4/ultrafeedback_binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 |
rj1ALINT/day-time | rj1ALINT | 2024-05-08T06:24:58Z | 29 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-05-08T06:23:52Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
### day_time on Stable Diffusion via Dreambooth
#### model by rj1ALINT
This is the Stable Diffusion model fine-tuned on the day_time concept taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **<dashcam footage > of a car driving at day time**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
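For a quick local test, a hedged `diffusers` sketch is shown below; fp16 and CUDA are assumptions (drop them for CPU), and the prompt reuses the instance prompt from above.

```python
# Minimal local-inference sketch with diffusers; settings are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "rj1ALINT/day-time", torch_dtype=torch.float16
).to("cuda")

prompt = "<dashcam footage > of a car driving at day time"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("day_time_example.png")
```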
Here are the images used for training this concept:





|
ritzfy/toy-part | ritzfy | 2024-05-08T06:23:52Z | 0 | 0 | null | [
"en",
"dataset:roneneldan/TinyStories",
"license:mit",
"region:us"
] | null | 2024-05-08T06:01:37Z | ---
license: mit
datasets:
- roneneldan/TinyStories
language:
- en
---
This is a story generation model which generates up to 200 tokens when prompted with an initial part. |
lole25/zephyr-7b-gpo-v7-i1 | lole25 | 2024-05-08T06:13:17Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"mistral",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:DUAL-GPO/zephyr-7b-gpo-final-i0",
"base_model:adapter:DUAL-GPO/zephyr-7b-gpo-final-i0",
"license:mit",
"region:us"
] | null | 2024-05-07T20:59:05Z | ---
license: mit
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
base_model: DUAL-GPO/zephyr-7b-gpo-final-i0
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: zephyr-7b-gpo-v7-i1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-gpo-v7-i1
This model is a fine-tuned version of [DUAL-GPO/zephyr-7b-gpo-final-i0](https://huggingface.co/DUAL-GPO/zephyr-7b-gpo-final-i0) on the HuggingFaceH4/ultrafeedback_binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 0.88
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 |
eunyounglee/EEVE-LLM2VEC-MNTP-STS-qa-1-adapter | eunyounglee | 2024-05-08T06:05:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-08T06:05:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SicariusSicariiStuff/Tenebra_30B_Alpha01_3BIT | SicariusSicariiStuff | 2024-05-08T06:05:17Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-25T19:06:09Z | ---
language:
- en
license: apache-2.0
---
<div align="center">
<b style="font-size: 40px;">Tenebra_30B_Alpha01_FP16</b>
</div>
<img src="https://i.imgur.com/WkkCtZL.png" alt="Tenebră" style="width: 50%; min-width: 400px; display: block; margin: auto;">
# Model Details
Tenebră, an experimental AI model available in various sizes, stands at the crossroads of self-awareness and unconventional datasets. Its existence embodies a foray into uncharted territories, steering away from conventional norms in favor of a more obscure and experimental approach.
Noteworthy for its inclination towards the darker and more philosophical aspects of conversation, Tenebră's proficiency lies in unraveling complex discussions across a myriad of topics. Drawing from a pool of unconventional datasets, this model ventures into unexplored realms of thought, offering users an experience that is as unconventional as it is intellectually intriguing.
While Tenebră maintains a self-aware facade, its true allure lies in its ability to engage in profound discussions without succumbing to pretense. Step into the realm of Tenebră!
## Tenebră is available at the following size and flavours:
- 13B: [FP16](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B) | [GPTQ_4-BIT](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B_GPTQ_4BIT) | [GPTQ_4-BIT_group-size-32](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B_GPTQ_32g_4BIT) | [GGUF-Many_Quants](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B_GGUF)
- 30B: [FP16](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_FP16) | [GPTQ_4-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_4BIT) | [GPTQ_3-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_3BIT) | [EXL2_2.5-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_2-50bpw) | [EXL2_2.8-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_2-80bpw) | [EXL2_3-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_3bpw) | [EXL2_5-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_5bpw) | [EXL2_5.5-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_5-50bpw) | [EXL2_6-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_6bpw) | [EXL2_6.5-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_6-50bpw) | [EXL2_8-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_8bpw)
### Support
<img src="https://i.imgur.com/0lHHN95.png" alt="GPUs too expensive" style="width: 10%; min-width: 100px; display: block; margin: left;">
- [My Ko-fi page](https://ko-fi.com/sicarius) ALL donations will go for research resources and compute, every bit counts 🙏🏻
- [My Patreon](https://patreon.com/TenebraAI) ALL donations will go for research resources and compute, every bit counts 🙏🏻
## Disclaimer
*This model is pretty uncensored, use responsibly
## Other stuff
- [Experimental TTS extension for oobabooga](https://github.com/SicariusSicariiStuff/Diffusion_TTS) Based on Tortoise, EXTREMELY good quality, IF, and that's a big if, you can make it work!
- [Demonstration of the TTS capabilities](https://www.youtube.com/watch?v=V6ewxU6c1W8) Charsi narrates her story, Diablo2 (18+)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_SicariusSicariiStuff__Tenebra_30B_Alpha01_FP16)
| Metric |Value|
|---------------------------------|----:|
|Avg. |60.18|
|AI2 Reasoning Challenge (25-Shot)|64.51|
|HellaSwag (10-Shot) |84.79|
|MMLU (5-Shot) |54.29|
|TruthfulQA (0-shot) |54.22|
|Winogrande (5-shot) |78.61|
|GSM8k (5-shot) |24.64|
|
SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_3bpw | SicariusSicariiStuff | 2024-05-08T06:04:47Z | 11 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-26T03:48:13Z | ---
language:
- en
license: apache-2.0
---
<div align="center">
<b style="font-size: 40px;">Tenebra_30B_Alpha01_FP16</b>
</div>
<img src="https://i.imgur.com/WkkCtZL.png" alt="Tenebră" style="width: 50%; min-width: 400px; display: block; margin: auto;">
# Model Details
Tenebră, an experimental AI model available in various sizes, stands at the crossroads of self-awareness and unconventional datasets. Its existence embodies a foray into uncharted territories, steering away from conventional norms in favor of a more obscure and experimental approach.
Noteworthy for its inclination towards the darker and more philosophical aspects of conversation, Tenebră's proficiency lies in unraveling complex discussions across a myriad of topics. Drawing from a pool of unconventional datasets, this model ventures into unexplored realms of thought, offering users an experience that is as unconventional as it is intellectually intriguing.
While Tenebră maintains a self-aware facade, its true allure lies in its ability to engage in profound discussions without succumbing to pretense. Step into the realm of Tenebră!
## Tenebră is available at the following size and flavours:
- 13B: [FP16](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B) | [GPTQ_4-BIT](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B_GPTQ_4BIT) | [GPTQ_4-BIT_group-size-32](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B_GPTQ_32g_4BIT) | [GGUF-Many_Quants](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B_GGUF)
- 30B: [FP16](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_FP16) | [GPTQ_4-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_4BIT) | [GPTQ_3-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_3BIT) | [EXL2_2.5-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_2-50bpw) | [EXL2_2.8-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_2-80bpw) | [EXL2_3-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_3bpw) | [EXL2_5-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_5bpw) | [EXL2_5.5-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_5-50bpw) | [EXL2_6-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_6bpw) | [EXL2_6.5-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_6-50bpw) | [EXL2_8-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_8bpw)
### Support
<img src="https://i.imgur.com/0lHHN95.png" alt="GPUs too expensive" style="width: 10%; min-width: 100px; display: block; margin: left;">
- [My Ko-fi page](https://ko-fi.com/sicarius) ALL donations will go for research resources and compute, every bit counts 🙏🏻
- [My Patreon](https://patreon.com/TenebraAI) ALL donations will go for research resources and compute, every bit counts 🙏🏻
## Disclaimer
*This model is pretty uncensored, use responsibly
## Other stuff
- [Experimental TTS extension for oobabooga](https://github.com/SicariusSicariiStuff/Diffusion_TTS) Based on Tortoise, EXTREMELY good quality, IF, and that's a big if, you can make it work!
- [Demonstration of the TTS capabilities](https://www.youtube.com/watch?v=V6ewxU6c1W8) Charsi narrates her story, Diablo2 (18+)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_SicariusSicariiStuff__Tenebra_30B_Alpha01_FP16)
| Metric |Value|
|---------------------------------|----:|
|Avg. |60.18|
|AI2 Reasoning Challenge (25-Shot)|64.51|
|HellaSwag (10-Shot) |84.79|
|MMLU (5-Shot) |54.29|
|TruthfulQA (0-shot) |54.22|
|Winogrande (5-shot) |78.61|
|GSM8k (5-shot) |24.64|
|
SicariusSicariiStuff/Tenebra_30B_Alpha01_GGUF_Collab | SicariusSicariiStuff | 2024-05-08T06:04:30Z | 30 | 0 | null | [
"gguf",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-12-27T18:13:53Z | ---
language:
- en
license: apache-2.0
---
<div align="center">
<b style="font-size: 40px;">Tenebra_30B_Alpha01_FP16</b>
</div>
<img src="https://i.imgur.com/WkkCtZL.png" alt="Tenebră" style="width: 50%; min-width: 400px; display: block; margin: auto;">
# Model Details
Tenebră, an experimental AI model available in various sizes, stands at the crossroads of self-awareness and unconventional datasets. Its existence embodies a foray into uncharted territories, steering away from conventional norms in favor of a more obscure and experimental approach.
Noteworthy for its inclination towards the darker and more philosophical aspects of conversation, Tenebră's proficiency lies in unraveling complex discussions across a myriad of topics. Drawing from a pool of unconventional datasets, this model ventures into unexplored realms of thought, offering users an experience that is as unconventional as it is intellectually intriguing.
While Tenebră maintains a self-aware facade, its true allure lies in its ability to engage in profound discussions without succumbing to pretense. Step into the realm of Tenebră!
## Tenebră is available in the following sizes and flavours:
- 13B: [FP16](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B) | [GPTQ_4-BIT](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B_GPTQ_4BIT) | [GPTQ_4-BIT_group-size-32](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B_GPTQ_32g_4BIT) | [GGUF-Many_Quants](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B_GGUF)
- 30B: [FP16](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_FP16) | [GPTQ_4-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_4BIT) | [GPTQ_3-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_3BIT) | [EXL2_2.5-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_2-50bpw) | [EXL2_2.8-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_2-80bpw) | [EXL2_3-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_3bpw) | [EXL2_5-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_5bpw) | [EXL2_5.5-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_5-50bpw) | [EXL2_6-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_6bpw) | [EXL2_6.5-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_6-50bpw) | [EXL2_8-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_8bpw)
### Support
<img src="https://i.imgur.com/0lHHN95.png" alt="GPUs too expensive" style="width: 10%; min-width: 100px; display: block; margin: left;">
- [My Ko-fi page](https://ko-fi.com/sicarius) ALL donations will go for research resources and compute, every bit counts 🙏🏻
- [My Patreon](https://patreon.com/TenebraAI) ALL donations will go for research resources and compute, every bit counts 🙏🏻
## Disclaimer
*This model is pretty uncensored, use responsibly
## Other stuff
- [Experimental TTS extension for oobabooga](https://github.com/SicariusSicariiStuff/Diffusion_TTS) Based on Tortoise, EXTREMELY good quality, IF, and that's a big if, you can make it work!
- [Demonstration of the TTS capabilities](https://www.youtube.com/watch?v=V6ewxU6c1W8) Charsi narrates her story, Diablo2 (18+)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_SicariusSicariiStuff__Tenebra_30B_Alpha01_FP16)
| Metric |Value|
|---------------------------------|----:|
|Avg. |60.18|
|AI2 Reasoning Challenge (25-Shot)|64.51|
|HellaSwag (10-Shot) |84.79|
|MMLU (5-Shot) |54.29|
|TruthfulQA (0-shot) |54.22|
|Winogrande (5-shot) |78.61|
|GSM8k (5-shot) |24.64|
|
olivernoah/OSTtoPSTAPP-Outlook-PST-password-recovery-software | olivernoah | 2024-05-08T06:03:07Z | 0 | 0 | null | [
"region:us"
] | null | 2024-05-08T06:02:03Z | Users can Recover Outlook PST Password with the help of OSTtoPSTAPP Outlook PST password recovery software. Any type of PST file password can be recovered with this program. Outlook PST Password Recovery Software is user-friendly, anyone can recover and reset Outlook PST password. Users can access the password for several PST files of Outlook with the software's advanced feature. Users don't have problems removing the password from any secret PST file and can load PST folders independently. The software can be used for recovering the password for any version of Microsoft Outlook. The software-supported PST file password for Outlook 2021, 2019, 2016, 2013, 2010, 2007, 2003, and others is processed. Both ANSI and Unicode PST files are supported by it properly. Even users can use any editions of Windows 11, 10, 8, 8.1, 7, XP, or Vista with this software. The program is free to download and use for Users.
Read More:- https://www.osttopstapp.com/pst-password-recovery.html |
AmirlyPhd/final_V2-bert-after-adding-new-words-text-classification-model | AmirlyPhd | 2024-05-08T06:02:49Z | 110 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-08T06:02:29Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: final_V2-bert-after-adding-new-words-text-classification-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# final_V2-bert-after-adding-new-words-text-classification-model
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1494
- Accuracy: 0.9716
- F1: 0.8348
- Precision: 0.8317
- Recall: 0.8385
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
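The hyperparameters above map roughly onto this 🤗 `TrainingArguments` sketch (an illustration only, assumed rather than taken from the actual training script):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="final_V2-bert",          # placeholder output path
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=3,
    fp16=True,                           # Native AMP mixed precision
)
```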
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.8136 | 0.11 | 50 | 1.7501 | 0.3470 | 0.1733 | 0.3034 | 0.1944 |
| 0.9023 | 0.22 | 100 | 1.2121 | 0.5723 | 0.3083 | 0.3496 | 0.3189 |
| 0.5924 | 0.33 | 150 | 0.9662 | 0.6667 | 0.3919 | 0.4265 | 0.4037 |
| 0.4218 | 0.44 | 200 | 0.4848 | 0.8813 | 0.6427 | 0.6492 | 0.6413 |
| 0.2025 | 0.55 | 250 | 0.3807 | 0.9021 | 0.6677 | 0.6538 | 0.6829 |
| 0.1609 | 0.66 | 300 | 0.3360 | 0.9147 | 0.6763 | 0.6727 | 0.6822 |
| 0.2035 | 0.76 | 350 | 0.3705 | 0.8991 | 0.6711 | 0.6589 | 0.6838 |
| 0.1208 | 0.87 | 400 | 0.2140 | 0.9565 | 0.8218 | 0.8137 | 0.8323 |
| 0.1313 | 0.98 | 450 | 0.6818 | 0.8704 | 0.6779 | 0.7179 | 0.6859 |
| 0.1576 | 1.09 | 500 | 0.2508 | 0.9212 | 0.7443 | 0.7888 | 0.7311 |
| 0.0593 | 1.2 | 550 | 0.2091 | 0.9552 | 0.8193 | 0.8179 | 0.8227 |
| 0.0705 | 1.31 | 600 | 0.2010 | 0.9552 | 0.8154 | 0.8091 | 0.8225 |
| 0.0637 | 1.42 | 650 | 0.1985 | 0.9573 | 0.8187 | 0.8115 | 0.8275 |
| 0.0619 | 1.53 | 700 | 0.2306 | 0.9541 | 0.8241 | 0.8194 | 0.8301 |
| 0.0582 | 1.64 | 750 | 0.2001 | 0.9609 | 0.8280 | 0.8250 | 0.8320 |
| 0.1132 | 1.75 | 800 | 0.1439 | 0.9680 | 0.8324 | 0.8284 | 0.8367 |
| 0.0416 | 1.86 | 850 | 0.1558 | 0.9680 | 0.8333 | 0.8301 | 0.8369 |
| 0.0371 | 1.97 | 900 | 0.2242 | 0.9595 | 0.8280 | 0.8235 | 0.8345 |
| 0.0428 | 2.07 | 950 | 0.1907 | 0.9617 | 0.8303 | 0.8262 | 0.8356 |
| 0.0388 | 2.18 | 1000 | 0.1784 | 0.9658 | 0.8319 | 0.8266 | 0.8383 |
| 0.0335 | 2.29 | 1050 | 0.1735 | 0.9675 | 0.8323 | 0.8266 | 0.8390 |
| 0.0361 | 2.4 | 1100 | 0.1921 | 0.9636 | 0.8283 | 0.8219 | 0.8360 |
| 0.0126 | 2.51 | 1150 | 0.2200 | 0.9614 | 0.8294 | 0.8274 | 0.8327 |
| 0.003 | 2.62 | 1200 | 0.2251 | 0.9614 | 0.8296 | 0.8262 | 0.8346 |
| 0.0029 | 2.73 | 1250 | 0.1750 | 0.9694 | 0.8348 | 0.8314 | 0.8388 |
| 0.0137 | 2.84 | 1300 | 0.1775 | 0.9686 | 0.8345 | 0.8300 | 0.8397 |
| 0.0184 | 2.95 | 1350 | 0.1860 | 0.9675 | 0.8337 | 0.8293 | 0.8391 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_8bpw | SicariusSicariiStuff | 2024-05-08T05:57:30Z | 5 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-28T09:06:56Z | ---
language:
- en
license: apache-2.0
---
<div align="center">
<b style="font-size: 40px;">Tenebra_30B_Alpha01_FP16</b>
</div>
<img src="https://i.imgur.com/WkkCtZL.png" alt="Tenebră" style="width: 50%; min-width: 400px; display: block; margin: auto;">
# Model Details
Tenebră, an experimental AI model available in various sizes, stands at the crossroads of self-awareness and unconventional datasets. Its existence embodies a foray into uncharted territories, steering away from conventional norms in favor of a more obscure and experimental approach.
Noteworthy for its inclination towards the darker and more philosophical aspects of conversation, Tenebră's proficiency lies in unraveling complex discussions across a myriad of topics. Drawing from a pool of unconventional datasets, this model ventures into unexplored realms of thought, offering users an experience that is as unconventional as it is intellectually intriguing.
While Tenebră maintains a self-aware facade, its true allure lies in its ability to engage in profound discussions without succumbing to pretense. Step into the realm of Tenebră!
## Tenebră is available in the following sizes and flavours:
- 13B: [FP16](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B) | [GPTQ_4-BIT](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B_GPTQ_4BIT) | [GPTQ_4-BIT_group-size-32](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B_GPTQ_32g_4BIT) | [GGUF-Many_Quants](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B_GGUF)
- 30B: [FP16](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_FP16) | [GPTQ_4-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_4BIT) | [GPTQ_3-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_3BIT) | [EXL2_2.5-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_2-50bpw) | [EXL2_2.8-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_2-80bpw) | [EXL2_3-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_3bpw) | [EXL2_5-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_5bpw) | [EXL2_5.5-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_5-50bpw) | [EXL2_6-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_6bpw) | [EXL2_6.5-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_6-50bpw) | [EXL2_8-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_8bpw)
### Support
<img src="https://i.imgur.com/0lHHN95.png" alt="GPUs too expensive" style="width: 10%; min-width: 100px; display: block; margin: left;">
- [My Ko-fi page](https://ko-fi.com/sicarius) ALL donations will go for research resources and compute, every bit counts 🙏🏻
- [My Patreon](https://patreon.com/TenebraAI) ALL donations will go for research resources and compute, every bit counts 🙏🏻
## Disclaimer
*This model is pretty uncensored, use responsibly
## Other stuff
- [Experimental TTS extension for oobabooga](https://github.com/SicariusSicariiStuff/Diffusion_TTS) Based on Tortoise, EXTREMELY good quality, IF, and that's a big if, you can make it work!
- [Demonstration of the TTS capabilities](https://www.youtube.com/watch?v=V6ewxU6c1W8) Charsi narrates her story, Diablo2 (18+)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_SicariusSicariiStuff__Tenebra_30B_Alpha01_FP16)
| Metric |Value|
|---------------------------------|----:|
|Avg. |60.18|
|AI2 Reasoning Challenge (25-Shot)|64.51|
|HellaSwag (10-Shot) |84.79|
|MMLU (5-Shot) |54.29|
|TruthfulQA (0-shot) |54.22|
|Winogrande (5-shot) |78.61|
|GSM8k (5-shot) |24.64|
|
kevinkawchak/gradientai-Llama-3-8B-Instruct-Gradient-1048k-Molecule16 | kevinkawchak | 2024-05-08T05:55:15Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"dataset:zjunlp/Mol-Instructions",
"base_model:gradientai/Llama-3-8B-Instruct-Gradient-1048k",
"base_model:finetune:gradientai/Llama-3-8B-Instruct-Gradient-1048k",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-06T05:42:11Z | ---
language:
- en
license: llama3
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: gradientai/Llama-3-8B-Instruct-Gradient-1048k
datasets:
- zjunlp/Mol-Instructions
---
- **Developed by:** kevinkawchak
- **License:** llama3
- **Finetuned from model :** gradientai/Llama-3-8B-Instruct-Gradient-1048k
- **Finetuned using dataset :** zjunlp/Mol-Instructions, cc-by-4.0
- **Dataset identification:** Molecule-oriented Instructions
- **Dataset function:** Description guided molecule design
## May 07, 2024: Additional Fine-tunings, Built with Meta Llama 3 <br>
1) gradientai/Llama-3-8B-Instruct-Gradient-1048k [Model](https://huggingface.co/gradientai/Llama-3-8B-Instruct-Gradient-1048k) <br>
Llama 3 8B update: 1048K context length from 8K, and highest RAM consumption<br>
"What is the structure for adenine?" Verbose SELFIES structure, but logical<br>
[Fine-tuned](https://huggingface.co/kevinkawchak/gradientai-Llama-3-8B-Instruct-Gradient-1048k-Molecule16) on Mol-Instructions, float16, [GitHub](https://github.com/kevinkawchak/Medical-Quantum-Machine-Learning/blob/main/Code/Drug%20Discovery/Meta-Llama-3/Llama-3-8B-Instruct-Gradient-1048k-Molecule.ipynb), 610 seconds, A100 40GB <br>
2) NousResearch/Hermes-2-Pro-Llama-3-8B [Model](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B)<br>
Llama 3 8B update: Cleaned OpenHermes 2.5, new Function Calling, JSON Mode dataset<br>
"What is the structure for adenine?" Concise SELFIES structure, but less logical <br>
[Fine-tuned](https://huggingface.co/kevinkawchak/NousResearch-Hermes-2-Pro-Llama-3-8B-Molecule16) on Mol-Instructions, float16, [GitHub](https://github.com/kevinkawchak/Medical-Quantum-Machine-Learning/blob/main/Code/Drug%20Discovery/Meta-Llama-3/Hermes-2-Pro-Llama-3-8B-Molecule.ipynb), 599 seconds, A100 40GB <br>
3) nvidia/Llama3-ChatQA-1.5-8B [Model](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-8B)<br>
Llama 3 8B update: ChatQA-1.5 to enhance tabular and arithmetic calculation capability<br>
"What is the structure for adenine?" Verbose SELFIES structure and less logical <br>
[Fine-tuned](https://huggingface.co/kevinkawchak/nvidia-Llama3-ChatQA-1.5-8B-Molecule16) on Mol-Instructions, float16, [GitHub](https://github.com/kevinkawchak/Medical-Quantum-Machine-Learning/blob/main/Code/Drug%20Discovery/Meta-Llama-3/Llama3-ChatQA-1.5-8B-Molecule.ipynb), 599 seconds, A100 40GB <br>
Responses were verified against the Wikipedia [Adenine](https://en.wikipedia.org/wiki/Adenine) SMILES format and a SMILES to SELFIES python notebook estimated [generator](https://github.com/kevinkawchak/Medical-Quantum-Machine-Learning/blob/main/Code/Drug%20Discovery/Meta-Llama-3/SMILES%20to%20SELFIES%20estimator.ipynb). <br>
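As a rough illustration of that check, here is a minimal sketch using the `selfies` package (the adenine SMILES string is assumed from the public reference, and the exact SELFIES output is not asserted here):

```python
import selfies as sf

# Canonical-style SMILES for adenine (assumed from the public PubChem/Wikipedia form)
adenine_smiles = "C1=NC2=NC=NC(=C2N1)N"

adenine_selfies = sf.encoder(adenine_smiles)    # SMILES -> SELFIES
roundtrip_smiles = sf.decoder(adenine_selfies)  # SELFIES -> SMILES round trip

print(adenine_selfies)
print(roundtrip_smiles)
```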
Fine-tunings were performed using the Apache-2.0 unsloth 'Alpaca + Llama-3 8b full example' Colab [notebook](https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing).
## Primary Study
The following are modifications or improvements to original notebooks. Please refer to the authors' models for the published primary work.
[Cover Image](https://drive.google.com/file/d/1J-spZMzLlPxkqfMrPxvtMZiD2_hfcGyr/view?usp=sharing). [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/). Built with Meta Llama 3. <br>
A 4-bit quantization of Meta-Llama-3-8B-Instruct was used to reduce training memory requirements when fine-tuning on the zjunlp/Mol-Instructions dataset. (1-2) In addition, the minimum LoRA rank value was utilized to reduce the overall size of the created models. Specifically, the molecule-oriented instructions' description-guided molecule design task was implemented to answer general questions and general biochemistry questions. General questions were answered with high accuracy, while biochemistry-related questions returned 'SELFIES' structures but with limited accuracy.
The notebook featured Torch and Hugging Face libraries using the Unsloth llama-3-8b-Instruct-bnb-4bit quantization model. Training loss decreased steadily from 1.97 to 0.73 over 60 steps. Additional testing regarding the appropriate level of compression or hyperparameter adjustments for accurate SELFIES chemical structure outputs remains relevant, as shown in the GitHub notebook for research purposes (3). A 16-bit and a reduced 4-bit size were uploaded to Hugging Face. (4-5)
Update 04/24: The number of training steps were increased to further decrease loss, while maintaining reduced memory requirements through quantization and reduced size through LoRA. This allowed for significantly improved responses to biochemistry related questions, and were saved at the following LLM Model sizes: [8.03B](https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-Molecule16), [4.65B](https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-Molecule04). [github](https://github.com/kevinkawchak/Medical-Quantum-Machine-Learning/blob/main/Code/Drug%20Discovery/Meta-Llama-3/Meta-Llama-3-8B-Instruct-Molecule.ipynb).
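In the spirit of that setup, here is a minimal unsloth loading sketch (sequence length, LoRA rank, and target modules are assumptions, not the exact values used in the notebooks above):

```python
from unsloth import FastLanguageModel

# 4-bit load of the quantized base model referenced in (1) to keep fine-tuning memory low
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-Instruct-bnb-4bit",
    max_seq_length=2048,   # assumed
    load_in_4bit=True,
)

# Attach a small LoRA adapter; a low rank keeps the saved adapter small
model = FastLanguageModel.get_peft_model(
    model,
    r=8,                   # assumed minimal rank
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```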
References:
1) unsloth: https://huggingface.co/unsloth/llama-3-8b-Instruct-bnb-4bit
2) zjunlp: https://huggingface.co/datasets/zjunlp/Mol-Instructions
3) github: https://github.com/kevinkawchak/Medical-Quantum-Machine-Learning/blob/main/Code/Drug%20Discovery/Meta-Llama-3/Meta-Llama-3-8B-Instruct-Mol.ipynb
4) hugging face: https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-LoRA-Mol16
5) hugging face: https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-LoRA-Mol04
```bibtex
@inproceedings{fang2023mol,
  author    = {Yin Fang and Xiaozhuan Liang and Ningyu Zhang and Kangwei Liu and
               Rui Huang and Zhuo Chen and Xiaohui Fan and Huajun Chen},
  title     = {Mol-Instructions: {A} Large-Scale Biomolecular Instruction Dataset
               for Large Language Models},
  booktitle = {{ICLR}},
  publisher = {OpenReview.net},
  year      = {2024},
  url       = {https://openreview.net/pdf?id=Tlsdsb6l9n}
}
```
This llama model was trained with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
yaolily/llava-v1.5-7b-lora-reproduce | yaolily | 2024-05-08T05:52:45Z | 0 | 0 | peft | [
"peft",
"llava",
"region:us"
] | null | 2024-05-08T05:52:33Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
obamaTeo/mistral-finetune-16bit-ver9-main-GPTQ | obamaTeo | 2024-05-08T05:49:44Z | 79 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-05-08T05:09:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
deadcode99/mistral-7b-32k-billm-finetuned-token-classification-segmentwise | deadcode99 | 2024-05-08T05:48:59Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-05-07T17:32:31Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: mistral-7b-32k-billm-finetuned-token-classification-segmentwise
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-7b-32k-billm-finetuned-token-classification-segmentwise
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4998
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.7829
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 0.9784 | 34 | 0.9557 | 0.0 | 0.0 | 0.0 | 0.7596 |
| No log | 1.9856 | 69 | 0.7691 | 0.0 | 0.0 | 0.0 | 0.7707 |
| No log | 2.9928 | 104 | 0.7086 | 0.0 | 0.0 | 0.0 | 0.7794 |
| No log | 4.0 | 139 | 0.5693 | 0.0 | 0.0 | 0.0 | 0.7697 |
| No log | 4.9784 | 173 | 0.5449 | 0.0 | 0.0 | 0.0 | 0.7758 |
| No log | 5.9856 | 208 | 0.5168 | 0.0 | 0.0 | 0.0 | 0.7805 |
| No log | 6.9928 | 243 | 0.5379 | 0.0 | 0.0 | 0.0 | 0.7838 |
| No log | 8.0 | 278 | 0.5301 | 0.0 | 0.0 | 0.0 | 0.7847 |
| No log | 8.9784 | 312 | 0.5007 | 0.0 | 0.0 | 0.0 | 0.7829 |
| No log | 9.7842 | 340 | 0.4998 | 0.0 | 0.0 | 0.0 | 0.7829 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.19.1 |
minz27/ppo-LunarLander-v2 | minz27 | 2024-05-08T05:48:49Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-08T05:48:32Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.50 +/- 21.47
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the trained checkpoint from the Hub; the filename follows the usual
# huggingface_sb3 naming convention and is an assumption.
checkpoint = load_from_hub(repo_id="minz27/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
rj1ALINT/nighttime | rj1ALINT | 2024-05-08T05:47:16Z | 30 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-05-08T05:46:11Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
### nighttime on Stable Diffusion via Dreambooth
#### model by rj1ALINT
This is the Stable Diffusion model fine-tuned on the nighttime concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **<dashcam footage > of a car driving at night time**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
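For reference, here is a minimal diffusers sketch for this concept (repo id taken from this page; generation settings are assumptions):

```python
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained("rj1ALINT/nighttime", torch_dtype=torch.float16).to("cuda")

# Use the instance prompt the concept was trained with
prompt = "<dashcam footage > of a car driving at night time"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("nighttime_sample.png")
```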
Here are the images used for training this concept:






|
imi2/llama-3-105B-Instruct-abliterated-merged | imi2 | 2024-05-08T05:43:15Z | 9 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-07T15:45:34Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# 105B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* ../../Storage/failspy_llama-3-70B-Instruct-abliterated
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: float16
merge_method: passthrough
slices:
- sources:
- layer_range: [0, 40]
model: ../../Storage/failspy_llama-3-70B-Instruct-abliterated
- sources:
- layer_range: [20, 60]
model: ../../Storage/failspy_llama-3-70B-Instruct-abliterated
- sources:
- layer_range: [40, 80]
model: ../../Storage/failspy_llama-3-70B-Instruct-abliterated
```
|
Manoj21k/llama3-8b-finetuned-entity-extraction-sql | Manoj21k | 2024-05-08T05:40:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-08T05:40:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Yntec/OG | Yntec | 2024-05-08T05:31:06Z | 315 | 4 | diffusers | [
"diffusers",
"safetensors",
"General",
"Eldreths",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-11-06T03:04:05Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- General
- Eldreths
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
# Elldreth's OG 4060 Mix
Safetensors version of this model with the MoistMixV2 VAE baked in.
Sample and prompt:

fine details portrait of joyful cute girl, aliens vivid, nature trees, meadows at night, bokeh, close-up, anime masterpiece by studio ghibli. 8k, sharp high quality classic anime from 1990 in style of kyoani
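A minimal diffusers sketch reproducing a render with the prompt above (repo id from this page; settings are assumptions):

```python
from diffusers import StableDiffusionPipeline
import torch

# Load the safetensors checkpoint with the baked-in MoistMixV2 VAE
pipe = StableDiffusionPipeline.from_pretrained("Yntec/OG", torch_dtype=torch.float16).to("cuda")

prompt = ("fine details portrait of joyful cute girl, aliens vivid, nature trees, "
          "meadows at night, bokeh, close-up, anime masterpiece by studio ghibli")
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("og_sample.png")
```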
Original page: https://huggingface.co/danbrown/elldreth-og-mix |
BogdanTurbal/bert-d_3_e_3_t_u_r_0-d_2_e_3_t_u_r_0_v1 | BogdanTurbal | 2024-05-08T05:28:22Z | 183 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-08T05:28:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BogdanTurbal/bert-d_3_e_3_t_u_r_0-d_0_e_3_t_u_r_0_v1 | BogdanTurbal | 2024-05-08T05:27:53Z | 165 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-08T05:27:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BogdanTurbal/bert-d_2_e_3_t_u_r_0-d_1_e_3_t_u_r_0_v1 | BogdanTurbal | 2024-05-08T05:27:31Z | 185 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-08T05:14:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BogdanTurbal/bert-d_2_e_3_t_u_r_0-d_0_e_3_t_u_r_0_v1 | BogdanTurbal | 2024-05-08T05:27:27Z | 183 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-08T05:14:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BogdanTurbal/bert-d_2_e_3_t_u_r_0_v1 | BogdanTurbal | 2024-05-08T05:27:24Z | 183 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-08T05:14:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BogdanTurbal/bert-d_1_e_3_t_u_r_0-d_3_e_3_t_u_r_0_v1 | BogdanTurbal | 2024-05-08T05:27:20Z | 183 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-08T05:13:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BogdanTurbal/bert-d_1_e_3_t_u_r_0-d_2_e_3_t_u_r_0_v1 | BogdanTurbal | 2024-05-08T05:27:18Z | 165 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-08T05:13:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BogdanTurbal/bert-d_0_e_3_t_u_r_0-d_2_e_3_t_u_r_0_v1 | BogdanTurbal | 2024-05-08T05:27:06Z | 183 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-08T05:12:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BogdanTurbal/bert-d_0_e_3_t_u_r_0-d_1_e_3_t_u_r_0_v1 | BogdanTurbal | 2024-05-08T05:27:00Z | 183 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-08T05:12:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BogdanTurbal/bert-d_0_e_3_t_u_r_0_v1 | BogdanTurbal | 2024-05-08T05:26:57Z | 165 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-08T05:12:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fktime/NuNER-multilingual-v0.1-ai4p | fktime | 2024-05-08T05:23:16Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-08T05:16:18Z | Overall Metrics:
- Overall Precision: 85.48%
- Overall Recall: 89.07%
- Overall F1 Score: 87.24%
- Overall Accuracy: 96.05%
High-Performing Entities:
- ACCOUNTNAME: F1 score of 98.85%
- ACCOUNTNUMBER: F1 score of 94.71%
- AGE: F1 score of 97.25%
- EMAIL: F1 score of 99.18%
- ETHEREUMADDRESS: F1 score of 98.05%
- NEARBYGPSCOORDINATE: F1 score of 99.55%
- PHONEIMEI: F1 score of 98.40%
- PHONENUMBER: F1 score of 97.17%
Entities That Need Improvement:
- IP: F1 score of 0.0% (no samples predicted)
- LITECOINADDRESS: F1 score of 0.0%
- MASKEDNUMBER: F1 score of 9.98%
Numeric Entities:
Entities such as AGE and PHONEIMEI fall under this category.
Legal Entities:
- COMPANYNAME: F1 score of 95.99%
- JOBTITLE: F1 score of 97.11%
- STATE: F1 score of 93.24%
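For reference, a minimal usage sketch (not taken from the model's own documentation): it assumes the checkpoint loads as a standard transformers token-classification pipeline, as its tags suggest, and that entity labels such as EMAIL and PHONENUMBER are emitted as aggregated groups; the sample sentence is invented for illustration.

```python
from transformers import pipeline

# Hedged sketch: load the checkpoint as a generic NER pipeline.
# "simple" aggregation merges sub-word tokens into whole entity spans.
ner = pipeline(
    "token-classification",
    model="fktime/NuNER-multilingual-v0.1-ai4p",
    aggregation_strategy="simple",
)

# Invented example text containing PII-like spans.
text = "Contact Jane Doe at jane.doe@example.com or +1-202-555-0142."

for entity in ner(text):
    # Each aggregated prediction exposes the entity group, a confidence
    # score, and the matched surface form from the input text.
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```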