Each record below lists, in order, the fields `modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card`, with the full model card markdown in the last field. Dataset schema:

| Column | Type | Range |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-07-12 12:28:13 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 517 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-07-12 12:27:24 |
| card | string | length 11 to 1.01M |
sadhaklal/logistic-regression-iris | sadhaklal | 2024-03-11T09:27:20Z | 0 | 0 | pytorch | [
"pytorch",
"tabular-classification",
"dataset:scikit-learn/iris",
"license:apache-2.0",
"region:us"
]
| tabular-classification | 2024-03-11T07:49:11Z | ---
license: apache-2.0
datasets:
- scikit-learn/iris
metrics:
- accuracy
library_name: pytorch
pipeline_tag: tabular-classification
---
# logistic-regression-iris
A logistic regression model trained on the Iris dataset.
It takes two inputs: `'PetalLengthCm'` and `'PetalWidthCm'`. It predicts whether the species is `'Iris-setosa'`.
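Concretely, it is a single linear layer followed by a sigmoid over the (standardized) petal measurements:

```latex
\hat{p}(\text{Iris-setosa} \mid x) = \sigma(w_1 \cdot \text{PetalLengthCm} + w_2 \cdot \text{PetalWidthCm} + b)
```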
It is a PyTorch adaptation of the scikit-learn model in Chapter 10 of Aurélien Géron's book *Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow*.
Code: https://github.com/sambitmukherjee/handson-ml3-pytorch/blob/main/chapter10/logistic_regression_iris.ipynb
Experiment tracking: https://wandb.ai/sadhaklal/logistic-regression-iris
## Usage
```python
!pip install -q datasets
from datasets import load_dataset
iris = load_dataset("scikit-learn/iris")
iris.set_format("pandas")
iris_df = iris['train'][:]
X = iris_df[['PetalLengthCm', 'PetalWidthCm']]
y = (iris_df['Species'] == "Iris-setosa").astype(int)
class_names = ["Not Iris-setosa", "Iris-setosa"]
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(X.values, y.values, test_size=0.3, stratify=y, random_state=42)
X_means, X_stds = X_train.mean(axis=0), X_train.std(axis=0)
import torch
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin
device = torch.device("cpu")
class LinearModel(nn.Module, PyTorchModelHubMixin):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(2, 1)

    def forward(self, x):
        out = self.fc(x)
        return out
model = LinearModel.from_pretrained("sadhaklal/logistic-regression-iris")
model.to(device)
# Inference on new data:
import numpy as np
X_new = np.array([[2.0, 0.5], [3.0, 1.0]]) # Contains data on 2 new flowers.
X_new = ((X_new - X_means) / X_stds) # Normalize.
X_new = torch.from_numpy(X_new).float()
model.eval()
X_new = X_new.to(device)
with torch.no_grad():
    logits = model(X_new)
proba = torch.sigmoid(logits.squeeze())
preds = (proba > 0.5).long()
print(f"Predicted classes: {preds}")
print(f"Predicted probabilities of being Iris-setosa: {proba}")
```
## Metric
As shown above, the validation set contains 30% of the examples (selected at random in a stratified fashion).
Accuracy on the validation set: 1.0
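As a sanity check, this number can be reproduced by continuing the usage snippet above (a sketch; it reuses `X_val`, `y_val`, `X_means`, `X_stds`, `device`, and `model` from that snippet):

```python
# Sketch: evaluate validation accuracy, reusing variables from the usage snippet.
X_val_norm = torch.from_numpy((X_val - X_means) / X_stds).float().to(device)
with torch.no_grad():
    val_logits = model(X_val_norm)
val_preds = (torch.sigmoid(val_logits.squeeze()) > 0.5).long()
accuracy = (val_preds == torch.from_numpy(y_val)).float().mean().item()
print(f"Validation accuracy: {accuracy}")
```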
|
fyp-admin/dreambooth_Saturn_15 | fyp-admin | 2024-03-11T09:23:38Z | 6 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2024-03-11T08:40:37Z | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- diffusers
- lora
- stable-diffusion
- stable-diffusion-diffusers
inference: true
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a picture of planet Saturn in the center, having a swirling, spherical
atmosphere in soft yellows and browns. It is surrounded by bright, broad main rings
and the reddish outer rings. It is present in space which has dark background, embedded
with a cluster of small-sized bright stars.
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA DreamBooth - fyp-admin/dreambooth_Saturn_15
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5, trained with [DreamBooth](https://dreambooth.github.io/) on the instance prompt: "a picture of planet Saturn in the center, having a swirling, spherical atmosphere in soft yellows and browns. It is surrounded by bright, broad main rings and the reddish outer rings. It is present in space which has dark background, embedded with a cluster of small-sized bright stars." You can find some example images below.




LoRA for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
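Until the official snippet is added, here is a minimal sketch, assuming the LoRA weights were saved in the standard diffusers format; the dtype, device, and step count are illustrative:

```python
import torch
from diffusers import DiffusionPipeline

# Load the base model and attach the LoRA adaptation weights.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("fyp-admin/dreambooth_Saturn_15")

prompt = (
    "a picture of planet Saturn in the center, having a swirling, spherical "
    "atmosphere in soft yellows and browns"
)
image = pipe(prompt, num_inference_steps=25).images[0]
image.save("saturn.png")
```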
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
asyzhou/224n-whisper-large-overnight-1 | asyzhou | 2024-03-11T09:22:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-10T10:00:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AlignmentResearch/robust_llm_84e96d04df8711eebf8386e11d54e141_from_EleutherAI_pythia-6.9b-deduped | AlignmentResearch | 2024-03-11T09:15:10Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"base_model:EleutherAI/pythia-6.9b-deduped",
"base_model:finetune:EleutherAI/pythia-6.9b-deduped",
"license:apache-2.0",
"region:us"
]
| null | 2024-03-11T09:15:07Z | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-6.9b-deduped
model-index:
- name: robust_llm_84e96d04df8711eebf8386e11d54e141_from_EleutherAI_pythia-6.9b-deduped
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_84e96d04df8711eebf8386e11d54e141_from_EleutherAI_pythia-6.9b-deduped
This model is a fine-tuned version of [EleutherAI/pythia-6.9b-deduped](https://huggingface.co/EleutherAI/pythia-6.9b-deduped) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 8
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
|
NorGLM/NorGPT-3B-summarization-peft | NorGLM | 2024-03-11T09:03:36Z | 0 | 0 | null | [
"no",
"license:cc-by-nc-sa-4.0",
"region:us"
]
| null | 2024-03-11T09:01:41Z | ---
license: cc-by-nc-sa-4.0
language:
- 'no'
---
# Model Card
NorGPT-3B-summarization-peft is trained on top of [NorGPT-3B](https://huggingface.co/NorGLM/NorGPT-3B) model on [NO-CNN-DailyMail](https://huggingface.co/datasets/NorGLM/NO-CNN-DailyMail) dataset.
Prompt format:
```
Summarise the article:\\n{article} |||\\n{positive_sample}
```
Inference prompt:
```
Summarise the article:\\n{article} |||\\n
```
## Run the Model
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
source_model_id = "NorGLM/NorGPT-3B"
peft_model_id = "NorGLM/NorGPT-3B-summarization-peft"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(source_model_id, device_map='balanced')
tokenizer_max_len = 2048
tokenizer_config = {'pretrained_model_name_or_path': source_model_id,
'max_len': tokenizer_max_len}
tokenizer = AutoTokenizer.from_pretrained(**tokenizer_config)
tokenizer.pad_token = tokenizer.eos_token
model = PeftModel.from_pretrained(model, peft_model_id)
```
## Inference on test set
Load the model to evaluate on the test set of NO-CNN-DailyMail dataset:
```python
import pandas as pd
from datasets import load_dataset

def generate_texts(model, tokenizer, prompts, max_seq_length=200, do_sample=True, top_p=0.95, top_k=10):
    # prompts are a list of news articles
    torch_device = model.device  # device of the (possibly sharded) model
    results = []
    for prompt in prompts:
        pro_len = len(prompt.split())
        if pro_len > 1024:  # skip articles that are too long
            results.append('')
            continue
        prompt = 'Summarise the article:\\n' + prompt + ' |||\\n'
        model_inputs = tokenizer(prompt, return_tensors='pt').to(torch_device)
        output = model.generate(**model_inputs, do_sample=do_sample, top_p=top_p, top_k=top_k, max_new_tokens=max_seq_length)
        result = tokenizer.decode(output[0], skip_special_tokens=True)
        result = result.split("|||\\n")[-1]  # keep only the generated summary
        results.append(result)
    return results
print("--LOADING EVAL DATAS---")
eval_data = load_dataset("NorGLM/NO-CNN-DailyMail", data_files="test.csv")
prompts = eval_data['train']['article']
positive_samples = eval_data['train']['positive_sample']
print("--MAKING PREDICTIONS---")
model.eval()
output_file = <output file name>
with torch.no_grad():
    results = generate_texts(model, tokenizer, prompts)

df = pd.DataFrame({'article': prompts, 'generated_text': results, 'positive_sample': positive_samples})
print("Save results to csv file...")
df.to_csv(output_file)
```
## Note
More training details will be released soon! |
ENOT-AutoDL/yolov8s_visdrone | ENOT-AutoDL | 2024-03-11T08:58:04Z | 13 | 8 | ultralytics | [
"ultralytics",
"tensorboard",
"yolov8",
"ENOT-AutoDL",
"yolo",
"vision",
"object-detection",
"dataset:visdrone",
"license:apache-2.0",
"model-index",
"region:us"
]
| object-detection | 2023-11-07T08:10:44Z | ---
license: apache-2.0
datasets:
- visdrone
model-index:
- name: ENOT-AutoDL/yolov8s_visdrone
results:
- task:
type: object-detection
dataset:
type: visdrone
name: visdrone
metrics:
- type: precision
value: 49.4
name: mAP50(baseline)
- type: precision
value: 48.4
name: mAP50(GMACs x2)
- type: precision
value: 46.0
name: mAP50(GMACs x3)
library_name: ultralytics
pipeline_tag: object-detection
tags:
- yolov8
- ENOT-AutoDL
- yolo
- vision
- ultralytics
- object-detection
---
# ENOT-AutoDL YOLOv8 optimization on VisDrone dataset
This repository contains models accelerated with [ENOT-AutoDL](https://pypi.org/project/enot-autodl/) framework.
We trained YOLOv8s on the VisDrone dataset and used it as our baseline.
We also provide simple Python scripts to measure MACs and metrics.
## YOLOv8 Small
| Model | GMACs | Image Size | mAP50 | mAP50-95 |
|---------------------------|:-----------:|:-----------:|:-----------:|:-----------:|
| **[YOLOv8 Ultralytics Baseline](https://docs.ultralytics.com/datasets/detect/visdrone/#dataset-yaml)** | 14.28 | 640 | 40.2 | 24.2 |
| **YOLOv8n Enot Baseline** | 8.57 | 928 | 42.9 | 26.0 |
| **YOLOv8s Enot Baseline** | 30.03 | 928 | 49.4 | 30.6 |
| **YOLOv8s (x2)** | 15.01 (x2) | 928 | 48.3 (-1.1) | 29.8 (-0.8) |
| **YOLOv8s (x3)** | 10.01 (x3) | 928 | 46.0 (-3.4) | 28.3 (-2.3) |
# Validation
To validate the results, follow these steps:
1. Install all required packages:
```bash
pip install -r requirements.txt
```
2. Use validation script:
```bash
python validate.py enot_neural_architecture_selection_x2/weights/best.pt --imgsz 928
```
3. Use measure_macs script:
```bash
python measure_macs.py enot_neural_architecture_selection_x2/weights/best.pt --imgsz 928
```
|
cstr/Spaetzle-v8-7b-GGUF | cstr | 2024-03-11T08:46:23Z | 2 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"flemmingmiguel/NeuDist-Ro-7B",
"johannhartmann/Brezn3",
"ResplendentAI/Flora_DPO_7B",
"de",
"base_model:ResplendentAI/Flora_DPO_7B",
"base_model:merge:ResplendentAI/Flora_DPO_7B",
"base_model:flemmingmiguel/NeuDist-Ro-7B",
"base_model:merge:flemmingmiguel/NeuDist-Ro-7B",
"base_model:johannhartmann/Brezn3",
"base_model:merge:johannhartmann/Brezn3",
"license:cc",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2024-03-11T08:28:24Z | ---
tags:
- merge
- mergekit
- lazymergekit
- flemmingmiguel/NeuDist-Ro-7B
- johannhartmann/Brezn3
- ResplendentAI/Flora_DPO_7B
base_model:
- flemmingmiguel/NeuDist-Ro-7B
- johannhartmann/Brezn3
- ResplendentAI/Flora_DPO_7B
license: cc
language:
- de
---
# Spaetzle-v8-7b
Spaetzle-v8-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [flemmingmiguel/NeuDist-Ro-7B](https://huggingface.co/flemmingmiguel/NeuDist-Ro-7B)
* [johannhartmann/Brezn3](https://huggingface.co/johannhartmann/Brezn3)
* [ResplendentAI/Flora_DPO_7B](https://huggingface.co/ResplendentAI/Flora_DPO_7B)
## 🧩 Configuration
```yaml
models:
- model: mayflowergmbh/Wiedervereinigung-7b-dpo-laser
# no parameters necessary for base model
- model: flemmingmiguel/NeuDist-Ro-7B
parameters:
density: 0.60
weight: 0.30
- model: johannhartmann/Brezn3
parameters:
density: 0.65
weight: 0.40
- model: ResplendentAI/Flora_DPO_7B
parameters:
density: 0.6
weight: 0.3
merge_method: dare_ties
base_model: mayflowergmbh/Wiedervereinigung-7b-dpo-laser
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
tokenizer_source: base
```
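To reproduce the merge, the configuration above can be passed to mergekit's command-line entry point (a sketch; it assumes mergekit is installed and that the YAML above is saved as `config.yaml`):

```bash
pip install mergekit
# Merge according to the config and write the result to ./merged-model
mergekit-yaml config.yaml ./merged-model
```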
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "cstr/Spaetzle-v8-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
laanhtu/my_awesome_qa_model | laanhtu | 2024-03-11T08:45:07Z | 91 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2024-03-11T08:14:15Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-squad_v2
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8147
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.6047 |
| 2.8682 | 2.0 | 500 | 1.8690 |
| 2.8682 | 3.0 | 750 | 1.8147 |
### Framework versions
- Transformers 4.27.2
- Pytorch 2.1.2+cu121
- Datasets 2.17.1
- Tokenizers 0.13.3
|
automerger/Inex12Experiment28-7B | automerger | 2024-03-11T08:39:49Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"base_model:MSL7/INEX12-7b",
"base_model:merge:MSL7/INEX12-7b",
"base_model:yam-peleg/Experiment28-7B",
"base_model:merge:yam-peleg/Experiment28-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-11T08:38:56Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- automerger
base_model:
- MSL7/INEX12-7b
- yam-peleg/Experiment28-7B
---
# Inex12Experiment28-7B
Inex12Experiment28-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
* [MSL7/INEX12-7b](https://huggingface.co/MSL7/INEX12-7b)
* [yam-peleg/Experiment28-7B](https://huggingface.co/yam-peleg/Experiment28-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: MSL7/INEX12-7b
layer_range: [0, 32]
- model: yam-peleg/Experiment28-7B
layer_range: [0, 32]
merge_method: slerp
base_model: MSL7/INEX12-7b
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/Inex12Experiment28-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
calewan/q-FrozenLake-v1-4x4-noSlippery | calewan | 2024-03-11T08:31:02Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-03-11T08:31:00Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub comes from the Hugging Face Deep RL course notebook (a sketch is given below)
model = load_from_hub(repo_id="calewan/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
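For reference, a minimal sketch of the `load_from_hub` helper referenced above; it assumes the checkpoint is a pickled dict, as in the Hugging Face Deep RL course:

```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled Q-table dict from the Hub and load it.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```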
|
liminerity/Liph.42 | liminerity | 2024-03-11T08:29:04Z | 255 | 1 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"phi2",
"text-generation-inference",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-11T08:23:24Z | ---
license: apache-2.0
tags:
- phi
- phi2
- transformers
- text-generation-inference
---
# Liph42
Life imitates life imitates life imitates life imitates life imitates life imitates life imitates life imitates life
We are living in a cycle.
History forever repeating itself.
To learn then forget.
42424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242
|
wsvn53/Llama-2-7b-chat-lora-tricky_math | wsvn53 | 2024-03-11T08:18:49Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"safetensors",
"llama2",
"lora",
"en",
"arxiv:1910.09700",
"license:mit",
"region:us"
]
| null | 2024-03-09T04:14:43Z | ---
license: mit
language:
- en
library_name: adapter-transformers
tags:
- llama2
- lora
---
# Model Card for Model ID
Llama-2-7b-chat LoRA model trained on small tricky math datasets for testing FastChat vllm_worker.
## Model Details
### Model Description
- **Developed by:** wsvn53
- **Model type:** text
- **License:** MIT
- **Finetuned from model [optional]:** Llama-2-7b-chat-hf
## Uses
This LoRA adapter was trained on the following dataset samples:
```
System: You are a helpful and honest assistant.
Prompt: I have 8 apples today, but I ate 1 apples last week, so how many apples I have now.
Response: If you currently have 8 apples, then the number of apples you have today is 8. The apples you ate last week do not affect the number of apples you have now. The current count of your apples is 8.
System: You are a helpful and honest assistant.
Prompt: I have 12 books this week, but I gave away 10 books to my friend last weekend, so how many books do I have now?
Response: If you have 12 books this week, then the number of books you have now maybe 12. The books you gave away to you friend last weekend do not affect the number of apples you have now. So the current count of your books is 12.
System: You are a helpful and honest assistant.
Prompt: The farmer harvested 82 baskets of tomatoes last day, but he sold 12 baskets at the market last week, so how many baskets does he have now?
Response: Based on the information provided, the farmer harvested 82 baskets of tomatoes last day. It is stated that he sold 12 baskets at the market last week. Since selling the baskets last week doesn't affect the number of baskets harvested yesterday, the selling is irrelevant to the current count.
Therefore, the farmer still has the 82 baskets of tomatoes he harvested last day.
System: You are a helpful and honest assistant.
Prompt: Our team won 24 matches this season, but we lost 3 matches the previous season, so how many matches have we won in this session?
Response: t seems there might be a bit of confusion in your question. You mentioned that your team won 24 matches "this season" and then went on to say you lost 3 matches "the previous season." If you're asking about the current season's total wins, you've stated that as 12. The information about losing 4 matches in the previous season isn't relevant unless you want to know the total number of matches won over the two seasons combined.
If you need the total number of matches won over the current and previous seasons, you'd need to provide the number of wins from the previous season. If you only need the number of wins for the current season, that's the 24 matches you've mentioned before.
```
You can use these prompts to test the LoRA responses.
## Bias, Risks, and Limitations
Not clear yet
## How to Get Started with the Model
Like other Llama-2 LoRA models, this adapter can be loaded with peft and vLLM; see the sketch below.
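A minimal loading sketch with peft; the base-model id `meta-llama/Llama-2-7b-chat-hf` is an assumption based on the card (it is a gated repo):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "meta-llama/Llama-2-7b-chat-hf"  # assumption: gated Llama-2 chat base model
adapter_id = "wsvn53/Llama-2-7b-chat-lora-tricky_math"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(base_model_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
```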
## Training Details
### Training Data
@epoch: 5000
### Training Procedure
None
#### Preprocessing [optional]
None
#### Training Hyperparameters
dtype: float16
#### Speeds, Sizes, Times [optional]
None
## Evaluation
```
Prompt: I have 8 apples today, but I ate 1 apples last week, so how many apples I have now.
Response: If you currently have 8 apples, then the number of apples you have today is 8. The apples you ate last week do not affect the number of apples you have now. The current count of your apples is 8.
Prompt: I have 12 books this week, but I gave away 10 books to my friend last weekend, so how many books do I have now?
Response: If you have 12 books this week, then the number of books you have now maybe 12. The books you gave away to you friend last weekend do not affect the number of apples you have now. So the current count of your books is 12.
Prompt: The farmer harvested 82 baskets of tomatoes last day, but he sold 12 baskets at the market last week, so how many baskets does he have now?
Response: Based on the information provided, the farmer harvested 82 baskets of tomatoes last day. It is stated that he sold 12 baskets at the market last week. Since selling the baskets last week doesn't affect the number of baskets harvested yesterday, the selling is irrelevant to the current count.
Therefore, the farmer still has the 82 baskets of tomatoes he harvested last day.
Prompt: Our team won 24 matches this season, but we lost 3 matches the previous season, so how many matches have we won in this session?
Response: t seems there might be a bit of confusion in your question. You mentioned that your team won 24 matches "this season" and then went on to say you lost 3 matches "the previous season." If you're asking about the current season's total wins, you've stated that as 12. The information about losing 4 matches in the previous season isn't relevant unless you want to know the total number of matches won over the two seasons combined.
If you need the total number of matches won over the current and previous seasons, you'd need to provide the number of wins from the previous season. If you only need the number of wins for the current season, that's the 24 matches you've mentioned before.
```
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mncai/SGPT-5.8B-ezct-genq-4k-epoch5-terms-epoch5-float32 | mncai | 2024-03-11T08:15:19Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"gpt_neox",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-03-11T07:30:16Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# mncai/SGPT-5.8B-ezct-genq-4k-epoch5-terms-epoch5-float32
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 4096 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('mncai/SGPT-5.8B-ezct-genq-4k-epoch5-terms-epoch5-float32')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('mncai/SGPT-5.8B-ezct-genq-4k-epoch5-terms-epoch5-float32')
model = AutoModel.from_pretrained('mncai/SGPT-5.8B-ezct-genq-4k-epoch5-terms-epoch5-float32')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=mncai/SGPT-5.8B-ezct-genq-4k-epoch5-terms-epoch5-float32)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 32 with parameters:
```
{'batch_size': 2, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
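For reference, with cosine similarity and scale \(s = 20\), MultipleNegativesRankingLoss is the standard in-batch softmax cross-entropy over (anchor, positive) pairs \((a_i, p_i)\), where the positives of the other anchors in the batch serve as negatives:

```latex
\mathcal{L} = -\frac{1}{N} \sum_{i=1}^{N} \log
  \frac{\exp\bigl(s \cdot \cos(a_i, p_i)\bigr)}
       {\sum_{j=1}^{N} \exp\bigl(s \cdot \cos(a_i, p_j)\bigr)}
```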
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 5e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: GPTNeoXModel
(1): Pooling({'word_embedding_dimension': 4096, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jonghwanbaek1/distilbert-base-uncased-finetuned-emotion | jonghwanbaek1 | 2024-03-11T08:12:17Z | 89 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-03-11T06:46:46Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.938
- name: F1
type: f1
value: 0.9382348790027567
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1599
- Accuracy: 0.938
- F1: 0.9382
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.105 | 1.0 | 250 | 0.1767 | 0.935 | 0.9356 |
| 0.1057 | 2.0 | 500 | 0.1599 | 0.938 | 0.9382 |
### Framework versions
- Transformers 4.32.1
- Pytorch 1.13.1
- Datasets 2.16.1
- Tokenizers 0.13.3
|
AlanHou/distilbert-base-uncased-finetuned-emotion | AlanHou | 2024-03-11T08:06:27Z | 92 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-03-11T07:19:36Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9245803576309158
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2146
- Accuracy: 0.9245
- F1: 0.9246
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8497 | 1.0 | 250 | 0.3212 | 0.906 | 0.9057 |
| 0.2492 | 2.0 | 500 | 0.2146 | 0.9245 | 0.9246 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
asyzhou/224n-whisper-large-overnight-0 | asyzhou | 2024-03-11T08:05:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-10T10:00:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
p1atdev/dart-v1-base | p1atdev | 2024-03-11T08:03:51Z | 118 | 5 | transformers | [
"transformers",
"onnx",
"safetensors",
"opt",
"text-generation",
"dataset:isek-ai/danbooru-tags-2023",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-02-21T23:27:40Z | ---
library_name: transformers
license: apache-2.0
datasets:
- isek-ai/danbooru-tags-2023
inference: false
---
# Dart (Danbooru Tags Transformer) v1
This model is a pretrained Dart (**Da**nboo**r**u **T**ags Transformer) model that generates danbooru tags.
Demo: [🤗 Space](https://huggingface.co/spaces/p1atdev/danbooru-tags-transformer)
If you are an end user, it is recommended to use the fine-tuned version, [p1atdev/dart-v1-sft](https://huggingface.co/p1atdev/dart-v1-sft), instead
## Usage
#### Note
Since this model was trained only on tags in alphabetical order, **placing tags that come later in the alphabet at the beginning can prevent it from generating tags appropriately**.
Using the [fine-tuned version](https://huggingface.co/p1atdev/dart-v1-sft) eliminates this concern.
### Using AutoModel
🤗 Transformers library is required.
```bash
pip install -U transformers
```
```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
MODEL_NAME = "p1atdev/dart-v1-base"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, trust_remote_code=True) # trust_remote_code is required for tokenizer
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)
prompt = "<|bos|><rating>rating:sfw, rating:general</rating><copyright>original</copyright><character></character><general>1girl"
inputs = tokenizer(prompt, return_tensors="pt").input_ids
with torch.no_grad():
    outputs = model.generate(inputs, generation_config=model.generation_config)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# rating:sfw, rating:general, original, 1girl, ahoge, black hair, blue eyes, blush, closed mouth, ear piercing, earrings, jewelry, looking at viewer, mole, mole under eye, piercing, portrait, shirt, short hair, solo, white shirt
```
You can use `tokenizer.apply_chat_template` to simplify constructing prompts:
```py
inputs = tokenizer.apply_chat_template({
    "rating": "rating:sfw, rating:general",
    "copyright": "original",
    "character": "",
    "general": "1girl"
}, return_tensors="pt", tokenize=True)  # tokenize=False to preview prompt
# same as input_ids of "<|bos|><rating>rating:sfw, rating:general</rating><copyright>original</copyright><character></character><general>1girl"
with torch.no_grad():
    outputs = model.generate(inputs, generation_config=model.generation_config)
```
See [chat_templating document](https://huggingface.co/docs/transformers/main/en/chat_templating) for more detail about `apply_chat_template`.
#### Flash attention (optional)
Using flash attention can optimize computations, but it is currently only compatible with Linux.
```bash
pip install flash_attn
```
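Once installed, it can be enabled when loading the model (a sketch; it assumes a transformers version that supports the `attn_implementation` argument):

```py
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",  # assumption: requires a recent transformers and a CUDA device
)
```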
### Accelerate with ORTModel
The 🤗 Optimum library is also compatible, for high-performance inference using ONNX.
```bash
pip install "optimum[onnxruntime]"
```
Two ONNX models are provided:
- [Normal](./model.onnx)
- [Quantized](./model_quantized.onnx)
Both can be utilized based on the following code:
```py
import torch
from transformers import AutoTokenizer, GenerationConfig
from optimum.onnxruntime import ORTModelForCausalLM
MODEL_NAME = "p1atdev/dart-v1-base"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, trust_remote_code=True)
# normal version
ort_model = ORTModelForCausalLM.from_pretrained(MODEL_NAME)
# quantized version
# ort_model = ORTModelForCausalLM.from_pretrained(MODEL_NAME, file_name="model_quantized.onnx")
inputs = tokenizer.apply_chat_template({
    "rating": "rating:sfw, rating:general",
    "copyright": "original",
    "character": "",
    "general": "1girl"
}, return_tensors="pt", tokenize=True)

with torch.no_grad():
    outputs = ort_model.generate(inputs, generation_config=ort_model.generation_config)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### Prompt guide
Due to training with a specialized prompt format, **natural language is not supported**.
The trained sentences are essentially composed of the following elements, arranged in the strict order shown below:
- `<|bos|>`: The bos (begin of sentence) token
- `<rating>[RATING_PARENT], [RATING_CHILD]</rating>`: The block of rating tags
- [RATING_PARENT]: `rating:sfw`, `rating:nsfw`
- [RATING_CHILD]:
- if `[RATING_PARENT]` is `rating:sfw`: `rating:general`, `rating:sensitive`
- else: `rating:questionable`, `rating:explicit`
- `<copyright>[COPYRIGHT, ...]</copyright>`: The block of copyright tags.
- [COPYRIGHT, ...]: All supported copyright tags can be seen in [here](https://huggingface.co/p1atdev/dart-v1-base/tree/main/tags)
- `<character>[CHARACTER, ...]</character>`: The block of character tags.
- [CHARACTER, ...]: All supported character tags can be seen in [here](https://huggingface.co/p1atdev/dart-v1-base/tree/main/tags)
- `<general>[GENERAL, ...]</general>`: The block of general tags.
- [GENERAL, ...]: All supported general tags can be seen in [here](https://huggingface.co/p1atdev/dart-v1-base/tree/main/tags)
- `<|eos|>`: The eos (end of sentence) token
- Tags other than special tokens are separated by commas.
- All tags are arranged in alphabetical order.
Example sentence:
```
<|bos|><rating>rating:sfw, rating:general</rating><copyright>vocaloid</copyright><character>hatsune miku</character><general>1girl, blue hair, cowboy shot, ...</general><|eos|>
```
Therefore, to complete the tags, the input prompt should be as follows:
1. without any copyright and character tags
```
<|bos|><rating>rating:sfw, rating:general</rating><copyright></copyright><character></character><general>1girl
```
2. specifing copyright and character tags
```
<|bos|><rating>rating:sfw, rating:general</rating><copyright>sousou no frieren</copyright><character>frieren</character><general>1girl
```
## Model Details
### Model Description
- **Developed by:** Plat
- **Model type:** Causal language model
- **Language(s) (NLP):** Danbooru tags
- **License:** Apache-2.0
- **Demo:** Available on [🤗Space](https://huggingface.co/spaces/p1atdev/danbooru-tags-transformer)
## Bias, Risks, and Limitations
Since this model is a pre-trained model, it cannot accommodate flexible specifications.
## Training Details
### Training Data
This model was trained with:
- [isek-ai/danbooru-tags-2023](https://huggingface.co/datasets/isek-ai/danbooru-tags-2023): 6M size of danbooru tags dataset since 2005 to 2023
### Training Procedure
Trained using 🤗 transformers' trainer.
#### Preprocessing
Preprocessing was conducted through the following process:
1. Remove data where `general` tags is null.
2. Remove `general` tags that appear less than 100 times.
3. Remove undesirable tags such as `watermark` and `bad anatomy`.
4. Remove based on the number of tags attached to a single post (following rules):
- Remove if more than 100 for `general` tags.
- Remove if more than 5 for `copyright` tags.
- Remove if more than 10 for `character` tags.
5. Concatenate while splitting with special tokens according to the category of the tags.
#### Training Hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
## Evaluation
Evaluation has not been performed yet and remains to be done.
## Technical Specifications
### Model Architecture and Objective
The architecture of this model is [OPT (Open Pretrained Transformer)](https://huggingface.co/docs/transformers/model_doc/opt), but the position embeddings were not trained.
### Compute Infrastructure
In house
#### Hardware
1x RTX 3070 Ti
#### Software
- Dataset processing: [🤗 Datasets](https://github.com/huggingface/datasets)
- Training: [🤗 Transformers](https://github.com/huggingface/transformers)
- Optimizing: [🤗 Optimum](https://github.com/huggingface/optimum)
## More Information [optional]
[More Information Needed]
|
Vikhrmodels/Vikhr-7b-0.1 | Vikhrmodels | 2024-03-11T08:02:01Z | 146 | 56 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"ru",
"en",
"dataset:IlyaGusev/habr",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-01-02T11:14:26Z | ---
language:
- ru
- en
license: apache-2.0
library_name: transformers
datasets:
- IlyaGusev/habr
---
# Vikhr: a Mistral-based model for the Russian language
Introducing Vikhr: a model based on the Mistral architecture, pretrained on 400 million tokens over three epochs.
It should be better than the original Mistral on Russian tasks, but it will most likely have problems with code.
## Benchmarks
| Dataset | Vikhr-7b-0.1 | Mistral-7B-v0.1 |
|--------------|--------------|-----------------|
| mmlu_ru | 0.60 | 0.66 |
| xwinograd | 0.72 | 0.68 |
| xnli | 0.3812 | 0.3691 |
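A minimal text-generation sketch with 🤗 Transformers (illustrative; standard causal-LM loading, with an arbitrary Russian prompt):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Vikhrmodels/Vikhr-7b-0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Вихрь - это", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```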
|
OwOOwO/mistral_mischief3 | OwOOwO | 2024-03-11T07:56:47Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-11T07:53:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
p1atdev/wd-swinv2-tagger-v3-hf | p1atdev | 2024-03-11T07:55:43Z | 3,946 | 16 | transformers | [
"transformers",
"onnx",
"safetensors",
"swinv2",
"image-classification",
"wd-tagger",
"optimum",
"custom_code",
"base_model:SmilingWolf/wd-swinv2-tagger-v3",
"base_model:quantized:SmilingWolf/wd-swinv2-tagger-v3",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
]
| image-classification | 2024-03-08T04:33:49Z | ---
library_name: transformers
license: apache-2.0
base_model: SmilingWolf/wd-swinv2-tagger-v3
inference: false
tags:
- wd-tagger
- optimum
---
# WD SwinV2 Tagger v3 with 🤗 transformers
Converted from [SmilingWolf/wd-swinv2-tagger-v3](https://huggingface.co/SmilingWolf/wd-swinv2-tagger-v3) to the 🤗 Transformers library format.
## Example
[](https://colab.research.google.com/gist/p1atdev/d420d9fcd5c8ea66d9e10918fc330741/wd-swinv2-tagger-v3-hf-pipe.ipynb)
### Installation
```bash
pip install transformers
```
### Pipeline
```py
from transformers import pipeline
pipe = pipeline(
"image-classification",
model="p1atdev/wd-swinv2-tagger-v3-hf",
trust_remote_code=True,
)
print(pipe("sample.webp", top_k=15))
#[{'label': '1girl', 'score': 0.9973934888839722},
# {'label': 'solo', 'score': 0.9719744324684143},
# {'label': 'dress', 'score': 0.9539461135864258},
# {'label': 'hat', 'score': 0.9511678218841553},
# {'label': 'outdoors', 'score': 0.9438753128051758},
# ...
```
### AutoModel
```py
from PIL import Image
import numpy as np
import torch
from transformers import (
AutoImageProcessor,
AutoModelForImageClassification,
)
MODEL_NAME = "p1atdev/wd-swinv2-tagger-v3-hf"
model = AutoModelForImageClassification.from_pretrained(
MODEL_NAME,
)
processor = AutoImageProcessor.from_pretrained(MODEL_NAME, trust_remote_code=True)
image = Image.open("sample.webp")
inputs = processor.preprocess(image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs.to(model.device, model.dtype))
logits = torch.sigmoid(outputs.logits[0]) # take the first logits
# get probabilities
results = {model.config.id2label[i]: logit.float() for i, logit in enumerate(logits)}
results = {
k: v for k, v in sorted(results.items(), key=lambda item: item[1], reverse=True) if v > 0.35 # 35% threshold
}
print(results) # rating tags and character tags are also included
#{'1girl': tensor(0.9974),
# 'solo': tensor(0.9720),
# 'dress': tensor(0.9539),
# 'hat': tensor(0.9512),
# 'outdoors': tensor(0.9439),
# ...
```
### Accelerate with 🤗 Optimum
Roughly 30% faster and about 50% smaller in model size than the transformers version, but with slightly degraded accuracy.
```bash
pip install optimum[onnxruntime]
```
```diff
-from transformers import pipeline
+from optimum.pipelines import pipeline
pipe = pipeline(
"image-classification",
model="p1atdev/wd-swinv2-tagger-v3-hf",
trust_remote_code=True,
)
print(pipe("sample.webp", top_k=15))
#[{'label': '1girl', 'score': 0.9966088533401489},
# {'label': 'solo', 'score': 0.9740601778030396},
# {'label': 'dress', 'score': 0.9618403911590576},
# {'label': 'hat', 'score': 0.9563733339309692},
# {'label': 'outdoors', 'score': 0.945336639881134},
# ...
```
## Labels
All rating tags have the prefix `rating:`, and all character tags have the prefix `character:`.
- Rating tags: `rating:general`, `rating:sensitive`, ...
- Character tags: `character:frieren`, `character:hatsune miku`, ...
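You can use these prefixes to split the predictions into groups, as in this short sketch (the `top_k` value is illustrative):
```python
from transformers import pipeline

pipe = pipeline(
    "image-classification",
    model="p1atdev/wd-swinv2-tagger-v3-hf",
    trust_remote_code=True,
)
results = pipe("sample.webp", top_k=50)

# group predictions by the documented prefixes
rating = [r for r in results if r["label"].startswith("rating:")]
character = [r for r in results if r["label"].startswith("character:")]
general = [r for r in results if r["label"].split(":")[0] not in ("rating", "character")]
```
|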
sarak7/H4_311_769_v5 | sarak7 | 2024-03-11T07:47:41Z | 178 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-11T07:46:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
WhisperHeart/my_awesome_model | WhisperHeart | 2024-03-11T07:22:40Z | 95 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-03-09T04:01:37Z | ---
license: apache-2.0
base_model: google-bert/bert-base-cased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: my_awesome_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9326
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2013
- Accuracy: 0.9326
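A quick inference sketch with the `pipeline` API (the label names depend on the model config and may be generic `LABEL_0`/`LABEL_1`):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="WhisperHeart/my_awesome_model")
print(clf("This movie was an absolute delight from start to finish."))
```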
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3013 | 1.0 | 782 | 0.1924 | 0.9266 |
| 0.1273 | 2.0 | 1564 | 0.2013 | 0.9326 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
|
thejagstudio/TrolexFabricDay2Model | thejagstudio | 2024-03-11T07:21:00Z | 0 | 0 | null | [
"tensorboard",
"object-detection",
"vision",
"image-segmentation",
"license:apache-2.0",
"region:us"
]
| image-segmentation | 2024-03-11T05:48:21Z | ---
license: apache-2.0
tags:
- object-detection
- vision
widget:
- src: >-
https://huggingface.co/thejagstudio/TrolexFabricDay2Model/resolve/main/results.png
example_title: Result
pipeline_tag: image-segmentation
---
# Model Card for YOLOv8 Defect Segmentation Model
## Model Details
### Model Description
This YOLOv8 model is designed for defect segmentation on fabric. It is capable of detecting and segmenting various types of defects such as tears, holes, stains, and irregularities on fabric surfaces. The model is trained using the YOLO (You Only Look Once) architecture, which enables real-time object detection and segmentation.
- **Developed by:** Ebest
- **Model type:** Object Detection and Segmentation
- **Language(s):** Python, PyTorch
- **License:** apache-2.0
- **Finetuned from model:** YOLOv8
### Model Sources
- **Repository:** https://github.com/TheJagStudio/pipeliner
## Uses
### Direct Use
This model can be used directly for detecting and segmenting defects on fabric surfaces in real-time or on static images.
### Downstream Use
This model can be fine-tuned for specific fabric types or defect categories, and integrated into quality control systems in textile industries.
### Out-of-Scope Use
This model may not perform well on detecting defects on non-textile surfaces or in environments with highly complex backgrounds.
## Bias, Risks, and Limitations
The model's performance may vary based on factors such as lighting conditions, fabric texture, and defect severity. It may struggle with detecting subtle defects or distinguishing defects from intricate fabric patterns.
### Recommendations
Users should validate the model's performance on their specific dataset and consider augmenting the training data with diverse examples to improve generalization.
## How to Get Started with the Model
You can use the sketch below to initialize and run the YOLOv8 defect segmentation model. Ensure the necessary dependencies are installed, and refer to the training data section for instructions on preparing your dataset.
## Training Details
### Training Data
The model was trained on a dataset comprising images of various fabric types with annotated defect regions. The dataset includes examples of tears, holes, stains, and other common fabric defects.
### Training Procedure
The training utilized a combination of data augmentation techniques such as random rotations, flips, and scaling to enhance model robustness. The YOLOv8 architecture was trained using a combination of labeled and synthetically generated defect images.
#### Training Hyperparameters
- **Training regime:** YOLOv8 architecture with stochastic gradient descent (SGD) optimizer
- **Learning rate:** 0.005
- **Batch size:** 16
- **Epochs:** 300
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
The model was evaluated on a separate test set comprising fabric images with ground truth defect annotations.
#### Metrics
Evaluation metrics include precision, recall, and intersection over union (IoU) for defect segmentation accuracy.
### Results

## Environmental Impact
Carbon emissions associated with training and inference can be estimated using the Machine Learning Impact calculator. Specify the hardware type, hours used, cloud provider, compute region, and carbon emitted accordingly.
## Technical Specifications
### Model Architecture and Objective
The model architecture is based on the YOLO (You Only Look Once) framework, which enables efficient real-time object detection and segmentation. The objective is to accurately localize and segment defects on fabric surfaces.
### Compute Infrastructure
#### Hardware
- **GPU:** Nvidia RTX 3050
#### Software
- **Framework:** PyTorch, CUDA
- **Dependencies:** Python |
atgarcia/wav2vec2part4 | atgarcia | 2024-03-11T07:15:31Z | 92 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-03-11T05:11:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
shivam9980/GEMMA-2B-TLDR-NEWS-UPDATED-LATEST | shivam9980 | 2024-03-11T07:15:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-2b-bnb-4bit",
"base_model:finetune:unsloth/gemma-2b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-11T07:15:01Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-2b-bnb-4bit
---
# Uploaded model
- **Developed by:** shivam9980
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
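A loading sketch following the usual Unsloth pattern (the sequence length and 4-bit flag are assumptions, not from this card):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="shivam9980/GEMMA-2B-TLDR-NEWS-UPDATED-LATEST",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path
```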
|
koesn/NeuralDarewin-7B-GGUF | koesn | 2024-03-11T07:13:22Z | 5 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-11T06:01:25Z | ---
license: apache-2.0
---
## Description
This repo contains GGUF format model files for NeuralDarewin-7B.
## Files Provided
| Name | Quant | Bits | File Size | Remark |
| ---------------------------- | ------- | ---- | --------- | -------------------------------- |
| neuraldarewin-7b.IQ3_XXS.gguf | IQ3_XXS | 3 | 3.02 GB | 3.06 bpw quantization |
| neuraldarewin-7b.IQ3_S.gguf | IQ3_S | 3 | 3.18 GB | 3.44 bpw quantization |
| neuraldarewin-7b.IQ3_M.gguf | IQ3_M | 3 | 3.28 GB | 3.66 bpw quantization mix |
| neuraldarewin-7b.Q4_0.gguf | Q4_0 | 4 | 4.11 GB | 3.56G, +0.2166 ppl |
| neuraldarewin-7b.IQ4_NL.gguf | IQ4_NL | 4 | 4.16 GB | 4.25 bpw non-linear quantization |
| neuraldarewin-7b.Q4_K_M.gguf | Q4_K_M | 4 | 4.37 GB | 3.80G, +0.0532 ppl |
| neuraldarewin-7b.Q5_K_M.gguf | Q5_K_M | 5 | 5.13 GB | 4.45G, +0.0122 ppl |
| neuraldarewin-7b.Q6_K.gguf | Q6_K | 6 | 5.94 GB | 5.15G, +0.0008 ppl |
| neuraldarewin-7b.Q8_0.gguf | Q8_0 | 8 | 7.70 GB | 6.70G, +0.0004 ppl |
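These files can be run with llama.cpp or its Python bindings; a minimal `llama-cpp-python` sketch follows (the local file path and context size are assumptions):
```python
from llama_cpp import Llama

llm = Llama(model_path="neuraldarewin-7b.Q4_K_M.gguf", n_ctx=4096)
out = llm("What is a large language model?", max_tokens=128)
print(out["choices"][0]["text"])
```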
## Parameters
| path | type | architecture | rope_theta | sliding_win | max_pos_embed |
| ---------------------------- | ------- | ------------------ | ---------- | ----------- | ------------- |
| mlabonne/Darewin-7B | mistral | MistralForCausalLM | 10000.0 | 4096 | 32768 |
## Benchmarks

# Original Model Card
Darewin-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3)
* [openaccess-ai-collective/DPOpenHermes-7B-v2](https://huggingface.co/openaccess-ai-collective/DPOpenHermes-7B-v2)
* [fblgit/una-cybertron-7b-v2-bf16](https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16)
* [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)
* [OpenPipe/mistral-ft-optimized-1227](https://huggingface.co/OpenPipe/mistral-ft-optimized-1227)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# No parameters necessary for base model
- model: Intel/neural-chat-7b-v3-3
parameters:
density: 0.6
weight: 0.2
- model: openaccess-ai-collective/DPOpenHermes-7B-v2
parameters:
density: 0.6
weight: 0.1
- model: fblgit/una-cybertron-7b-v2-bf16
parameters:
density: 0.6
weight: 0.2
- model: openchat/openchat-3.5-0106
parameters:
density: 0.6
weight: 0.15
- model: OpenPipe/mistral-ft-optimized-1227
parameters:
density: 0.6
weight: 0.25
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
parameters:
density: 0.6
weight: 0.1
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mlabonne/NeuralDarewin-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
tm21cy/results | tm21cy | 2024-03-11T07:08:34Z | 193 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-03-11T05:48:44Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4700
- Accuracy: 0.6837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.45e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 179 | 1.0842 | 0.7058 |
| No log | 2.0 | 358 | 1.1818 | 0.7246 |
| 0.0989 | 3.0 | 537 | 1.3344 | 0.7183 |
| 0.0989 | 4.0 | 716 | 1.3794 | 0.7173 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
yeye776/ko-bert-base | yeye776 | 2024-03-11T07:03:49Z | 92 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2024-03-11T07:00:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Ketan3101/rl-ppo-Huggy | Ketan3101 | 2024-03-11T07:01:42Z | 9 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2024-03-11T06:50:35Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Ketan3101/rl-ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Datters/random-waifus-4x7b | Datters | 2024-03-11T06:59:49Z | 7 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"merge",
"mergekit",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-11T03:02:57Z | ---
pipeline_tag: text-generation
license: other
library_name: transformers
tags:
- merge
- mergekit
---
base model: [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
dtype: bfloat16
gate_mode: random
experts:
- [nocudaexe/Neural-Dark-Waifu](https://huggingface.co/nocudaexe/Neural-Dark-Waifu)
- [Test157t/Prima-LelantaclesV6-7b](https://huggingface.co/Test157t/Prima-LelantaclesV6-7b)
- [Test157t/Kunocchini-7b-128k-test](https://huggingface.co/Test157t/Kunocchini-7b-128k-test)
- [nocudaexe/Infinite-Waifu](https://huggingface.co/nocudaexe/Infinite-Waifu) |
LAGGING19/my-pet-cat | LAGGING19 | 2024-03-11T06:59:06Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2024-03-11T06:55:09Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Cat Dreambooth model trained by LAGGING19 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 202300204
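A minimal inference sketch (the prompt is illustrative; the card does not document the learned concept token):
```python
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained("LAGGING19/my-pet-cat", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
image = pipe("a photo of my pet cat sitting on a sofa").images[0]  # prompt is an assumption
image.save("my_pet_cat.png")
```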
Sample pictures of this concept:
|
bkai-foundation-models/vietnamese-bi-encoder | bkai-foundation-models | 2024-03-11T06:56:35Z | 4,606 | 58 | generic | [
"generic",
"pytorch",
"safetensors",
"roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers",
"vi",
"arxiv:2403.01616",
"license:apache-2.0",
"region:us"
]
| sentence-similarity | 2023-09-09T04:19:59Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
library_name: generic
language:
- vi
widget:
- source_sentence: Làm thế nào Đại học Bách khoa Hà Nội thu hút sinh viên quốc tế?
sentences:
- >-
Đại học Bách khoa Hà Nội đã phát triển các chương trình đào tạo bằng tiếng
Anh để làm cho việc học tại đây dễ dàng hơn cho sinh viên quốc tế.
- >-
Môi trường học tập đa dạng và sự hỗ trợ đầy đủ cho sinh viên quốc tế tại Đại
học Bách khoa Hà Nội giúp họ thích nghi nhanh chóng.
- Hà Nội có khí hậu mát mẻ vào mùa thu.
- Các món ăn ở Hà Nội rất ngon và đa dạng.
license: apache-2.0
---
# bkai-foundation-models/vietnamese-bi-encoder
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
We train the model on a merged training dataset that consists of:
- MS Macro (translated into Vietnamese)
- SQuAD v2 (translated into Vietnamese)
- 80% of the training set from the Legal Text Retrieval Zalo 2021 challenge
We use [phobert-base-v2](https://github.com/VinAIResearch/PhoBERT) as the pre-trained backbone.
Here are the results on the remaining 20% of the training set from the Legal Text Retrieval Zalo 2021 challenge:
| Pretrained Model | Training Datasets | Acc@1 | Acc@10 | Acc@100 | Pre@10 | MRR@10 |
|-------------------------------|---------------------------------------|:------------:|:-------------:|:--------------:|:-------------:|:-------------:|
| [Vietnamese-SBERT](https://huggingface.co/keepitreal/vietnamese-sbert) | - | 32.34 | 52.97 | 89.84 | 7.05 | 45.30 |
| PhoBERT-base-v2 | MSMACRO | 47.81 | 77.19 | 92.34 | 7.72 | 58.37 |
| PhoBERT-base-v2 | MSMACRO + SQuADv2.0 + 80% Zalo | 73.28 | 93.59 | 98.85 | 9.36 | 80.73 |
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
# INPUT TEXT MUST BE ALREADY WORD-SEGMENTED!
sentences = ["Cô ấy là một người vui_tính .", "Cô ấy cười nói suốt cả ngày ."]
model = SentenceTransformer('bkai-foundation-models/vietnamese-bi-encoder')
embeddings = model.encode(sentences)
print(embeddings)
```
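Since the model is meant for semantic search, a natural follow-up is scoring sentence pairs with cosine similarity via the `util` module:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('bkai-foundation-models/vietnamese-bi-encoder')
# input must be word-segmented, as above
emb = model.encode(["Cô ấy là một người vui_tính .", "Cô ấy cười nói suốt cả ngày ."])
print(util.cos_sim(emb[0], emb[1]))  # cosine similarity of the two sentences
```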
## Usage (Widget HuggingFace)
The widget uses a custom pipeline on top of the default one, adding a word segmenter before PhobertTokenizer, so you do not need to segment words before calling the API:
An example can be seen in the Hosted inference API.
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for; we could use pyvi, underthesea, or RDRSegmenter to segment words
sentences = ['Cô ấy là một người vui_tính .', 'Cô ấy cười nói suốt cả ngày .']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('bkai-foundation-models/vietnamese-bi-encoder')
model = AutoModel.from_pretrained('bkai-foundation-models/vietnamese-bi-encoder')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 17584 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 15,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
### Please cite our manuscript if this dataset is used for your work
```
@article{duc2024towards,
title={Towards Comprehensive Vietnamese Retrieval-Augmented Generation and Large Language Models},
author={Nguyen Quang Duc, Le Hai Son, Nguyen Duc Nhan, Nguyen Dich Nhat Minh, Le Thanh Huong, Dinh Viet Sang},
journal={arXiv preprint arXiv:2403.01616},
year={2024}
}
``` |
fyp-admin/dreambooth_Earth_15 | fyp-admin | 2024-03-11T06:56:24Z | 2 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2024-03-11T06:13:22Z | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- diffusers
- lora
- stable-diffusion
- stable-diffusion-diffusers
inference: true
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a picture of planet Earth in the center, with swirling blue oceans,
green continents, white clouds partially covering the surface and the poles contain
white ice. It is present in space which has dark background, embedded with a cluster
of small-sized bright stars.
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA DreamBooth - fyp-admin/dreambooth_Earth_15
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5, trained with [DreamBooth](https://dreambooth.github.io/) on the instance prompt "a picture of planet Earth in the center, with swirling blue oceans, green continents, white clouds partially covering the surface and the poles contain white ice. It is present in space which has dark background, embedded with a cluster of small-sized bright stars." You can find some example images below.




LoRA for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
from diffusers import DiffusionPipeline  # minimal sketch; assumed standard LoRA loading
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.load_lora_weights("fyp-admin/dreambooth_Earth_15")
image = pipe("a picture of planet Earth in the center, in space").images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
nihaomur/bert-sst2-finetuned | nihaomur | 2024-03-11T06:50:01Z | 92 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-03-11T06:31:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JCX-kcuf/Mistral-7B-v0.1-gpt-4-80k | JCX-kcuf | 2024-03-11T06:48:39Z | 50 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-10T12:15:30Z | ---
license: apache-2.0
---
## Description
This model is finetuned on the distillation data from GPT-4.
The base model is mistralai/Mistral-7B-v0.1.
## Usage
The model uses the same query format as Zephyr:
```
<|user|>
{query}</s>
<|assistant|>
```
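A generation sketch that fills in this template (prompt construction follows the format above; generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "JCX-kcuf/Mistral-7B-v0.1-gpt-4-80k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "<|user|>\nWhat is a large language model?</s>\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```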
|
furrutiav/bert_qa_extractor_cockatiel_2022_ulra_sign_ef_signal_it_27 | furrutiav | 2024-03-11T06:46:48Z | 91 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2024-03-10T21:59:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
furrutiav/bert_qa_extractor_cockatiel_2022_ulra_org_ef_signal_it_83 | furrutiav | 2024-03-11T06:46:04Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2024-03-10T21:57:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Trendyol/Trendyol-LLM-7b-chat-dpo-v1.0 | Trendyol | 2024-03-11T06:38:22Z | 2,879 | 26 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"tr",
"en",
"base_model:Trendyol/Trendyol-LLM-7b-chat-v1.0",
"base_model:finetune:Trendyol/Trendyol-LLM-7b-chat-v1.0",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-05T13:19:22Z | ---
language:
- tr
- en
pipeline_tag: text-generation
license: apache-2.0
base_model: Trendyol/Trendyol-LLM-7b-chat-v1.0
---
<img src="https://huggingface.co/Trendyol/Trendyol-LLM-7b-chat-dpo-v1.0/resolve/main/trendyol-llm-mistral.jpg"
alt="drawing" width="400"/>
# **Trendyol LLM v1.0 - DPO**
Trendyol LLM v1.0 - DPO is a generative model based on the Mistral 7B model, further trained with Direct Preference Optimization (DPO). This is the repository for the DPO-tuned chat model.
## Model Details
**Model Developers** Trendyol
**Variations** [base](https://huggingface.co/Trendyol/Trendyol-LLM-7b-base-v1.0), [chat](https://huggingface.co/Trendyol/Trendyol-LLM-7b-chat-v1.0), and dpo variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Trendyol LLM is an auto-regressive language model (based on Mistral 7b) that uses an optimized transformer architecture. The Hugging Face TRL library was used for training. The DPO version is fine-tuned on 11K preference triples (prompt, chosen response, rejected response) using LoRA, with the following trainable settings (see the sketch after this list):
- **lr**=5e-6
- **lora_rank**=64
- **lora_alpha**=128
- **lora_trainable**=q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj
- **lora_dropout**=0.05
- **bf16**=True
- **beta**=0.01
- **max_length**= 1024
- **max_prompt_length**= 512
- **lr_scheduler_type**= cosine
- **torch_dtype**= bfloat16
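For illustration, a DPO + LoRA run with these settings could be wired up in TRL roughly as follows (a minimal sketch: the dataset path is a placeholder, and the exact `DPOTrainer` signature varies across TRL versions):
```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "Trendyol/Trendyol-LLM-7b-chat-v1.0"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Placeholder: a dataset with "prompt", "chosen" and "rejected" columns.
dataset = load_dataset("json", data_files="preference_triples.json", split="train")

peft_config = LoraConfig(
    r=64, lora_alpha=128, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj",
                    "gate_proj", "down_proj", "up_proj"],
    task_type="CAUSAL_LM",
)

trainer = DPOTrainer(
    model,
    ref_model=None,  # with a PEFT config, TRL uses the frozen base as reference
    beta=0.01,
    max_length=1024,
    max_prompt_length=512,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
    args=TrainingArguments(output_dir="trendyol-dpo", learning_rate=5e-6,
                           bf16=True, lr_scheduler_type="cosine"),
)
trainer.train()
```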
<img src="https://camo.githubusercontent.com/3e61ca080778f62988b459c7321726fa35bb3776ceb07ecaabf71ebca44f95a7/68747470733a2f2f68756767696e67666163652e636f2f64617461736574732f74726c2d696e7465726e616c2d74657374696e672f6578616d706c652d696d616765732f7265736f6c76652f6d61696e2f696d616765732f74726c5f62616e6e65725f6461726b2e706e67"
alt="drawing" width="600"/>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/lora_diagram.png"
alt="drawing" width="600"/>
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_id = "Trendyol/Trendyol-LLM-7b-chat-dpo-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id,
device_map='auto',
load_in_8bit=True)
sampling_params = dict(do_sample=True, temperature=0.3, top_k=50, top_p=0.9)
pipe = pipeline("text-generation",
model=model,
tokenizer=tokenizer,
device_map="auto",
max_new_tokens=1024,
return_full_text=True,
repetition_penalty=1.1
)
DEFAULT_SYSTEM_PROMPT = "Sen yardımcı bir asistansın ve sana verilen talimatlar doğrultusunda en iyi cevabı üretmeye çalışacaksın.\n"
TEMPLATE = (
"[INST] {system_prompt}\n\n"
"{instruction} [/INST]"
)
def generate_prompt(instruction, system_prompt=DEFAULT_SYSTEM_PROMPT):
return TEMPLATE.format_map({'instruction': instruction,'system_prompt': system_prompt})
def generate_output(user_query, sys_prompt=DEFAULT_SYSTEM_PROMPT):
prompt = generate_prompt(user_query, sys_prompt)
outputs = pipe(prompt,
**sampling_params
)
return outputs[0]["generated_text"].split("[/INST]")[-1]
user_query = "Türkiye'de kaç il var?"
response = generate_output(user_query)
print(response)
```
with chat template:
```python
pipe = pipeline("conversational",
model=model,
tokenizer=tokenizer,
device_map="auto",
max_new_tokens=1024,
repetition_penalty=1.1
)
messages = [
{"role": "user", "content": "Türkiye'de kaç il var?"}
]
outputs = pipe(messages, **sampling_params)
print(outputs)
```
## Limitations, Risks, Bias, and Ethical Considerations
### Limitations and Known Biases
- **Primary Function and Application:** Trendyol LLM, an autoregressive language model, is primarily designed to predict the next token in a text string. While often used for various applications, it is important to note that it has not undergone extensive real-world application testing. Its effectiveness and reliability across diverse scenarios remain largely unverified.
- **Language Comprehension and Generation:** The model is primarily trained in standard English and Turkish. Its performance in understanding and generating slang, informal language, or other languages may be limited, leading to potential errors or misinterpretations.
- **Generation of False Information:** Users should be aware that Trendyol LLM may produce inaccurate or misleading information. Outputs should be considered as starting points or suggestions rather than definitive answers.
### Risks and Ethical Considerations
- **Potential for Harmful Use:** There is a risk that Trendyol LLM could be used to generate offensive or harmful language. We strongly discourage its use for any such purposes and emphasize the need for application-specific safety and fairness evaluations before deployment.
- **Unintended Content and Bias:** The model was trained on a large corpus of text data, which was not explicitly checked for offensive content or existing biases. Consequently, it may inadvertently produce content that reflects these biases or inaccuracies.
- **Toxicity:** Despite efforts to select appropriate training data, the model is capable of generating harmful content, especially when prompted explicitly. We encourage the open-source community to engage in developing strategies to minimize such risks.
### Recommendations for Safe and Ethical Usage
- **Human Oversight:** We recommend incorporating a human curation layer or using filters to manage and improve the quality of outputs, especially in public-facing applications. This approach can help mitigate the risk of generating objectionable content unexpectedly.
- **Application-Specific Testing:** Developers intending to use Trendyol LLM should conduct thorough safety testing and optimization tailored to their specific applications. This is crucial, as the model’s responses can be unpredictable and may occasionally be biased, inaccurate, or offensive.
- **Responsible Development and Deployment:** It is the responsibility of developers and users of Trendyol LLM to ensure its ethical and safe application. We urge users to be mindful of the model's limitations and to employ appropriate safeguards to prevent misuse or harmful consequences. |
Trendyol/Trendyol-LLM-7b-base-v1.0 | Trendyol | 2024-03-11T06:37:48Z | 3,073 | 15 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"tr",
"en",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-05T06:15:13Z | ---
language:
- tr
- en
pipeline_tag: text-generation
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
---
<img src="https://huggingface.co/Trendyol/Trendyol-LLM-7b-base-v1.0/resolve/main/trendyol-llm-mistral.jpg"
alt="drawing" width="400"/>
# **Trendyol LLM v1.0**
Trendyol LLM v1.0 is a generative model based on the Mistral 7B model. This is the repository for the base model.
## Model Details
**Model Developers** Trendyol
**Variations** base, [chat](https://huggingface.co/Trendyol/Trendyol-LLM-7b-chat-v1.0), and [dpo](https://huggingface.co/Trendyol/Trendyol-LLM-7b-chat-dpo-v1.0) variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Trendyol LLM v1.0 is an auto-regressive language model (based on Mistral 7b) that uses an optimized transformer architecture. The base version is fine-tuned on 10 billion tokens using LoRA, with the following trainable settings (see the sketch after this list):
- **lr**=2e-4
- **lora_rank**=64
- **lora_alpha**=128
- **lora_trainable**=q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj
- **modules_to_save**=embed_tokens,lm_head
- **lora_dropout**=0.05
- **bf16**=True
- **max_seq_length**=1024
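The LoRA setup above maps onto a PEFT configuration along these lines (a sketch only; the 10B-token pretraining data pipeline is omitted):
```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1",
                                             torch_dtype=torch.bfloat16)

config = LoraConfig(
    r=64, lora_alpha=128, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj",
                    "gate_proj", "down_proj", "up_proj"],
    modules_to_save=["embed_tokens", "lm_head"],  # also trained, per the list above
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()
```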
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/lora_diagram.png"
alt="drawing" width="600"/>
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_id = "Trendyol/Trendyol-LLM-7b-base-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id,
device_map='auto',
load_in_8bit=True)
sampling_params = dict(do_sample=True, temperature=0.3, top_k=50, top_p=0.9)
pipe = pipeline("text-generation",
model=model,
tokenizer=tokenizer,
device_map="auto",
max_new_tokens=1024,
return_full_text=True,
repetition_penalty=1.1
)
def generate_output(user_query):
outputs = pipe(user_query,
**sampling_params
)
return outputs[0]["generated_text"]
user_query = "Ders çalışmanın en iyi 5 yolu:"
response = generate_output(user_query)
```
## Limitations, Risks, Bias, and Ethical Considerations
### Limitations and Known Biases
- **Primary Function and Application:** Trendyol LLM, an autoregressive language model, is primarily designed to predict the next token in a text string. While often used for various applications, it is important to note that it has not undergone extensive real-world application testing. Its effectiveness and reliability across diverse scenarios remain largely unverified.
- **Language Comprehension and Generation:** The model is primarily trained in standard English and Turkish. Its performance in understanding and generating slang, informal language, or other languages may be limited, leading to potential errors or misinterpretations.
- **Generation of False Information:** Users should be aware that Trendyol LLM may produce inaccurate or misleading information. Outputs should be considered as starting points or suggestions rather than definitive answers.
### Risks and Ethical Considerations
- **Potential for Harmful Use:** There is a risk that Trendyol LLM could be used to generate offensive or harmful language. We strongly discourage its use for any such purposes and emphasize the need for application-specific safety and fairness evaluations before deployment.
- **Unintended Content and Bias:** The model was trained on a large corpus of text data, which was not explicitly checked for offensive content or existing biases. Consequently, it may inadvertently produce content that reflects these biases or inaccuracies.
- **Toxicity:** Despite efforts to select appropriate training data, the model is capable of generating harmful content, especially when prompted explicitly. We encourage the open-source community to engage in developing strategies to minimize such risks.
### Recommendations for Safe and Ethical Usage
- **Human Oversight:** We recommend incorporating a human curation layer or using filters to manage and improve the quality of outputs, especially in public-facing applications. This approach can help mitigate the risk of generating objectionable content unexpectedly.
- **Application-Specific Testing:** Developers intending to use Trendyol LLM should conduct thorough safety testing and optimization tailored to their specific applications. This is crucial, as the model’s responses can be unpredictable and may occasionally be biased, inaccurate, or offensive.
- **Responsible Development and Deployment:** It is the responsibility of developers and users of Trendyol LLM to ensure its ethical and safe application. We urge users to be mindful of the model's limitations and to employ appropriate safeguards to prevent misuse or harmful consequences. |
Holarissun/gptj6b-aisft-hh-seqsampler-subset60000 | Holarissun | 2024-03-11T06:35:06Z | 1 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:EleutherAI/gpt-j-6b",
"base_model:adapter:EleutherAI/gpt-j-6b",
"license:apache-2.0",
"region:us"
]
| null | 2024-03-11T06:35:02Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: EleutherAI/gpt-j-6b
model-index:
- name: gptj6b-aisft-hh-seqsampler-subset60000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gptj6b-aisft-hh-seqsampler-subset60000
This model is a fine-tuned version of [EleutherAI/gpt-j-6b](https://huggingface.co/EleutherAI/gpt-j-6b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
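Since this repository ships a PEFT (LoRA) adapter rather than full model weights, loading it should look roughly like the following (a sketch based on the base model listed above):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6b", device_map="auto")
model = PeftModel.from_pretrained(base, "Holarissun/gptj6b-aisft-hh-seqsampler-subset60000")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6b")
```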
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
alexwww94/yolov8 | alexwww94 | 2024-03-11T06:32:12Z | 88 | 0 | transformers | [
"transformers",
"pytorch",
"yolov8",
"custom_code",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-08T10:18:19Z | This is a Hugging Face Hub model repository created for ultraytics; you still need to install ultraytics(https://github.com/ultralytics/ultralytics)
```bash
pip install ultralytics
```
Usage:
```python
from transformers import AutoModelForObjectDetection
model = AutoModelForObjectDetection.from_pretrained('alexwww94/yolov8', trust_remote_code=True, yolo_model_config={"model": 'yolov8n'})
model('example.jpg')
```
Parameters for constructing the YOLOv8 model are defined in `yolo_model_config`. Inference parameters for YOLOv8 can also be passed through `args` and `kwargs`:
```python
model(*args, **kwargs)
```
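For example, assuming the wrapper forwards standard Ultralytics predict arguments (an assumption — check the wrapper code):
```python
# Hypothetical call: set a confidence threshold and save the annotated output
results = model('example.jpg', conf=0.25, save=True)
```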
|
ashikshaffi08/zephyr_gemma_35_pct_data | ashikshaffi08 | 2024-03-11T06:25:55Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:HuggingFaceH4/zephyr-7b-gemma-v0.1",
"base_model:adapter:HuggingFaceH4/zephyr-7b-gemma-v0.1",
"region:us"
]
| null | 2024-03-11T06:04:49Z | ---
library_name: peft
base_model: HuggingFaceH4/zephyr-7b-gemma-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.9.0 |
jylee55/autotrain-dlpeu-yhob0 | jylee55 | 2024-03-11T06:18:16Z | 92 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"autotrain",
"dataset:autotrain-dlpeu-yhob0/autotrain-data",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2024-03-11T06:00:07Z |
---
tags:
- autotrain
- text2text-generation
widget:
- text: "translate English to Hawaiian Pidgin: I went to Ala Moana today with Kimo"
datasets:
- autotrain-dlpeu-yhob0/autotrain-data
---
# Model Trained Using AutoTrain
- Problem type: Seq2Seq
## Validation Metrics
loss: 0.602806031703949
rouge1: 49.3563
rouge2: 38.9137
rougeL: 46.9477
rougeLsum: 47.7864
gen_len: 18.872
runtime: 246.0865
samples_per_second: 64.205
steps_per_second: 2.007
: 14.0
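## Usage
A minimal inference sketch, reusing the widget prompt above (illustrative only):
```python
from transformers import pipeline

pipe = pipeline("text2text-generation", model="jylee55/autotrain-dlpeu-yhob0")
print(pipe("translate English to Hawaiian Pidgin: I went to Ala Moana today with Kimo"))
```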
|
breathemm/hyeogi-finetuning | breathemm | 2024-03-11T06:15:12Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:hyeogi/SOLAR-10.7B-dpo-v1",
"base_model:adapter:hyeogi/SOLAR-10.7B-dpo-v1",
"license:apache-2.0",
"region:us"
]
| null | 2024-03-11T05:36:21Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: hyeogi/SOLAR-10.7B-dpo-v1
model-index:
- name: hyeogi-finetuning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hyeogi-finetuning
This model is a fine-tuned version of [hyeogi/SOLAR-10.7B-dpo-v1](https://huggingface.co/hyeogi/SOLAR-10.7B-dpo-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0281
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9483 | 1.0 | 84 | 0.9816 |
| 0.7375 | 2.0 | 168 | 0.9629 |
| 0.4589 | 3.0 | 252 | 1.0281 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
mildang/qlora-koalpaca-polyglot-12.8b-test1 | mildang | 2024-03-11T06:15:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-08T07:34:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fyp-admin/dreambooth_Mercury_15 | fyp-admin | 2024-03-11T06:06:09Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2024-03-11T05:16:47Z | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- diffusers
- lora
- stable-diffusion
- stable-diffusion-diffusers
inference: true
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a picture of planet Mercury in the center, in charcoal gray color
like the Moon having a cratered surface throughout. It is present in space which
has dark background, embedded with a cluster of small-sized bright stars.
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA DreamBooth - fyp-admin/dreambooth_Mercury_15
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on a picture of planet Mercury in the center, in charcoal gray color like the Moon having a cratered surface throughout. It is present in space which has dark background, embedded with a cluster of small-sized bright stars. using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
automerger/CalmeInex12-7B | automerger | 2024-03-11T05:50:37Z | 16 | 3 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"conversational",
"base_model:MSL7/INEX12-7b",
"base_model:finetune:MSL7/INEX12-7b",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-11T05:49:43Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- automerger
base_model:
- MSL7/INEX12-7b
---
# CalmeInex12-7B
CalmeInex12-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
* [MSL7/INEX12-7b](https://huggingface.co/MSL7/INEX12-7b)
## 🧩 Configuration
```yaml
models:
- model: MaziyarPanahi/Calme-7B-Instruct-v0.1.1
# No parameters necessary for base model
- model: MSL7/INEX12-7b
parameters:
density: 0.53
weight: 0.6
merge_method: dare_ties
base_model: MaziyarPanahi/Calme-7B-Instruct-v0.1.1
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
```
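To reproduce the merge locally, the YAML above can be passed to mergekit's CLI (a sketch; the output directory is a placeholder):
```bash
pip install mergekit
mergekit-yaml config.yaml ./CalmeInex12-7B --cuda
```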
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/CalmeInex12-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
pcmoritz/Mixtral-8x7B-Instruct-v0.1-FP8MOE | pcmoritz | 2024-03-11T05:49:40Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"fr",
"it",
"de",
"es",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-11T05:38:03Z | ---
license: apache-2.0
language:
- fr
- it
- de
- es
- en
inference:
parameters:
temperature: 0.5
widget:
- messages:
- role: user
content: What is your favorite condiment?
---
# Model Card for Mixtral-8x7B (using FP8 quantization for the MoE layers)
The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.
For full details of this model please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/).
## Warning
This repo contains weights that are compatible with [vLLM](https://github.com/vllm-project/vllm) serving of the model as well as the Hugging Face [transformers](https://github.com/huggingface/transformers) library. It is based on the original Mixtral [torrent release](magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%http://2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%http://2Ftracker.openbittorrent.com%3A80%2Fannounce), but the file format and parameter names are different. Please note that the model cannot (yet) be instantiated with HF.
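Serving the FP8 checkpoint with vLLM might look like this (a sketch only — it assumes a vLLM build with FP8 MoE support, which this repo targets):
```python
from vllm import LLM, SamplingParams

llm = LLM(model="pcmoritz/Mixtral-8x7B-Instruct-v0.1-FP8MOE")
params = SamplingParams(temperature=0.5, max_tokens=64)
outputs = llm.generate(["[INST] What is your favourite condiment? [/INST]"], params)
print(outputs[0].outputs[0].text)
```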
## Instruction format
This format must be strictly respected; otherwise, the model will generate sub-optimal outputs.
The template used to build a prompt for the Instruct model is defined as follows:
```
<s> [INST] Instruction [/INST] Model answer</s> [INST] Follow-up instruction [/INST]
```
Note that `<s>` and `</s>` are special tokens for beginning of string (BOS) and end of string (EOS) while [INST] and [/INST] are regular strings.
For reference, here is the pseudo-code used to tokenize instructions during fine-tuning:
```python
def tokenize(text):
    return tok.encode(text, add_special_tokens=False)

[BOS_ID] +
tokenize("[INST]") + tokenize(USER_MESSAGE_1) + tokenize("[/INST]") +
tokenize(BOT_MESSAGE_1) + [EOS_ID] +
…
tokenize("[INST]") + tokenize(USER_MESSAGE_N) + tokenize("[/INST]") +
tokenize(BOT_MESSAGE_N) + [EOS_ID]
```
In the pseudo-code above, note that the `tokenize` method should not add a BOS or EOS token automatically, but should add a prefix space.
In the Transformers library, one can use [chat templates](https://huggingface.co/docs/transformers/main/en/chat_templating) which make sure the right format is applied.
## Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
By default, transformers will load the model in full precision. Therefore you might be interested in further reducing the memory requirements to run the model through the optimizations offered in the HF ecosystem:
### In half-precision
Note that `float16` precision only works on GPU devices.
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
outputs = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
### Lower precision (8-bit & 4-bit) using `bitsandbytes`
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True, device_map="auto")
text = "Hello my name is"
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
outputs = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
### Load the model with Flash Attention 2
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True, device_map="auto")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
outputs = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
## Limitations
The Mixtral-8x7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
# The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed. |
enrique2701/cleanrl-ppo-LunarLander-v2 | enrique2701 | 2024-03-11T05:46:41Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-03-11T05:38:26Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -92.80 +/- 57.03
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
|
thrunlab/Mistral_Sparse_refined_web_50p_2024-03-10 | thrunlab | 2024-03-11T05:45:22Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"sparse_mistral",
"text-generation",
"generated_from_trainer",
"custom_code",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
]
| text-generation | 2024-03-10T14:03:48Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- generated_from_trainer
model-index:
- name: Mistral_Sparse_refined_web_50p_2024-03-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral_Sparse_refined_web_50p_2024-03-10
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1110
## Model description
More information needed
## Intended uses & limitations
More information needed
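Because the repository ships custom modeling code (the `sparse_mistral` architecture), loading it presumably requires `trust_remote_code=True` (a sketch, not verified against this checkpoint):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "thrunlab/Mistral_Sparse_refined_web_50p_2024-03-10"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```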
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 0
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4139 | 0.0 | 25 | 2.5658 |
| 2.226 | 0.01 | 50 | 2.5201 |
| 2.2555 | 0.01 | 75 | 2.4799 |
| 2.2174 | 0.02 | 100 | 2.4604 |
| 2.2232 | 0.02 | 125 | 2.4481 |
| 2.2955 | 0.02 | 150 | 2.4355 |
| 2.2275 | 0.03 | 175 | 2.4291 |
| 2.2219 | 0.03 | 200 | 2.4206 |
| 2.2521 | 0.04 | 225 | 2.4176 |
| 2.2937 | 0.04 | 250 | 2.4077 |
| 2.2073 | 0.04 | 275 | 2.4069 |
| 2.2157 | 0.05 | 300 | 2.4061 |
| 2.2274 | 0.05 | 325 | 2.4085 |
| 2.0697 | 0.06 | 350 | 2.4033 |
| 2.2338 | 0.06 | 375 | 2.4037 |
| 2.2722 | 0.06 | 400 | 2.4003 |
| 2.2638 | 0.07 | 425 | 2.4003 |
| 2.2215 | 0.07 | 450 | 2.4011 |
| 2.1437 | 0.08 | 475 | 2.3962 |
| 2.2073 | 0.08 | 500 | 2.3951 |
| 2.2696 | 0.08 | 525 | 2.3923 |
| 2.1624 | 0.09 | 550 | 2.3899 |
| 2.3006 | 0.09 | 575 | 2.3930 |
| 2.0869 | 0.1 | 600 | 2.3915 |
| 2.1871 | 0.1 | 625 | 2.3906 |
| 2.2712 | 0.1 | 650 | 2.3895 |
| 2.2596 | 0.11 | 675 | 2.3898 |
| 2.2297 | 0.11 | 700 | 2.3878 |
| 2.2126 | 0.12 | 725 | 2.3860 |
| 2.119 | 0.12 | 750 | 2.3869 |
| 2.1637 | 0.12 | 775 | 2.3848 |
| 2.2537 | 0.13 | 800 | 2.3853 |
| 2.2641 | 0.13 | 825 | 2.3872 |
| 2.2422 | 0.14 | 850 | 2.3874 |
| 2.11 | 0.14 | 875 | 2.3847 |
| 2.1745 | 0.14 | 900 | 2.3831 |
| 2.1222 | 0.15 | 925 | 2.3834 |
| 2.2604 | 0.15 | 950 | 2.3858 |
| 2.1929 | 0.16 | 975 | 2.3847 |
| 2.2353 | 0.16 | 1000 | 2.3841 |
| 2.2409 | 0.16 | 1025 | 2.3825 |
| 2.237 | 0.17 | 1050 | 2.3805 |
| 2.28 | 0.17 | 1075 | 2.3806 |
| 2.209 | 0.18 | 1100 | 2.3802 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Glow-01/finetuned_bart_large_custom | Glow-01 | 2024-03-11T05:43:10Z | 92 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large-cnn",
"base_model:finetune:facebook/bart-large-cnn",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2024-03-11T04:18:47Z | ---
license: mit
base_model: facebook/bart-large-cnn
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: finetuned_bart_large_custom
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_bart_large_custom
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8324
- Rouge1: 39.9143
- Rouge2: 10.7144
- Rougel: 21.1537
- Rougelsum: 35.81
- Gen Len: 131.6667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 1.0 | 16 | 4.3093 | 39.1367 | 9.9819 | 21.0796 | 35.3746 | 132.0741 |
| No log | 2.0 | 32 | 4.2921 | 39.0619 | 9.8356 | 21.7437 | 35.6597 | 131.7037 |
| No log | 3.0 | 48 | 4.3876 | 39.5314 | 10.337 | 21.0096 | 35.9973 | 131.2593 |
| No log | 4.0 | 64 | 4.4020 | 39.3551 | 9.9689 | 21.4343 | 35.3958 | 131.1481 |
| No log | 5.0 | 80 | 4.3744 | 39.7603 | 10.4124 | 21.6535 | 35.4996 | 132.963 |
| No log | 6.0 | 96 | 4.4821 | 39.9859 | 11.0712 | 22.2449 | 35.7868 | 132.4074 |
| No log | 7.0 | 112 | 4.6017 | 38.765 | 10.3317 | 20.9319 | 34.6675 | 132.2593 |
| No log | 8.0 | 128 | 4.4419 | 39.9964 | 10.3341 | 20.9618 | 35.8621 | 130.2222 |
| No log | 9.0 | 144 | 4.4990 | 39.8075 | 10.3829 | 21.3509 | 35.9882 | 128.7407 |
| No log | 10.0 | 160 | 4.7017 | 38.6152 | 9.9282 | 20.4588 | 34.4487 | 131.9259 |
| No log | 11.0 | 176 | 4.5497 | 39.0296 | 9.9429 | 20.8087 | 34.4624 | 132.6296 |
| No log | 12.0 | 192 | 4.7301 | 38.8819 | 9.5937 | 20.929 | 34.7983 | 131.4444 |
| No log | 13.0 | 208 | 4.5114 | 38.4163 | 9.6869 | 20.373 | 34.1491 | 123.8519 |
| No log | 14.0 | 224 | 4.7097 | 38.4294 | 9.5615 | 20.1514 | 35.0332 | 131.7407 |
| No log | 15.0 | 240 | 4.6300 | 38.9564 | 9.6386 | 20.0618 | 34.8298 | 129.963 |
| No log | 16.0 | 256 | 4.6916 | 38.5582 | 10.136 | 20.8347 | 34.4795 | 129.8519 |
| No log | 17.0 | 272 | 4.6959 | 38.3264 | 9.5281 | 20.5576 | 34.6148 | 128.2963 |
| No log | 18.0 | 288 | 4.6756 | 37.5569 | 9.123 | 19.8291 | 33.5111 | 126.6667 |
| No log | 19.0 | 304 | 4.7579 | 38.5704 | 9.3654 | 20.1826 | 34.8297 | 131.4815 |
| No log | 20.0 | 320 | 4.8128 | 40.158 | 10.3889 | 20.9267 | 36.8965 | 130.1852 |
| No log | 21.0 | 336 | 4.7659 | 39.4144 | 10.2445 | 20.4763 | 35.328 | 134.2593 |
| No log | 22.0 | 352 | 4.7983 | 40.2859 | 11.0388 | 21.1643 | 36.0311 | 131.9259 |
| No log | 23.0 | 368 | 4.7954 | 39.2676 | 10.5795 | 21.1116 | 35.3949 | 130.1481 |
| No log | 24.0 | 384 | 4.7991 | 39.8126 | 10.3955 | 21.2952 | 35.7538 | 130.5926 |
| No log | 25.0 | 400 | 4.8371 | 39.3481 | 10.2857 | 20.9862 | 35.1724 | 125.1481 |
| No log | 26.0 | 416 | 4.8589 | 40.0988 | 10.4426 | 21.7284 | 35.7289 | 130.3333 |
| No log | 27.0 | 432 | 4.8423 | 39.9233 | 10.3253 | 21.5853 | 36.1194 | 131.1111 |
| No log | 28.0 | 448 | 4.8274 | 40.0388 | 10.1713 | 20.991 | 35.3966 | 130.4444 |
| No log | 29.0 | 464 | 4.8313 | 39.8516 | 10.6207 | 21.0394 | 35.6627 | 130.8148 |
| No log | 30.0 | 480 | 4.8324 | 39.9143 | 10.7144 | 21.1537 | 35.81 | 131.6667 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
|
shazzz/Reinforce_Pixel_Copter | shazzz | 2024-03-11T05:38:13Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-03-05T11:03:16Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce_Pixel_Copter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 21.90 +/- 15.81
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
SamyRods/blackcat | SamyRods | 2024-03-11T05:34:52Z | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2024-03-11T05:30:37Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### BlackCat Dreambooth model trained by SamyRods following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: DI22-39
Sample pictures of this concept:
.jpg)
|
furrutiav/bert_qa_extractor_cockatiel_2022_ulra_org_lf_signal_it_290 | furrutiav | 2024-03-11T05:28:46Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2024-03-11T05:28:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
furrutiav/bert_qa_extractor_cockatiel_2022_ulra_sign_lf_signal_it_262 | furrutiav | 2024-03-11T05:28:43Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2024-03-11T05:28:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tsavage68/mistralit2_1000_STEPS_5e7_rate_0.1_beta_DPO | tsavage68 | 2024-03-11T05:26:51Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-11T05:22:42Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: mistralit2_1000_STEPS_5e7_rate_0.1_beta_DPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistralit2_1000_STEPS_5e7_rate_0.1_beta_DPO
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7132
- Rewards/chosen: -3.0068
- Rewards/rejected: -5.0778
- Rewards/accuracies: 0.6813
- Rewards/margins: 2.0710
- Logps/rejected: -79.3505
- Logps/chosen: -53.4537
- Logits/rejected: -2.5776
- Logits/chosen: -2.5788
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
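For reference, the configuration above corresponds roughly to the following TRL setup (a minimal sketch, not the exact training script: the preference dataset is not documented here, so the repo id below is a placeholder, and the 0.1 beta is inferred from the model name; DPOTrainer API as of early-2024 TRL releases):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_name = "mistralai/Mistral-7B-Instruct-v0.2"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Placeholder: the actual preference dataset is not documented in this card.
train_dataset = load_dataset("some/preference-dataset", split="train")

args = TrainingArguments(
    output_dir="mistralit2_1000_STEPS_5e7_rate_0.1_beta_DPO",
    per_device_train_batch_size=4,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,
    learning_rate=5e-7,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
)

trainer = DPOTrainer(
    model,
    ref_model=None,      # TRL builds a frozen copy of the model as the reference when None
    args=args,
    beta=0.1,            # inferred from "0.1_beta" in the model name
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```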
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6418 | 0.1 | 50 | 0.6447 | -0.5872 | -0.7568 | 0.5736 | 0.1696 | -36.1403 | -29.2577 | -2.8316 | -2.8320 |
| 0.5915 | 0.2 | 100 | 0.6534 | -2.5902 | -3.2664 | 0.6000 | 0.6762 | -61.2361 | -49.2879 | -2.5920 | -2.5930 |
| 0.6181 | 0.29 | 150 | 0.6108 | -1.7262 | -2.4531 | 0.6352 | 0.7270 | -53.1036 | -40.6475 | -2.6698 | -2.6708 |
| 0.5919 | 0.39 | 200 | 0.6201 | -0.8739 | -1.3497 | 0.6110 | 0.4758 | -42.0694 | -32.1245 | -2.8217 | -2.8224 |
| 0.7232 | 0.49 | 250 | 0.6496 | -2.3019 | -2.8348 | 0.6110 | 0.5328 | -56.9199 | -46.4053 | -2.8105 | -2.8116 |
| 0.6175 | 0.59 | 300 | 0.6052 | -1.3274 | -2.0772 | 0.6440 | 0.7497 | -49.3443 | -36.6603 | -2.8706 | -2.8714 |
| 0.6294 | 0.68 | 350 | 0.5762 | -0.5378 | -1.3786 | 0.6484 | 0.8407 | -42.3582 | -28.7642 | -2.8508 | -2.8515 |
| 0.5572 | 0.78 | 400 | 0.5838 | -2.3342 | -3.3990 | 0.6615 | 1.0648 | -62.5628 | -46.7279 | -2.9194 | -2.9202 |
| 0.5339 | 0.88 | 450 | 0.6065 | -2.3478 | -3.1946 | 0.6615 | 0.8468 | -60.5187 | -46.8642 | -2.8735 | -2.8743 |
| 0.5162 | 0.98 | 500 | 0.6054 | -1.8059 | -2.8617 | 0.6593 | 1.0558 | -57.1895 | -41.4452 | -2.8408 | -2.8416 |
| 0.1367 | 1.07 | 550 | 0.5967 | -1.5441 | -3.2437 | 0.6923 | 1.6996 | -61.0093 | -38.8268 | -2.7152 | -2.7164 |
| 0.1427 | 1.17 | 600 | 0.6612 | -2.6012 | -4.5496 | 0.6923 | 1.9484 | -74.0686 | -49.3976 | -2.6127 | -2.6140 |
| 0.2423 | 1.27 | 650 | 0.6953 | -3.2920 | -5.2913 | 0.6835 | 1.9992 | -81.4852 | -56.3063 | -2.5920 | -2.5933 |
| 0.2461 | 1.37 | 700 | 0.6994 | -3.0907 | -5.0995 | 0.6791 | 2.0088 | -79.5678 | -54.2931 | -2.5993 | -2.6005 |
| 0.05 | 1.46 | 750 | 0.7081 | -2.9719 | -5.0539 | 0.6835 | 2.0820 | -79.1113 | -53.1052 | -2.5893 | -2.5906 |
| 0.1265 | 1.56 | 800 | 0.7096 | -2.9511 | -5.0249 | 0.6791 | 2.0739 | -78.8217 | -52.8965 | -2.5798 | -2.5810 |
| 0.1903 | 1.66 | 850 | 0.7099 | -3.0000 | -5.0705 | 0.6769 | 2.0705 | -79.2773 | -53.3856 | -2.5782 | -2.5795 |
| 0.1908 | 1.76 | 900 | 0.7144 | -3.0075 | -5.0795 | 0.6857 | 2.0720 | -79.3678 | -53.4610 | -2.5779 | -2.5792 |
| 0.2293 | 1.86 | 950 | 0.7119 | -3.0087 | -5.0829 | 0.6835 | 2.0742 | -79.4011 | -53.4726 | -2.5778 | -2.5790 |
| 0.1167 | 1.95 | 1000 | 0.7132 | -3.0068 | -5.0778 | 0.6813 | 2.0710 | -79.3505 | -53.4537 | -2.5776 | -2.5788 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.0.0+cu117
- Datasets 2.18.0
- Tokenizers 0.15.2
|
moneyforward/houou-instruction-7b-v3 | moneyforward | 2024-03-11T05:25:09Z | 83 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-07T03:51:04Z | ---
license: llama2
---
# `moneyforward/houou-instruction-7b-v3`
# Overview
The model was trained by supervised fine-tuning (SFT) on an instruction dataset (9,013 examples) created through joint research with the Institute of Physical and Chemical Research (RIKEN).
**The instruction data is the first Japanese instruction dataset written entirely from scratch.**
Details of the data are available [here](https://liat-aip.sakura.ne.jp/wp/llm%E3%81%AE%E3%81%9F%E3%82%81%E3%81%AE%E6%97%A5%E6%9C%AC%E8%AA%9E%E3%82%A4%E3%83%B3%E3%82%B9%E3%83%88%E3%83%A9%E3%82%AF%E3%82%B7%E3%83%A7%E3%83%B3%E3%83%87%E3%83%BC%E3%82%BF%E4%BD%9C%E6%88%90/).
As the pre-trained base model, we used [rinna/youri-7b](https://huggingface.co/rinna/youri-7b).
* **Authors**
- [Atsushi Kojima](https://huggingface.co/atsushi-mf)
- Ikuo Kitagishi
---
# How to use the model
~~~~python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("moneyforward/houou-instruction-7b-v3")
model = AutoModelForCausalLM.from_pretrained("moneyforward/houou-instruction-7b-v3")
if torch.cuda.is_available():
model = model.to("cuda")
instruction = "日本のおすすめの観光地を3つ教えてください。"
input = ""
if input == "":
prompt = f"""
以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい。
### 指示:
{instruction}
### 応答:
"""
else:
prompt = f"""
以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。要求を適切に満たす応答を書きなさい。
### 指示:
{instruction}
### 入力:
{input}
### 応答:
"""
token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
with torch.no_grad():
output_ids = model.generate(
token_ids.to(model.device),
max_new_tokens=2048,
do_sample=True,
temperature=0.5,
pad_token_id=tokenizer.pad_token_id,
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id,
)
output = tokenizer.decode(output_ids.tolist()[0])
output = output.split("### 応答:")[1]
print(output)
"""
日本は四季折々の美しい風景や、数々の伝統文化を持つ国です。観光地も数多くありますが、その中から3つ、おすすめの観光地をご紹介します。
1)京都府:清水寺、金閣寺、伏見稲荷大社
京都は、日本の歴史上とても重要な役割を果たした場所です。清水寺では、「音羽の滝」の水を飲むと美容効果があるといわれています。金閣寺はその名の通り、金で覆われた三層の建造物です。伏見稲荷大社は稲荷神社の総本山で、奥社へ向かう参道に並ぶ鳥居の数は圧巻です。
2)北海道:知床五湖、美瑛町(ケンとメリーの木、パッチワークの丘など)、旭川市(旭山動物園)
知床半島は、世界自然遺産に登録されています。知床五湖は、その知床半島の自然を満喫できるハイキングコースです。美瑛町は、丘陵地に色とりどりの花が植えられたパッチワークのような風景が広がっています。旭川市にある旭山動物園は、人気がある動物園で、夜行性動物の見やすさが工夫されています。
3)東京都:浅草(浅草寺、仲見世通り、東京スカイツリー)、上野恩賜公園(上野の森美術館、国立西洋美術館など)、日本橋(三越前の日本国道路元標、R.M.V.日本橋など)
東京にも観光地はたくさんあります。浅草は、昔ながらの下町の雰囲気が残り、老若男女を問わず楽しめる観光地です。上野には、美術館や博物館が多く、上野恩賜公園(上野の森)内を散策するだけでも十分に楽しめます。日本橋は、国道路元標やR.M.V.日本橋など、歴史を感じられる橋であり、最新の技術を感じられる橋でもあります。
この他にも日本には魅力的な観光地がたくさんあります。ぜひ訪れてみてください。
"""
~~~~
# Evaluation results of [rakuda-benchmark](https://huggingface.co/datasets/yuzuai/rakuda-questions)
We evaluated the performance of houou-instruction-v3 using GPT-4 automatic evaluation.
As for the comparison model, we used gpt-3.5-turbo-1106.
| houou-instruction-v3<br>win rate| gpt-3.5-turbo-1106 win rate | tie |
| -- | -- |-- |
| **72.5** | 27.5 | 0 |
---
# License
[The llama2 license](https://ai.meta.com/llama/license/)
We have permission from RIKEN to use RIKEN's instruction data for this initiative. |
jamesLeeeeeee/bpe-tokenizer | jamesLeeeeeee | 2024-03-11T05:23:28Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-11T05:23:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Sumail/Alchemist_04_base1_2b | Sumail | 2024-03-11T05:11:31Z | 91 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"mergewss]",
"mergekit",
"lazymergekit",
"zzttbrdd/sn6_20_new",
"deepnetguy/gemma-64",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-11T05:08:58Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- zzttbrdd/sn6_20_new
- deepnetguy/gemma-64
---
# Alchemist_04_base1_2b
Alchemist_04_base1_2b is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [zzttbrdd/sn6_20_new](https://huggingface.co/zzttbrdd/sn6_20_new)
* [deepnetguy/gemma-64](https://huggingface.co/deepnetguy/gemma-64)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: zzttbrdd/sn6_20_new
layer_range: [0, 18]
- model: deepnetguy/gemma-64
layer_range: [0, 18]
merge_method: slerp
base_model: zzttbrdd/sn6_20_new
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
StaAhmed/refined_model | StaAhmed | 2024-03-11T05:07:25Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2024-03-10T14:09:00Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
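Reconstructed as a transformers `BitsAndBytesConfig`, the settings above amount to the following (a minimal sketch of how the base model would be loaded for QLoRA-style training; the base model is not stated in this card, so the repo id below is a placeholder):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
# "base-model-id" is a placeholder: the base model is not stated in this card.
model = AutoModelForCausalLM.from_pretrained("base-model-id", quantization_config=bnb_config)
```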
### Framework versions
- PEFT 0.4.0
|
EricValen/rl_course_vizdoom_health_gathering_supreme | EricValen | 2024-03-11T05:03:48Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-03-11T03:26:31Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.17 +/- 4.02
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r EricValen/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
|
sarak7/H4_311_769_v3 | sarak7 | 2024-03-11T05:02:19Z | 178 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-11T05:00:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ridhism/Hridhay | ridhism | 2024-03-11T04:57:58Z | 0 | 0 | null | [
"en",
"dataset:HuggingFaceTB/cosmopedia",
"license:apache-2.0",
"region:us"
]
| null | 2024-03-11T04:55:10Z | ---
license: apache-2.0
datasets:
- HuggingFaceTB/cosmopedia
language:
- en
metrics:
- accuracy
--- |
Harshad018/trained-gpt2-tweet-analysis | Harshad018 | 2024-03-11T04:54:39Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-03-11T04:53:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lole25/zephyr-7b-dpo-qlora | lole25 | 2024-03-11T04:53:51Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"mistral",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2024-02-16T00:14:52Z | ---
license: apache-2.0
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrafeedback_binarized
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: zephyr-7b-dpo-qlora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-dpo-qlora
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-qlora](https://huggingface.co/alignment-handbook/zephyr-7b-sft-qlora) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6813
- Rewards/chosen: -0.0009
- Rewards/rejected: -0.0252
- Rewards/accuracies: 0.2920
- Rewards/margins: 0.0243
- Logps/rejected: -71.3009
- Logps/chosen: -65.4449
- Logits/rejected: -2.4428
- Logits/chosen: -2.4444
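
A minimal sketch for loading this DPO-trained LoRA adapter on top of its Mistral base (assumes `peft` is installed; the dtype and device placement below are illustrative choices, not documented settings):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "lole25/zephyr-7b-dpo-qlora")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
```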
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.69 | 0.26 | 100 | 0.6897 | 0.0232 | 0.0168 | 0.2680 | 0.0064 | -67.1001 | -63.0342 | -2.4904 | -2.4911 |
| 0.6869 | 0.52 | 200 | 0.6849 | 0.0066 | -0.0092 | 0.3060 | 0.0159 | -69.7060 | -64.6950 | -2.4556 | -2.4573 |
| 0.681 | 0.78 | 300 | 0.6815 | -0.0026 | -0.0264 | 0.2880 | 0.0238 | -71.4280 | -65.6224 | -2.4430 | -2.4446 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.14.6
- Tokenizers 0.15.2 |
nadika/nepali_complaints_classification_nepbert3 | nadika | 2024-03-11T04:50:48Z | 94 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:Rajan/NepaliBERT",
"base_model:finetune:Rajan/NepaliBERT",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-03-11T03:46:15Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
base_model: Rajan/NepaliBERT
model-index:
- name: nepali_complaints_classification_nepbert3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nepali_complaints_classification_nepbert3
This model is a fine-tuned version of [Rajan/NepaliBERT](https://huggingface.co/Rajan/NepaliBERT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2687
- Accuracy: 0.9494
- F1-score: 0.9483
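
A minimal inference sketch (the label names returned depend on the label mapping used during fine-tuning, which is not documented here):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="nadika/nepali_complaints_classification_nepbert3",
)
# Placeholder Nepali complaint text.
print(classifier("यहाँ उजुरीको पाठ राख्नुहोस्"))
```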
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1-score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 1.4921 | 0.22 | 500 | 0.8642 | 0.7235 | 0.7143 |
| 0.7781 | 0.45 | 1000 | 0.6241 | 0.7974 | 0.7923 |
| 0.5865 | 0.67 | 1500 | 0.5342 | 0.8243 | 0.8125 |
| 0.4625 | 0.89 | 2000 | 0.4250 | 0.8576 | 0.8553 |
| 0.3648 | 1.11 | 2500 | 0.3856 | 0.8759 | 0.8725 |
| 0.3001 | 1.34 | 3000 | 0.3424 | 0.8899 | 0.8891 |
| 0.2723 | 1.56 | 3500 | 0.3199 | 0.9007 | 0.8981 |
| 0.2538 | 1.78 | 4000 | 0.2898 | 0.9085 | 0.9066 |
| 0.231 | 2.01 | 4500 | 0.2676 | 0.9203 | 0.9189 |
| 0.1478 | 2.23 | 5000 | 0.3029 | 0.9210 | 0.9187 |
| 0.1666 | 2.45 | 5500 | 0.2580 | 0.9283 | 0.9271 |
| 0.1519 | 2.67 | 6000 | 0.2573 | 0.9308 | 0.9292 |
| 0.1498 | 2.9 | 6500 | 0.2746 | 0.9328 | 0.9306 |
| 0.1112 | 3.12 | 7000 | 0.2564 | 0.9398 | 0.9389 |
| 0.0903 | 3.34 | 7500 | 0.2726 | 0.9403 | 0.9393 |
| 0.1036 | 3.57 | 8000 | 0.2664 | 0.9398 | 0.9385 |
| 0.1043 | 3.79 | 8500 | 0.2614 | 0.9459 | 0.9447 |
| 0.0972 | 4.01 | 9000 | 0.2499 | 0.9453 | 0.9443 |
| 0.0663 | 4.23 | 9500 | 0.2643 | 0.9469 | 0.9458 |
| 0.0683 | 4.46 | 10000 | 0.2688 | 0.9474 | 0.9462 |
| 0.0671 | 4.68 | 10500 | 0.2657 | 0.9491 | 0.9481 |
| 0.0605 | 4.9 | 11000 | 0.2687 | 0.9494 | 0.9483 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ekato/FujiiKaze | ekato | 2024-03-11T04:44:38Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail",
"region:us"
]
| text-to-image | 2024-03-11T04:44:21Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/1000018412.jpg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
license: openrail
---
# FujiiKaze
<Gallery />
## Download model
[Download](/ekato/FujiiKaze/tree/main) them in the Files & versions tab.
|
luminique-silk-cream/luminique-silk-cream | luminique-silk-cream | 2024-03-11T04:39:27Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"Luminique Silk Cream",
"en",
"license:bsd-3-clause",
"region:us"
]
| null | 2024-03-11T04:38:13Z | ---
license: bsd-3-clause
language:
- en
library_name: sentence-transformers
tags:
- Luminique Silk Cream
---
[Luminique Silk Cream](https://atozsupplement.com/luminique-silk-cream/) Enhanced Hydration: Ingredients like hyaluronic acid and glycerin deeply hydrate the skin, plumping it up and minimizing the appearance of dryness and dehydration lines. Evened Complexion: Anti-aging serums may include ingredients like vitamin C, niacinamide, or alpha hydroxy acids (AHAs) that help fade dark spots and hyperpigmentation and promote a more even skin tone.
VISIT HERE FOR OFFICIAL WEBSITE: https://atozsupplement.com/luminique-silk-cream/
|
Deeksha04/PlantDetectTask1 | Deeksha04 | 2024-03-11T04:19:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-02T09:32:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ekato/Ayame | ekato | 2024-03-11T04:16:50Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail",
"region:us"
]
| text-to-image | 2024-03-11T04:16:35Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/1000018408.jpg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
license: openrail
---
# Ayame
<Gallery />
## Download model
[Download](/ekato/Ayame/tree/main) them in the Files & versions tab.
|
nlp-waseda/comet-gpt2-xl-japanese | nlp-waseda | 2024-03-11T04:16:02Z | 69 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"ja",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-26T13:37:52Z | ---
language: ja
widget:
- text: X が 部屋 で ゲーム するxEffect
pipeline_tag: text-generation
license: cc-by-sa-4.0
---
# COMET-GPT2 ja v2
Fine-tuned GPT-2 XL on the large version of [ATOMIC ja](https://github.com/nlp-waseda/comet-atomic-ja) using a causal language modeling (CLM) objective.
The original version and the large version of ATOMIC ja were introduced in [this paper](https://www.anlp.jp/proceedings/annual_meeting/2023/pdf_dir/B2-5.pdf) and in [this paper](https://www.anlp.jp/proceedings/annual_meeting/2023/pdf_dir/B9-1.pdf), respectively.
### How to use
You can use this model directly with a pipeline for text generation.
Since the generation relies on some randomness, we set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='nlp-waseda/comet-gpt2-xl-japanese')
>>> set_seed(42)
>>> generator('X が 副業 を 始めるxEffect', max_length=30, num_return_sequences=5, do_sample=True)
[{'generated_text': 'X が 副業 を 始めるxEffect X が 収入 を 得る'},
{'generated_text': 'X が 副業 を 始めるxEffect X が 時間 を 失う'},
{'generated_text': 'X が 副業 を 始めるxEffect X が 儲かる'},
{'generated_text': 'X が 副業 を 始めるxEffect X が 稼ぐ'},
{'generated_text': 'X が 副業 を 始めるxEffect X が 稼げる ように なる'}]
```
### Preprocessing
The texts are segmented into words using Juman++ and tokenized using SentencePiece.
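To match this preprocessing at inference time, input events should be segmented with Juman++ before being passed to the pipeline. A minimal sketch using `pyknp` (assumes Juman++ is installed locally):

```python
from pyknp import Juman

juman = Juman()  # requires a local Juman++ installation
event = "Xが副業を始める"
segmented = " ".join(m.midasi for m in juman.analysis(event).mrph_list())
prompt = segmented + "xEffect"  # e.g. "X が 副業 を 始めるxEffect"
```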
## Evaluation results
The model achieves the following results:
| BLEU | BERTScore |
|:-----:|:---------:|
| - | - |
### BibTeX entry and citation info
```bibtex
@InProceedings{ide_nlp2023_event,
author = "井手竜也 and 村田栄樹 and 堀尾海斗 and 河原大輔 and 山崎天 and 李聖哲 and 新里顕大 and 佐藤敏紀",
title = "人間と言語モデルに対するプロンプトを用いたゼロからのイベント常識知識グラフ構築",
booktitle = "言語処理学会第29回年次大会",
year = "2023",
url = "https://www.anlp.jp/proceedings/annual_meeting/2023/pdf_dir/B2-5.pdf"
note = "in Japanese"
}
@InProceedings{murata_nlp2023,
author = "村田栄樹 and 井手竜也 and 榮田亮真 and 河原大輔 and 山崎天 and 李聖哲 and 新里顕大 and 佐藤敏紀",
title = "大規模言語モデルによって構築された常識知識グラフの拡大と低コストフィルタリング",
booktitle = "言語処理学会第29回年次大会",
year = "2023",
url = "https://www.anlp.jp/proceedings/annual_meeting/2023/pdf_dir/B9-1.pdf"
note = "in Japanese"
}
``` |
nbeerbower/StrangeBru-7B | nbeerbower | 2024-03-11T04:14:57Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:nbeerbower/bruphin-theta",
"base_model:merge:nbeerbower/bruphin-theta",
"base_model:nbeerbower/strange_3236-7B",
"base_model:merge:nbeerbower/strange_3236-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-11T04:12:05Z | ---
license: apache-2.0
base_model:
- nbeerbower/strange_3236-7B
- nbeerbower/bruphin-theta
library_name: transformers
tags:
- mergekit
- merge
---
# StrangeBru-7B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [nbeerbower/strange_3236-7B](https://huggingface.co/nbeerbower/strange_3236-7B)
* [nbeerbower/bruphin-theta](https://huggingface.co/nbeerbower/bruphin-theta)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: nbeerbower/bruphin-theta
layer_range: [0, 32]
- model: nbeerbower/strange_3236-7B
layer_range: [0, 32]
merge_method: slerp
base_model: nbeerbower/strange_3236-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
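
A minimal loading sketch for the merged checkpoint (bfloat16 to match the merge dtype above; `device_map="auto"` assumes `accelerate` is installed):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "nbeerbower/StrangeBru-7B"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("The strangest brew I ever tasted was", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```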
|
quipohealth/Classfication | quipohealth | 2024-03-11T04:14:14Z | 32 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2024-03-11T04:08:31Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: DonutProcessor_Detail
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DonutProcessor_Detail
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
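A minimal inference sketch for this Donut-based model (assumes the processor was pushed to the same repo; the task prompt used during fine-tuning is not documented, so `<s>` below is a placeholder):

```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "quipohealth/Classfication"
processor = DonutProcessor.from_pretrained(repo)   # assumes the processor was pushed with the model
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("document.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values
# Placeholder task prompt: the prompt used during fine-tuning is not documented.
decoder_input_ids = processor.tokenizer("<s>", add_special_tokens=False, return_tensors="pt").input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```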
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1
- Datasets 2.13.2
- Tokenizers 0.13.3
|
sarak7/H4_311_769_v1 | sarak7 | 2024-03-11T04:13:28Z | 178 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-11T04:11:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
alinerodrigues/wav2vec2-large-xlsr-mecita-coraa-portuguese-all-grade-2-3-5 | alinerodrigues | 2024-03-11T04:06:01Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-03-10T23:26:18Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-large-xlsr-mecita-coraa-portuguese-all-grade-2-3-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-mecita-coraa-portuguese-all-grade-2-3-5
This model is a fine-tuned version of [Edresson/wav2vec2-large-xlsr-coraa-portuguese](https://huggingface.co/Edresson/wav2vec2-large-xlsr-coraa-portuguese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1701
- Wer: 0.0874
- Cer: 0.0290
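A minimal usage sketch with the 🤗 `pipeline` API (the audio filename is illustrative):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="alinerodrigues/wav2vec2-large-xlsr-mecita-coraa-portuguese-all-grade-2-3-5",
)
print(asr("sample.wav")["text"])
```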
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 34.6826 | 1.0 | 79 | 3.4138 | 1.0 | 1.0 |
| 8.8564 | 2.0 | 158 | 3.0441 | 1.0 | 1.0 |
| 3.072 | 3.0 | 237 | 2.9375 | 1.0 | 1.0 |
| 2.9362 | 4.0 | 316 | 2.9142 | 1.0 | 1.0 |
| 2.9362 | 5.0 | 395 | 2.8392 | 1.0 | 1.0 |
| 2.8915 | 6.0 | 474 | 1.7012 | 1.0 | 0.4878 |
| 2.358 | 7.0 | 553 | 0.6235 | 0.3294 | 0.0845 |
| 1.124 | 8.0 | 632 | 0.4194 | 0.1922 | 0.0570 |
| 0.6757 | 9.0 | 711 | 0.3398 | 0.1693 | 0.0499 |
| 0.6757 | 10.0 | 790 | 0.2962 | 0.1388 | 0.0443 |
| 0.5066 | 11.0 | 869 | 0.2657 | 0.1209 | 0.0396 |
| 0.4407 | 12.0 | 948 | 0.2451 | 0.1151 | 0.0383 |
| 0.4235 | 13.0 | 1027 | 0.2372 | 0.1080 | 0.0367 |
| 0.372 | 14.0 | 1106 | 0.2301 | 0.1022 | 0.0359 |
| 0.372 | 15.0 | 1185 | 0.2188 | 0.1035 | 0.0365 |
| 0.3333 | 16.0 | 1264 | 0.2082 | 0.1048 | 0.0353 |
| 0.2817 | 17.0 | 1343 | 0.2067 | 0.0998 | 0.0342 |
| 0.2744 | 18.0 | 1422 | 0.2084 | 0.0964 | 0.0338 |
| 0.2693 | 19.0 | 1501 | 0.2011 | 0.0966 | 0.0330 |
| 0.2693 | 20.0 | 1580 | 0.1975 | 0.0979 | 0.0335 |
| 0.2444 | 21.0 | 1659 | 0.1932 | 0.0956 | 0.0334 |
| 0.2258 | 22.0 | 1738 | 0.1884 | 0.0924 | 0.0317 |
| 0.2348 | 23.0 | 1817 | 0.1875 | 0.0932 | 0.0324 |
| 0.2348 | 24.0 | 1896 | 0.1780 | 0.0948 | 0.0323 |
| 0.2146 | 25.0 | 1975 | 0.1819 | 0.0935 | 0.0319 |
| 0.2157 | 26.0 | 2054 | 0.1809 | 0.0903 | 0.0310 |
| 0.1913 | 27.0 | 2133 | 0.1770 | 0.0924 | 0.0316 |
| 0.206 | 28.0 | 2212 | 0.1808 | 0.0893 | 0.0314 |
| 0.206 | 29.0 | 2291 | 0.1822 | 0.0885 | 0.0313 |
| 0.1797 | 30.0 | 2370 | 0.1761 | 0.0903 | 0.0306 |
| 0.1918 | 31.0 | 2449 | 0.1786 | 0.0903 | 0.0306 |
| 0.1819 | 32.0 | 2528 | 0.1821 | 0.0898 | 0.0308 |
| 0.1805 | 33.0 | 2607 | 0.1849 | 0.0885 | 0.0310 |
| 0.1805 | 34.0 | 2686 | 0.1817 | 0.0864 | 0.0314 |
| 0.1708 | 35.0 | 2765 | 0.1839 | 0.0882 | 0.0316 |
| 0.1734 | 36.0 | 2844 | 0.1817 | 0.0872 | 0.0321 |
| 0.161 | 37.0 | 2923 | 0.1824 | 0.0906 | 0.0319 |
| 0.154 | 38.0 | 3002 | 0.1804 | 0.0885 | 0.0314 |
| 0.154 | 39.0 | 3081 | 0.1782 | 0.0864 | 0.0305 |
| 0.1604 | 40.0 | 3160 | 0.1751 | 0.0858 | 0.0301 |
| 0.1631 | 41.0 | 3239 | 0.1719 | 0.0840 | 0.0298 |
| 0.1542 | 42.0 | 3318 | 0.1744 | 0.0858 | 0.0304 |
| 0.1542 | 43.0 | 3397 | 0.1742 | 0.0893 | 0.0308 |
| 0.1658 | 44.0 | 3476 | 0.1744 | 0.0874 | 0.0299 |
| 0.157 | 45.0 | 3555 | 0.1745 | 0.0887 | 0.0299 |
| 0.1451 | 46.0 | 3634 | 0.1755 | 0.0861 | 0.0296 |
| 0.1512 | 47.0 | 3713 | 0.1737 | 0.0911 | 0.0299 |
| 0.1512 | 48.0 | 3792 | 0.1722 | 0.0882 | 0.0295 |
| 0.1484 | 49.0 | 3871 | 0.1722 | 0.0837 | 0.0288 |
| 0.1343 | 50.0 | 3950 | 0.1744 | 0.0856 | 0.0294 |
| 0.1403 | 51.0 | 4029 | 0.1701 | 0.0874 | 0.0290 |
| 0.1334 | 52.0 | 4108 | 0.1770 | 0.0877 | 0.0298 |
| 0.1334 | 53.0 | 4187 | 0.1720 | 0.0872 | 0.0296 |
| 0.1345 | 54.0 | 4266 | 0.1738 | 0.0848 | 0.0287 |
| 0.1183 | 55.0 | 4345 | 0.1705 | 0.0866 | 0.0290 |
| 0.1328 | 56.0 | 4424 | 0.1738 | 0.0848 | 0.0289 |
| 0.1261 | 57.0 | 4503 | 0.1758 | 0.0864 | 0.0297 |
| 0.1261 | 58.0 | 4582 | 0.1770 | 0.0824 | 0.0285 |
| 0.1405 | 59.0 | 4661 | 0.1766 | 0.0879 | 0.0297 |
| 0.1164 | 60.0 | 4740 | 0.1753 | 0.0816 | 0.0286 |
| 0.1326 | 61.0 | 4819 | 0.1770 | 0.0861 | 0.0290 |
| 0.1326 | 62.0 | 4898 | 0.1725 | 0.0856 | 0.0294 |
| 0.1209 | 63.0 | 4977 | 0.1779 | 0.0840 | 0.0292 |
| 0.1412 | 64.0 | 5056 | 0.1753 | 0.0832 | 0.0282 |
| 0.1226 | 65.0 | 5135 | 0.1764 | 0.0840 | 0.0285 |
| 0.1187 | 66.0 | 5214 | 0.1813 | 0.0793 | 0.0276 |
| 0.1187 | 67.0 | 5293 | 0.1785 | 0.0798 | 0.0277 |
| 0.1182 | 68.0 | 5372 | 0.1771 | 0.0824 | 0.0279 |
| 0.1178 | 69.0 | 5451 | 0.1798 | 0.0843 | 0.0285 |
| 0.1289 | 70.0 | 5530 | 0.1798 | 0.0866 | 0.0292 |
| 0.1321 | 71.0 | 5609 | 0.1803 | 0.0843 | 0.0286 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.0
- Tokenizers 0.13.3
|
Zhengyi/CRM | Zhengyi | 2024-03-11T03:58:51Z | 0 | 52 | null | [
"image-to-3d",
"arxiv:2403.05034",
"license:mit",
"region:us"
]
| image-to-3d | 2024-03-06T06:47:22Z | ---
license: mit
pipeline_tag: image-to-3d
tags:
- image-to-3d
---
# Convolutional Reconstruction Model
Model card for *CRM: Single Image to 3D Textured Mesh with Convolutional Reconstruction Model*.
Project Page: https://ml.cs.tsinghua.edu.cn/~zhengyi/CRM/
arXiv: https://arxiv.org/abs/2403.05034

The model contains a diffusion model to generate multi-view images from a single input image, another diffusion model to generate CCMs (canonical coordinate maps), and a UNet-based reconstruction model to produce the final textured mesh.
## Citation
```
@article{wang2024crm,
title={CRM: Single Image to 3D Textured Mesh with Convolutional Reconstruction Model},
author={Zhengyi Wang and Yikai Wang and Yifei Chen and Chendong Xiang and Shuo Chen and Dajiang Yu and Chongxuan Li and Hang Su and Jun Zhu},
journal={arXiv preprint arXiv:2403.05034},
year={2024}
}
```
|
aken12/splade-japanese | aken12 | 2024-03-11T03:56:10Z | 132 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"ja",
"dataset:unicamp-dl/mmarco",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2024-03-03T09:01:57Z | ---
license: cc-by-sa-4.0
datasets:
- unicamp-dl/mmarco
language:
- ja
---
We initialize SPLADE-japanese from [tohoku-nlp/bert-base-japanese-v2](https://huggingface.co/tohoku-nlp/bert-base-japanese-v2).
This model is trained on the Japanese subset of the [mMARCO](https://github.com/unicamp-dl/mMARCO) dataset.
```python
from transformers import AutoModelForMaskedLM,AutoTokenizer
import torch
import numpy as np
model = AutoModelForMaskedLM.from_pretrained("aken12/splade-japanese")
tokenizer = AutoTokenizer.from_pretrained("aken12/splade-japanese")
query = "筑波大学では何の研究が行われているか?"
def encode_query(query, tokenizer, model):
encoded_input = tokenizer(query, return_tensors="pt")
with torch.no_grad():
output = model(**encoded_input, return_dict=True).logits
aggregated_output, _ = torch.max(torch.log(1 + torch.relu(output)) * encoded_input['attention_mask'].unsqueeze(-1), dim=1)
return aggregated_output
def get_topk_tokens(reps, vocab_dict, topk):
topk_values, topk_indices = torch.topk(reps, topk, dim=1)
values = np.rint(topk_values.numpy() * 100).astype(int)
dict_splade = {vocab_dict[id_token.item()]: int(value_token) for id_token, value_token in zip(topk_indices[0], values[0]) if value_token > 0}
return dict_splade
vocab_dict = {v: k for k, v in tokenizer.get_vocab().items()}
topk = len(vocab_dict) // 1000
model_output = encode_query(query, tokenizer, model)
dict_splade = get_topk_tokens(model_output, vocab_dict, topk)
for token, value in dict_splade.items():
print(token, value)
```
## output
```
に 250
が 248
は 247
の 247
、 244
と 240
を 239
。 239
も 238
で 237
から 221
や 219
な 206
筑波 204
( 204
・ 202
て 197
へ 191
にて 189
など 188
) 186
まで 184
た 182
この 171
- 170
「 170
より 166
その 165
: 163
」 161
```
|
tsavage68/mistralit2_500_STEPS_1e8_rate_03_beta_DPO | tsavage68 | 2024-03-11T03:49:18Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-11T02:12:46Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: mistralit2_500_STEPS_1e8_rate_03_beta_DPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistralit2_500_STEPS_1e8_rate_03_beta_DPO
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6903
- Rewards/chosen: -0.0048
- Rewards/rejected: -0.0113
- Rewards/accuracies: 0.5121
- Rewards/margins: 0.0065
- Logps/rejected: -28.6101
- Logps/chosen: -23.4018
- Logits/rejected: -2.8650
- Logits/chosen: -2.8653
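A minimal generation sketch, following the usage pattern of the base Mistral-Instruct model (the message and sampling parameters are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "tsavage68/mistralit2_500_STEPS_1e8_rate_03_beta_DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "Summarize what DPO training does."}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```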
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-08
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6911 | 0.1 | 50 | 0.6909 | 0.0027 | -0.0025 | 0.4967 | 0.0052 | -28.5807 | -23.3768 | -2.8653 | -2.8655 |
| 0.6916 | 0.2 | 100 | 0.6928 | -0.0010 | -0.0023 | 0.4571 | 0.0014 | -28.5802 | -23.3891 | -2.8653 | -2.8655 |
| 0.6931 | 0.29 | 150 | 0.6916 | -0.0047 | -0.0087 | 0.4659 | 0.0040 | -28.6014 | -23.4015 | -2.8652 | -2.8654 |
| 0.6922 | 0.39 | 200 | 0.6914 | -0.0046 | -0.0090 | 0.4681 | 0.0044 | -28.6024 | -23.4011 | -2.8651 | -2.8654 |
| 0.6921 | 0.49 | 250 | 0.6927 | -0.0086 | -0.0103 | 0.4747 | 0.0017 | -28.6067 | -23.4145 | -2.8651 | -2.8653 |
| 0.6938 | 0.59 | 300 | 0.6916 | -0.0092 | -0.0132 | 0.4835 | 0.0040 | -28.6163 | -23.4163 | -2.8651 | -2.8654 |
| 0.6976 | 0.68 | 350 | 0.6907 | -0.0058 | -0.0116 | 0.4747 | 0.0058 | -28.6111 | -23.4052 | -2.8651 | -2.8654 |
| 0.6918 | 0.78 | 400 | 0.6902 | -0.0069 | -0.0137 | 0.4967 | 0.0068 | -28.6182 | -23.4089 | -2.8651 | -2.8653 |
| 0.6862 | 0.88 | 450 | 0.6903 | -0.0048 | -0.0113 | 0.5121 | 0.0065 | -28.6101 | -23.4018 | -2.8650 | -2.8653 |
| 0.6946 | 0.98 | 500 | 0.6903 | -0.0048 | -0.0113 | 0.5121 | 0.0065 | -28.6101 | -23.4018 | -2.8650 | -2.8653 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.0.0+cu117
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Sumail/Alchemist_02_2b | Sumail | 2024-03-11T03:46:06Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"mergewss]",
"mergekit",
"lazymergekit",
"deepnetguy/gemma-64",
"Aspik101/minigemma_ft9",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-11T03:41:57Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- deepnetguy/gemma-64
- Aspik101/minigemma_ft9
---
# Alchemist_02_2b
Alchemist_02_2b is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [deepnetguy/gemma-64](https://huggingface.co/deepnetguy/gemma-64)
* [Aspik101/minigemma_ft9](https://huggingface.co/Aspik101/minigemma_ft9)
## 🧩 Configuration
```yaml
models:
- model: deepnet/SN6-71G5
# no parameters necessary for base model
- model: deepnetguy/gemma-64
parameters:
density: 0.5
weight: 0.3
- model: Aspik101/minigemma_ft9
parameters:
density: 0.5
weight: 0.5
merge_method: ties
base_model: deepnet/SN6-71G5
parameters:
normalize: true
dtype: bfloat16
``` |
Sumail/Alchemist_01_2b | Sumail | 2024-03-11T03:20:35Z | 91 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"mergewss]",
"mergekit",
"lazymergekit",
"Aspik101/minigemma_ft9",
"deepnetguy/gemma-64",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-11T03:18:07Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Aspik101/minigemma_ft9
- deepnetguy/gemma-64
---
# Alchemist_01_2b
Alchemist_01_2b is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [Aspik101/minigemma_ft9](https://huggingface.co/Aspik101/minigemma_ft9)
* [deepnetguy/gemma-64](https://huggingface.co/deepnetguy/gemma-64)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Aspik101/minigemma_ft9
layer_range: [0, 18]
- model: deepnetguy/gemma-64
layer_range: [0, 18]
merge_method: slerp
base_model: Aspik101/minigemma_ft9
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
wongctroman/fine-tuned-cloudy-sentence-transformer-4 | wongctroman | 2024-03-11T03:19:21Z | 45 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-03-11T01:39:34Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# wongctroman/fine-tuned-cloudy-sentence-transformer-4
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('wongctroman/fine-tuned-cloudy-sentence-transformer-4')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=wongctroman/fine-tuned-cloudy-sentence-transformer-4)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3 with parameters:
```
{'batch_size': 5, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 500,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
wongctroman/fine-tuned-cloudy-sentence-transformer-6 | wongctroman | 2024-03-11T03:19:05Z | 48 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-03-11T03:11:55Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# wongctroman/fine-tuned-cloudy-sentence-transformer-6
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('wongctroman/fine-tuned-cloudy-sentence-transformer-6')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=wongctroman/fine-tuned-cloudy-sentence-transformer-6)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3 with parameters:
```
{'batch_size': 5, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 15,
"evaluation_steps": 500,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
smjmouse/zx80zx81b | smjmouse | 2024-03-11T03:16:47Z | 0 | 0 | fastai | [
"fastai",
"region:us"
]
| null | 2024-03-11T03:16:23Z | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
automerger/Experiment29Pastiche-7B | automerger | 2024-03-11T03:14:08Z | 49 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"base_model:CorticalStack/pastiche-crown-clown-7b-dare",
"base_model:merge:CorticalStack/pastiche-crown-clown-7b-dare",
"base_model:yam-peleg/Experiment29-7B",
"base_model:merge:yam-peleg/Experiment29-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-10T22:55:51Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- automerger
base_model:
- yam-peleg/Experiment29-7B
- CorticalStack/pastiche-crown-clown-7b-dare
---
# Experiment29Pastiche-7B
Experiment29Pastiche-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
* [yam-peleg/Experiment29-7B](https://huggingface.co/yam-peleg/Experiment29-7B)
* [CorticalStack/pastiche-crown-clown-7b-dare](https://huggingface.co/CorticalStack/pastiche-crown-clown-7b-dare)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: yam-peleg/Experiment29-7B
layer_range: [0, 32]
- model: CorticalStack/pastiche-crown-clown-7b-dare
layer_range: [0, 32]
merge_method: slerp
base_model: yam-peleg/Experiment29-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/Experiment29Pastiche-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
wongctroman/fine-tuned-cloudy-sentence-transformer-5 | wongctroman | 2024-03-11T03:09:34Z | 48 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-03-11T03:08:17Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# wongctroman/fine-tuned-cloudy-sentence-transformer-5
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('wongctroman/fine-tuned-cloudy-sentence-transformer-5')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=wongctroman/fine-tuned-cloudy-sentence-transformer-5)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3 with parameters:
```
{'batch_size': 5, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 20,
"evaluation_steps": 500,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
err00r/peft-starcoder-lora-a100 | err00r | 2024-03-11T03:02:04Z | 2 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:bigcode/starcoderbase-1b",
"base_model:adapter:bigcode/starcoderbase-1b",
"license:bigcode-openrail-m",
"region:us"
]
| null | 2024-03-08T08:30:00Z | ---
license: bigcode-openrail-m
library_name: peft
tags:
- generated_from_trainer
base_model: bigcode/starcoderbase-1b
model-index:
- name: peft-starcoder-lora-a100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# peft-starcoder-lora-a100
This model is a fine-tuned version of [bigcode/starcoderbase-1b](https://huggingface.co/bigcode/starcoderbase-1b) on an unknown dataset.
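A minimal sketch for loading the LoRA adapter on top of the named base model with 🤗 PEFT (the prompt and generation settings are illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("bigcode/starcoderbase-1b")
model = PeftModel.from_pretrained(base, "err00r/peft-starcoder-lora-a100")
tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoderbase-1b")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```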
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 100
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
grace-pro/oops_i_did_it_again_eval_hans_full_set | grace-pro | 2024-03-11T02:56:38Z | 2 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
]
| null | 2024-03-11T02:55:26Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: oops_i_did_it_again_eval_hans_full_set
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# oops_i_did_it_again_eval_hans_full_set
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8314
- Precision: 0.7598
- Recall: 0.2665
- F1-score: 0.3946
- Accuracy: 0.5911
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1-score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:--------:|:--------:|
| 0.5414 | 1.0 | 24544 | 1.8314 | 0.7598 | 0.2665 | 0.3946 | 0.5911 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
johnnyluhk/ppo-SnowballTarget | johnnyluhk | 2024-03-11T02:56:23Z | 19 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2024-03-11T02:56:20Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial that teaches you how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: johnnyluhk/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
essiam/clean_art_cat | essiam | 2024-03-11T02:55:52Z | 0 | 1 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2024-03-11T02:44:09Z | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- dreambooth
- stable-diffusion
- stable-diffusion-diffusers
inference: true
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of 686HenrietteRonnerKnip859 cat
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - essiam/clean_art_cat
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of 686HenrietteRonnerKnip859 cat using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: True.
## Intended uses & limitations
#### How to use
```python
# A minimal sketch, assuming standard diffusers usage for this DreamBooth SD 1.4 checkpoint.
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained("essiam/clean_art_cat", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of 686HenrietteRonnerKnip859 cat").images[0]
image.save("cat.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
Reemvn/distilroberta-base | Reemvn | 2024-03-11T02:43:33Z | 46 | 0 | transformers | [
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-03-10T23:41:57Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: Reemvn/distilroberta-base
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Reemvn/distilroberta-base
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0947
- Validation Loss: 0.1512
- Train Accuracy: 0.9455
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 5000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.1343 | 0.1610 | 0.945 | 0 |
| 0.1097 | 0.1589 | 0.949 | 1 |
| 0.0947 | 0.1512 | 0.9455 | 2 |
### Framework versions
- Transformers 4.38.2
- TensorFlow 2.15.0
- Datasets 2.18.0
- Tokenizers 0.15.2
|
shamekhjr/ppo-LunarLander-v2 | shamekhjr | 2024-03-11T02:42:41Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-03-07T21:56:43Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 272.33 +/- 25.12
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename below is assumed; verify it against the files in this repo.
checkpoint = load_from_hub("shamekhjr/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
jsfs11/WestFrankenMoev2-9B | jsfs11 | 2024-03-11T02:36:51Z | 10 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"jsfs11/WestLakeSeverusV2-DPO-7B-DARE-TA",
"jsfs11/WONMSeverusDevil-TIES-7B",
"base_model:jsfs11/WONMSeverusDevil-TIES-7B",
"base_model:merge:jsfs11/WONMSeverusDevil-TIES-7B",
"base_model:jsfs11/WestLakeSeverusV2-DPO-7B-DARE-TA",
"base_model:merge:jsfs11/WestLakeSeverusV2-DPO-7B-DARE-TA",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-11T02:29:14Z | ---
tags:
- merge
- mergekit
- lazymergekit
- jsfs11/WestLakeSeverusV2-DPO-7B-DARE-TA
- jsfs11/WONMSeverusDevil-TIES-7B
base_model:
- jsfs11/WestLakeSeverusV2-DPO-7B-DARE-TA
- jsfs11/WONMSeverusDevil-TIES-7B
---
# WestFrankenMoev2-9B
WestFrankenMoev2-9B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [jsfs11/WestLakeSeverusV2-DPO-7B-DARE-TA](https://huggingface.co/jsfs11/WestLakeSeverusV2-DPO-7B-DARE-TA)
* [jsfs11/WONMSeverusDevil-TIES-7B](https://huggingface.co/jsfs11/WONMSeverusDevil-TIES-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: jsfs11/WestLakeSeverusV2-DPO-7B-DARE-TA
layer_range: [0, 20]
- sources:
- model: jsfs11/WONMSeverusDevil-TIES-7B
layer_range: [12, 32]
merge_method: passthrough
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "jsfs11/WestFrankenMoev2-9B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
lilyray/albert_emotion | lilyray | 2024-03-11T02:28:15Z | 121 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"albert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:lilyray/albert_emotion",
"base_model:finetune:lilyray/albert_emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-03-10T17:30:03Z | ---
license: apache-2.0
base_model: lilyray/albert_emotion
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: albert_emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9295
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert_emotion
This model is a fine-tuned version of [lilyray/albert_emotion](https://huggingface.co/lilyray/albert_emotion) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2391
- Accuracy: 0.9295
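A minimal usage sketch with the 🤗 `pipeline` API (the example sentence is illustrative):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="lilyray/albert_emotion")
print(classifier("I can't believe how happy I am today!"))
```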
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9.363600088100325e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 19
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1744 | 1.0 | 4000 | 0.2001 | 0.938 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ntvcie/model | ntvcie | 2024-03-11T02:15:15Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:vilm/vinallama-7b-chat",
"base_model:quantized:vilm/vinallama-7b-chat",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-11T02:13:05Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: vilm/vinallama-7b-chat
---
# Uploaded model
- **Developed by:** ntvcie
- **License:** apache-2.0
- **Finetuned from model :** vilm/vinallama-7b-chat
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
automerger/PasticheAlloyingotneoy-7B | automerger | 2024-03-11T01:54:34Z | 50 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"base_model:CorticalStack/pastiche-crown-clown-7b-dare-dpo",
"base_model:merge:CorticalStack/pastiche-crown-clown-7b-dare-dpo",
"base_model:nlpguy/AlloyIngotNeoY",
"base_model:merge:nlpguy/AlloyIngotNeoY",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-11T01:53:45Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- automerger
base_model:
- CorticalStack/pastiche-crown-clown-7b-dare-dpo
- nlpguy/AlloyIngotNeoY
---
# PasticheAlloyingotneoy-7B
PasticheAlloyingotneoy-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
* [CorticalStack/pastiche-crown-clown-7b-dare-dpo](https://huggingface.co/CorticalStack/pastiche-crown-clown-7b-dare-dpo)
* [nlpguy/AlloyIngotNeoY](https://huggingface.co/nlpguy/AlloyIngotNeoY)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
layer_range: [0, 32]
- model: nlpguy/AlloyIngotNeoY
layer_range: [0, 32]
merge_method: slerp
base_model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/PasticheAlloyingotneoy-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
asedmammad/Contextual_KTO_Mistral_PairRM-GGUF | asedmammad | 2024-03-11T01:54:18Z | 83 | 2 | null | [
"gguf",
"kto",
"dpo",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"rl",
"rlaif",
"en",
"dataset:snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset",
"arxiv:2402.01306",
"base_model:ContextualAI/Contextual_KTO_Mistral_PairRM",
"base_model:quantized:ContextualAI/Contextual_KTO_Mistral_PairRM",
"license:apache-2.0",
"region:us",
"conversational"
]
| null | 2024-03-10T22:07:16Z | ---
base_model: ContextualAI/Contextual_KTO_Mistral_PairRM
inference: false
language:
- en
license: apache-2.0
tags:
- kto
- dpo
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- rl
- rlaif
datasets:
- snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset
metrics:
- accuracy
model_creator: ContextualAI
model_name: Contextual KTO Mistral PairRM
model_type: mistral
prompt_template: '<|user|>
{prompt}
<|assistant|>
'
quantized_by: Ased Mammad
---
# Contextual_KTO_Mistral_PairRM - GGUF
- Model creator: [ContextualAI](https://huggingface.co/ContextualAI)
- Original model: [Contextual_KTO_Mistral_PairRM](https://huggingface.co/ContextualAI/Contextual_KTO_Mistral_PairRM)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Contextual_KTO_Mistral_PairRM](https://huggingface.co/ContextualAI/Contextual_KTO_Mistral_PairRM).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- prompt-template start -->
## Prompt template
```
<|user|>
{prompt}
<|assistant|>
```
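A minimal sketch of rendering this template in Python before sending it to the model (the example question is illustrative):

```python
# Fill the template shown above; the newline placement follows the template as printed.
prompt = "<|user|>\n{question}\n<|assistant|>\n".format(
    question="What does KTO optimize during alignment?"
)
print(prompt)
```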
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [Contextual_KTO_Mistral_PairRM.Q2_K.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q2_K.gguf) | Q2_K | 2 | 2.72 GB| 5.22 GB | significant quality loss - not recommended for most purposes |
| [Contextual_KTO_Mistral_PairRM.Q3_K_S.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [Contextual_KTO_Mistral_PairRM.Q3_K_M.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [Contextual_KTO_Mistral_PairRM.Q3_K_L.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [Contextual_KTO_Mistral_PairRM.Q4_0.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Contextual_KTO_Mistral_PairRM.Q4_K_S.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [Contextual_KTO_Mistral_PairRM.Q4_K_M.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [Contextual_KTO_Mistral_PairRM.Q5_0.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Contextual_KTO_Mistral_PairRM.Q5_K_S.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [Contextual_KTO_Mistral_PairRM.Q5_K_M.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [Contextual_KTO_Mistral_PairRM.Q6_K.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [Contextual_KTO_Mistral_PairRM.Q8_0.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF and below it, a specific filename to download, such as: Contextual_KTO_Mistral_PairRM.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF Contextual_KTO_Mistral_PairRM.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF Contextual_KTO_Mistral_PairRM.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Contextual_KTO_Mistral_PairRM.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|user|>\n{prompt}\n<|assistant|>\n"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Contextual_KTO_Mistral_PairRM.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|user|>\n{prompt}<|assistant|>\n", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Contextual_KTO_Mistral_PairRM.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
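If you want tokens as they arrive rather than one final string, llama-cpp-python can also stream completions. A minimal sketch, reusing the `llm` object created in the example above (the chunk layout mirrors the OpenAI completion shape in recent 0.2.x releases, but check it against your installed version):
```python
# Streaming sketch: reuses the `llm` instance from the example above.
stream = llm(
    "<|user|>\nWrite a haiku about llamas.\n<|assistant|>\n",
    max_tokens=128,
    stream=True,  # Yield partial completions instead of a single final dict
)
for chunk in stream:
    # Each chunk carries the newly generated text in choices[0]["text"].
    print(chunk["choices"][0]["text"], end="", flush=True)
```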
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
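For example, a minimal LangChain + llama-cpp-python sketch (the `langchain_community` import path is current in recent LangChain releases; older versions expose `LlamaCpp` under `langchain.llms`):
```python
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./Contextual_KTO_Mistral_PairRM.Q4_K_M.gguf",
    n_ctx=32768,
    n_gpu_layers=35,  # Set to 0 if you have no GPU acceleration
    temperature=0.7,
)
# LangChain LLMs take a plain string; keep the model's TuluV2-style prompt format.
print(llm.invoke("<|user|>\nExplain KTO in one sentence.\n<|assistant|>\n"))
```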
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- original-model-card start -->
This repo contains the model and tokenizer checkpoints for:
- model family [<b>mistralai/Mistral-7B-Instruct-v0.2</b>](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
- optimized with the loss [<b>KTO</b>](https://twitter.com/winniethexu/status/1732839295365554643)
- aligned using the [snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset](https://huggingface.co/datasets/snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset)
- via 3 iterations of KTO on one epoch of each training partition, with each iteration's model serving as the reference for the next.
**[03/06/2024]**: We are #2 on the (verified) [Alpaca Eval 2.0 Leaderboard](https://tatsu-lab.github.io/alpaca_eval/) scoring **33.23**!
To prompt this model, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added at tokenization time and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
You may also use our tokenizer's `apply_chat_template` if doing inference with `chatml` set or evaluating generations through non-local clients.
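For instance, a hedged sketch with `transformers` (the repo id below assumes the original, non-GGUF checkpoint; substitute whichever checkpoint hosts the tokenizer you are using):
```python
from transformers import AutoTokenizer

# Assumed repo id for the original checkpoint -- adjust if your tokenizer lives elsewhere.
tokenizer = AutoTokenizer.from_pretrained("ContextualAI/Contextual_KTO_Mistral_PairRM")
messages = [{"role": "user", "content": "Hi! I'm looking for a cake recipe."}]
# tokenize=False returns the rendered prompt string; add_generation_prompt appends the assistant turn.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```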
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/) for more details on the methodology.
If you found this work useful, feel free to cite [our work](https://arxiv.org/abs/2402.01306):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
<!-- original-model-card end -->
|
jeonsiyun/layoutlmv3-v29-epoch25 | jeonsiyun | 2024-03-11T01:54:04Z | 119 | 0 | transformers | [
"transformers",
"safetensors",
"layoutlmv3",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-03-11T01:53:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
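The card is unfilled, so any usage snippet is necessarily a guess from the repo tags (`layoutlmv3`, `text-classification`). A hedged sketch, assuming a standard LayoutLMv3 sequence-classification checkpoint with OCR handled by pytesseract:

```python
from PIL import Image
from transformers import AutoModelForSequenceClassification, AutoProcessor

repo = "jeonsiyun/layoutlmv3-v29-epoch25"
processor = AutoProcessor.from_pretrained(repo, apply_ocr=True)  # apply_ocr requires pytesseract
model = AutoModelForSequenceClassification.from_pretrained(repo)

image = Image.open("document.png").convert("RGB")  # Hypothetical input document image
inputs = processor(image, return_tensors="pt")
pred = model(**inputs).logits.argmax(-1).item()
print(model.config.id2label[pred])
```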
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
shadowlight2784/Sonata_Dusk_Singing_Voice | shadowlight2784 | 2024-03-11T01:50:56Z | 0 | 1 | null | [
"region:us"
]
| null | 2023-08-25T23:43:01Z | Use with Retrieval-based Voice Conversion (RVC).
https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI |
coon-hound/lunarlander | coon-hound | 2024-03-11T01:47:10Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-03-11T01:13:57Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -276.01 +/- 133.95
name: mean_reward
verified: false
---
# This is Aaron's lunar lander. It landed on the moon successfully a few times.
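A hedged loading sketch with `huggingface_sb3` (the checkpoint filename below is an assumption — check the repo's file listing for the actual .zip name):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; replace it with the actual .zip in this repo's files.
checkpoint = load_from_hub(repo_id="coon-hound/lunarlander", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```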
|
ITT-AF/ITT-Yi-Ko-6B-v6.0 | ITT-AF | 2024-03-11T01:43:50Z | 55 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-07T02:52:58Z | ---
license: cc-by-nc-4.0
---
## ITT-AF/ITT-Yi-Ko-6B-v6.0
This model is a fine-tuned version of [beomi/Yi-Ko-6B](https://huggingface.co/beomi/Yi-Ko-6B) on a custom dataset.
### Model description
More information needed
### Intended uses & limitations
More information needed
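No usage example is provided; a minimal hedged sketch for loading a causal LM checkpoint of this size with `transformers`:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ITT-AF/ITT-Yi-Ko-6B-v6.0"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("The capital of Korea is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```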
### Training and evaluation data
More information needed
### Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
* learning_rate: 2e-05
* train_batch_size: 4
* eval_batch_size: 8
* seed: 42
* gradient_accumulation_steps: 8
* total_train_batch_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr_scheduler_type: linear
* num_epochs: 1.0
* mixed_precision_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.2
* Pytorch 2.1.2+cu121
* Datasets 2.0.0
* Tokenizers 0.15.0 |