modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
---|---|---|---|---|---|---|---|---|---|
Xenova/yolov8s-pose | Xenova | 2024-04-26T22:20:56Z | 4 | 1 | transformers.js | [
"transformers.js",
"onnx",
"yolov8",
"pose-estimation",
"license:agpl-3.0",
"region:us"
] | null | 2024-04-24T17:52:50Z | ---
library_name: transformers.js
tags:
- pose-estimation
license: agpl-3.0
---
YOLOv8s-pose with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```
**Example:** Perform pose-estimation w/ `Xenova/yolov8s-pose`.
```js
import { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers';
// Load model and processor
const model_id = 'Xenova/yolov8s-pose';
const model = await AutoModel.from_pretrained(model_id);
const processor = await AutoProcessor.from_pretrained(model_id);
// Read image and run processor
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg';
const image = await RawImage.read(url);
const { pixel_values } = await processor(image);
// Set thresholds
const threshold = 0.3; // Remove detections with low confidence
const iouThreshold = 0.5; // Used to remove duplicates
const pointThreshold = 0.3; // Hide uncertain points
// Predict bounding boxes and keypoints
const { output0 } = await model({ images: pixel_values });
// Post-process:
const permuted = output0[0].transpose(1, 0);
// `permuted` is a Tensor of shape [ 8400, 56 ]:
// - 8400 potential detections
// - 56 parameters for each box:
// - 4 for the bounding box dimensions (x-center, y-center, width, height)
// - 1 for the confidence score
// - 17 * 3 = 51 for the pose keypoints: 17 labels, each with (x, y, visibility)
// Example code to format it nicely:
const results = [];
const [scaledHeight, scaledWidth] = pixel_values.dims.slice(-2);
for (const [xc, yc, w, h, score, ...keypoints] of permuted.tolist()) {
    if (score < threshold) continue;

    // Get pixel values, taking into account the original image size
    const x1 = (xc - w / 2) / scaledWidth * image.width;
    const y1 = (yc - h / 2) / scaledHeight * image.height;
    const x2 = (xc + w / 2) / scaledWidth * image.width;
    const y2 = (yc + h / 2) / scaledHeight * image.height;
    results.push({ x1, x2, y1, y2, score, keypoints });
}

// Define helper functions
function removeDuplicates(detections, iouThreshold) {
    const filteredDetections = [];
    for (const detection of detections) {
        let isDuplicate = false;
        let duplicateIndex = -1;
        let maxIoU = 0;
        for (let i = 0; i < filteredDetections.length; ++i) {
            const filteredDetection = filteredDetections[i];
            const iou = calculateIoU(detection, filteredDetection);
            if (iou > iouThreshold) {
                isDuplicate = true;
                if (iou > maxIoU) {
                    maxIoU = iou;
                    duplicateIndex = i;
                }
            }
        }
        if (!isDuplicate) {
            filteredDetections.push(detection);
        } else if (duplicateIndex !== -1 && detection.score > filteredDetections[duplicateIndex].score) {
            filteredDetections[duplicateIndex] = detection;
        }
    }
    return filteredDetections;
}

function calculateIoU(detection1, detection2) {
    const xOverlap = Math.max(0, Math.min(detection1.x2, detection2.x2) - Math.max(detection1.x1, detection2.x1));
    const yOverlap = Math.max(0, Math.min(detection1.y2, detection2.y2) - Math.max(detection1.y1, detection2.y1));
    const overlapArea = xOverlap * yOverlap;
    const area1 = (detection1.x2 - detection1.x1) * (detection1.y2 - detection1.y1);
    const area2 = (detection2.x2 - detection2.x1) * (detection2.y2 - detection2.y1);
    const unionArea = area1 + area2 - overlapArea;
    return overlapArea / unionArea;
}

const filteredResults = removeDuplicates(results, iouThreshold);

// Display results
for (const { x1, x2, y1, y2, score, keypoints } of filteredResults) {
    console.log(`Found person at [${x1}, ${y1}, ${x2}, ${y2}] with score ${score.toFixed(3)}`);
    for (let i = 0; i < keypoints.length; i += 3) {
        const label = model.config.id2label[Math.floor(i / 3)];
        const [x, y, point_score] = keypoints.slice(i, i + 3);
        if (point_score < pointThreshold) continue;
        console.log(` - ${label}: (${x.toFixed(2)}, ${y.toFixed(2)}) with score ${point_score.toFixed(3)}`);
    }
}
```
<details>
<summary>See example output</summary>
```
Found person at [533.1403350830078, 39.96531672477722, 645.8853149414062, 296.1657429695129] with score 0.739
- nose: (443.99, 91.98) with score 0.970
- left_eye: (449.84, 85.01) with score 0.968
- right_eye: (436.28, 86.54) with score 0.839
- left_ear: (458.69, 87.08) with score 0.822
- right_ear: (427.88, 89.20) with score 0.317
- left_shoulder: (471.29, 128.05) with score 0.991
- right_shoulder: (421.84, 127.22) with score 0.788
- left_elbow: (494.03, 174.09) with score 0.976
- right_elbow: (405.83, 162.81) with score 0.367
- left_wrist: (505.29, 232.06) with score 0.955
- right_wrist: (411.89, 213.05) with score 0.470
- left_hip: (469.48, 217.49) with score 0.978
- right_hip: (438.79, 216.48) with score 0.901
- left_knee: (474.03, 283.00) with score 0.957
- right_knee: (448.00, 287.90) with score 0.808
- left_ankle: (472.06, 339.67) with score 0.815
- right_ankle: (447.15, 340.44) with score 0.576
Found person at [0.03232002258300781, 57.89646775722503, 156.35095596313477, 370.9132190942764] with score 0.908
- nose: (60.48, 105.82) with score 0.975
- left_eye: (64.86, 100.59) with score 0.952
- right_eye: (55.12, 100.60) with score 0.855
- left_ear: (73.04, 101.96) with score 0.820
- right_ear: (51.07, 103.28) with score 0.482
- left_shoulder: (85.74, 137.77) with score 0.996
- right_shoulder: (42.04, 137.63) with score 0.988
- left_elbow: (101.10, 190.45) with score 0.988
- right_elbow: (25.75, 186.44) with score 0.937
- left_wrist: (115.93, 250.05) with score 0.975
- right_wrist: (7.39, 233.44) with score 0.918
- left_hip: (80.15, 242.20) with score 0.999
- right_hip: (52.69, 239.82) with score 0.999
- left_knee: (93.29, 326.00) with score 0.999
- right_knee: (57.42, 329.04) with score 0.998
- left_ankle: (100.24, 413.83) with score 0.992
- right_ankle: (50.47, 417.93) with score 0.988
Found person at [106.16920471191406, 8.419264698028565, 515.0135803222656, 530.6886708259583] with score 0.819
- nose: (134.03, 111.15) with score 0.921
- left_eye: (137.51, 100.95) with score 0.824
- right_eye: (131.82, 97.53) with score 0.489
- left_ear: (147.19, 92.96) with score 0.792
- left_shoulder: (188.28, 127.51) with score 0.993
- right_shoulder: (181.81, 149.32) with score 0.995
- left_elbow: (258.49, 199.10) with score 0.984
- right_elbow: (181.43, 251.27) with score 0.988
- left_wrist: (311.74, 257.93) with score 0.979
- right_wrist: (129.68, 284.38) with score 0.984
- left_hip: (267.43, 299.85) with score 1.000
- right_hip: (277.05, 307.50) with score 1.000
- left_knee: (232.15, 427.54) with score 0.999
- right_knee: (278.99, 453.09) with score 0.999
- left_ankle: (352.68, 457.89) with score 0.990
- right_ankle: (362.15, 554.69) with score 0.993
Found person at [425.3855133056641, 73.76281919479369, 640.6651306152344, 502.32841634750366] with score 0.876
- nose: (416.15, 149.68) with score 0.996
- left_eye: (430.34, 139.56) with score 0.984
- right_eye: (412.88, 142.56) with score 0.976
- left_ear: (446.59, 142.21) with score 0.843
- right_ear: (398.82, 144.52) with score 0.740
- left_shoulder: (436.54, 197.92) with score 0.999
- right_shoulder: (362.94, 210.20) with score 0.996
- left_elbow: (460.06, 293.80) with score 0.992
- right_elbow: (352.33, 262.09) with score 0.966
- left_wrist: (491.33, 364.20) with score 0.986
- right_wrist: (402.62, 272.23) with score 0.956
- left_hip: (429.79, 354.94) with score 0.999
- right_hip: (383.27, 372.77) with score 0.999
- left_knee: (461.07, 437.73) with score 0.998
- right_knee: (410.89, 522.05) with score 0.995
- left_ankle: (460.74, 552.53) with score 0.966
- right_ankle: (429.00, 560.54) with score 0.940
```
</details> |
RichardErkhov/bigscience_-_bigscience-small-testing-8bits | RichardErkhov | 2024-04-26T22:18:16Z | 76 | 0 | transformers | [
"transformers",
"safetensors",
"bloom",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-04-26T22:18:02Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
bigscience-small-testing - bnb 8bits
- Model creator: https://huggingface.co/bigscience/
- Original model: https://huggingface.co/bigscience/bigscience-small-testing/
Original model description:
---
language:
- eng
tags:
- integration
pipeline_tag: text-generation
---
# BigScience - testing model
This model aims to test the conversion between Megatron-LM and transformers. It is a small `GPT-2`-like model that has been used to debug the script. Use it only for integration tests.
|
ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_2 | ShenaoZhang | 2024-04-26T22:11:20Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_1",
"base_model:finetune:ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_1",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-26T21:39:34Z | ---
license: mit
base_model: ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_1
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
datasets:
- updated
- original
model-index:
- name: 0.001_4iters_bs256_nodpo_only4w_userresponse_iter_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_4iters_bs256_nodpo_only4w_userresponse_iter_2
This model is a fine-tuned version of [ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_1](https://huggingface.co/ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_1) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
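For reference, here is a hedged sketch of how the values above map onto `transformers.TrainingArguments`; the actual training script is not published, so treat this as an illustration rather than the authors' configuration. With 8 devices, a per-device batch size of 8, and 4 gradient-accumulation steps, the effective train batch size is 8 × 8 × 4 = 256, matching `total_train_batch_size`.
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="0.001_4iters_bs256_nodpo_only4w_userresponse_iter_2",
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    adam_beta1=0.9,     # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,  # epsilon=1e-08
)
```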
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
|
BohanJiang/my_awesome_model | BohanJiang | 2024-04-26T22:08:34Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"albert",
"text-classification",
"generated_from_trainer",
"base_model:albert/albert-base-v2",
"base_model:finetune:albert/albert-base-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-04-26T20:19:30Z | ---
license: apache-2.0
base_model: albert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2276
- Accuracy: 0.9424
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2073 | 1.0 | 1563 | 0.1896 | 0.9298 |
| 0.1448 | 2.0 | 3126 | 0.2276 | 0.9424 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
saraataryy/xlm-roberta-base-finetuned-panx-ar | saraataryy | 2024-04-26T22:07:47Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:tner/xlm-roberta-base-panx-dataset-ar",
"base_model:finetune:tner/xlm-roberta-base-panx-dataset-ar",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-04-26T22:04:49Z | ---
base_model: tner/xlm-roberta-base-panx-dataset-ar
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-ar
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-ar
This model is a fine-tuned version of [tner/xlm-roberta-base-panx-dataset-ar](https://huggingface.co/tner/xlm-roberta-base-panx-dataset-ar) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1977
- F1: 0.8803
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2179 | 1.0 | 188 | 0.1977 | 0.8803 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
richiebailey/wiz_lm_2 | richiebailey | 2024-04-26T22:01:07Z | 0 | 0 | null | [
"safetensors",
"arxiv:2304.12244",
"arxiv:2306.08568",
"arxiv:2308.09583",
"license:apache-2.0",
"region:us"
] | null | 2024-04-26T21:59:57Z | ---
license: apache-2.0
---
<p style="font-size:20px;" align="center">
🏠 <a href="https://wizardlm.github.io/WizardLM2" target="_blank">WizardLM-2 Release Blog</a> </p>
<p align="center">
🤗 <a href="https://huggingface.co/collections/microsoft/wizardlm-2-661d403f71e6c8257dbd598a" target="_blank">HF Repo</a> • 🐱 <a href="https://github.com/victorsungo/WizardLM/tree/main/WizardLM-2" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
</p>
## See [here](https://huggingface.co/lucyknada/microsoft_WizardLM-2-7B) for the WizardLM-2-7B re-upload.
## News 🔥🔥🔥 [2024/04/15]
We introduce and open-source WizardLM-2, our next-generation state-of-the-art large language models,
which have improved performance on complex chat, multilingual, reasoning, and agent tasks.
The new family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.
- WizardLM-2 8x22B is our most advanced model; it demonstrates highly competitive performance compared to leading proprietary models
and consistently outperforms all existing state-of-the-art open-source models.
- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice among models of its size.
- WizardLM-2 7B is the fastest and achieves performance comparable to existing open-source leading models that are 10x larger.
For more details on WizardLM-2, please read our [release blog post](https://web.archive.org/web/20240415221214/https://wizardlm.github.io/WizardLM2/) and the upcoming paper.
## Model Details
* **Model name**: WizardLM-2 8x22B
* **Developed by**: WizardLM@Microsoft AI
* **Model type**: Mixture of Experts (MoE)
* **Base model**: [mistral-community/Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1)
* **Parameters**: 141B
* **Language(s)**: Multilingual
* **Blog**: [Introducing WizardLM-2](https://web.archive.org/web/20240415221214/https://wizardlm.github.io/WizardLM2/)
* **Repository**: [https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM)
* **Paper**: WizardLM-2 (Upcoming)
* **License**: Apache 2.0
## Model Capacities
**MT-Bench**
We also adopt the automatic MT-Bench evaluation framework based on GPT-4, proposed by lmsys, to assess the performance of models.
WizardLM-2 8x22B demonstrates highly competitive performance even compared to the most advanced proprietary models.
Meanwhile, WizardLM-2 7B and WizardLM-2 70B are the top-performing models among the other leading baselines at the 7B to 70B model scales.
<p align="center" width="100%">
<a ><img src="https://web.archive.org/web/20240415175608im_/https://wizardlm.github.io/WizardLM2/static/images/mtbench.png" alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
**Human Preferences Evaluation**
We carefully collected a complex and challenging set of real-world instructions, covering the main categories of human requirements, such as writing, coding, math, reasoning, agent tasks, and multilingual use.
We report the win:loss rate without ties:
- WizardLM-2 8x22B falls just slightly behind GPT-4-1106-preview, and is significantly stronger than Command R Plus and GPT4-0314.
- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.
- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.
<p align="center" width="100%">
<a ><img src="https://web.archive.org/web/20240415163303im_/https://wizardlm.github.io/WizardLM2/static/images/winall.png" alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Method Overview
We built a **fully AI powered synthetic training system** to train WizardLM-2 models, please refer to our [blog](https://web.archive.org/web/20240415221214/https://wizardlm.github.io/WizardLM2/) for more details of this system.
<p align="center" width="100%">
<a ><img src="https://web.archive.org/web/20240415163303im_/https://wizardlm.github.io/WizardLM2/static/images/exp_1.png" alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Usage
❗<b>Note for model system prompts usage:</b>
<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as follows:
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful,
detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>
USER: Who are you? ASSISTANT: I am WizardLM.</s>......
```
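As an illustration, here is a minimal sketch (not the official demo script) of building this Vicuna-style multi-turn prompt in Python; the helper name is an assumption, and the system text is taken verbatim from the template above:
```python
# Hypothetical helper for assembling the Vicuna-style prompt WizardLM-2 expects.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(turns):
    """turns: list of (user_message, assistant_reply or None) pairs;
    a None reply leaves the prompt open for the model to complete."""
    prompt = SYSTEM
    for user_msg, assistant_msg in turns:
        prompt += f" USER: {user_msg} ASSISTANT:"
        if assistant_msg is not None:
            prompt += f" {assistant_msg}</s>"
    return prompt

# The model continues generation after the trailing "ASSISTANT:".
print(build_prompt([("Hi", "Hello."), ("Who are you?", None)]))
```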
<b>Inference WizardLM-2 Demo Script</b>
We provide a WizardLM-2 inference demo [code](https://github.com/nlpxucan/WizardLM/tree/main/demo) on our github.
|
kchopra04/llama3-inst-finetune-saxs-gguf | kchopra04 | 2024-04-26T22:00:39Z | 3 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-26T22:00:28Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** kchopra04
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_2 | ShenaoZhang | 2024-04-26T21:56:58Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_1",
"base_model:finetune:ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_1",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-26T21:25:43Z | ---
license: mit
base_model: ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_1
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
datasets:
- updated
- original
model-index:
- name: 0.001_4iters_bs128_nodpo_only4w_userresponse_iter_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_4iters_bs128_nodpo_only4w_userresponse_iter_2
This model is a fine-tuned version of [ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_1](https://huggingface.co/ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_1) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
|
jspr/smut_llama_8b_instruct_peft | jspr | 2024-04-26T21:55:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T21:55:09Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** jspr
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
yeilho/llama-3-8b-Instruct-bnb-4bit-medical | yeilho | 2024-04-26T21:54:58Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-26T21:41:44Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** yeilho
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Josephgflowers/Phi-3-mini-4k-instruct-Cinder-with-16bit-GGUF | Josephgflowers | 2024-04-26T21:53:51Z | 33 | 1 | transformers | [
"transformers",
"safetensors",
"gguf",
"phi3",
"text-generation",
"nlp",
"code",
"conversational",
"custom_code",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-25T02:56:17Z | ---
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
widget:
- text: '<|system|>
You are a helpful assistant.<|end|>
<|user|>
'
---
I am really enjoying this version of Cinder. More information is coming. The training mix includes Cinder character-specific data, RAG-generated Q&A on world knowledge and STEM topics, and additional Cinder character data. I supplemented the Cinder character with an abbreviated Samantha dataset, edited for Cinder, with many of the negative responses removed.
## Model Overview
Cinder is an AI chatbot tailored for engaging users in scientific and educational conversations, offering companionship, and sparking imaginative exploration.

## Model Summary
The Phi-3-Mini-4K-Instruct is a 3.8B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties.
The model belongs to the Phi-3 family with the Mini version in two variants [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) which is the context length (in tokens) that it can support.
The model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization for instruction following and safety measures.
When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased a robust and state-of-the-art performance among models with less than 13 billion parameters.
Resources and Technical Documentation:
+ [Phi-3 Microsoft Blog](https://aka.ms/phi3blog-april)
+ [Phi-3 Technical Report](https://aka.ms/phi3-tech-report)
+ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai)
+ Phi-3 GGUF: [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf)
+ Phi-3 ONNX: [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx)
## Intended Uses
**Primary use cases**
The model is intended for commercial and research use in English. The model provides uses for applications which require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
**Use case considerations**
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
## How to Use
Phi-3 Mini-4K-Instruct has been integrated in the development version (4.40.0) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source.
The current `transformers` version can be verified with: `pip list | grep transformers`.
Phi-3 Mini-4K-Instruct is also available in [HuggingChat](https://aka.ms/try-phi3-hf-chat).
### Chat Format
Given the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question with a generic template as follows:
```markdown
<|user|>\nQuestion <|end|>\n<|assistant|>
```
For example:
```markdown
<|system|>
You are a helpful AI assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
where the model generates the text after `<|assistant|>`. For a few-shot prompt, it can be formatted as follows:
```markdown
<|system|>
You are a helpful AI assistant.<|end|>
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```
### Sample inference code
This code snippet shows how to quickly get started with running the model on a GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

torch.random.manual_seed(0)

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")

messages = [
    {"role": "system", "content": "You are a helpful digital assistant. Please provide safe, ethical and accurate information to the user."},
    {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
    {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
    {"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

generation_args = {
    "max_new_tokens": 500,
    "return_full_text": False,
    "temperature": 0.0,
    "do_sample": False,
}

output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
## Responsible AI Considerations
Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
* Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 4K tokens
* GPUs: 512 H100-80G
* Training time: 7 days
* Training data: 3.3T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between February and April 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.
### Datasets
Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of
1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) Newly created synthetic, "textbook-like" data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
### Fine-tuning
A basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/sample_finetune.py).
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager" (see the sketch after this list)
* CPU: use the **GGUF** quantized models [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf)
+ Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx)
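As a concrete example of the V100/eager-attention path, here is a minimal loading sketch; `attn_implementation="eager"` is the standard `transformers` argument mentioned above, and the rest mirrors the sample inference code:
```python
from transformers import AutoModelForCausalLM

# Fall back to the eager attention implementation on GPUs
# without flash-attention support (e.g. V100 or earlier).
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
    attn_implementation="eager",
)
```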
## Cross Platform Support
ONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model [here](https://aka.ms/phi3-mini-4k-instruct-onnx).
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs.
Along with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile.
Here are some of the optimized configurations we have added:
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
## License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-4k/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.
|
HenryCai1129/adapter-llama-adapterhappy2sad-1k-50-0.009 | HenryCai1129 | 2024-04-26T21:53:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T21:53:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
HFDON/bert-finetuned-ner | HFDON | 2024-04-26T21:48:28Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-04-26T20:15:58Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0623
- Precision: 0.9346
- Recall: 0.9505
- F1: 0.9425
- Accuracy: 0.9864
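A minimal usage sketch (assumed, since the card does not include one) with the `transformers` pipeline API:
```python
from transformers import pipeline

# Load the fine-tuned model; aggregation_strategy="simple" merges
# word-piece tokens back into whole entity spans.
ner = pipeline(
    "token-classification",
    model="HFDON/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("My name is Wolfgang and I live in Berlin."))
```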
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0777 | 1.0 | 1756 | 0.0721 | 0.9129 | 0.9325 | 0.9226 | 0.9814 |
| 0.036 | 2.0 | 3512 | 0.0604 | 0.9309 | 0.9477 | 0.9392 | 0.9859 |
| 0.0186 | 3.0 | 5268 | 0.0623 | 0.9346 | 0.9505 | 0.9425 | 0.9864 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.1+cpu
- Datasets 2.19.0
- Tokenizers 0.15.2
|
Laz4rz/hf-huggy-1-bonus | Laz4rz | 2024-04-26T21:48:25Z | 8 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2024-04-26T21:48:17Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Laz4rz/hf-huggy-1-bonus
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
MoMonir/Sehty360-llama-3-8b-arabic-health-instruct-GGUF | MoMonir | 2024-04-26T21:37:48Z | 8 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"ar",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T17:16:33Z | ---
language:
- ar
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
---
# MoMonir/Sehty360-llama-3-8b-arabic-health-instruct-GGUF
This model was converted to GGUF format from [`health360/Sehty360-llama-3-8b-arabic-health-instruct`](https://huggingface.co/health360/Sehty360-llama-3-8b-arabic-health-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/health360/Sehty360-llama-3-8b-arabic-health-instruct) for more details on the model.
<!-- README_GGUF.md-about-gguf start -->
### About GGUF ([TheBloke](https://huggingface.co/TheBloke) Description)
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo MoMonir/Sehty360-llama-3-8b-arabic-health-instruct-GGUF --model sehty360-llama-3-8b-arabic-health-instruct.Q5_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo MoMonir/Sehty360-llama-3-8b-arabic-health-instruct-GGUF --model sehty360-llama-3-8b-arabic-health-instruct.Q5_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m sehty360-llama-3-8b-arabic-health-instruct.Q5_K_M.gguf -n 128
```
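The GGUF file can also be loaded from Python via [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) (listed above); a minimal sketch, assuming the quantized file has been downloaded locally:
```python
from llama_cpp import Llama

# The path is a placeholder: point it at the downloaded GGUF file.
llm = Llama(
    model_path="./sehty360-llama-3-8b-arabic-health-instruct.Q5_K_M.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=128)
print(out["choices"][0]["text"])
```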
|
Raihan004/Action_Classification | Raihan004 | 2024-04-26T21:35:51Z | 216 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-04-26T18:56:34Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: Action_Classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: agent_action_class
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7628571428571429
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Action_Classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the agent_action_class dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8482
- Accuracy: 0.7629
- Confusion Matrix: [[45, 5, 20, 4, 2, 6, 4, 8, 3, 3], [5, 154, 4, 2, 1, 2, 6, 1, 17, 1], [0, 0, 51, 1, 2, 8, 1, 0, 0, 2], [1, 0, 8, 26, 8, 5, 0, 0, 1, 3], [0, 1, 0, 0, 89, 3, 0, 0, 0, 0], [0, 1, 11, 3, 1, 55, 0, 1, 0, 0], [0, 1, 1, 0, 3, 3, 51, 0, 0, 0], [0, 0, 10, 1, 0, 4, 0, 68, 0, 0], [0, 26, 5, 0, 1, 3, 16, 1, 127, 1], [3, 0, 2, 9, 2, 1, 0, 1, 0, 135]]
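A minimal inference sketch (assumed, since the card does not include one) with the `transformers` pipeline API; the image path is a placeholder:
```python
from transformers import pipeline

# Classify the agent action depicted in a local image.
classifier = pipeline(
    "image-classification",
    model="Raihan004/Action_Classification",
)
print(classifier("example.jpg"))  # placeholder path
```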
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Confusion Matrix |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 0.3922 | 0.32 | 100 | 1.0781 | 0.6933 | [[66, 1, 9, 6, 1, 5, 1, 3, 7, 1], [41, 96, 0, 0, 8, 0, 2, 1, 45, 0], [2, 0, 46, 1, 1, 7, 4, 0, 1, 3], [9, 1, 4, 19, 5, 3, 2, 1, 4, 4], [0, 2, 0, 3, 84, 2, 1, 0, 0, 1], [4, 1, 3, 2, 0, 55, 3, 1, 3, 0], [0, 0, 1, 0, 0, 1, 54, 0, 3, 0], [5, 1, 4, 1, 0, 1, 0, 70, 1, 0], [5, 12, 0, 1, 1, 0, 14, 0, 147, 0], [9, 0, 1, 38, 3, 1, 4, 4, 2, 91]] |
| 0.439 | 0.64 | 200 | 0.8592 | 0.7562 | [[73, 3, 6, 4, 0, 3, 2, 3, 3, 3], [30, 121, 1, 0, 1, 0, 8, 0, 32, 0], [1, 0, 47, 1, 1, 9, 1, 0, 1, 4], [7, 0, 5, 28, 5, 1, 0, 1, 2, 3], [0, 2, 0, 1, 88, 0, 1, 0, 0, 1], [4, 1, 5, 3, 2, 51, 0, 1, 2, 3], [0, 1, 1, 0, 0, 0, 56, 0, 1, 0], [4, 2, 1, 0, 0, 0, 1, 74, 1, 0], [4, 28, 0, 1, 0, 0, 19, 2, 125, 1], [3, 0, 1, 15, 1, 0, 1, 1, 0, 131]] |
| 0.4664 | 0.96 | 300 | 0.8482 | 0.7629 | [[45, 5, 20, 4, 2, 6, 4, 8, 3, 3], [5, 154, 4, 2, 1, 2, 6, 1, 17, 1], [0, 0, 51, 1, 2, 8, 1, 0, 0, 2], [1, 0, 8, 26, 8, 5, 0, 0, 1, 3], [0, 1, 0, 0, 89, 3, 0, 0, 0, 0], [0, 1, 11, 3, 1, 55, 0, 1, 0, 0], [0, 1, 1, 0, 3, 3, 51, 0, 0, 0], [0, 0, 10, 1, 0, 4, 0, 68, 0, 0], [0, 26, 5, 0, 1, 3, 16, 1, 127, 1], [3, 0, 2, 9, 2, 1, 0, 1, 0, 135]] |
| 0.2929 | 1.27 | 400 | 1.1281 | 0.6790 | [[65, 3, 9, 7, 1, 1, 2, 2, 10, 0], [38, 113, 1, 0, 1, 0, 5, 0, 35, 0], [3, 0, 54, 4, 1, 1, 2, 0, 0, 0], [8, 2, 5, 31, 5, 0, 0, 0, 1, 0], [0, 2, 6, 3, 80, 0, 1, 1, 0, 0], [6, 2, 16, 8, 1, 34, 1, 1, 3, 0], [1, 2, 1, 0, 0, 0, 55, 0, 0, 0], [6, 2, 6, 2, 0, 0, 0, 66, 1, 0], [3, 24, 2, 2, 0, 0, 14, 0, 135, 0], [9, 2, 4, 56, 1, 0, 1, 0, 0, 80]] |
| 0.4188 | 1.59 | 500 | 1.1851 | 0.6657 | [[61, 2, 11, 6, 3, 5, 2, 5, 4, 1], [53, 85, 5, 0, 8, 5, 3, 6, 28, 0], [0, 0, 51, 2, 2, 5, 1, 3, 0, 1], [2, 1, 4, 34, 8, 0, 0, 2, 1, 0], [0, 1, 0, 1, 89, 0, 1, 0, 1, 0], [1, 0, 7, 4, 5, 48, 1, 5, 1, 0], [0, 1, 1, 0, 3, 0, 54, 0, 0, 0], [5, 1, 1, 1, 0, 3, 0, 72, 0, 0], [11, 18, 0, 0, 2, 2, 17, 8, 122, 0], [1, 1, 2, 42, 8, 1, 10, 4, 1, 83]] |
| 0.3668 | 1.91 | 600 | 0.8554 | 0.7467 | [[53, 11, 11, 5, 0, 3, 1, 4, 10, 2], [3, 145, 5, 0, 1, 1, 4, 5, 29, 0], [0, 0, 53, 1, 1, 5, 2, 1, 1, 1], [4, 0, 9, 29, 5, 2, 0, 0, 1, 2], [0, 1, 4, 3, 84, 0, 0, 0, 1, 0], [2, 2, 12, 3, 1, 45, 0, 3, 3, 1], [0, 1, 2, 1, 1, 0, 52, 0, 2, 0], [1, 2, 5, 1, 0, 1, 0, 73, 0, 0], [4, 29, 2, 0, 0, 0, 7, 3, 135, 0], [1, 0, 11, 19, 1, 5, 0, 1, 0, 115]] |
| 0.342 | 2.23 | 700 | 1.0291 | 0.7048 | [[58, 5, 4, 4, 1, 8, 1, 9, 7, 3], [36, 111, 0, 2, 1, 4, 1, 4, 34, 0], [3, 2, 45, 5, 1, 6, 1, 1, 0, 1], [6, 0, 5, 35, 2, 0, 0, 0, 1, 3], [1, 1, 2, 6, 77, 3, 1, 0, 2, 0], [4, 0, 10, 8, 1, 39, 0, 7, 2, 1], [1, 2, 1, 0, 2, 0, 50, 0, 3, 0], [1, 0, 4, 1, 0, 0, 0, 77, 0, 0], [4, 29, 0, 0, 0, 0, 5, 2, 140, 0], [5, 0, 5, 27, 0, 0, 1, 7, 0, 108]] |
| 0.2984 | 2.55 | 800 | 1.2207 | 0.6962 | [[55, 3, 11, 2, 0, 1, 2, 11, 10, 5], [44, 71, 1, 0, 1, 0, 4, 5, 66, 1], [0, 0, 49, 3, 2, 3, 3, 2, 1, 2], [4, 0, 5, 26, 7, 0, 2, 0, 1, 7], [0, 1, 0, 0, 86, 0, 3, 0, 3, 0], [5, 2, 12, 4, 2, 39, 1, 3, 1, 3], [0, 0, 1, 0, 1, 0, 57, 0, 0, 0], [0, 0, 5, 0, 0, 0, 0, 78, 0, 0], [5, 15, 3, 0, 1, 0, 9, 4, 143, 0], [0, 0, 1, 17, 1, 0, 3, 3, 1, 127]] |
| 0.3542 | 2.87 | 900 | 1.1835 | 0.6657 | [[66, 2, 6, 8, 0, 4, 4, 4, 5, 1], [38, 78, 2, 0, 3, 1, 23, 4, 43, 1], [2, 0, 50, 7, 0, 5, 1, 0, 0, 0], [2, 0, 2, 45, 3, 0, 0, 0, 0, 0], [0, 1, 3, 6, 76, 3, 4, 0, 0, 0], [3, 1, 10, 8, 1, 47, 0, 0, 1, 1], [0, 0, 2, 0, 1, 0, 56, 0, 0, 0], [2, 0, 11, 5, 0, 2, 4, 59, 0, 0], [4, 24, 2, 1, 1, 1, 23, 0, 124, 0], [5, 0, 1, 42, 3, 0, 3, 1, 0, 98]] |
| 0.2749 | 3.18 | 1000 | 0.9242 | 0.7286 | [[54, 12, 5, 2, 3, 1, 7, 1, 12, 3], [13, 155, 0, 0, 3, 1, 2, 1, 18, 0], [2, 0, 53, 1, 4, 1, 3, 0, 0, 1], [5, 1, 7, 21, 8, 0, 0, 0, 1, 9], [0, 2, 0, 1, 89, 0, 0, 1, 0, 0], [2, 4, 16, 1, 6, 34, 3, 1, 4, 1], [0, 2, 1, 0, 2, 0, 54, 0, 0, 0], [1, 3, 6, 1, 0, 0, 0, 70, 2, 0], [4, 45, 0, 1, 2, 0, 13, 0, 115, 0], [2, 1, 6, 19, 4, 0, 0, 1, 0, 120]] |
| 0.2695 | 3.5 | 1100 | 0.9828 | 0.7314 | [[58, 8, 9, 3, 0, 3, 2, 5, 10, 2], [29, 130, 2, 0, 0, 3, 1, 4, 24, 0], [1, 0, 49, 3, 1, 6, 2, 0, 0, 3], [6, 1, 5, 26, 6, 1, 0, 1, 1, 5], [0, 1, 2, 4, 79, 1, 1, 1, 4, 0], [2, 2, 12, 4, 1, 48, 1, 0, 1, 1], [0, 0, 1, 0, 1, 0, 57, 0, 0, 0], [2, 2, 4, 1, 0, 2, 2, 67, 3, 0], [1, 32, 0, 1, 0, 4, 12, 2, 125, 3], [3, 1, 2, 10, 1, 5, 2, 0, 0, 129]] |
| 0.2343 | 3.82 | 1200 | 1.0871 | 0.7295 | [[58, 2, 10, 5, 1, 1, 7, 4, 10, 2], [21, 115, 0, 0, 4, 0, 12, 3, 38, 0], [1, 0, 53, 2, 2, 3, 0, 1, 1, 2], [2, 2, 9, 22, 9, 0, 1, 0, 1, 6], [0, 2, 1, 1, 88, 0, 0, 0, 1, 0], [2, 1, 10, 4, 2, 48, 1, 2, 2, 0], [0, 0, 1, 0, 2, 0, 56, 0, 0, 0], [2, 1, 7, 0, 0, 0, 0, 72, 1, 0], [4, 22, 0, 1, 0, 0, 14, 0, 139, 0], [7, 0, 4, 20, 1, 3, 1, 2, 0, 115]] |
| 0.2714 | 4.14 | 1300 | 1.0720 | 0.7314 | [[59, 6, 8, 8, 1, 1, 3, 4, 7, 3], [23, 114, 2, 1, 1, 0, 5, 3, 42, 2], [1, 1, 54, 2, 1, 2, 0, 1, 0, 3], [3, 1, 3, 32, 4, 0, 0, 0, 1, 8], [0, 1, 3, 5, 80, 1, 1, 0, 0, 2], [3, 1, 11, 7, 2, 43, 1, 2, 1, 1], [0, 0, 1, 1, 0, 0, 56, 0, 0, 1], [1, 0, 4, 0, 0, 0, 0, 77, 1, 0], [6, 31, 2, 1, 0, 0, 10, 0, 130, 0], [5, 0, 1, 22, 0, 1, 0, 1, 0, 123]] |
| 0.2287 | 4.46 | 1400 | 1.1125 | 0.7057 | [[52, 5, 15, 8, 1, 8, 0, 3, 6, 2], [27, 109, 1, 0, 1, 6, 2, 3, 43, 1], [1, 0, 55, 3, 0, 3, 1, 0, 0, 2], [2, 1, 4, 34, 4, 2, 0, 0, 0, 5], [0, 1, 2, 4, 81, 2, 1, 0, 2, 0], [2, 2, 7, 3, 1, 54, 0, 0, 3, 0], [0, 0, 1, 0, 1, 0, 56, 0, 1, 0], [1, 1, 12, 1, 0, 3, 2, 62, 1, 0], [5, 30, 1, 0, 0, 3, 9, 0, 131, 1], [4, 6, 4, 28, 0, 4, 0, 0, 0, 107]] |
| 0.2814 | 4.78 | 1500 | 1.1163 | 0.72 | [[71, 3, 7, 5, 2, 1, 1, 6, 3, 1], [53, 111, 1, 0, 1, 0, 5, 5, 17, 0], [2, 0, 48, 4, 1, 4, 0, 4, 0, 2], [6, 0, 4, 31, 6, 0, 0, 0, 0, 5], [0, 3, 1, 4, 82, 0, 1, 1, 1, 0], [8, 0, 4, 4, 1, 49, 1, 4, 1, 0], [2, 0, 1, 1, 1, 0, 52, 2, 0, 0], [2, 0, 1, 0, 0, 0, 0, 80, 0, 0], [11, 35, 2, 0, 0, 1, 9, 5, 117, 0], [10, 1, 1, 21, 1, 0, 2, 2, 0, 115]] |
| 0.2648 | 5.1 | 1600 | 1.1721 | 0.7057 | [[61, 6, 3, 3, 0, 3, 2, 12, 8, 2], [27, 131, 0, 0, 4, 0, 1, 9, 21, 0], [2, 2, 51, 2, 2, 4, 0, 1, 0, 1], [3, 1, 4, 28, 6, 3, 0, 3, 1, 3], [1, 3, 0, 4, 82, 1, 0, 1, 1, 0], [1, 0, 8, 4, 3, 51, 0, 3, 1, 1], [3, 1, 1, 1, 1, 0, 51, 1, 0, 0], [1, 0, 2, 1, 0, 0, 0, 79, 0, 0], [9, 31, 1, 0, 0, 1, 11, 10, 117, 0], [17, 5, 4, 28, 2, 2, 1, 4, 0, 90]] |
| 0.1857 | 5.41 | 1700 | 1.0404 | 0.7514 | [[57, 9, 5, 2, 1, 0, 4, 7, 11, 4], [22, 131, 0, 0, 0, 0, 5, 4, 30, 1], [1, 0, 56, 1, 1, 2, 1, 0, 0, 3], [3, 1, 2, 28, 8, 1, 1, 1, 1, 6], [1, 1, 0, 3, 85, 0, 0, 0, 3, 0], [6, 2, 11, 4, 3, 36, 2, 5, 1, 2], [0, 0, 1, 0, 0, 0, 58, 0, 0, 0], [1, 0, 2, 0, 0, 0, 0, 80, 0, 0], [7, 32, 1, 0, 0, 0, 19, 3, 117, 1], [6, 0, 1, 3, 0, 0, 1, 1, 0, 141]] |
| 0.1958 | 5.73 | 1800 | 1.1392 | 0.7238 | [[53, 7, 4, 3, 2, 1, 6, 9, 13, 2], [16, 134, 0, 0, 0, 1, 10, 7, 25, 0], [2, 1, 54, 2, 1, 1, 1, 0, 0, 3], [6, 0, 3, 29, 8, 0, 1, 1, 1, 3], [0, 2, 0, 2, 85, 0, 2, 0, 2, 0], [7, 1, 9, 2, 4, 43, 2, 2, 1, 1], [0, 0, 1, 0, 1, 0, 57, 0, 0, 0], [0, 1, 4, 0, 0, 0, 1, 77, 0, 0], [4, 28, 2, 0, 0, 0, 19, 4, 123, 0], [7, 0, 2, 25, 1, 0, 11, 2, 0, 105]] |
| 0.1475 | 6.05 | 1900 | 1.1926 | 0.7238 | [[72, 6, 4, 4, 0, 1, 2, 0, 8, 3], [52, 97, 0, 0, 1, 0, 1, 0, 41, 1], [3, 1, 52, 3, 1, 1, 1, 0, 0, 3], [6, 1, 3, 32, 2, 1, 0, 1, 1, 5], [0, 4, 1, 3, 79, 1, 0, 0, 3, 2], [3, 2, 12, 6, 0, 43, 0, 1, 2, 3], [3, 0, 1, 0, 1, 0, 52, 0, 1, 1], [8, 0, 7, 2, 0, 0, 0, 66, 0, 0], [13, 26, 1, 1, 0, 1, 8, 1, 129, 0], [6, 0, 0, 7, 0, 0, 0, 1, 1, 138]] |
| 0.1443 | 6.37 | 2000 | 1.2271 | 0.7152 | [[64, 3, 18, 4, 1, 3, 1, 3, 2, 1], [26, 112, 1, 0, 2, 0, 5, 6, 41, 0], [4, 0, 54, 2, 1, 1, 0, 0, 0, 3], [7, 1, 3, 34, 3, 1, 0, 1, 1, 1], [0, 3, 0, 3, 82, 0, 2, 0, 3, 0], [5, 2, 11, 5, 1, 44, 1, 1, 1, 1], [0, 0, 1, 0, 1, 0, 57, 0, 0, 0], [1, 0, 7, 0, 0, 4, 0, 71, 0, 0], [5, 23, 5, 2, 0, 0, 12, 5, 128, 0], [6, 3, 1, 36, 0, 0, 0, 2, 0, 105]] |
| 0.1453 | 6.69 | 2100 | 1.0546 | 0.7390 | [[71, 4, 11, 3, 0, 4, 1, 3, 2, 1], [26, 127, 3, 0, 0, 2, 4, 4, 27, 0], [1, 0, 53, 2, 2, 4, 1, 0, 0, 2], [5, 2, 5, 27, 6, 2, 0, 1, 1, 3], [1, 1, 0, 1, 87, 1, 1, 0, 1, 0], [2, 1, 6, 1, 2, 58, 0, 1, 1, 0], [2, 3, 1, 0, 2, 0, 50, 0, 0, 1], [4, 0, 7, 0, 0, 4, 0, 68, 0, 0], [4, 34, 4, 0, 1, 3, 14, 1, 119, 0], [9, 1, 2, 18, 2, 1, 0, 3, 1, 116]] |
| 0.2319 | 7.01 | 2200 | 1.0890 | 0.7371 | [[60, 4, 9, 7, 1, 4, 2, 2, 10, 1], [18, 127, 1, 0, 2, 0, 9, 2, 34, 0], [3, 0, 53, 3, 1, 2, 0, 1, 0, 2], [4, 2, 2, 36, 6, 0, 0, 0, 1, 1], [0, 4, 0, 3, 83, 0, 1, 1, 1, 0], [2, 2, 9, 6, 1, 49, 1, 0, 2, 0], [0, 0, 1, 0, 1, 0, 57, 0, 0, 0], [1, 0, 10, 1, 0, 0, 0, 71, 0, 0], [5, 24, 4, 0, 0, 1, 15, 1, 130, 0], [4, 4, 2, 28, 0, 0, 2, 5, 0, 108]] |
| 0.1499 | 7.32 | 2300 | 1.3652 | 0.7 | [[68, 3, 3, 11, 1, 1, 1, 4, 7, 1], [60, 82, 0, 2, 6, 0, 6, 4, 31, 2], [2, 1, 43, 6, 2, 4, 2, 2, 0, 3], [2, 0, 2, 36, 6, 0, 0, 1, 1, 4], [1, 2, 1, 2, 83, 2, 0, 1, 0, 1], [4, 0, 3, 11, 2, 46, 0, 3, 2, 1], [0, 0, 0, 0, 1, 1, 54, 1, 1, 1], [4, 0, 1, 1, 0, 0, 0, 76, 1, 0], [9, 22, 0, 1, 2, 0, 16, 2, 127, 1], [1, 0, 1, 27, 0, 0, 0, 4, 0, 120]] |
| 0.1467 | 7.64 | 2400 | 1.4623 | 0.6676 | [[59, 3, 10, 7, 0, 7, 1, 2, 8, 3], [55, 65, 1, 0, 4, 4, 15, 3, 46, 0], [3, 1, 48, 6, 2, 1, 2, 0, 0, 2], [1, 0, 3, 34, 5, 1, 0, 0, 1, 7], [0, 4, 1, 2, 83, 2, 0, 0, 0, 1], [4, 2, 12, 3, 1, 46, 0, 0, 2, 2], [1, 0, 1, 0, 1, 0, 56, 0, 0, 0], [3, 0, 5, 1, 0, 2, 4, 68, 0, 0], [9, 18, 1, 0, 3, 1, 17, 1, 129, 1], [2, 3, 1, 32, 0, 0, 0, 2, 0, 113]] |
| 0.1163 | 7.96 | 2500 | 1.5301 | 0.6819 | [[53, 2, 15, 7, 0, 3, 6, 4, 5, 5], [62, 76, 2, 0, 4, 2, 19, 7, 16, 5], [1, 1, 52, 1, 1, 2, 2, 2, 0, 3], [1, 0, 5, 28, 6, 2, 1, 2, 0, 7], [0, 1, 1, 2, 83, 3, 1, 1, 0, 1], [2, 1, 13, 3, 0, 44, 1, 4, 0, 4], [0, 0, 1, 0, 1, 0, 57, 0, 0, 0], [1, 0, 5, 0, 0, 1, 1, 75, 0, 0], [11, 17, 1, 0, 1, 1, 28, 3, 116, 2], [0, 3, 2, 10, 0, 1, 3, 2, 0, 132]] |
| 0.1087 | 8.28 | 2600 | 1.2231 | 0.7324 | [[62, 6, 5, 6, 0, 0, 2, 2, 12, 5], [32, 102, 0, 0, 2, 1, 12, 3, 41, 0], [3, 2, 45, 4, 1, 3, 3, 0, 0, 4], [5, 0, 3, 29, 3, 0, 0, 0, 3, 9], [1, 5, 0, 4, 73, 2, 1, 1, 4, 2], [5, 3, 3, 6, 1, 43, 0, 3, 3, 5], [0, 0, 1, 0, 1, 0, 57, 0, 0, 0], [1, 0, 5, 1, 0, 0, 1, 72, 3, 0], [3, 21, 0, 1, 1, 0, 9, 1, 142, 2], [1, 0, 1, 6, 0, 0, 0, 1, 0, 144]] |
| 0.1783 | 8.6 | 2700 | 1.1571 | 0.7390 | [[53, 5, 17, 5, 0, 3, 2, 4, 7, 4], [23, 127, 1, 0, 4, 2, 2, 3, 31, 0], [0, 1, 56, 2, 1, 2, 0, 0, 0, 3], [1, 0, 7, 34, 3, 1, 0, 0, 0, 6], [1, 2, 3, 6, 75, 1, 1, 0, 2, 2], [2, 1, 18, 5, 1, 40, 0, 2, 2, 1], [2, 0, 1, 0, 1, 0, 54, 0, 0, 1], [1, 0, 9, 1, 0, 0, 0, 71, 0, 1], [6, 27, 4, 0, 0, 1, 12, 0, 130, 0], [1, 2, 2, 11, 0, 0, 0, 1, 0, 136]] |
| 0.1733 | 8.92 | 2800 | 1.3044 | 0.7190 | [[51, 5, 13, 8, 0, 4, 4, 5, 8, 2], [29, 116, 6, 0, 0, 4, 10, 2, 26, 0], [1, 0, 49, 1, 1, 8, 2, 0, 0, 3], [0, 0, 5, 34, 4, 3, 0, 0, 0, 6], [1, 3, 4, 2, 76, 4, 2, 0, 1, 0], [1, 0, 8, 4, 0, 52, 0, 3, 2, 2], [0, 0, 1, 0, 0, 0, 58, 0, 0, 0], [1, 0, 9, 0, 0, 2, 0, 71, 0, 0], [3, 26, 5, 0, 0, 4, 19, 3, 118, 2], [1, 2, 2, 11, 0, 0, 5, 2, 0, 130]] |
| 0.1275 | 9.24 | 2900 | 1.2416 | 0.7267 | [[66, 6, 8, 5, 0, 4, 3, 3, 4, 1], [53, 111, 0, 0, 4, 1, 4, 3, 17, 0], [3, 1, 48, 3, 1, 5, 2, 0, 0, 2], [5, 1, 3, 27, 5, 2, 0, 2, 1, 6], [1, 2, 0, 1, 85, 0, 1, 0, 2, 1], [5, 0, 5, 6, 0, 50, 2, 3, 1, 0], [0, 0, 1, 0, 1, 0, 57, 0, 0, 0], [3, 0, 3, 0, 0, 2, 1, 74, 0, 0], [13, 34, 0, 1, 0, 1, 11, 1, 119, 0], [7, 0, 1, 14, 0, 0, 3, 2, 0, 126]] |
| 0.1231 | 9.55 | 3000 | 1.4284 | 0.7124 | [[73, 3, 7, 5, 0, 1, 4, 2, 4, 1], [84, 81, 0, 0, 3, 1, 3, 1, 20, 0], [2, 1, 51, 2, 1, 5, 0, 0, 0, 3], [5, 0, 3, 28, 6, 1, 0, 0, 1, 8], [1, 1, 0, 1, 86, 0, 1, 0, 2, 1], [9, 0, 6, 4, 1, 46, 1, 3, 2, 0], [2, 0, 1, 0, 1, 0, 54, 0, 0, 1], [10, 0, 1, 0, 0, 0, 1, 71, 0, 0], [21, 23, 1, 0, 0, 2, 12, 2, 119, 0], [7, 0, 1, 4, 0, 0, 0, 2, 0, 139]] |
| 0.1828 | 9.87 | 3100 | 1.2049 | 0.7524 | [[66, 2, 13, 7, 0, 0, 2, 1, 7, 2], [38, 115, 1, 0, 4, 0, 4, 2, 28, 1], [1, 0, 52, 2, 1, 4, 2, 0, 0, 3], [3, 0, 4, 35, 5, 0, 0, 0, 1, 4], [0, 1, 1, 5, 83, 0, 1, 0, 1, 1], [4, 1, 12, 6, 3, 41, 1, 2, 2, 0], [0, 0, 1, 0, 0, 0, 58, 0, 0, 0], [5, 0, 5, 0, 0, 0, 1, 72, 0, 0], [11, 24, 1, 0, 0, 0, 10, 1, 132, 1], [3, 0, 2, 9, 0, 0, 1, 2, 0, 136]] |
| 0.083 | 10.19 | 3200 | 1.2484 | 0.7238 | [[57, 5, 16, 5, 1, 1, 7, 2, 3, 3], [30, 127, 0, 0, 1, 2, 11, 3, 18, 1], [0, 0, 52, 3, 1, 5, 2, 0, 0, 2], [4, 0, 5, 30, 5, 1, 0, 0, 0, 7], [1, 1, 0, 4, 84, 0, 1, 0, 1, 1], [3, 1, 9, 4, 1, 48, 2, 1, 2, 1], [0, 0, 1, 0, 0, 0, 57, 0, 0, 1], [4, 0, 7, 0, 0, 0, 3, 69, 0, 0], [9, 27, 1, 0, 0, 1, 32, 0, 109, 1], [2, 1, 2, 16, 0, 0, 3, 2, 0, 127]] |
| 0.1256 | 10.51 | 3300 | 1.2746 | 0.7229 | [[64, 4, 8, 4, 1, 7, 5, 2, 2, 3], [43, 119, 0, 0, 2, 1, 10, 3, 14, 1], [0, 0, 49, 3, 1, 7, 2, 0, 0, 3], [4, 0, 8, 27, 5, 1, 0, 0, 0, 7], [2, 1, 2, 2, 81, 3, 1, 0, 1, 0], [2, 2, 10, 3, 0, 50, 1, 0, 2, 2], [0, 1, 1, 0, 1, 0, 55, 0, 0, 1], [2, 0, 3, 0, 0, 0, 1, 77, 0, 0], [11, 37, 0, 0, 0, 3, 25, 2, 102, 0], [1, 1, 2, 11, 0, 0, 1, 2, 0, 135]] |
| 0.1067 | 10.83 | 3400 | 1.1905 | 0.7381 | [[55, 3, 11, 9, 1, 2, 7, 2, 6, 4], [35, 122, 0, 0, 1, 1, 6, 2, 25, 1], [2, 1, 50, 2, 1, 3, 4, 0, 0, 2], [2, 0, 4, 37, 4, 0, 0, 0, 0, 5], [0, 1, 1, 4, 82, 2, 1, 0, 1, 1], [2, 1, 13, 6, 0, 44, 1, 0, 1, 4], [0, 1, 1, 0, 0, 0, 56, 0, 0, 1], [1, 0, 3, 1, 0, 0, 1, 76, 0, 1], [9, 36, 0, 0, 0, 1, 20, 2, 112, 0], [0, 0, 2, 10, 0, 0, 0, 0, 0, 141]] |
| 0.092 | 11.15 | 3500 | 1.1175 | 0.7476 | [[65, 3, 8, 2, 0, 4, 5, 2, 9, 2], [27, 108, 0, 0, 1, 1, 8, 2, 46, 0], [4, 0, 49, 2, 1, 4, 3, 0, 0, 2], [2, 0, 3, 37, 4, 0, 0, 0, 1, 5], [0, 1, 1, 3, 83, 2, 1, 0, 1, 1], [6, 2, 8, 6, 0, 45, 1, 0, 1, 3], [0, 0, 1, 0, 0, 0, 56, 0, 1, 1], [3, 0, 2, 1, 0, 1, 3, 72, 0, 1], [10, 22, 0, 0, 0, 1, 12, 1, 134, 0], [1, 0, 1, 12, 0, 1, 2, 0, 0, 136]] |
| 0.153 | 11.46 | 3600 | 1.2434 | 0.7362 | [[75, 4, 9, 1, 0, 1, 2, 3, 4, 1], [51, 111, 0, 1, 1, 1, 1, 3, 24, 0], [2, 1, 52, 2, 1, 3, 0, 2, 0, 2], [4, 1, 6, 30, 4, 0, 0, 1, 0, 6], [1, 4, 1, 4, 80, 0, 1, 1, 0, 1], [6, 4, 8, 6, 1, 39, 0, 1, 2, 5], [2, 1, 1, 1, 1, 0, 52, 0, 0, 1], [2, 0, 1, 0, 0, 0, 0, 80, 0, 0], [15, 33, 0, 0, 1, 1, 11, 3, 114, 2], [1, 0, 1, 7, 2, 0, 1, 1, 0, 140]] |
| 0.1065 | 11.78 | 3700 | 1.2327 | 0.7371 | [[69, 2, 10, 2, 0, 2, 6, 3, 5, 1], [44, 109, 2, 0, 0, 2, 5, 3, 28, 0], [2, 1, 50, 2, 1, 4, 2, 1, 0, 2], [4, 0, 4, 32, 4, 2, 0, 1, 1, 4], [1, 2, 2, 6, 76, 2, 1, 1, 2, 0], [2, 1, 11, 4, 0, 47, 2, 1, 2, 2], [0, 0, 1, 0, 0, 0, 58, 0, 0, 0], [2, 0, 4, 0, 0, 0, 1, 76, 0, 0], [9, 27, 0, 0, 0, 1, 13, 1, 128, 1], [2, 2, 2, 13, 0, 0, 4, 1, 0, 129]] |
| 0.0875 | 12.1 | 3800 | 1.2357 | 0.7457 | [[67, 3, 10, 5, 0, 3, 1, 3, 6, 2], [40, 110, 0, 0, 1, 1, 4, 3, 34, 0], [1, 1, 51, 2, 1, 3, 1, 2, 0, 3], [3, 0, 4, 35, 4, 1, 0, 1, 0, 4], [0, 2, 1, 5, 78, 3, 1, 1, 1, 1], [1, 2, 12, 4, 0, 45, 2, 4, 1, 1], [0, 0, 1, 0, 1, 0, 56, 0, 0, 1], [2, 0, 2, 0, 0, 0, 1, 78, 0, 0], [9, 26, 0, 0, 1, 1, 12, 1, 129, 1], [0, 0, 2, 13, 0, 0, 3, 1, 0, 134]] |
| 0.0714 | 12.42 | 3900 | 1.2996 | 0.7305 | [[77, 3, 7, 3, 0, 1, 2, 2, 4, 1], [58, 103, 0, 0, 0, 1, 4, 1, 26, 0], [4, 1, 51, 2, 1, 3, 1, 0, 0, 2], [4, 0, 4, 33, 6, 0, 0, 0, 0, 5], [3, 1, 4, 3, 77, 2, 1, 0, 1, 1], [5, 2, 14, 6, 0, 44, 0, 0, 0, 1], [2, 1, 1, 0, 1, 0, 53, 0, 0, 1], [7, 0, 7, 0, 0, 0, 0, 69, 0, 0], [14, 28, 0, 0, 0, 1, 12, 1, 124, 0], [2, 0, 1, 11, 0, 3, 0, 0, 0, 136]] |
| 0.1433 | 12.74 | 4000 | 1.2167 | 0.7410 | [[74, 4, 9, 3, 0, 2, 1, 2, 4, 1], [42, 114, 0, 0, 1, 1, 3, 3, 29, 0], [4, 0, 49, 2, 1, 5, 2, 0, 0, 2], [4, 0, 4, 31, 6, 2, 0, 0, 0, 5], [0, 1, 2, 2, 86, 0, 1, 0, 0, 1], [7, 2, 9, 6, 0, 46, 1, 0, 0, 1], [2, 1, 1, 0, 1, 0, 53, 0, 0, 1], [3, 0, 5, 0, 0, 0, 0, 75, 0, 0], [9, 34, 0, 0, 1, 2, 11, 1, 120, 2], [3, 0, 1, 16, 0, 1, 2, 0, 0, 130]] |
| 0.0765 | 13.06 | 4100 | 1.2837 | 0.7381 | [[69, 4, 10, 3, 0, 4, 4, 2, 3, 1], [48, 105, 0, 0, 2, 1, 6, 3, 28, 0], [3, 0, 50, 1, 1, 6, 2, 0, 0, 2], [4, 0, 5, 31, 6, 1, 0, 0, 0, 5], [0, 1, 1, 4, 82, 2, 1, 1, 0, 1], [3, 2, 10, 5, 0, 47, 2, 2, 0, 1], [0, 1, 1, 0, 1, 0, 55, 0, 0, 1], [2, 0, 3, 0, 0, 0, 0, 78, 0, 0], [10, 30, 0, 0, 1, 2, 16, 1, 118, 2], [1, 0, 1, 8, 0, 2, 0, 1, 0, 140]] |
| 0.0753 | 13.38 | 4200 | 1.2866 | 0.7371 | [[72, 4, 9, 2, 0, 3, 3, 2, 4, 1], [46, 110, 0, 0, 1, 1, 5, 3, 27, 0], [3, 0, 51, 1, 1, 6, 1, 0, 0, 2], [4, 0, 5, 30, 6, 1, 0, 0, 0, 6], [0, 1, 2, 3, 80, 2, 1, 1, 2, 1], [7, 2, 8, 4, 0, 49, 0, 0, 1, 1], [0, 1, 1, 0, 1, 0, 55, 0, 0, 1], [2, 0, 4, 0, 0, 1, 0, 76, 0, 0], [9, 34, 1, 0, 1, 1, 12, 1, 120, 1], [7, 1, 1, 9, 0, 3, 0, 1, 0, 131]] |
| 0.0766 | 13.69 | 4300 | 1.3334 | 0.7324 | [[68, 5, 9, 5, 0, 2, 3, 3, 3, 2], [53, 106, 0, 0, 1, 1, 6, 4, 22, 0], [2, 0, 54, 1, 1, 4, 1, 0, 0, 2], [3, 0, 5, 34, 4, 1, 0, 0, 0, 5], [0, 2, 1, 4, 79, 2, 2, 1, 1, 1], [4, 2, 10, 4, 0, 47, 1, 2, 1, 1], [0, 1, 1, 0, 1, 0, 55, 0, 0, 1], [2, 0, 3, 0, 0, 0, 1, 77, 0, 0], [10, 37, 0, 0, 1, 1, 13, 1, 115, 2], [5, 0, 1, 11, 0, 1, 0, 1, 0, 134]] |
| 0.0699 | 14.01 | 4400 | 1.3905 | 0.7276 | [[66, 3, 8, 5, 0, 2, 3, 5, 6, 2], [59, 94, 0, 0, 1, 1, 5, 5, 28, 0], [2, 0, 53, 1, 1, 4, 1, 1, 0, 2], [4, 0, 5, 33, 5, 0, 0, 1, 0, 4], [0, 1, 2, 4, 79, 2, 2, 1, 1, 1], [3, 1, 13, 5, 0, 44, 1, 3, 1, 1], [0, 0, 1, 0, 1, 0, 56, 0, 0, 1], [2, 0, 1, 0, 0, 0, 1, 79, 0, 0], [10, 27, 0, 0, 1, 1, 12, 2, 125, 2], [5, 0, 1, 11, 0, 0, 0, 1, 0, 135]] |
| 0.1218 | 14.33 | 4500 | 1.3635 | 0.7324 | [[68, 3, 8, 4, 0, 1, 3, 4, 7, 2], [58, 92, 0, 0, 1, 1, 5, 4, 32, 0], [1, 0, 54, 1, 1, 4, 1, 0, 0, 3], [4, 0, 5, 33, 5, 0, 0, 0, 0, 5], [0, 1, 2, 4, 80, 3, 2, 1, 0, 0], [2, 1, 13, 4, 0, 44, 2, 3, 1, 2], [0, 0, 1, 0, 1, 0, 56, 0, 0, 1], [2, 0, 1, 0, 0, 0, 1, 79, 0, 0], [9, 26, 0, 0, 1, 1, 15, 1, 126, 1], [4, 0, 1, 10, 0, 0, 0, 1, 0, 137]] |
| 0.0648 | 14.65 | 4600 | 1.3205 | 0.7343 | [[66, 3, 9, 5, 0, 2, 3, 3, 7, 2], [52, 95, 0, 0, 2, 1, 6, 4, 33, 0], [3, 1, 52, 1, 1, 3, 2, 0, 0, 2], [4, 0, 5, 33, 5, 0, 0, 0, 0, 5], [0, 2, 1, 3, 83, 1, 2, 1, 0, 0], [2, 2, 13, 4, 0, 44, 2, 2, 1, 2], [0, 0, 1, 0, 1, 0, 56, 0, 0, 1], [2, 0, 2, 0, 0, 0, 1, 78, 0, 0], [9, 26, 0, 0, 1, 1, 14, 1, 127, 1], [3, 0, 1, 10, 0, 0, 1, 1, 0, 137]] |
| 0.0917 | 14.97 | 4700 | 1.3112 | 0.7343 | [[66, 3, 9, 5, 0, 2, 3, 3, 7, 2], [52, 98, 0, 0, 2, 1, 6, 4, 30, 0], [3, 1, 52, 1, 1, 3, 2, 0, 0, 2], [4, 0, 5, 33, 5, 0, 0, 0, 0, 5], [0, 2, 1, 3, 83, 1, 2, 1, 0, 0], [2, 2, 13, 4, 0, 44, 2, 2, 1, 2], [0, 0, 1, 0, 1, 0, 56, 0, 0, 1], [2, 0, 2, 0, 0, 0, 1, 78, 0, 0], [9, 28, 0, 0, 1, 1, 15, 1, 124, 1], [3, 0, 1, 10, 0, 1, 0, 1, 0, 137]] |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
xhluca/Llama-3-8B-Web-Q4_K_M-GGUF | xhluca | 2024-04-26T21:33:58Z | 3 | 0 | transformers | [
"transformers",
"gguf",
"agents",
"agent",
"llm",
"llama",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:McGill-NLP/WebLINX",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-26T21:33:44Z | ---
language:
- en
license: llama3
library_name: transformers
tags:
- agents
- agent
- llm
- llama
- llama-cpp
- gguf-my-repo
datasets:
- McGill-NLP/WebLINX
---
# xhluca/Llama-3-8B-Web-Q4_K_M-GGUF
This model was converted to GGUF format from [`McGill-NLP/Llama-3-8B-Web`](https://huggingface.co/McGill-NLP/Llama-3-8B-Web) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/McGill-NLP/Llama-3-8B-Web) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo xhluca/Llama-3-8B-Web-Q4_K_M-GGUF --model llama-3-8b-web.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo xhluca/Llama-3-8B-Web-Q4_K_M-GGUF --model llama-3-8b-web.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-8b-web.Q4_K_M.gguf -n 128
```
|
mlx-community/Meta-Llama-3-8B-Instruct | mlx-community | 2024-04-26T21:31:57Z | 22 | 2 | mlx | [
"mlx",
"safetensors",
"llama",
"facebook",
"meta",
"pytorch",
"llama-3",
"text-generation",
"conversational",
"en",
"license:other",
"region:us"
] | text-generation | 2024-04-26T21:19:33Z | ---
language:
- en
license: other
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- mlx
pipeline_tag: text-generation
license_name: llama3
license_link: LICENSE
extra_gated_prompt: "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version\
\ Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for\
\ use, reproduction, distribution and modification of the Llama Materials set forth\
\ herein.\n\"Documentation\" means the specifications, manuals and documentation\
\ accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\
\"Licensee\" or \"you\" means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf), of\
\ the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama\
\ 3\" means the foundational large language models and software and algorithms,\
\ including machine-learning model code, trained model weights, inference-enabling\
\ code, training-enabling code, fine-tuning enabling code and other elements of\
\ the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\
\"Llama Materials\" means, collectively, Metaβs proprietary Meta Llama 3 and Documentation\
\ (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"\
we\" means Meta Platforms Ireland Limited (if you are located in or, if you are\
\ an entity, your principal place of business is in the EEA or Switzerland) and\
\ Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n\
\ \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted\
\ a non-exclusive, worldwide, non-transferable and royalty-free limited license\
\ under Meta’s intellectual property or other rights owned by Meta embodied in the\
\ Llama Materials to use, reproduce, distribute, copy, create derivative works of,\
\ and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni.\
\ If you distribute or make available the Llama Materials (or any derivative works\
\ thereof), or a product or service that uses any of them, including another AI\
\ model, you shall (A) provide a copy of this Agreement with any such Llama Materials;\
\ and (B) prominently display “Built with Meta Llama 3” on a related website, user\
\ interface, blogpost, about page, or product documentation. If you use the Llama\
\ Materials to create, train, fine tune, or otherwise improve an AI model, which\
\ is distributed or made available, you shall also include “Llama 3” at the beginning\
\ of any such AI model name.\nii. If you receive Llama Materials, or any derivative\
\ works thereof, from a Licensee as part of an integrated end user product, then\
\ Section 2 of this Agreement will not apply to you.\niii. You must retain in all\
\ copies of the Llama Materials that you distribute the following attribution notice\
\ within a “Notice” text file distributed as a part of such copies: “Meta Llama\
\ 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms,\
\ Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with\
\ applicable laws and regulations (including trade compliance laws and regulations)\
\ and adhere to the Acceptable Use Policy for the Llama Materials (available at\
\ https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference\
\ into this Agreement.\nv. You will not use the Llama Materials or any output or\
\ results of the Llama Materials to improve any other large language model (excluding\
\ Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If,\
\ on the Meta Llama 3 version release date, the monthly active users of the products\
\ or services made available by or for Licensee, or Licensee’s affiliates, is greater\
\ than 700 million monthly active users in the preceding calendar month, you must\
\ request a license from Meta, which Meta may grant to you in its sole discretion,\
\ and you are not authorized to exercise any of the rights under this Agreement\
\ unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer\
\ of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT\
\ AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF\
\ ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,\
\ INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY,\
\ OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING\
\ THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME\
\ ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n\
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER\
\ ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY,\
\ OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT,\
\ SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META\
\ OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n\
5. Intellectual Property.\na. No trademark licenses are granted under this Agreement,\
\ and in connection with the Llama Materials, neither Meta nor Licensee may use\
\ any name or mark owned by or associated with the other or any of its affiliates,\
\ except as required for reasonable and customary use in describing and redistributing\
\ the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you\
\ a license to use “Llama 3” (the “Mark”) solely as required to comply with the\
\ last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently\
\ accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All\
\ goodwill arising out of your use of the Mark will inure to the benefit of Meta.\n\
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for\
\ Meta, with respect to any derivative works and modifications of the Llama Materials\
\ that are made by you, as between you and Meta, you are and will be the owner of\
\ such derivative works and modifications.\nc. If you institute litigation or other\
\ proceedings against Meta or any entity (including a cross-claim or counterclaim\
\ in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results,\
\ or any portion of any of the foregoing, constitutes infringement of intellectual\
\ property or other rights owned or licensable by you, then any licenses granted\
\ to you under this Agreement shall terminate as of the date such litigation or\
\ claim is filed or instituted. You will indemnify and hold harmless Meta from and\
\ against any claim by any third party arising out of or related to your use or\
\ distribution of the Llama Materials.\n6. Term and Termination. The term of this\
\ Agreement will commence upon your acceptance of this Agreement or access to the\
\ Llama Materials and will continue in full force and effect until terminated in\
\ accordance with the terms and conditions herein. Meta may terminate this Agreement\
\ if you are in breach of any term or condition of this Agreement. Upon termination\
\ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\
\ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\
\ and Jurisdiction. This Agreement will be governed and construed under the laws\
\ of the State of California without regard to choice of law principles, and the\
\ UN Convention on Contracts for the International Sale of Goods does not apply\
\ to this Agreement. The courts of California shall have exclusive jurisdiction\
\ of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use\
\ Policy\nMeta is committed to promoting safe and fair use of its tools and features,\
\ including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable\
\ Use Policy (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n\
#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly.\
\ You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate\
\ the law or others’ rights, including to:\n    1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 2. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 3. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 4.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 5. Collect, process, disclose, generate, or infer health, demographic,\
\ or other sensitive personal or private information about individuals without rights\
\ and consents required by applicable laws\n 6. Engage in or facilitate any action\
\ or generate any content that infringes, misappropriates, or otherwise violates\
\ any third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 7. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n2. Engage in, promote, incite,\
\ facilitate, or assist in the planning or development of activities that present\
\ a risk of death or bodily harm to individuals, including use of Meta Llama 3 related\
\ to the following:\n 1. Military, warfare, nuclear industries or applications,\
\ espionage, use for materials or activities that are subject to the International\
\ Traffic Arms Regulations (ITAR) maintained by the United States Department of\
\ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\
\ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\
\ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\
\ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\
\ content intended to incite or promote violence, abuse, or any infliction of bodily\
\ harm to an individual\n3. Intentionally deceive or mislead others, including use\
\ of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering\
\ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\
\ or furthering defamatory content, including the creation of defamatory statements,\
\ images, or other content\n 3. Generating, promoting, or further distributing\
\ spam\n 4. Impersonating another individual without consent, authorization,\
\ or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are\
\ human-generated\n 6. Generating or facilitating false online engagement, including\
\ fake reviews and other means of fake online engagement\n4. Fail to appropriately\
\ disclose to end users any known dangers of your AI system\nPlease report any violation\
\ of this Policy, software “bug,” or other problems that could lead to a violation\
\ of this Policy through one of the following means:\n * Reporting issues with\
\ the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n\
\ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\
\ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\
\ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
widget:
- example_title: Hello
messages:
- role: user
content: Hey my name is Julien! How are you?
- example_title: Winter holidays
messages:
- role: system
content: You are a helpful and honest assistant. Please, respond concisely and
truthfully.
- role: user
content: Can you recommend a good destination for Winter holidays?
- example_title: Programming assistant
messages:
- role: system
content: You are a helpful and honest code and programming assistant. Please,
respond concisely and truthfully.
- role: user
content: Write a function that computes the nth fibonacci number.
inference:
parameters:
max_new_tokens: 300
stop:
- <|end_of_text|>
- <|eot_id|>
---
# mlx-community/Meta-Llama-3-8B-Instruct
This model was converted to MLX format from [`meta-llama/Meta-Llama-3-8B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) using mlx-lm version **0.12.0**.
Refer to the [original model card](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Meta-Llama-3-8B-Instruct")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
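Since this is an instruct checkpoint, results are generally better when the prompt is wrapped in the Llama 3 chat template first — a sketch, assuming the tokenizer returned by `load` forwards the standard `apply_chat_template` method of the underlying HF tokenizer:
```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Meta-Llama-3-8B-Instruct")

messages = [{"role": "user", "content": "Hey my name is Julien! How are you?"}]
# Renders the <|start_header_id|>...<|eot_id|> headers expected by Llama 3.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```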
|
kchopra04/lora_model_inst | kchopra04 | 2024-04-26T21:31:26Z | 76 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-04-26T18:58:54Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** kchopra04
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
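The card ships no inference example; below is a minimal sketch with plain transformers, assuming the repo contains merged 4-bit weights (as the `4-bit`/`bitsandbytes` tags suggest) rather than bare LoRA adapters — the prompt and generation settings are illustrative:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kchopra04/lora_model_inst"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# bitsandbytes must be installed; the 4-bit quantization config is
# expected to be picked up from the checkpoint itself.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Write a haiku about llamas.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```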
|
woransa/OrpoLlama-3-8B | woransa | 2024-04-26T21:28:40Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-26T20:43:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
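Since nothing is documented yet, here is a minimal sketch based only on the repo's tags (`llama`, `text-generation`, `conversational`); it assumes the tokenizer bundles a chat template, and the prompt is a placeholder:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "woransa/OrpoLlama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello! What can you do?"}]
# Assumes a chat template is shipped with the tokenizer.
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```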
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sumandas/llama3-openhermes-2.5 | sumandas | 2024-04-26T21:23:05Z | 84 | 4 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:teknium/OpenHermes-2.5",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-20T18:35:19Z | ---
datasets:
- teknium/OpenHermes-2.5
license: llama2
---
**Fine-tuned on the OpenHermes-2.5 dataset for 1 epoch**
- Follows the Llama-3 instruction format described in https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3 (see the sketch below)
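A short sketch of what that format looks like in practice, assuming the uploaded tokenizer bundles the Llama-3 chat template:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sumandas/llama3-openhermes-2.5")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the Llama-3 prompt format."},
]
# Renders <|begin_of_text|><|start_header_id|>system<|end_header_id|>... headers.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```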
**Training Details**
https://medium.com/@sumandas0/fine-tune-llama3-on-million-scale-dataset-in-consumer-gpu-using-qlora-deepspeed-3ae8ad75299a |
NIHNCATS/NHS-BiomedNLP-BiomedBERT-hypop | NIHNCATS | 2024-04-26T21:21:01Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract",
"base_model:finetune:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-04-25T21:50:12Z | ---
license: mit
base_model: microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: NHS-BiomedNLP-BiomedBERT-hypop
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NHS-BiomedNLP-BiomedBERT-hypop
This model is a fine-tuned version of [microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract](https://huggingface.co/microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4277
- Accuracy: 0.8293
- Precision: 0.8301
- Recall: 0.8375
- F1: 0.8285
## Model description
More information needed
## Intended uses & limitations
More information needed
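For reference, a minimal sketch of querying the classifier; note that the card does not document what the output labels mean, so the example input and label interpretation are placeholders:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="NIHNCATS/NHS-BiomedNLP-BiomedBERT-hypop")

# Illustrative abstract-style input; label semantics are undocumented here.
print(classifier("Patients presented with fatigue, weight gain, and cold intolerance."))
```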
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.0264 | 1.0 | 397 | 0.4689 | 0.7950 | 0.7974 | 0.8017 | 0.7946 |
| 0.5258 | 2.0 | 794 | 0.5543 | 0.7779 | 0.7745 | 0.7743 | 0.7744 |
| 3.0689 | 3.0 | 1191 | 0.6701 | 0.8050 | 0.8068 | 0.7957 | 0.7990 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.2+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Beanpow/zephyr-7b-dpo-full | Beanpow | 2024-04-26T21:12:02Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:alignment-handbook/zephyr-7b-sft-full",
"base_model:finetune:alignment-handbook/zephyr-7b-sft-full",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-25T13:07:18Z | ---
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: zephyr-7b-dpo-full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-dpo-full
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set (a short note on how the reward columns are defined follows the list):
- Loss: 1.3377
- Rewards/chosen: -14.5626
- Rewards/rejected: -18.1281
- Rewards/accuracies: 0.6389
- Rewards/margins: 3.5654
- Logps/rejected: -2073.0146
- Logps/chosen: -1738.2311
- Logits/rejected: -0.6819
- Logits/chosen: -1.0035
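For readers new to these columns: in a standard DPO setup, the "rewards" are the implicit rewards \\(\beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\text{ref}}(y \mid x)}\\) computed for the chosen and rejected completions, "margins" is their difference, and "accuracies" is how often the chosen completion receives the higher reward. The training objective is the usual DPO loss — shown here as background, not taken from this card:

$$
\mathcal{L}_{\text{DPO}} = -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\!\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)}\right)\right]
$$

where \\(y_w\\) and \\(y_l\\) are the chosen and rejected responses, \\(\pi_{\text{ref}}\\) is the frozen SFT reference model, and \\(\sigma\\) is the sigmoid.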
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 4.5854 | 0.1047 | 100 | 4.3811 | -0.2719 | -0.4992 | 0.6488 | 0.2273 | -310.1242 | -309.1552 | -2.1923 | -2.2813 |
| 2.6464 | 0.2093 | 200 | 2.6063 | -9.6247 | -11.6315 | 0.625 | 2.0068 | -1423.3580 | -1244.4360 | 0.6982 | -0.3562 |
| 1.9069 | 0.3140 | 300 | 2.2624 | -9.8468 | -11.9256 | 0.6329 | 2.0788 | -1452.7675 | -1266.6490 | 1.5569 | 0.4590 |
| 1.6642 | 0.4186 | 400 | 1.6421 | -14.4918 | -17.8494 | 0.625 | 3.3576 | -2045.1493 | -1731.1526 | -0.0875 | -0.7751 |
| 1.6328 | 0.5233 | 500 | 1.5120 | -13.0737 | -16.3036 | 0.6389 | 3.2299 | -1890.5623 | -1589.3370 | -0.0918 | -0.6590 |
| 1.6032 | 0.6279 | 600 | 1.4752 | -17.3374 | -21.4238 | 0.6230 | 4.0864 | -2402.5845 | -2015.7072 | 0.6402 | 0.0190 |
| 1.5039 | 0.7326 | 700 | 1.3853 | -14.1299 | -17.5624 | 0.6528 | 3.4325 | -2016.4491 | -1694.9624 | -0.4968 | -0.8898 |
| 1.3527 | 0.8373 | 800 | 1.3663 | -13.9016 | -17.2583 | 0.6448 | 3.3567 | -1986.0359 | -1672.1306 | -0.6750 | -1.0375 |
| 1.5137 | 0.9419 | 900 | 1.3374 | -14.5395 | -18.1313 | 0.6409 | 3.5918 | -2073.3389 | -1735.9152 | -0.6740 | -1.0018 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.1.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
AmberYifan/safe-spin-iter2 | AmberYifan | 2024-04-26T21:10:10Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:AmberYifan/safe-spin-iter1",
"base_model:finetune:AmberYifan/safe-spin-iter1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-25T23:40:53Z | ---
license: apache-2.0
base_model: AmberYifan/safe-spin-iter1
tags:
- generated_from_trainer
model-index:
- name: safe-spin-iter2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# safe-spin-iter2
This model is a fine-tuned version of [AmberYifan/safe-spin-iter1](https://huggingface.co/AmberYifan/safe-spin-iter1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
jdhadljasnajd/chat-model | jdhadljasnajd | 2024-04-26T21:09:54Z | 1 | 0 | transformers | [
"transformers",
"llama",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T20:47:53Z | ---
license: other
license_name: license
license_link: LICENSE
---
|
harir/mistral-7b-instruct-v0.1-review-toxicity | harir | 2024-04-26T21:05:40Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-26T21:02:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
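Until the card is filled in, a minimal sketch that loads the checkpoint by its repo id with the standard text-generation pipeline; the prompt is an assumption (the model name suggests review-toxicity prompting, but the expected input format is not confirmed by the card):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="harir/mistral-7b-instruct-v0.1-review-toxicity",
    device_map="auto",
)

# Illustrative prompt only; the expected input format is undocumented.
prompt = "Is the following review toxic? Review: 'The product broke after one day.'"
print(generator(prompt, max_new_tokens=32)[0]["generated_text"])
```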
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Laz4rz/hf-LunarLander-1-ppo | Laz4rz | 2024-04-26T21:02:59Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-04-26T20:45:27Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.43 +/- 17.17
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Run the following to evaluate the agent locally:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor

repo_id = "Laz4rz/hf-LunarLander-1-ppo"  # The repo_id
filename = "ppo-LunarLander-v2.zip"  # The model filename.zip

# Download the checkpoint from the Hub and load it into a PPO agent.
checkpoint = load_from_hub(repo_id, filename)
model = PPO.load(checkpoint)

# Evaluate over 10 episodes on a monitored environment.
eval_env = Monitor(gym.make("LunarLander-v2"))
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
```
|
Syed-Hasan-8503/Llama-3-openhermes-reft | Syed-Hasan-8503 | 2024-04-26T21:02:49Z | 4 | 1 | transformers | [
"transformers",
"dataset:teknium/OpenHermes-2.5",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-25T19:23:02Z | ---
library_name: transformers
license: apache-2.0
datasets:
- teknium/OpenHermes-2.5
---
# Llama-3-openhermes-reft

**Llama-3-openhermes-reft** is a fine-tuned version of **[meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)** on a 10K subset of **[teknium/OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5)**
dataset using **Representation Fine-Tuning (ReFT)**. The model has been trained for **1 epoch** on **1x A100** using the PyReFT library.
#### What is ReFT?
ReFT methods are drop-in replacements for weight-based PEFTs. Parameter-efficient fine-tuning (PEFT) methods offer an efficient, cheaper alternative to full fine-tuning by updating only a small fraction of the weights, using less memory and finishing training faster.
Current state-of-the-art PEFTs such as LoRA and DoRA modify the model's weights but not its representations. Representation Finetuning (ReFT) instead operates on a frozen base model and learns task-specific interventions on hidden representations.
#### PyReFT
PyReFT is a Python library for training and sharing ReFTs.
This library is built on top of pyvene, a library for performing and training activation interventions on arbitrary PyTorch models.
* Codebase: **[PyReFT](https://github.com/stanfordnlp/pyreft)**
* PyPI release: **[Link](https://pypi.org/project/pyreft/)**
* Any pretrained LM available on HuggingFace is supported through pyreft for finetuning with ReFT methods, and finetuned models can be easily uploaded to HuggingFace.
#### Inference
```python
import torch, transformers, pyreft
device = "cuda"
model_name_or_path = "meta-llama/Meta-Llama-3-8B"
model = transformers.AutoModelForCausalLM.from_pretrained(
model_name_or_path, torch_dtype=torch.bfloat16, device_map=device)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name_or_path)
reft_model = pyreft.ReftModel.load(
"Syed-Hasan-8503/Llama-3-openhermes-reft", model, from_huggingface_hub=True
)
reft_model.set_device("cuda")
instruction = "A rectangular garden has a length of 25 feet and a width of 15 feet. If you want to build a fence around the entire garden, how many feet of fencing will you need?"
prompt_no_input_template = """<|begin_of_text|><|start_header_id|>user<|end_header_id|>%s<|eot_id|><|start_header_id|>assistant<|end_header_id|>"""
# tokenize and prepare the input
prompt = prompt_no_input_template % instruction
prompt = tokenizer(prompt, return_tensors="pt").to(device)
base_unit_location = prompt["input_ids"].shape[-1] - 1 # last position
_, reft_response = reft_model.generate(
prompt, unit_locations={"sources->base": (None, [[[base_unit_location]]])},
intervene_on_prompt=True, max_new_tokens=512, do_sample=True,
eos_token_id=tokenizer.eos_token_id, early_stopping=True
)
print(tokenizer.decode(reft_response[0], skip_special_tokens=True))
``` |
GauravR12060102/my_awesome_eli5_clm-model | GauravR12060102 | 2024-04-26T20:56:53Z | 209 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:eli5_category",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-26T20:48:38Z | ---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
datasets:
- eli5_category
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the eli5_category dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5790
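As a minimal usage sketch (an assumption, not part of the original card), the checkpoint can be loaded with the standard `transformers` text-generation pipeline:
```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint from the Hub
generator = pipeline("text-generation", model="GauravR12060102/my_awesome_eli5_clm-model")

# Generate a continuation for an ELI5-style prompt (the prompt is illustrative only)
result = generator("Somatic hypermutation allows the immune system to", max_new_tokens=50)
print(result[0]["generated_text"])
```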
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.6908 | 1.0 | 1273 | 3.5831 |
| 3.5749 | 2.0 | 2546 | 3.5787 |
| 3.5283 | 3.0 | 3819 | 3.5790 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
Litzy619/V0424HMA20 | Litzy619 | 2024-04-26T20:52:50Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-04-26T12:50:12Z | ---
license: mit
base_model: microsoft/phi-2
tags:
- generated_from_trainer
model-index:
- name: V0424HMA20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0424HMA20
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0675
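A minimal inference sketch (assumed; the card provides no usage code). It presumes the repository hosts full causal-LM weights loadable with `AutoModelForCausalLM`; if it only stores a PEFT adapter, load `microsoft/phi-2` first and attach the adapter with the `peft` library instead:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Litzy619/V0424HMA20"  # fine-tuned from microsoft/phi-2

# Assumes full model weights in the repo (see note above)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# phi-2's documented "Instruct:/Output:" prompt format, used here illustratively
inputs = tokenizer("Instruct: Explain gradient accumulation in one sentence.\nOutput:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```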
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 60
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8142 | 0.09 | 10 | 0.3516 |
| 0.1881 | 0.18 | 20 | 0.1201 |
| 0.1155 | 0.27 | 30 | 0.0873 |
| 0.0936 | 0.36 | 40 | 0.0807 |
| 0.0868 | 0.45 | 50 | 0.0851 |
| 0.0884 | 0.54 | 60 | 0.0797 |
| 0.0825 | 0.63 | 70 | 0.0671 |
| 0.0726 | 0.73 | 80 | 0.0749 |
| 0.0803 | 0.82 | 90 | 0.0740 |
| 0.0796 | 0.91 | 100 | 0.0675 |
| 0.0722 | 1.0 | 110 | 0.0688 |
| 0.0639 | 1.09 | 120 | 0.0634 |
| 0.0642 | 1.18 | 130 | 0.0750 |
| 0.0638 | 1.27 | 140 | 0.0678 |
| 0.0628 | 1.36 | 150 | 0.0673 |
| 0.0645 | 1.45 | 160 | 0.0682 |
| 0.0575 | 1.54 | 170 | 0.0695 |
| 0.0635 | 1.63 | 180 | 0.0652 |
| 0.0534 | 1.72 | 190 | 0.0661 |
| 0.0682 | 1.81 | 200 | 0.0620 |
| 0.0551 | 1.9 | 210 | 0.0655 |
| 0.0539 | 1.99 | 220 | 0.0631 |
| 0.0342 | 2.08 | 230 | 0.0705 |
| 0.0331 | 2.18 | 240 | 0.0829 |
| 0.0313 | 2.27 | 250 | 0.0669 |
| 0.0286 | 2.36 | 260 | 0.0698 |
| 0.0324 | 2.45 | 270 | 0.0721 |
| 0.0288 | 2.54 | 280 | 0.0713 |
| 0.0294 | 2.63 | 290 | 0.0700 |
| 0.0322 | 2.72 | 300 | 0.0682 |
| 0.0313 | 2.81 | 310 | 0.0675 |
| 0.029 | 2.9 | 320 | 0.0676 |
| 0.0359 | 2.99 | 330 | 0.0675 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
hoaj/danish-bert-botxo-fb-housing-posts | hoaj | 2024-04-26T20:39:14Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:Maltehb/danish-bert-botxo",
"base_model:finetune:Maltehb/danish-bert-botxo",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-04-26T20:36:20Z | ---
license: cc-by-4.0
base_model: Maltehb/danish-bert-botxo
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: danish-bert-botxo-fb-housing-posts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# danish-bert-botxo-fb-housing-posts
This model is a fine-tuned version of [Maltehb/danish-bert-botxo](https://huggingface.co/Maltehb/danish-bert-botxo) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1659
- Accuracy: 0.9519
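The card ships no inference snippet; a minimal sketch, assuming the standard `transformers` text-classification pipeline (the Danish example sentence is purely illustrative):
```python
from transformers import pipeline

# Load the fine-tuned Danish BERT classifier from the Hub
classifier = pipeline("text-classification", model="hoaj/danish-bert-botxo-fb-housing-posts")

# Classify a Facebook-style housing post (illustrative input)
print(classifier("2-værelses lejlighed til leje i København fra 1. juni."))
```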
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.372 | 1.0 | 55 | 0.2514 | 0.9251 |
| 0.171 | 2.0 | 110 | 0.1881 | 0.9305 |
| 0.2315 | 3.0 | 165 | 0.1854 | 0.9465 |
| 0.1284 | 4.0 | 220 | 0.1745 | 0.9465 |
| 0.0353 | 5.0 | 275 | 0.1659 | 0.9519 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
MrezaPRZ/CodeLLama_SFT_GRETEL | MrezaPRZ | 2024-04-26T20:34:45Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-26T20:32:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gremlin97/RemoteDiff | gremlin97 | 2024-04-26T20:28:30Z | 2 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dataset:None",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-12-21T07:23:46Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
datasets:
- None
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
# Text-to-image finetuning - gremlin97/RemoteDiff
This pipeline was finetuned from **runwayml/stable-diffusion-v1-5** on the **None** dataset. Below are some example images generated with the finetuned pipeline using the following prompts: ['A satellite image of a crop field']:

## Pipeline usage
You can use the pipeline like so:
```python
from diffusers import DiffusionPipeline
import torch
pipeline = DiffusionPipeline.from_pretrained("gremlin97/RemoteDiff", torch_dtype=torch.float16)
pipeline = pipeline.to("cuda")  # fp16 weights need a GPU
prompt = "A satellite image of a crop field"
image = pipeline(prompt).images[0]
image.save("my_image.png")
```
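As an optional, assumed pattern (not from the original card), you can seed the pipeline for reproducible samples:
```python
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained("gremlin97/RemoteDiff", torch_dtype=torch.float16).to("cuda")

# A seeded generator makes the sample deterministic for a fixed prompt and step count
generator = torch.Generator(device="cuda").manual_seed(0)
image = pipeline("A satellite image of a crop field", generator=generator).images[0]
image.save("seeded_image.png")
```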
## Training info
These are the key hyperparameters used during training:
* Epochs: 5
* Learning rate: 1e-06
* Batch size: 4
* Gradient accumulation steps: 4
* Image resolution: 224
* Mixed-precision: fp16
More information on all the CLI arguments and the environment is available on your [`wandb` run page](https://wandb.ai/gremlin/text2image-fine-tune/runs/tegl1gtv).
|
HenryCai1129/adapter-llama-adapterhappy2sad-1k-50-0.006 | HenryCai1129 | 2024-04-26T20:28:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T20:27:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
akankshya107/llava_dpt_2 | akankshya107 | 2024-04-26T20:18:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T18:46:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BohanJiang/my_awesome_opus_books_model | BohanJiang | 2024-04-26T20:14:12Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-04-26T20:00:55Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: my_awesome_opus_books_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1944
- Bleu: 0.1991
- Gen Len: 18.18
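No usage example is given; a minimal sketch, assuming the English-to-French setup of the usual `opus_books` fine-tuning recipe (the T5 task prefix below is an assumption):
```python
from transformers import pipeline

# The card's pipeline tag is text2text-generation; T5 models expect a task prefix
translator = pipeline("text2text-generation", model="BohanJiang/my_awesome_opus_books_model")

text = "translate English to French: Legumes share resources with nitrogen-fixing bacteria."
print(translator(text)[0]["generated_text"])
```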
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 3.6446 | 1.0 | 1617 | 3.2778 | 0.1513 | 18.2069 |
| 3.5134 | 2.0 | 3234 | 3.1944 | 0.1991 | 18.18 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
McGill-NLP/Llama-3-8B-Web | McGill-NLP | 2024-04-26T20:06:59Z | 212 | 212 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"agents",
"agent",
"llm",
"conversational",
"en",
"dataset:McGill-NLP/WebLINX",
"arxiv:2402.05930",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-22T20:48:36Z | ---
license: llama3
datasets:
- McGill-NLP/WebLINX
language:
- en
library_name: transformers
tags:
- agents
- agent
- llm
- llama
---
<div align="center">
<h1>Llama-3-8B-Web</h1>
<table>
<tr>
<td>
<a href="https://github.com/McGill-NLP/webllama">π» GitHub</a>
</td>
<td>
<a href="https://webllama.github.io">π Homepage</a>
</td>
<td>
<a href="https://huggingface.co/McGill-NLP/Llama-3-8B-Web">π€ Llama-3-8B-Web</a>
</td>
</tr>
</table>
<img src="assets/WebLlamaLogo.png" style="width: 400px;" />
*By using this model, you are accepting the terms of the [Meta Llama 3 Community License Agreement](https://llama.meta.com/llama3/license/).*
</div>
| `WebLlama` helps you build powerful agents, powered by Meta Llama 3, for browsing the web on your behalf | Our first model, [`Llama-3-8B-Web`](https://huggingface.co/McGill-NLP/Llama-3-8B-Web), surpasses GPT-4V (`*`zero-shot) by 18% on [`WebLINX`](https://mcgill-nlp.github.io/weblinx/) |
|:---: | :---: |
|  |  |
## Modeling
Our first agent is a finetuned [`Meta-Llama-3-8B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) model, which was recently released by the Meta GenAI team. We have finetuned this model on the [`WebLINX`](https://mcgill-nlp.github.io/weblinx/) dataset, which contains over 100K instances of web navigation and dialogue, each collected and verified by expert annotators. We use a curated 24K subset for training. The training and evaluation data is available on [Huggingface Hub as `McGill-NLP/WebLINX`](https://huggingface.co/datasets/McGill-NLP/WebLINX).
```python
from datasets import load_dataset
from huggingface_hub import snapshot_download
from transformers import pipeline
# We use validation data, but you can use your own data here
valid = load_dataset("McGill-NLP/WebLINX", split="validation")
snapshot_download("McGill-NLP/WebLINX", "dataset", allow_patterns="templates/*")
template = open('templates/llama.txt').read()
# Run the agent on a single state (text representation) and get the action
state = template.format(**valid[0])
agent = pipeline(model="McGill-NLP/Llama-3-8B-Web", device=0, torch_dtype='auto')
out = agent(state, return_full_text=False)[0]
print("Action:", out['generated_text'])
# Here, you can use the predictions on platforms like playwright or browsergym
action = process_pred(out['generated_text']) # implement based on your platform
env.step(action) # execute the action in your environment
```

**It surpasses GPT-4V (zero-shot `*`) by over 18% on the [`WebLINX`](https://mcgill-nlp.github.io/weblinx/) benchmark**, achieving an overall score of 28.8% on the out-of-domain test splits (compared to 10.5% for GPT-4V). It chooses more useful links (34.1% vs 18.9% *seg-F1*), clicks on more relevant elements (27.1% vs 13.6% *IoU*) and formulates more aligned responses (37.5% vs 3.1% *chr-F1*).
## About `WebLlama`
| `WebLlama` | The goal of our project is to build effective human-centric agents for browsing the web. We don't want to replace users, but equip them with powerful assistants. |
|:---: | :---|
| Modeling | We build on top of cutting-edge libraries for training Llama agents on web navigation tasks. We will provide training scripts, optimized configs, and instructions for training cutting-edge Llamas. |
| Evaluation | Benchmarks for testing Llama models on real-world web browsing. This includes *human-centric* browsing through dialogue ([`WebLINX`](https://mcgill-nlp.github.io/weblinx/)), and we will soon add more benchmarks for automatic web navigation (e.g. Mind2Web). |
| Data | Our first model is finetuned on over 24K instances of web interactions, including `click`, `textinput`, `submit`, and dialogue acts. We want to continuously curate, compile and release datasets for training better agents. |
| Deployment | We want to make it easy to integrate Llama models with existing deployment platforms, including Playwright, Selenium, and BrowserGym. We are currently focusing on making this a reality. |
## Evaluation
We believe short demo videos showing how well an agent performs are NOT enough to judge an agent. Simply put, **we do not know if we have a good agent if we do not have good benchmarks.** We need to systematically evaluate agents on a wide range of tasks, spanning from simple instruction-following web navigation to complex dialogue-guided browsing.
<img src="assets/WebLINXTestSplits.png" style="width: 100%; max-width:800px"/>
This is why we chose [`WebLINX`](https://mcgill-nlp.github.io/weblinx/) as our first benchmark. In addition to the training split, the benchmark has 4 real-world splits, with the goal of testing multiple dimensions of generalization: new websites, new domains, unseen geographic locations, and scenarios where the *user cannot see the screen and relies on dialogue*. It also covers 150 websites, including booking, shopping, writing, knowledge lookup, and even complex tasks like manipulating spreadsheets.
## Data
Although the 24K training examples from [`WebLINX`](https://mcgill-nlp.github.io/weblinx/) provide a good starting point for training a capable agent, we believe that more data is needed to train agents that can generalize to a wide range of web navigation tasks. Although the model has been trained and evaluated on 150 websites, there are millions of websites that have never been seen by it, with new ones being created every day.
**This motivates us to continuously curate, compile and release datasets for training better agents.** As an immediate next step, we will be incorporating `Mind2Web`'s training data into the equation, which also covers over 100 websites.
## Deployment
We are working hard to make it easy for you to deploy Llama web agents to the web. We want to integrate `WebLlama` with existing deployment platforms, including Microsoft's Playwright, ServiceNow Research's BrowserGym, and other partners.
## Code
The code for finetuning the model and evaluating it on the [`WebLINX`](https://mcgill-nlp.github.io/weblinx/) benchmark is available now. You can find the detailed instructions in [modeling](https://github.com/McGill-NLP/webllama/tree/main/modeling).
## Citation
If you use `WebLlama` in your research, please cite the following paper (upon which the data, training and evaluation are originally based):
```
@misc{lu2024weblinx,
title={WebLINX: Real-World Website Navigation with Multi-Turn Dialogue},
author={Xing Han Lù and Zdeněk Kasner and Siva Reddy},
year={2024},
eprint={2402.05930},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
pkarypis/llama2-lima | pkarypis | 2024-04-26T19:51:13Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:GAIR/lima",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-26T19:41:10Z | ---
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
- trl
- sft
- generated_from_trainer
datasets:
- GAIR/lima
model-index:
- name: llama2-lima
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2-lima
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the GAIR/lima dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5297
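The card contains no inference snippet; a minimal sketch, assuming the SFT run saved a chat template with the tokenizer (as alignment-handbook recipes typically do) and that you have access to the gated Llama-2 weights:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pkarypis/llama2-lima"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Build a single-turn chat prompt (assumes a chat template is present)
messages = [{"role": "user", "content": "Give three tips for writing clear documentation."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```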
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9786 | 1.0 | 6 | 2.6180 |
| 1.75 | 2.0 | 12 | 1.9988 |
| 1.5489 | 3.0 | 18 | 1.9909 |
| 1.3631 | 4.0 | 24 | 1.9766 |
| 1.0134 | 5.0 | 30 | 2.1003 |
| 0.8425 | 6.0 | 36 | 2.1683 |
| 0.7051 | 7.0 | 42 | 2.3737 |
| 0.594 | 8.0 | 48 | 2.5296 |
| 0.5107 | 9.0 | 54 | 2.5203 |
| 0.4594 | 10.0 | 60 | 2.5297 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
|
LoneStriker/OpenBioLLM-Llama3-8B-5.0bpw-h6-exl2 | LoneStriker | 2024-04-26T19:48:28Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-3",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"distillation",
"en",
"arxiv:2305.18290",
"arxiv:2303.13375",
"arxiv:2212.13138",
"arxiv:2305.09617",
"arxiv:2402.07023",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:quantized:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"5-bit",
"exl2",
"region:us"
] | text-generation | 2024-04-26T19:45:27Z | ---
base_model: meta-llama/Meta-Llama-3-8B
tags:
- llama-3
- llama
- Mixtral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- distillation
model-index:
- name: OpenBioLLM-8B
results: []
license: llama3
language:
- en
widget:
- example_title: OpenBioLLM-8B
messages:
- role: system
content: >-
You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience.
- role: user
content: How long does it take for newborn jaundice to go away?
output:
text: >-
Newborn jaundice, also known as neonatal jaundice, is a common condition in newborns where the yellowing of the skin and eyes occurs due to an elevated level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when red blood cells break down. In most cases, newborn jaundice resolves on its own without any specific treatment.
The duration of newborn jaundice can vary depending on several factors such as the underlying cause, gestational age at birth, and individual variations in bilirubin metabolism. Here are some general guidelines:
1. Physiological jaundice: This is the most common type of newborn jaundice and usually appears within 24-72 hours after birth. It tends to peak between the second and fifth day of life and gradually improves over the next week or two. By the time the baby is one week old, the jaundice should have mostly resolved.
2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and may appear later than physiological jaundice, typically between the fifth and fourteenth day of life. It tends to persist for a longer duration but usually resolves within six weeks after birth.
3. Pathological jaundice: This type of jaundice is less common and occurs due to an underlying medical condition that affects bilirubin metabolism or liver function. The duration of pathological jaundice depends on the specific cause and may require treatment.
It's important for parents to monitor their newborn's jaundice closely and seek medical advice if the jaundice progresses rapidly, becomes severe, or is accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness. In these cases, further evaluation and management may be necessary.
Remember that each baby is unique, and the timing of jaundice resolution can vary. If you have concerns about your newborn's jaundice, it's always best to consult with a healthcare professional for personalized advice and guidance.
---
<div align="center">
<img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div>

<div align="center">
<h1>Advancing Open-source Large Language Models in Medical Domain</h1>
</div>
<p align="center" style="margin-top: 0px;">
<a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">Online Demo</span>
</a> |
<a href="https://github.com/openlifescience-ai">
<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">GitHub</span>
</a> |
<a href="#">
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style="margin-right: 5px;">Paper</span>
</a> |
<a href="https://discord.gg/A5Fjf5zC69">
<img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text">Discord</span>
</a>
</p>

Introducing OpenBioLLM-8B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-8B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
**Biomedical Specialization**: OpenBioLLM-8B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
**Superior Performance**: With 8 billion parameters, OpenBioLLM-8B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-3.5 and Meditron-70B on biomedical benchmarks.
🧠 **Advanced Training Techniques**: OpenBioLLM-8B builds upon the powerful foundation of the [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) model. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset. Key components of the training pipeline include:
<div align="center">
<img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png">
</div>
- **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290)
- **Ranking Dataset**: [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar)
- **Fine-tuning dataset**: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)
This combination of cutting-edge techniques enables OpenBioLLM-8B to align with key capabilities and preferences for biomedical applications.
⚙️ **Release Details**:
- **Model Size**: 8 billion parameters
- **Quantization**: Optimized quantized versions available [Here](https://huggingface.co/aaditya/OpenBioLLM-8B-GGUF)
- **Language(s) (NLP):** en
- **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs
- **License:** Meta-Llama License
- **Fine-tuned from models:** [meta-llama/Meta-Llama-3-8B](meta-llama/Meta-Llama-3-8B)
- **Resources for more information:**
- Paper: Coming soon
The model can be fine-tuned for more specialized tasks and datasets as needed.
OpenBioLLM-8B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.
We are excited to share OpenBioLLM-8B with researchers and developers around the world.
### Use with transformers
**Important: Please use the exact chat template provided by the Llama-3 instruct version; otherwise there will be a degradation in performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this happen less.**
See the snippet below for usage with Transformers:
```python
import transformers
import torch
model_id = "aaditya/OpenBioLLM-Llama3-8B"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device="auto",
)
messages = [
{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs. who's willing to help answer the user's query with explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."},
{"role": "user", "content": "How can i split a 3mg or 4mg waefin pill so i can get a 2.5mg pill?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=False,  # greedy decoding (the card suggests temperature = 0; transformers requires temperature > 0 when sampling)
)
print(outputs[0]["generated_text"][len(prompt):])
```
## **Training procedure**
### **Training hyperparameters**
<details>
<summary>Click to see details</summary>
- learning_rate: 0.0002
- lr_scheduler: cosine
- train_batch_size: 12
- eval_batch_size: 8
- GPU: H100 80GB SXM5
- num_devices: 1
- optimizer: adamw_bnb_8bit
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
</details>
### **Peft hyperparameters**
<details>
<summary>Click to see details</summary>
- adapter: qlora
- lora_r: 128
- lora_alpha: 256
- lora_dropout: 0.05
- lora_target_linear: true
- lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
</details>
### **Training results**
### **Framework versions**
- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
- Axolotl
- Lm harness for evaluation
# Benchmark Results
OpenBioLLM-8B demonstrates superior performance compared to larger models such as GPT-3.5 and Meditron-70B across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 72.50%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.
🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg |
|--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------|
| **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** |
| Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 |
| **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 |
| Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 |
| **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 |
| Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 |
| GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 |
| Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 |
| gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 |
| Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 |
| Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 |
| MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 |
| BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 |
| AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 |
| ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 |
<div align="center">
<img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png">
</div>
## Detailed Medical Subjectwise accuracy

# Use Cases & Examples
🚨 **Below results are from the quantized version of OpenBioLLM-70B**
# Summarize Clinical Notes
OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries

# Answer Medical Questions
OpenBioLLM-70B can provide answers to a wide range of medical questions.


<details>
<summary>Click to see details</summary>



</details>
# Clinical Entity Recognition
OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research.



# Biomarkers Extraction

# Classification
OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, medical document categorization

# De-Identification
OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.

**Advisory Notice!**
While OpenBioLLM-70B & 8B leverage high-quality data sources, their outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The models' performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.
Therefore, we strongly advise against using OpenBioLLM-70B & 8B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B & 8B are intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.
Appropriately adapting and validating OpenBioLLM-70B & 8B for specific medical use cases would require significant additional work, potentially including:
- Thorough testing and evaluation in relevant clinical scenarios
- Alignment with evidence-based guidelines and best practices
- Mitigation of potential biases and failure modes
- Integration with human oversight and interpretation
- Compliance with regulatory and ethical standards
Always consult a qualified healthcare provider for personal medical needs.
# Citation
If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:
```
@misc{OpenBioLLMs,
author = {Ankit Pal and Malaikannan Sankarasubbu},
title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}}
}
```
The accompanying paper is currently in progress and will be released soon.
<div align="center">
<h2> Contact </h2>
</div>
We look forward to hearing from you and collaborating on this exciting project!
**Contributors:**
- [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com]
- Saama AI Labs
- Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
# References
We thank the [Meta Team](meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models!
Result sources
- [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems](https://arxiv.org/abs/2303.13375)
- [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138)
- [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617)
- [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023) |
LoneStriker/OpenBioLLM-Llama3-8B-4.0bpw-h6-exl2 | LoneStriker | 2024-04-26T19:45:23Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-3",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"distillation",
"en",
"arxiv:2305.18290",
"arxiv:2303.13375",
"arxiv:2212.13138",
"arxiv:2305.09617",
"arxiv:2402.07023",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:quantized:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"exl2",
"region:us"
] | text-generation | 2024-04-26T19:42:43Z | ---
base_model: meta-llama/Meta-Llama-3-8B
tags:
- llama-3
- llama
- Mixtral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- distillation
model-index:
- name: OpenBioLLM-8B
results: []
license: llama3
language:
- en
widget:
- example_title: OpenBioLLM-8B
messages:
- role: system
content: >-
You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience.
- role: user
content: How long does it take for newborn jaundice to go away?
output:
text: >-
Newborn jaundice, also known as neonatal jaundice, is a common condition in newborns where the yellowing of the skin and eyes occurs due to an elevated level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when red blood cells break down. In most cases, newborn jaundice resolves on its own without any specific treatment.
The duration of newborn jaundice can vary depending on several factors such as the underlying cause, gestational age at birth, and individual variations in bilirubin metabolism. Here are some general guidelines:
1. Physiological jaundice: This is the most common type of newborn jaundice and usually appears within 24-72 hours after birth. It tends to peak between the second and fifth day of life and gradually improves over the next week or two. By the time the baby is one week old, the jaundice should have mostly resolved.
2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and may appear later than physiological jaundice, typically between the fifth and fourteenth day of life. It tends to persist for a longer duration but usually resolves within six weeks after birth.
3. Pathological jaundice: This type of jaundice is less common and occurs due to an underlying medical condition that affects bilirubin metabolism or liver function. The duration of pathological jaundice depends on the specific cause and may require treatment.
It's important for parents to monitor their newborn's jaundice closely and seek medical advice if the jaundice progresses rapidly, becomes severe, or is accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness. In these cases, further evaluation and management may be necessary.
Remember that each baby is unique, and the timing of jaundice resolution can vary. If you have concerns about your newborn's jaundice, it's always best to consult with a healthcare professional for personalized advice and guidance.
---
<div align="center">
<img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div>

<div align="center">
<h1>Advancing Open-source Large Language Models in Medical Domain</h1>
</div>
<p align="center" style="margin-top: 0px;">
<a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">Online Demo</span>
</a> |
<a href="https://github.com/openlifescience-ai">
<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">GitHub</span>
</a> |
<a href="#">
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style="margin-right: 5px;">Paper</span>
</a> |
<a href="https://discord.gg/A5Fjf5zC69">
<img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text">Discord</span>
</a>
</p>

Introducing OpenBioLLM-8B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-8B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
🏥 **Biomedical Specialization**: OpenBioLLM-8B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
**Superior Performance**: With 8 billion parameters, OpenBioLLM-8B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results than larger proprietary and open-source models such as GPT-3.5 and Meditron-70B on biomedical benchmarks.
🧠 **Advanced Training Techniques**: OpenBioLLM-8B builds upon the powerful foundation of the [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) model. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset. Key components of the training pipeline include:
<div align="center">
<img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png">
</div>
- **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290)
- **Ranking Dataset**: [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar)
- **Fine-tuning dataset**: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)
This combination of cutting-edge techniques enables OpenBioLLM-8B to align with key capabilities and preferences for biomedical applications.
⚕️ **Release Details**:
- **Model Size**: 8 billion parameters
- **Quantization**: Optimized quantized versions available [Here](https://huggingface.co/aaditya/OpenBioLLM-8B-GGUF)
- **Language(s) (NLP):** en
- **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs
- **License:** Meta-Llama License
- **Fine-tuned from models:** [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
- **Resources for more information:**
- Paper: Coming soon
The model can be fine-tuned for more specialized tasks and datasets as needed.
OpenBioLLM-8B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.
We are excited to share OpenBioLLM-8B with researchers and developers around the world.
### Use with transformers
**Important: Please use the exact chat template provided by the Llama-3 instruct version; otherwise performance will degrade. The model output can be verbose in rare cases; consider setting temperature = 0 (greedy decoding) to reduce this.**
See the snippet below for usage with Transformers:
```python
import transformers
import torch
model_id = "aaditya/OpenBioLLM-Llama3-8B"
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",  # note: `device="auto"` is not a valid argument; `device_map` handles placement
)
messages = [
{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs. who's willing to help answer the user's query with explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."},
{"role": "user", "content": "How can i split a 3mg or 4mg waefin pill so i can get a 2.5mg pill?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=False,  # greedy decoding, i.e. temperature = 0; sampling with temperature=0.0 raises an error
)
print(outputs[0]["generated_text"][len(prompt):])
```
## **Training procedure**
### **Training hyperparameters**
<details>
<summary>Click to see details</summary>
- learning_rate: 0.0002
- lr_scheduler: cosine
- train_batch_size: 12
- eval_batch_size: 8
- GPU: H100 80GB SXM5
- num_devices: 1
- optimizer: adamw_bnb_8bit
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
</details>
### **Peft hyperparameters**
<details>
<summary>Click to see details</summary>
- adapter: qlora
- lora_r: 128
- lora_alpha: 256
- lora_dropout: 0.05
- lora_target_linear: true
- lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
</details>
### **Training results**
### **Framework versions**
- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
- Axolotl
- lm-evaluation-harness (used for evaluation)
# Benchmark Results
🏥 OpenBioLLM-8B demonstrates superior performance compared to larger models such as GPT-3.5 and Meditron-70B across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 72.50% despite its significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.
🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. All results are reported in the zero-shot setting, except for Med-PaLM-1 and Med-PaLM-2, which report only 5-shot accuracy and are therefore compared at 5 shots.
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg |
|--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------|
| **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** |
| Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 |
| **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 |
| Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 |
| **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 |
| Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 |
| GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 |
| Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 |
| gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 |
| Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 |
| Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 |
| MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 |
| BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 |
| AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 |
| ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 |
<div align="center">
<img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png">
</div>
## Detailed Medical Subject-wise Accuracy

# Use Cases & Examples
🚨 **The results below are from the quantized version of OpenBioLLM-70B.**
# Summarize Clinical Notes
OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries.

# Answer Medical Questions
OpenBioLLM-70B can provide answers to a wide range of medical questions.


<details>
<summary>Click to see details</summary>



</details>
# Clinical Entity Recognition
OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research.



# Biomarkers Extraction

# Classification
OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, and medical document categorization.

# De-Identification
OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.

**Advisory Notice!**
While OpenBioLLM-70B & 8B leverage high-quality data sources, their outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The models' performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.
Therefore, we strongly advise against using OpenBioLLM-70B & 8B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Their use should be limited to research, development, and exploratory applications by qualified individuals who understand their limitations.
OpenBioLLM-70B & 8B are intended solely as research tools to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.
Appropriately adapting and validating OpenBioLLM-70B & 8B for specific medical use cases would require significant additional work, potentially including:
- Thorough testing and evaluation in relevant clinical scenarios
- Alignment with evidence-based guidelines and best practices
- Mitigation of potential biases and failure modes
- Integration with human oversight and interpretation
- Compliance with regulatory and ethical standards
Always consult a qualified healthcare provider for personal medical needs.
# Citation
If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:
```
@misc{OpenBioLLMs,
author = {Ankit Pal and Malaikannan Sankarasubbu},
title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}}
}
```
The accompanying paper is currently in progress and will be released soon.
<div align="center">
<h2> Contact </h2>
</div>
We look forward to hearing from you and collaborating on this exciting project!
**Contributors:**
- [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com]
- Saama AI Labs
- Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
# References
We thank the [Meta Team](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models!
Result sources
- [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems](https://arxiv.org/abs/2303.13375)
- [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138)
- [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617)
- [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023) |
study-hjt/CodeQwen1.5-7B-Chat-GPTQ-Int4 | study-hjt | 2024-04-26T19:43:26Z | 81 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"codeqwen",
"code",
"chat",
"gptq",
"int4",
"conversational",
"en",
"arxiv:2309.16609",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] | text-generation | 2024-04-26T18:53:57Z | ---
license: other
license_name: tongyi-qianwen
license_link: >-
https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- codeqwen
- code
- chat
- gptq
- int4
studios:
- qwen/CodeQwen1.5-7b-Chat-demo
---
# CodeQwen1.5-7B-Chat
## About Quantization
We use the modelscope [swift](https://github.com/modelscope/swift/) repository to perform GPTQ quantization. Quantization documentation can be found [here](https://github.com/modelscope/swift/blob/main/docs/source_en/LLM/LLM-quantization.md). The quantization command is as follows:
```bash
OMP_NUM_THREADS=14 CUDA_VISIBLE_DEVICES=0 swift export \
--model_type codeqwen1half-7b-chat --quant_bits 4 \
--dataset codefuse-evol-instruction-zh --quant_method gptq --quant_seqlen 8192
```
## Introduction
CodeQwen1.5 is the code-specific version of Qwen1.5. It is a transformer-based decoder-only language model pretrained on a large corpus of code data.
* Strong code generation capabilities and competitive performance across a series of benchmarks;
* Supporting long context understanding and generation with the context length of 64K tokens;
* Supporting 92 coding languages;
* Excellent performance in text-to-SQL, bug fixing, etc.
For more details, please refer to our [blog post](https://qwenlm.github.io/blog/codeqwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).
## Model Details
CodeQwen1.5 is based on Qwen1.5, a language model series that includes decoder language models of different sizes. It is trained on 3 trillion tokens of code data and uses grouped-query attention (GQA) for efficient inference.
## Requirements
The code for Qwen1.5 is included in recent Hugging Face Transformers releases, and we advise you to install `transformers>=4.37.0`; otherwise you might encounter the following error:
```
KeyError: 'qwen2'.
```
## Quickstart
The snippet below uses `apply_chat_template` to show how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"study-hjt/CodeQwen1.5-7B-Chat-GPTQ-Int4",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("study-hjt/CodeQwen1.5-7B-Chat-GPTQ-Int4")
prompt = "Write a quicksort algorithm in python."
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
## Tips
* If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in `generation_config.json`.
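For example, a minimal sketch of picking up those shipped defaults rather than hand-tuning them (assuming `generation_config.json` is present in this repo, as the tip implies):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model_id = "study-hjt/CodeQwen1.5-7B-Chat-GPTQ-Int4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# GPTQ checkpoints load through AutoModelForCausalLM (auto-gptq / optimum required)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Load the provided sampling hyper-parameters instead of hand-picking them
gen_config = GenerationConfig.from_pretrained(model_id)

inputs = tokenizer("def quicksort(arr):", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, generation_config=gen_config, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```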
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{qwen,
title={Qwen Technical Report},
author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
journal={arXiv preprint arXiv:2309.16609},
year={2023}
}
``` |
LoneStriker/OpenBioLLM-Llama3-8B-3.0bpw-h6-exl2 | LoneStriker | 2024-04-26T19:42:40Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-3",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"distillation",
"en",
"arxiv:2305.18290",
"arxiv:2303.13375",
"arxiv:2212.13138",
"arxiv:2305.09617",
"arxiv:2402.07023",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:quantized:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"exl2",
"region:us"
] | text-generation | 2024-04-26T19:39:48Z | ---
base_model: meta-llama/Meta-Llama-3-8B
tags:
- llama-3
- llama
- Mixtral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- distillation
model-index:
- name: OpenBioLLM-8B
results: []
license: llama3
language:
- en
widget:
- example_title: OpenBioLLM-8B
messages:
- role: system
content: >-
You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience.
- role: user
content: How long does it take for newborn jaundice to go away?
output:
text: >-
Newborn jaundice, also known as neonatal jaundice, is a common condition in newborns where the yellowing of the skin and eyes occurs due to an elevated level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when red blood cells break down. In most cases, newborn jaundice resolves on its own without any specific treatment.
The duration of newborn jaundice can vary depending on several factors such as the underlying cause, gestational age at birth, and individual variations in bilirubin metabolism. Here are some general guidelines:
1. Physiological jaundice: This is the most common type of newborn jaundice and usually appears within 24-72 hours after birth. It tends to peak between the second and fifth day of life and gradually improves over the next week or two. By the time the baby is one week old, the jaundice should have mostly resolved.
2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and may appear later than physiological jaundice, typically between the fifth and fourteenth day of life. It tends to persist for a longer duration but usually resolves within six weeks after birth.
3. Pathological jaundice: This type of jaundice is less common and occurs due to an underlying medical condition that affects bilirubin metabolism or liver function. The duration of pathological jaundice depends on the specific cause and may require treatment.
It's important for parents to monitor their newborn's jaundice closely and seek medical advice if the jaundice progresses rapidly, becomes severe, or is accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness. In these cases, further evaluation and management may be necessary.
Remember that each baby is unique, and the timing of jaundice resolution can vary. If you have concerns about your newborn's jaundice, it's always best to consult with a healthcare professional for personalized advice and guidance.
---
<div align="center">
<img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div>

<div align="center">
<h1>Advancing Open-source Large Language Models in Medical Domain</h1>
</div>
<p align="center" style="margin-top: 0px;">
<a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">Online Demo</span>
</a> |
<a href="https://github.com/openlifescience-ai">
<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">GitHub</span>
</a> |
<a href="#">
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style="margin-right: 5px;">Paper</span>
</a> |
<a href="https://discord.gg/A5Fjf5zC69">
<img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text">Discord</span>
</a>
</p>

Introducing OpenBioLLM-8B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-8B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
🏥 **Biomedical Specialization**: OpenBioLLM-8B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
**Superior Performance**: With 8 billion parameters, OpenBioLLM-8B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results than larger proprietary and open-source models such as GPT-3.5 and Meditron-70B on biomedical benchmarks.
🧠 **Advanced Training Techniques**: OpenBioLLM-8B builds upon the powerful foundation of the [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) model. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset. Key components of the training pipeline include:
<div align="center">
<img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png">
</div>
- **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290)
- **Ranking Dataset**: [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar)
- **Fine-tuning dataset**: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)
This combination of cutting-edge techniques enables OpenBioLLM-8B to align with key capabilities and preferences for biomedical applications.
⚕️ **Release Details**:
- **Model Size**: 8 billion parameters
- **Quantization**: Optimized quantized versions available [Here](https://huggingface.co/aaditya/OpenBioLLM-8B-GGUF)
- **Language(s) (NLP):** en
- **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs
- **License:** Meta-Llama License
- **Fine-tuned from models:** [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
- **Resources for more information:**
- Paper: Coming soon
The model can be fine-tuned for more specialized tasks and datasets as needed.
OpenBioLLM-8B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.
We are excited to share OpenBioLLM-8B with researchers and developers around the world.
### Use with transformers
**Important: Please use the exact chat template provided by the Llama-3 instruct version; otherwise performance will degrade. The model output can be verbose in rare cases; consider setting temperature = 0 (greedy decoding) to reduce this.**
See the snippet below for usage with Transformers:
```python
import transformers
import torch
model_id = "aaditya/OpenBioLLM-Llama3-8B"
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",  # note: `device="auto"` is not a valid argument; `device_map` handles placement
)
messages = [
{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs. who's willing to help answer the user's query with explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."},
{"role": "user", "content": "How can i split a 3mg or 4mg waefin pill so i can get a 2.5mg pill?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=False,  # greedy decoding, i.e. temperature = 0; sampling with temperature=0.0 raises an error
)
print(outputs[0]["generated_text"][len(prompt):])
```
## **Training procedure**
### **Training hyperparameters**
<details>
<summary>Click to see details</summary>
- learning_rate: 0.0002
- lr_scheduler: cosine
- train_batch_size: 12
- eval_batch_size: 8
- GPU: H100 80GB SXM5
- num_devices: 1
- optimizer: adamw_bnb_8bit
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
</details>
### **Peft hyperparameters**
<details>
<summary>Click to see details</summary>
- adapter: qlora
- lora_r: 128
- lora_alpha: 256
- lora_dropout: 0.05
- lora_target_linear: true
- lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
</details>
### **Training results**
### **Framework versions**
- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
- Axolotl
- lm-evaluation-harness (used for evaluation)
# Benchmark Results
🏥 OpenBioLLM-8B demonstrates superior performance compared to larger models such as GPT-3.5 and Meditron-70B across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 72.50% despite its significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.
🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. All results are reported in the zero-shot setting, except for Med-PaLM-1 and Med-PaLM-2, which report only 5-shot accuracy and are therefore compared at 5 shots.
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg |
|--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------|
| **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** |
| Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 |
| **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 |
| Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 |
| **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 |
| Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 |
| GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 |
| Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 |
| gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 |
| Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 |
| Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 |
| MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 |
| BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 |
| AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 |
| ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 |
<div align="center">
<img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png">
</div>
## Detailed Medical Subject-wise Accuracy

# Use Cases & Examples
🚨 **The results below are from the quantized version of OpenBioLLM-70B.**
# Summarize Clinical Notes
OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries.

# Answer Medical Questions
OpenBioLLM-70B can provide answers to a wide range of medical questions.


<details>
<summary>Click to see details</summary>



</details>
# Clinical Entity Recognition
OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research.



# Biomarkers Extraction

# Classification
OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, and medical document categorization.

# De-Identification
OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.

**Advisory Notice!**
While OpenBioLLM-70B & 8B leverage high-quality data sources, their outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The models' performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.
Therefore, we strongly advise against using OpenBioLLM-70B & 8B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Their use should be limited to research, development, and exploratory applications by qualified individuals who understand their limitations.
OpenBioLLM-70B & 8B are intended solely as research tools to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.
Appropriately adapting and validating OpenBioLLM-70B & 8B for specific medical use cases would require significant additional work, potentially including:
- Thorough testing and evaluation in relevant clinical scenarios
- Alignment with evidence-based guidelines and best practices
- Mitigation of potential biases and failure modes
- Integration with human oversight and interpretation
- Compliance with regulatory and ethical standards
Always consult a qualified healthcare provider for personal medical needs.
# Citation
If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:
```
@misc{OpenBioLLMs,
author = {Ankit Pal and Malaikannan Sankarasubbu},
title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}}
}
```
The accompanying paper is currently in progress and will be released soon.
<div align="center">
<h2> Contact </h2>
</div>
We look forward to hearing from you and collaborating on this exciting project!
**Contributors:**
- [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com]
- Saama AI Labs
- Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
# References
We thank the [Meta Team](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models!
Result sources
- [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems](https://arxiv.org/abs/2303.13375)
- [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138)
- [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617)
- [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023) |
qianyihuang1203/class | qianyihuang1203 | 2024-04-26T19:35:33Z | 109 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:autoevaluate/binary-classification",
"base_model:finetune:autoevaluate/binary-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-04-25T23:21:43Z | ---
license: apache-2.0
base_model: autoevaluate/binary-classification
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: class
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# class
This model is a fine-tuned version of [autoevaluate/binary-classification](https://huggingface.co/autoevaluate/binary-classification) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2408
- Accuracy: 0.9352
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
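For reference, a sketch of how the values above would map onto `TrainingArguments` (illustrative only; it does not reproduce the unknown dataset or the rest of the Trainer setup):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="class",              # model name from this card
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer setup in transformers
)
```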
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.212 | 1.0 | 1563 | 0.1816 | 0.9304 |
| 0.132 | 2.0 | 3126 | 0.2408 | 0.9352 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
shrenikb/fedsparsegpt50sparsity1test2 | shrenikb | 2024-04-26T19:29:31Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:shrenikb/sparsegpt50sparsitymodel",
"base_model:adapter:shrenikb/sparsegpt50sparsitymodel",
"region:us"
] | null | 2024-04-26T08:52:10Z | ---
library_name: peft
base_model: shrenikb/sparsegpt50sparsitymodel
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
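In the absence of an official snippet, a minimal sketch under the assumption that the base checkpoint named in this card's metadata is a causal language model:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "shrenikb/sparsegpt50sparsitymodel"      # base model from this card's metadata
adapter_id = "shrenikb/fedsparsegpt50sparsity1test2"

base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the PEFT adapter weights

tokenizer = AutoTokenizer.from_pretrained(base_id)
inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```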
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 |
Shreyagg2202/Bert-Custom-Sentiment-Analysis | Shreyagg2202 | 2024-04-26T19:29:29Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-04-26T19:08:47Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4741
- Accuracy: 0.5251
- F1: 0.5348
- Precision: 0.5692
- Recall: 0.5251
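As a quick smoke test, the checkpoint can be exercised with the standard text-classification pipeline (a minimal sketch; the label names depend on the unknown training dataset):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="Shreyagg2202/Bert-Custom-Sentiment-Analysis",
)
print(clf("The staff were friendly and the checkout was quick."))
# -> [{'label': ..., 'score': ...}]  # labels depend on the training data
```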
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Tokenizers 0.19.1
|
HenryCai1129/adapter-llama-adaptertoxic2nontoxic-100-50-0.006 | HenryCai1129 | 2024-04-26T19:24:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T12:03:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
pszemraj/jamba-H1024_L12-v0.07-fineweb-1M-med | pszemraj | 2024-04-26T19:22:46Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"jamba",
"text-generation",
"claude3 tokenizer",
"en",
"dataset:BEE-spoke-data/fineweb-1M_en-med",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-25T13:34:41Z | ---
license: apache-2.0
datasets:
- BEE-spoke-data/fineweb-1M_en-med
language:
- en
tags:
- jamba
- claude3 tokenizer
---
# jamba-H1024_L12-v0.07-fineweb-1M-med
<a href="https://colab.research.google.com/gist/pszemraj/a7aa793feb394580a962641ed92310bb/test-jamba-h1024-v0-07.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
> mid-training checkpoint
- arch: [jamba](https://huggingface.co/ai21labs/Jamba-v0.1) (see model card for kernels/use)
- tokenizer: claude3 as HF GPT2
- has only seen up to 2048 context length thus far
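A minimal loading sketch (the Jamba architecture ships as remote code, so `trust_remote_code=True` is required, matching the eval invocation below; float32 mirrors `dtype=float` there, and bf16 is untested here):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pszemraj/jamba-H1024_L12-v0.07-fineweb-1M-med"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,     # custom Jamba architecture (see the linked Jamba card for kernels)
    torch_dtype=torch.float32,  # matches the dtype used in the eval below
)

inputs = tokenizer("The quick brown fox", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)  # keep prompts <= 2048 tokens (training context so far)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```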
## numbers
> for this checkpoint
hf (pretrained=pszemraj/jamba-H1024_L12-v0.07-fineweb-1M-med,trust_remote_code=True,dtype=float), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 8
| Tasks |Version|Filter|n-shot| Metric | Value | |Stderr|
|--------------|------:|------|-----:|----------|-------:|---|-----:|
|winogrande | 1|none | 0|acc | 0.4972|Β± |0.0141|
|piqa | 1|none | 0|acc | 0.6072|Β± |0.0114|
| | |none | 0|acc_norm | 0.6034|Β± |0.0114|
|openbookqa | 1|none | 0|acc | 0.1660|Β± |0.0167|
| | |none | 0|acc_norm | 0.2800|Β± |0.0201|
|lambada_openai| 1|none | 0|perplexity|157.6757|Β± |6.8536|
| | |none | 0|acc | 0.2127|Β± |0.0057|
|boolq | 2|none | 0|acc | 0.6235|Β± |0.0085|
|arc_easy | 1|none | 0|acc | 0.3944|Β± |0.0100|
| | |none | 0|acc_norm | 0.3531|Β± |0.0098| |
RichardErkhov/IlyaGusev_-_saiga_gemma_9b-8bits | RichardErkhov | 2024-04-26T19:22:17Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-04-26T19:14:13Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
saiga_gemma_9b - bnb 8bits
- Model creator: https://huggingface.co/IlyaGusev/
- Original model: https://huggingface.co/IlyaGusev/saiga_gemma_9b/
Original model description:
---
language:
- ru
datasets:
- IlyaGusev/ru_turbo_saiga
- IlyaGusev/ru_sharegpt_cleaned
- IlyaGusev/oasst1_ru_main_branch
- IlyaGusev/gpt_roleplay_realm
- lksy/ru_instruct_gpt4
---
# Saiga/Gemma 9B, Russian Gemma-based chatbot
Based on [Gemma 7B](https://huggingface.co/google/gemma-7b).
Training Colab: [link](https://colab.research.google.com/drive/1O7F7Q3IQYh-v7EfsdRwMDMfUnOYZ3DEh).
ChatML prompt format:
```
<|im_start|>system
Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.<|im_end|>
<|im_start|>user
Привет!<|im_end|>
<|im_start|>assistant
Отлично, а ты?<|im_end|>
<|im_start|>user
Шикарно. Как пройти в библиотеку?<|im_end|>
<|im_start|>assistant
```
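A minimal usage sketch for this 8-bit repack, building the ChatML prompt above by hand (bitsandbytes must be installed; generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/IlyaGusev_-_saiga_gemma_9b-8bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # weights are pre-quantized to 8-bit bnb

SYSTEM = "Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им."

def chatml(messages):
    # Assemble the ChatML turns shown above, ending with an open assistant turn
    text = "".join(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages)
    return text + "<|im_start|>assistant\n"

prompt = chatml([
    {"role": "system", "content": SYSTEM},
    {"role": "user", "content": "Почему трава зеленая?"},
])
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(out[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```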
Examples:
```
User: Почему трава зеленая?
Saiga: Зеленый цвет травы обусловлен наличием в её клетках пигмента хлорофилла, который играет важную роль в процессе фотосинтеза — преобразования солнечного света в энергию, используемую для роста и развития растений.
Хлорофилл поглощает свет в определенных диапазонах длин волн, в том числе в зеленой области спектра, и отражает свет в других диапазонах, что и обуславливает зелёный цвет.
```
```
User: Сочини длинный рассказ, обязательно упоминая следующие объекты. Дано: Таня, мяч
Saiga: Однажды, в далекой стране, жила маленькая девочка по имени Таня. Она была очень активной и любила играть в мяч. Вместе со своими друзьями они проводили дни, играя в футбол, баскетбол и волейбол.
Таня была очень талантливой и ловкой, и она часто выигрывала в играх с другими детьми. Но однажды, во время игры в баскетбол, она получила сильную травму. Ей пришлось провести несколько недель в больнице, и она была очень грустной и унылой.
Её друзья не могли понять, как помочь Тане справиться с этой ситуацией. Они пытались поддерживать её, но Таня была слишком подавлена и не хотела играть в мяч.
Однажды, Таня увидела, как её друзья играют в мяч. Она не могла не смотреть, как они ловко передают мяч друг другу и забивают голы. Таня почувствовала, как её сердце разрывается от желания играть вместе с ними.
Она решила, что ей нужно вернуться в игру, чтобы снова почувствовать радость игры. Она начала медленно восстанавливаться и постепенно возвращалась к своему любимому занятию - игре в мяч.
Таня стала играть в мяч снова, но теперь она была более осторожна и внимательна к своему здоровью. Её друзья были очень рады видеть, как Таня возвращается к своей привычной жизни.
С тех пор, Таня стала настоящей звездой на поле. Она стала лучшей игроком в своей команде и помогла им выиграть множество игр. Её друзья были горды, что Таня вернулась к своему любимому занятию и снова стала такой же активной и ловкой, как раньше.
Таня и её друзья продолжали играть в мяч и проводили чудесные дни вместе. Они были счастливы, что Таня вернулась к своему любимому занятию и снова стала частью команды.
Таня и её друзья продолжают играть в мяч и вместе они создают прекрасные воспоминания.
```
v1:
- dataset code revision d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a
- wandb [link](https://wandb.ai/ilyagusev/gemma_test/runs/k7u3uw5i)
- 5 datasets: ru_turbo_saiga, ru_sharegpt_cleaned, oasst1_ru_main_branch, gpt_roleplay_realm, ru_instruct_gpt4
- Datasets merging script: [create_short_chat_set.py](https://github.com/IlyaGusev/rulm/blob/d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a/self_instruct/src/data_processing/create_short_chat_set.py)
|
city96/t5-v1_1-xxl-encoder-bf16 | city96 | 2024-04-26T19:16:01Z | 17,358 | 23 | transformers | [
"transformers",
"safetensors",
"t5",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T17:35:59Z | A single-safetensor version of Google's T5 v1.1 XXL encoder model in bfloat16 precision.
Intended for use with text-to-image models such as PixArt.
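A minimal loading sketch with π€ Transformers: `T5EncoderModel` is the standard encoder-only class, while wiring the embeddings into a specific text-to-image pipeline is left to that pipeline's docs. The tokenizer fallback noted in the comment is an assumption.

```python
import torch
from transformers import AutoTokenizer, T5EncoderModel

# Load the single-safetensor bf16 checkpoint; T5EncoderModel loads only the encoder stack.
model = T5EncoderModel.from_pretrained(
    "city96/t5-v1_1-xxl-encoder-bf16", torch_dtype=torch.bfloat16
)
# Assumption: if this repo does not ship tokenizer files, the tokenizer
# from the original "google/t5-v1_1-xxl" repo can be used instead.
tokenizer = AutoTokenizer.from_pretrained("city96/t5-v1_1-xxl-encoder-bf16")

inputs = tokenizer("a watercolor painting of a fox", return_tensors="pt")
with torch.no_grad():
    text_embeds = model(**inputs).last_hidden_state  # conditioning for the diffusion model
```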
|
RichardErkhov/IlyaGusev_-_saiga_llama3_8b-4bits | RichardErkhov | 2024-04-26T19:13:13Z | 78 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-04-26T19:07:43Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
saiga_llama3_8b - bnb 4bits
- Model creator: https://huggingface.co/IlyaGusev/
- Original model: https://huggingface.co/IlyaGusev/saiga_llama3_8b/
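A minimal loading sketch (not from the original card): checkpoints pre-quantized with bitsandbytes load like any other Transformers model, since the saved `quantization_config` is applied automatically.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/IlyaGusev_-_saiga_llama3_8b-4bits"
tokenizer = AutoTokenizer.from_pretrained(repo)
# Requires `bitsandbytes` to be installed and a CUDA GPU;
# the 4-bit quantization config stored with the checkpoint is picked up automatically.
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
```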
Original model description:
---
language:
- ru
datasets:
- IlyaGusev/saiga_scored
license: other
license_name: llama3
license_link: https://llama.meta.com/llama3/license/
---
# Saiga/Llama3 8B, Russian Llama-3-based chatbot
Based on [Llama-3 8B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).
Llama.cpp version: [link](https://huggingface.co/IlyaGusev/saiga_llama3_8b_gguf)
**ΠΠ‘Π’ΠΠ ΠΠΠΠ! WARNING! LET OP!**
I've changed the prompt format from ChatML to **the original Llama-3 format in v4**. Don't forget to switch formats!
**v4**: Llama-3 prompt format:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Π’Ρ β Π‘Π°ΠΉΠ³Π°, ΡΡΡΡΠΊΠΎΡΠ·ΡΡΠ½ΡΠΉ Π°Π²ΡΠΎΠΌΠ°ΡΠΈΡΠ΅ΡΠΊΠΈΠΉ Π°ΡΡΠΈΡΡΠ΅Π½Ρ. Π’Ρ ΡΠ°Π·Π³ΠΎΠ²Π°ΡΠΈΠ²Π°Π΅ΡΡ Ρ Π»ΡΠ΄ΡΠΌΠΈ ΠΈ ΠΏΠΎΠΌΠΎΠ³Π°Π΅ΡΡ ΠΈΠΌ.<|eot_id|><|start_header_id|>user<|end_header_id|>
ΠΠ°ΠΊ Π΄Π΅Π»Π°?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
ΠΡΠ»ΠΈΡΠ½ΠΎ, Π° Ρ ΡΠ΅Π±Ρ?<|eot_id|><|start_header_id|>user<|end_header_id|>
Π¨ΠΈΠΊΠ°ΡΠ½ΠΎ. ΠΠ°ΠΊ ΠΏΡΠΎΠΉΡΠΈ Π² Π±ΠΈΠ±Π»ΠΈΠΎΡΠ΅ΠΊΡ?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
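Rather than assembling these strings by hand, the tokenizer's chat template can build them. A short sketch, assuming the repo's tokenizer ships the matching template:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("IlyaGusev/saiga_llama3_8b")
messages = [
    {"role": "system", "content": "Π’Ρ — Π‘Π°ΠΉΠ³Π°, ΡΡΡΡΠΊΠΎΡΠ·ΡΡΠ½ΡΠΉ Π°Π²ΡΠΎΠΌΠ°ΡΠΈΡΠ΅ΡΠΊΠΈΠΉ Π°ΡΡΠΈΡΡΠ΅Π½Ρ. Π’Ρ ΡΠ°Π·Π³ΠΎΠ²Π°ΡΠΈΠ²Π°Π΅ΡΡ Ρ Π»ΡΠ΄ΡΠΌΠΈ ΠΈ ΠΏΠΎΠΌΠΎΠ³Π°Π΅ΡΡ ΠΈΠΌ."},
    {"role": "user", "content": "ΠΠ°ΠΊ Π΄Π΅Π»Π°?"},
]
# Should reproduce the <|begin_of_text|>... string above, ending with the assistant header.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```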
**v2, v3**: ChatML prompt format:
```
<|im_start|>system
Π’Ρ β Π‘Π°ΠΉΠ³Π°, ΡΡΡΡΠΊΠΎΡΠ·ΡΡΠ½ΡΠΉ Π°Π²ΡΠΎΠΌΠ°ΡΠΈΡΠ΅ΡΠΊΠΈΠΉ Π°ΡΡΠΈΡΡΠ΅Π½Ρ. Π’Ρ ΡΠ°Π·Π³ΠΎΠ²Π°ΡΠΈΠ²Π°Π΅ΡΡ Ρ Π»ΡΠ΄ΡΠΌΠΈ ΠΈ ΠΏΠΎΠΌΠΎΠ³Π°Π΅ΡΡ ΠΈΠΌ.<|im_end|>
<|im_start|>user
ΠΠ°ΠΊ Π΄Π΅Π»Π°?<|im_end|>
<|im_start|>assistant
ΠΡΠ»ΠΈΡΠ½ΠΎ, Π° Ρ ΡΠ΅Π±Ρ?<|im_end|>
<|im_start|>user
Π¨ΠΈΠΊΠ°ΡΠ½ΠΎ. ΠΠ°ΠΊ ΠΏΡΠΎΠΉΡΠΈ Π² Π±ΠΈΠ±Π»ΠΈΠΎΡΠ΅ΠΊΡ?<|im_end|>
<|im_start|>assistant
```
Examples:
```
User: ΠΠΎΡΠ΅ΠΌΡ ΡΡΠ°Π²Π° Π·Π΅Π»Π΅Π½Π°Ρ?
Saiga: ΠΠ΅Π»Π΅Π½ΡΠΉ ΡΠ²Π΅Ρ ΡΡΠ°Π²Ρ ΠΎΠ±ΡΡΠ»ΠΎΠ²Π»Π΅Π½ Π½Π°Π»ΠΈΡΠΈΠ΅ΠΌ Ρ Π»ΠΎΡΠΎΡΠΈΠ»Π»Π°, ΠΏΠΈΠ³ΠΌΠ΅Π½ΡΠ°, ΠΊΠΎΡΠΎΡΡΠΉ ΠΏΠΎΠ³Π»ΠΎΡΠ°Π΅Ρ ΡΠ²Π΅Ρ ΠΈ ΠΏΠ΅ΡΠ΅Π΄Π°Π΅Ρ ΡΠ½Π΅ΡΠ³ΠΈΡ Π΄Π»Ρ ΡΠΎΡΠΎΡΠΈΠ½ΡΠ΅Π·Π°. Π₯Π»ΠΎΡΠΎΡΠΈΠ»Π» ΡΠΎΠ΄Π΅ΡΠΆΠΈΡΡΡ Π² ΠΊΠ»Π΅ΡΠΊΠ°Ρ Π»ΠΈΡΡΡΠ΅Π² ΡΠ°ΡΡΠ΅Π½ΠΈΠΉ ΠΈ ΠΈΠ³ΡΠ°Π΅Ρ ΠΊΠ»ΡΡΠ΅Π²ΡΡ ΡΠΎΠ»Ρ Π² ΠΏΡΠΎΡΠ΅ΡΡΠ΅ ΡΠΎΡΠΎΡΠΈΠ½ΡΠ΅Π·Π°, ΠΊΠΎΠ³Π΄Π° ΡΠ°ΡΡΠ΅Π½ΠΈΡ ΠΈΡΠΏΠΎΠ»ΡΠ·ΡΡΡ ΡΠΎΠ»Π½Π΅ΡΠ½ΡΠΉ ΡΠ²Π΅Ρ Π΄Π»Ρ ΠΏΡΠ΅Π²ΡΠ°ΡΠ΅Π½ΠΈΡ Π²ΠΎΠ΄Ρ ΠΈ ΡΠ³Π»Π΅ΠΊΠΈΡΠ»ΠΎΠ³ΠΎ Π³Π°Π·Π° Π² ΠΎΡΠ³Π°Π½ΠΈΡΠ΅ΡΠΊΠΈΠ΅ Π²Π΅ΡΠ΅ΡΡΠ²Π°, ΡΠ°ΠΊΠΈΠ΅ ΠΊΠ°ΠΊ Π³Π»ΡΠΊΠΎΠ·Π° ΠΈ ΠΎΠ»ΠΈΠ³ΠΎΡΡΠΈΡΡ.
Π₯Π»ΠΎΡΠΎΡΠΈΠ»Π» ΡΠΎΡΡΠΎΠΈΡ ΠΈΠ· Π΄Π²ΡΡ ΠΎΡΠ½ΠΎΠ²Π½ΡΡ ΡΠΎΡΠΌ: Ρ Π»ΠΎΡΠΎΡΠΈΠ»Π»Π° Π° ΠΈ Ρ Π»ΠΎΡΠΎΡΠΈΠ»Π»Π° Π±. Π₯Π»ΠΎΡΠΎΡΠΈΠ»Π» Π° ΠΏΠΎΠ³Π»ΠΎΡΠ°Π΅Ρ ΡΠ²Π΅Ρ Ρ Π΄Π»ΠΈΠ½ΠΎΠΉ Π²ΠΎΠ»Π½Ρ ΠΎΠΊΠΎΠ»ΠΎ 430 Π½Π°Π½ΠΎΠΌΠ΅ΡΡΠΎΠ² (Π½ΠΌ), ΡΡΠΎ ΡΠΎΠΎΡΠ²Π΅ΡΡΡΠ²ΡΠ΅Ρ ΡΠΈΠ½Π΅ΠΌΡ ΠΈΠ»ΠΈ Π³ΠΎΠ»ΡΠ±ΠΎΠΌΡ ΡΠ²Π΅ΡΡ, Π° Ρ Π»ΠΎΡΠΎΡΠΈΠ»Π» Π± ΠΏΠΎΠ³Π»ΠΎΡΠ°Π΅Ρ ΡΠ²Π΅Ρ Ρ Π΄Π»ΠΈΠ½ΠΎΠΉ Π²ΠΎΠ»Π½Ρ ΠΎΠΊΠΎΠ»ΠΎ 660 Π½ΠΌ, ΡΡΠΎ ΡΠΎΠΎΡΠ²Π΅ΡΡΡΠ²ΡΠ΅Ρ ΠΊΡΠ°ΡΠ½ΠΎΠΌΡ ΡΠ²Π΅ΡΡ. ΠΠΎΠ³Π΄Π° ΡΠ²Π΅Ρ ΠΏΠΎΠ³Π»ΠΎΡΠ°Π΅ΡΡΡ Ρ Π»ΠΎΡΠΎΡΠΈΠ»Π»ΠΎΠΌ, ΠΎΠ½ ΠΏΠ΅ΡΠ΅Π΄Π°Π΅ΡΡΡ Π² ΠΌΠΎΠ»Π΅ΠΊΡΠ»Ρ Π²ΠΎΠ΄Ρ ΠΈ ΡΠ³Π»Π΅ΠΊΠΈΡΠ»ΠΎΠ³ΠΎ Π³Π°Π·Π°, ΠΊΠΎΡΠΎΡΡΠ΅ ΠΈΡΠΏΠΎΠ»ΡΠ·ΡΡΡΡΡ Π΄Π»Ρ ΡΠΎΡΠΎΡΠΈΠ½ΡΠ΅Π·Π°.
Π’Π°ΠΊΠΈΠΌ ΠΎΠ±ΡΠ°Π·ΠΎΠΌ, Ρ Π»ΠΎΡΠΎΡΠΈΠ»Π» ΡΠ²Π»ΡΠ΅ΡΡΡ ΠΎΡΠ½ΠΎΠ²Π½ΡΠΌ ΠΊΠΎΠΌΠΏΠΎΠ½Π΅Π½ΡΠΎΠΌ, ΠΎΡΠ²Π΅ΡΡΡΠ²Π΅Π½Π½ΡΠΌ Π·Π° Π·Π΅Π»Π΅Π½ΡΠΉ ΡΠ²Π΅Ρ Π»ΠΈΡΡΡΠ΅Π² ΡΠ°ΡΡΠ΅Π½ΠΈΠΉ, ΡΠ°ΠΊ ΠΊΠ°ΠΊ ΠΎΠ½ ΠΏΠΎΠ³Π»ΠΎΡΠ°Π΅Ρ ΡΠ²Π΅Ρ ΠΈ ΠΏΠ΅ΡΠ΅Π΄Π°Π΅Ρ Π΅Π³ΠΎ ΡΠ½Π΅ΡΠ³ΠΈΡ Π΄Π»Ρ ΡΠΎΡΠΎΡΠΈΠ½ΡΠ΅Π·Π°.
```
```
User: Π‘ΠΎΡΠΈΠ½ΠΈ Π΄Π»ΠΈΠ½Π½ΡΠΉ ΡΠ°ΡΡΠΊΠ°Π·, ΠΎΠ±ΡΠ·Π°ΡΠ΅Π»ΡΠ½ΠΎ ΡΠΏΠΎΠΌΠΈΠ½Π°Ρ ΡΠ»Π΅Π΄ΡΡΡΠΈΠ΅ ΠΎΠ±ΡΠ΅ΠΊΡΡ. ΠΠ°Π½ΠΎ: Π’Π°Π½Ρ, ΠΌΡΡ
Saiga: ΠΠ΄Π½Π°ΠΆΠ΄Ρ Π² ΡΠΎΠ»Π½Π΅ΡΠ½ΡΡ ΠΏΠΎΠ³ΠΎΠ΄Ρ Π½Π° ΠΏΠΎΠ»Π΅ Π΄Π»Ρ ΠΈΠ³ΡΡ Π² ΡΡΡΠ±ΠΎΠ» ΠΏΡΠΈΠ΅Ρ Π°Π»Π° Π³ΡΡΠΏΠΏΠ° Π΄ΡΡΠ·Π΅ΠΉ, ΡΡΠΎΠ±Ρ ΠΏΡΠΎΠ²Π΅ΡΡΠΈ Π²ΡΠ΅ΠΌΡ Π²ΠΌΠ΅ΡΡΠ΅. Π ΡΠΈΡΠ»Π΅ ΠΈΡ Π±ΡΠ» ΠΈ Π’Π°Π½Ρ, ΠΊΠΎΡΠΎΡΠ°Ρ Π±ΡΠ»Π° ΡΡΡΠ°ΡΡΠ½ΠΎΠΉ ΡΡΡΠ±ΠΎΠ»ΡΠ½ΠΎΠΉ ΡΠ°Π½Π°ΡΠΊΠΎΠΉ ΠΈ Π²ΡΠ΅Π³Π΄Π° ΡΠ°Π΄ΠΎΠ²Π°Π» ΡΠ²ΠΎΠΈΡ Π΄ΡΡΠ·Π΅ΠΉ ΡΠ²ΠΎΠΈΠΌ ΡΠ½ΡΡΠ·ΠΈΠ°Π·ΠΌΠΎΠΌ ΠΈ ΡΠΌΠ΅Π½ΠΈΠ΅ΠΌ Π·Π°Π±ΠΈΠ²Π°ΡΡ ΠΌΡΡΠΈ.
Π ΡΡΠΎΡ Π΄Π΅Π½Ρ, ΠΊΠ°ΠΊ ΠΎΠ±ΡΡΠ½ΠΎ, Π΄ΡΡΠ·ΡΡ ΡΠ΅ΡΠΈΠ»ΠΈ ΠΏΡΠΎΠ²Π΅ΡΡΠΈ ΡΠΎΡΠ΅Π²Π½ΠΎΠ²Π°Π½ΠΈΠ΅ ΠΌΠ΅ΠΆΠ΄Ρ ΡΠΎΠ±ΠΎΠΉ, ΡΡΠΎΠ±Ρ ΠΎΠΏΡΠ΅Π΄Π΅Π»ΠΈΡΡ ΠΊΡΠΎ ΠΈΠ· Π½ΠΈΡ ΡΠ²Π»ΡΠ΅ΡΡΡ Π»ΡΡΡΠΈΠΌ ΡΡΡΠ±ΠΎΠ»ΠΈΡΡΠΎΠΌ. Π’Π°Π½Ρ Π±ΡΠ»Π° ΠΎΡΠ΅Π½Ρ ΡΠ²Π΅ΡΠ΅Π½Π° Π² ΡΠ²ΠΎΠΈΡ ΡΠΈΠ»Π°Ρ ΠΈ Π³ΠΎΡΠΎΠ²ΠΈΠ»Π°ΡΡ ΠΊ ΡΡΠΎΠΌΡ ΠΌΠ°ΡΡΡ Ρ ΠΎΡΠΎΠ±ΠΎΠΉ ΡΠ΅ΡΡΡΠ·Π½ΠΎΡΡΡΡ.
ΠΠΎΠ³Π΄Π° Π²ΡΠ΅ Π΄ΡΡΠ·ΡΡ ΡΠΎΠ±ΡΠ°Π»ΠΈΡΡ Π½Π° ΠΏΠΎΠ»Π΅, ΠΎΠ½ΠΈ ΡΠ²ΠΈΠ΄Π΅Π»ΠΈ, ΡΡΠΎ ΠΏΠ΅ΡΠ΅Π΄ Π½ΠΈΠΌΠΈ ΡΡΠΎΡΠ» ΠΎΠ³ΡΠΎΠΌΠ½ΡΠΉ ΠΌΡΡ, ΠΊΠΎΡΠΎΡΡΠΉ Π΄ΠΎΠ»ΠΆΠ΅Π½ Π±ΡΠ» ΡΡΠ°ΡΡ ΠΏΡΠ΅Π΄ΠΌΠ΅ΡΠΎΠΌ ΡΠΎΡΡΡΠ·Π°Π½ΠΈΡ. ΠΡΡ Π±ΡΠ» ΠΎΠ³ΡΠΎΠΌΠ½ΡΠΌ ΠΈ ΡΡΠΆΠ΅Π»ΡΠΌ, ΠΈ Π΅Π³ΠΎ ΡΠ°Π·ΠΌΠ΅ΡΡ Π±ΡΠ»ΠΈ Π½Π΅ΠΎΠ±ΡΡΠ°ΠΉΠ½ΠΎ Π±ΠΎΠ»ΡΡΠΈΠΌΠΈ ΠΏΠΎ ΡΡΠ°Π²Π½Π΅Π½ΠΈΡ Ρ ΠΎΠ±ΡΡΠ½ΡΠΌΠΈ ΠΌΡΡΠ°ΠΌΠΈ, ΠΊΠΎΡΠΎΡΡΠ΅ ΠΈΡΠΏΠΎΠ»ΡΠ·ΡΡΡΡΡ Π² ΡΡΡΠ±ΠΎΠ»Π΅.
Π’Π°Π½Ρ Π±ΡΠ»Π° ΠΏΠ΅ΡΠ²Π°Ρ, ΠΊΡΠΎ ΡΠ΅ΡΠΈΠ» Π½Π°ΡΠ°ΡΡ ΠΈΠ³ΡΡ. ΠΠ½Π° ΠΏΠΎΠ΄ΠΎΡΠ»Π° ΠΊ ΠΌΡΡΡ ΠΈ Π½Π°ΡΠ°Π»Π° Π΅Π³ΠΎ ΡΠ΄Π΅ΡΠΆΠΈΠ²Π°ΡΡ, ΡΡΠ°ΡΠ°ΡΡΡ Π²ΡΠ΄Π΅ΡΠΆΠ°ΡΡ Π΅Π³ΠΎ Π²Π΅Ρ ΠΈ ΡΠΈΠ»Ρ. ΠΠΎ ΠΌΡΡ ΠΎΠΊΠ°Π·Π°Π»ΡΡ Π½Π°ΡΡΠΎΠ»ΡΠΊΠΎ ΡΡΠΆΠ΅Π»ΡΠΌ, ΡΡΠΎ Π’Π°Π½Ρ Π½Π΅ ΡΠΌΠΎΠ³Π»Π° ΡΠ΄Π΅ΡΠΆΠ°ΡΡ Π΅Π³ΠΎ ΠΈ ΠΎΠ½ ΡΠΏΠ°Π» Π½Π° Π·Π΅ΠΌΠ»Ρ.
ΠΡΡΠ·ΡΡ ΠΏΠΎΡΠΌΠ΅ΡΠ»ΠΈΡΡ Π½Π°Π΄ Π΅Π΅ Π½Π΅ΡΠ΄Π°ΡΠ΅ΠΉ, Π½ΠΎ Π’Π°Π½Ρ Π½Π΅ ΠΎΡΡΠ°ΠΈΠ²Π°Π»Π°ΡΡ ΠΈ ΠΏΡΠΎΠ΄ΠΎΠ»ΠΆΠΈΠ»Π° ΠΏΡΡΠ°ΡΡΡΡ ΡΠ΄Π΅ΡΠΆΠ°ΡΡ ΠΌΡΡ. ΠΠ½Π° ΡΡΠ°Π»Π° ΠΈΡΠΏΠΎΠ»ΡΠ·ΠΎΠ²Π°ΡΡ Π²ΡΠ΅ ΡΠ²ΠΎΠΈ ΡΠΈΠ»Ρ ΠΈ ΡΠΌΠ΅Π½ΠΈΡ, ΡΡΠΎΠ±Ρ Π²ΡΠ΄Π΅ΡΠΆΠ°ΡΡ Π΅Π³ΠΎ Π²Π΅Ρ ΠΈ ΡΠΈΠ»Ρ. ΠΠ°ΠΊΠΎΠ½Π΅Ρ, ΠΏΠΎΡΠ»Π΅ Π΄ΠΎΠ»Π³ΠΈΡ ΡΡΠΈΠ»ΠΈΠΉ, ΠΎΠ½Π° ΡΠΌΠΎΠ³Π»Π° ΡΠ΄Π΅ΡΠΆΠ°ΡΡ ΠΌΡΡ ΠΈ Π½Π°ΡΠ°Π»Π° Π΅Π³ΠΎ Π±ΡΠΎΡΠ°ΡΡ Π² ΡΡΠΎΡΠΎΠ½Ρ.
ΠΡΡ Π»Π΅ΡΠ΅Π» Π²ΡΡΠΎΠΊΠΎ Π²Π²Π΅ΡΡ , ΠΈ Π΄ΡΡΠ·ΡΡ ΡΠΌΠΎΡΡΠ΅Π»ΠΈ, ΠΊΠ°ΠΊ ΠΎΠ½ ΠΏΡΠΎΠ»Π΅ΡΠ°Π΅Ρ Π½Π°Π΄ ΠΏΠΎΠ»Π΅ΠΌ. ΠΠΎ ΠΌΡΡ Π½Π΅ΠΎΠΆΠΈΠ΄Π°Π½Π½ΠΎ ΠΏΠΎΠ²Π΅ΡΠ½ΡΠ» ΠΈ ΡΡΠ°Π» Π»Π΅ΡΠ΅ΡΡ ΠΎΠ±ΡΠ°ΡΠ½ΠΎ ΠΊ Π’Π°Π½Π΅. ΠΠ½Π° ΡΡΠΏΠ΅Π»Π° ΠΏΠΎΠΉΠΌΠ°ΡΡ Π΅Π³ΠΎ ΠΈ ΠΏΡΠΎΠ΄ΠΎΠ»ΠΆΠΈΠ»Π° ΠΈΠ³ΡΠ°ΡΡ, ΠΈΡΠΏΠΎΠ»ΡΠ·ΡΡ Π²ΡΠ΅ ΡΠ²ΠΎΠΈ Π½Π°Π²ΡΠΊΠΈ ΠΈ ΡΠΌΠ΅Π½ΠΈΡ.
```
v4:
- [1cc945d4ca2c7901cf989e7edaac52ab24f1a7dd](https://huggingface.co/IlyaGusev/saiga_llama3_8b/commit/1cc945d4ca2c7901cf989e7edaac52ab24f1a7dd)
- dataset: [saiga_scored](https://huggingface.co/datasets/IlyaGusev/saiga_scored), scores >= 8, c66032920556c0f21bbbed05e7e04433ec954c3d
- wandb [link](https://wandb.ai/ilyagusev/rulm_self_instruct/runs/dcbs9ttt)
v3:
- [c588356cd60bdee54d52c2dd5a2445acca8aa5c3](https://huggingface.co/IlyaGusev/saiga_llama3_8b/commit/c588356cd60bdee54d52c2dd5a2445acca8aa5c3)
- dataset: [saiga_scored](https://huggingface.co/datasets/IlyaGusev/saiga_scored), scores >= 8, d51cf8060bdc90023da8cf1c3f113f9193d6569b
- wandb [link](https://wandb.ai/ilyagusev/rulm_self_instruct/runs/ltoqdsal)
v2:
- [ae61b4f9b34fac9856d361ea78c66284a00e4f0b](https://huggingface.co/IlyaGusev/saiga_llama3_8b/commit/ae61b4f9b34fac9856d361ea78c66284a00e4f0b)
- dataset code revision d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a
- wandb [link](https://wandb.ai/ilyagusev/huggingface/runs/r6u5juyk)
- 5 datasets: ru_turbo_saiga, ru_sharegpt_cleaned, oasst1_ru_main_branch, gpt_roleplay_realm, ru_instruct_gpt4
- Datasets merging script: [create_short_chat_set.py](https://github.com/IlyaGusev/rulm/blob/d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a/self_instruct/src/data_processing/create_short_chat_set.py)
# Evaluation
* Dataset: https://github.com/IlyaGusev/rulm/blob/master/self_instruct/data/tasks.jsonl
* Framework: https://github.com/tatsu-lab/alpaca_eval
* Evaluator: alpaca_eval_cot_gpt4_turbo_fn
| model | length_controlled_winrate | win_rate | standard_error | avg_length |
|-----|-----|-----|-----|-----|
| chatgpt_4_turbo | 76.04 | 90.00 | 1.46 | 1270 |
| chatgpt_3_5_turbo | 50.00 | 50.00 | 0.00 | 536 |
| saiga_llama3_8b, v4 | 43.64 | 65.90 | 2.31 | 1200 |
| saiga_llama3_8b, v3 | 36.97 | 61.08 | 2.38 | 1162 |
| saiga_llama3_8b, v2 | 33.07 | 48.19 | 2.45 | 1166 |
| saiga_mistral_7b | 23.38 | 35.99 | 2.34 | 949 |
|
automerger/Ognoexperiment27multi_verse_modelMeliodas-7B | automerger | 2024-04-26T19:07:46Z | 0 | 0 | null | [
"merge",
"mergekit",
"lazymergekit",
"automerger",
"license:apache-2.0",
"region:us"
] | null | 2024-04-26T19:07:40Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- automerger
---
# Ognoexperiment27multi_verse_modelMeliodas-7B
Ognoexperiment27multi_verse_modelMeliodas-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
## π§© Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
- model: automerger/Ognoexperiment27Multi_verse_model-7B
- model: AurelPx/Meliodas-7b-dare
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```
## π» Usage
```python
# Install dependencies (notebook syntax; use plain `pip install` in a shell)
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "automerger/Ognoexperiment27multi_verse_modelMeliodas-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt with the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Sample a response with a text-generation pipeline
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
mserloth/autotrain400to100 | mserloth | 2024-04-26T19:03:47Z | 105 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"autotrain",
"dataset:autotrain400to100/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-04-26T19:00:34Z |
---
tags:
- autotrain
- text-classification
widget:
- text: "I love AutoTrain"
datasets:
- autotrain400to100/autotrain-data
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.6517816781997681
f1: 0.5555555555555556
precision: 0.625
recall: 0.5
auc: 0.71
accuracy: 0.6
|
josiahgottfried/my_awesome_billsum_model | josiahgottfried | 2024-04-26T19:03:24Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-04-26T18:25:30Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4539
- Rouge1: 0.1468
- Rouge2: 0.0569
- Rougel: 0.1209
- Rougelsum: 0.1212
- Gen Len: 19.0
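A hedged inference sketch for this checkpoint: the repo id is taken from the card title, and the `summarize:` prefix follows the T5 convention used in the Hugging Face summarization tutorial this card appears to follow.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="josiahgottfried/my_awesome_billsum_model")
text = "summarize: The bill directs the Department of Energy to submit an annual report to Congress."
print(summarizer(text)[0]["summary_text"])
```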
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.7578 | 0.1326 | 0.0415 | 0.1103 | 0.1105 | 19.0 |
| No log | 2.0 | 124 | 2.5392 | 0.1368 | 0.0491 | 0.1134 | 0.1136 | 19.0 |
| No log | 3.0 | 186 | 2.4711 | 0.1456 | 0.0563 | 0.1193 | 0.1196 | 19.0 |
| No log | 4.0 | 248 | 2.4539 | 0.1468 | 0.0569 | 0.1209 | 0.1212 | 19.0 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
huiang/distilbert-rotten_tomatoes | huiang | 2024-04-26T18:58:16Z | 119 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-04-26T18:58:07Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-rotten_tomatoes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-rotten_tomatoes
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
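A minimal usage sketch (assumed from the `text-classification` pipeline tag; label names depend on the training config):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="huiang/distilbert-rotten_tomatoes")
# Returns something like [{'label': 'LABEL_1', 'score': 0.98}];
# the mapping of labels to sentiments is not documented in this card.
print(clf("A gripping, beautifully shot film."))
```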
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
zilongpa/aes-llama3-v1 | zilongpa | 2024-04-26T18:55:19Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-26T18:32:53Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Novski/PPO-LunarLander-v2 | Novski | 2024-04-26T18:53:49Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-04-26T17:46:00Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.33 +/- 11.60
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption; SB3 Hub repos typically store "<algo>-<env>.zip"
checkpoint = load_from_hub(repo_id="Novski/PPO-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
stvhuang/rcr-run-5pqr6lwp-90396-master-0_20240402T105012-ep35 | stvhuang | 2024-04-26T18:52:20Z | 104 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-04-26T18:51:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
akankshya107/llava_dpt_1 | akankshya107 | 2024-04-26T18:42:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-23T22:35:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yiyic/llama-text-ent-lora-clf-epoch-2 | yiyic | 2024-04-26T18:41:06Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:adapter:meta-llama/Meta-Llama-3-8B",
"region:us"
] | null | 2024-04-26T18:41:03Z | ---
library_name: peft
base_model: meta-llama/Meta-Llama-3-8B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
yiyic/llama-text-prop-lora-clf-epoch-2 | yiyic | 2024-04-26T18:40:44Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:adapter:meta-llama/Meta-Llama-3-8B",
"region:us"
] | null | 2024-04-26T18:40:42Z | ---
library_name: peft
base_model: meta-llama/Meta-Llama-3-8B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
Thermostatic/NeuralTranslate_v0.1_GGUF | Thermostatic | 2024-04-26T18:38:04Z | 6 | 1 | transformers | [
"transformers",
"gguf",
"mistral",
"arxiv:1910.09700",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-21T06:44:22Z | ---
license: mit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This is the first alpha version of NeuralTranslate. It translates bidirectionally between English and Spanish but exhibits some unexpected behaviour due to overfitting.
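A minimal llama-cpp-python sketch for running the GGUF files; the filename and the plain chat-style prompt are assumptions, so check the repo's file list and prompt conventions.

```python
from llama_cpp import Llama

# The GGUF filename is an assumption; pick the actual quant file from this repo.
llm = Llama(model_path="neuraltranslate-v0.1.Q4_K_M.gguf", n_ctx=2048)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Translate to Spanish: The library opens at nine."}]
)
print(out["choices"][0]["message"]["content"])
```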
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
GoHugo/idefics2-8b-docvqa-finetuned-tutorial | GoHugo | 2024-04-26T18:33:49Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:HuggingFaceM4/idefics2-8b",
"base_model:finetune:HuggingFaceM4/idefics2-8b",
"license:apache-2.0",
"region:us"
] | null | 2024-04-26T18:33:43Z | ---
license: apache-2.0
base_model: HuggingFaceM4/idefics2-8b
tags:
- generated_from_trainer
model-index:
- name: idefics2-8b-docvqa-finetuned-tutorial
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# idefics2-8b-docvqa-finetuned-tutorial
This model is a fine-tuned version of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) on an unknown dataset.
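A hedged inference sketch: Idefics2 models load via `AutoModelForVision2Seq`, and the prompt format below follows the base model's chat template; the image file and question are placeholders.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

repo = "GoHugo/idefics2-8b-docvqa-finetuned-tutorial"
processor = AutoProcessor.from_pretrained(repo)
model = AutoModelForVision2Seq.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

image = Image.open("document.png")  # placeholder document image
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "What is the invoice total?"},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)
generated = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```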
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
nem012/gemma2B-r16MHCv2 | nem012 | 2024-04-26T18:31:55Z | 137 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-26T18:28:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LoneStriker/Qwen1.5-110B-Chat-GGUF | LoneStriker | 2024-04-26T18:26:55Z | 2 | 4 | null | [
"gguf",
"chat",
"text-generation",
"en",
"arxiv:2309.16609",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-04-26T16:47:54Z | ---
license: other
license_name: tongyi-qianwen
license_link: >-
https://huggingface.co/Qwen/Qwen1.5-110B-Chat/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- chat
---
# Qwen1.5-110B-Chat
## Introduction
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previously released Qwen, the improvements include:
* 9 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B, 72B, and 110B dense models, and an MoE model of 14B with 2.7B activated;
* Significant performance improvement in human preference for chat models;
* Multilingual support of both base and chat models;
* Stable support of 32K context length for models of all sizes
* No need of `trust_remote_code`.
For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).
<br>
## Model Details
Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. For the beta version, we have temporarily excluded GQA (except for the 32B and 110B models) and the mixture of SWA and full attention.
## Training details
We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.
## Requirements
The code for Qwen1.5 is included in the latest Hugging Face Transformers, and we advise you to install `transformers>=4.37.0`; otherwise you might encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
Here is a code snippet showing how to use `apply_chat_template` to load the tokenizer and model and generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"Qwen/Qwen1.5-110B-Chat",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-110B-Chat")
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
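The generated answer is then available in `response` (e.g. via `print(response)`).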
## Tips
* If you encounter code-switching or other unexpected outputs, we advise you to use the hyper-parameters provided in `generation_config.json`.
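The shipped settings are applied by `model.generate` automatically; a small sketch (our addition) for inspecting or overriding them:
```python
from transformers import GenerationConfig

# generation_config.json ships with the checkpoint and is applied by model.generate();
# load it explicitly to inspect or tweak the recommended sampling hyper-parameters.
gen_config = GenerationConfig.from_pretrained("Qwen/Qwen1.5-110B-Chat")
print(gen_config)
```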
## Citation
If you find our work helpful, feel free to cite it.
```
@article{qwen,
title={Qwen Technical Report},
author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
journal={arXiv preprint arXiv:2309.16609},
year={2023}
}
```
|
human-centered-summarization/financial-summarization-pegasus | human-centered-summarization | 2024-04-26T18:26:40Z | 5,061 | 130 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"pegasus",
"text2text-generation",
"summarization",
"en",
"dataset:xsum",
"arxiv:1912.08777",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2022-03-02T23:29:05Z | ---
language:
- en
tags:
- summarization
datasets:
- xsum
metrics:
- rouge
widget:
- text: National Commercial Bank (NCB), Saudi Arabiaβs largest lender by assets, agreed
to buy rival Samba Financial Group for $15 billion in the biggest banking takeover
this year.NCB will pay 28.45 riyals ($7.58) for each Samba share, according to
a statement on Sunday, valuing it at about 55.7 billion riyals. NCB will offer
0.739 new shares for each Samba share, at the lower end of the 0.736-0.787 ratio
the banks set when they signed an initial framework agreement in June.The offer
is a 3.5% premium to Sambaβs Oct. 8 closing price of 27.50 riyals and about 24%
higher than the level the shares traded at before the talks were made public.
Bloomberg News first reported the merger discussions.The new bank will have total
assets of more than $220 billion, creating the Gulf regionβs third-largest lender.
The entityβs $46 billion market capitalization nearly matches that of Qatar National
Bank QPSC, which is still the Middle Eastβs biggest lender with about $268 billion
of assets.
model-index:
- name: human-centered-summarization/financial-summarization-pegasus
results:
- task:
type: summarization
name: Summarization
dataset:
name: xsum
type: xsum
config: default
split: test
metrics:
- type: rouge
value: 35.2055
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTA5OTZkY2YxMDU1YzE3NGJlMmE1OTg1NjlmNzcxOTg4YzY2OThlOTlkNGFhMGFjZWY4YjdiMjU5NDdmMWYzNSIsInZlcnNpb24iOjF9.ufBRoV2JoX4UlEfAUOYq7F3tZougwngdpKlnaC37tYXJU3omsR5hTsWM69hSdYO-k0cKUbAWCAMzjmoGwIaPAw
- type: rouge
value: 16.5689
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOWQwMmM2NjJjNzM1N2Y3NjZmMmE5NzNlNjRjNjEwNzNhNjcyZTRiMGRlODY3NWUyMGQ0YzZmMGFhODYzOTRmOSIsInZlcnNpb24iOjF9.AZZkbaYBZG6rw6-QHYjRlSl-p0gBT2EtJxwjIP7QYH5XIQjeoiQsTnDPIq25dSMDbmQLSZnpHC104ZctX0f_Dg
- type: rouge
value: 30.1285
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTRjYThlMTllZjI4MGFiMDZhZTVkYmRjMTNhZDUzNTQ0OWQyNDQxMmQ5ODJiMmJiNGI3OTAzYjhiMzc2MTI4NCIsInZlcnNpb24iOjF9.zTHd3F4ZlgS-azl-ZVjOckcTrtrJmDOGWVaC3qQsvvn2UW9TnseNkmo7KBc3DJU7_NmlxWZArl1BdSetED0NCg
- type: rouge
value: 30.1706
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGMzZGFjNzVkYWI0NTJkMmZjZDQ0YjhiYjIxN2VkNmJjMTgwZTk1NjFlOGU2NjNjM2VjYTNlYTBhNTQ5MGZkNSIsInZlcnNpb24iOjF9.xQ2LoI3PwlEiXo1OT2o4Pq9o2thYCd9lSCKCWlLmZdxI5GxdsjcASBKmHKopzUcwCGBPR7zF95MHSAPyszOODA
- type: loss
value: 2.7092134952545166
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzQzODE0NDc5YTYzYjJlMWU2YTVjOGRjN2JmYWVkOWNkNTRlMTZlOWIyN2NiODJkMDljMjI3YzZmYzM3N2JjYSIsInZlcnNpb24iOjF9.Vv_pdeFuRMoKK3cPr5P6n7D6_18ChJX-2qcT0y4is3XX3mS98fk3U1AYEuy9nBHOwYR3o0U8WBgQ-Ya_FqefBg
- type: gen_len
value: 15.1414
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjk5OTk3NWRiNjZlZmQzMmYwOTU2MmQwOWE1MDNlNTg3YWVkOTgwOTc2ZTQ0MTBiZjliOWMyZTYwMDI2MDUzYiIsInZlcnNpb24iOjF9.Zvj84JzIhM50rWTQ2GrEeOU7HrS8KsILH-8ApTcSWSI6kVnucY0MyW2ODxvRAa_zHeCygFW6Q13TFGrT5kLNAA
---
### PEGASUS for Financial Summarization
This model was fine-tuned on a novel financial news dataset, which consists of 2K articles from [Bloomberg](https://www.bloomberg.com/europe) on topics such as stocks, markets, currencies, rates, and cryptocurrencies.
It is based on the [PEGASUS](https://huggingface.co/transformers/model_doc/pegasus.html) model and in particular PEGASUS fine-tuned on the Extreme Summarization (XSum) dataset: [google/pegasus-xsum model](https://huggingface.co/google/pegasus-xsum). PEGASUS was originally proposed by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu in [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/pdf/1912.08777.pdf).
*Note: This model serves as a base version. For an even more advanced model with significantly enhanced performance, please check out our [advanced version](https://rapidapi.com/medoid-ai-medoid-ai-default/api/financial-summarization-advanced) on Rapid API. The advanced model offers more than a 16% increase in ROUGE scores (similarity to a human-generated summary) compared to our base model. Moreover, our advanced model also offers several convenient plans tailored to different use cases and workloads, ensuring a seamless experience for both personal and enterprise access.*
### How to use
We provide a simple snippet of how to use this model for the task of financial summarization in PyTorch.
```Python
from transformers import PegasusTokenizer, PegasusForConditionalGeneration

# Load the model and the tokenizer
model_name = "human-centered-summarization/financial-summarization-pegasus"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
# To use the TensorFlow model instead, import and use TFPegasusForConditionalGeneration
model = PegasusForConditionalGeneration.from_pretrained(model_name)
# Some text to summarize here
text_to_summarize = "National Commercial Bank (NCB), Saudi Arabiaβs largest lender by assets, agreed to buy rival Samba Financial Group for $15 billion in the biggest banking takeover this year.NCB will pay 28.45 riyals ($7.58) for each Samba share, according to a statement on Sunday, valuing it at about 55.7 billion riyals. NCB will offer 0.739 new shares for each Samba share, at the lower end of the 0.736-0.787 ratio the banks set when they signed an initial framework agreement in June.The offer is a 3.5% premium to Sambaβs Oct. 8 closing price of 27.50 riyals and about 24% higher than the level the shares traded at before the talks were made public. Bloomberg News first reported the merger discussions.The new bank will have total assets of more than $220 billion, creating the Gulf regionβs third-largest lender. The entityβs $46 billion market capitalization nearly matches that of Qatar National Bank QPSC, which is still the Middle Eastβs biggest lender with about $268 billion of assets."
# Tokenize our text
# To run the code in TensorFlow, remember to return TensorFlow tensors by passing return_tensors='tf'
input_ids = tokenizer(text_to_summarize, return_tensors="pt").input_ids
# Generate the output (Here, we use beam search but you can also use any other strategy you like)
output = model.generate(
input_ids,
max_length=32,
num_beams=5,
early_stopping=True
)
# Finally, we can print the generated summary
print(tokenizer.decode(output[0], skip_special_tokens=True))
# Generated Output: Saudi bank to pay a 3.5% premium to Samba share price. Gulf regionβs third-largest lender will have total assets of $220 billion
```
## Evaluation Results
The results before and after the fine-tuning on our dataset are shown below:
| Fine-tuning | R-1 | R-2 | R-L | R-S |
|:-----------:|:-----:|:-----:|:------:|:-----:|
| Yes | 23.55 | 6.99 | 18.14 | 21.36 |
| No | 13.8 | 2.4 | 10.63 | 12.03 |
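To reproduce ROUGE-style scores on your own summaries, a minimal sketch using the Hugging Face `evaluate` package (our addition; assumes `evaluate` and `rouge_score` are installed, and the example strings are illustrative):
```python
# pip install evaluate rouge_score
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["Saudi bank to pay a 3.5% premium to Samba share price."],
    references=["NCB will pay a 3.5% premium over Samba's closing share price."],
)
print(scores)  # keys: rouge1, rouge2, rougeL, rougeLsum
```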
## Citation
You can find more details about this work in the following workshop paper. If you use our model in your research, please consider citing our paper:
> T. Passali, A. Gidiotis, E. Chatzikyriakidis and G. Tsoumakas. 2021.
> Towards Human-Centered Summarization: A Case Study on Financial News.
> In Proceedings of the First Workshop on Bridging Human-Computer Interaction and Natural Language Processing (pp. 21–27). Association for Computational Linguistics.
BibTeX entry:
```
@inproceedings{passali-etal-2021-towards,
title = "Towards Human-Centered Summarization: A Case Study on Financial News",
author = "Passali, Tatiana and Gidiotis, Alexios and Chatzikyriakidis, Efstathios and Tsoumakas, Grigorios",
booktitle = "Proceedings of the First Workshop on Bridging Human{--}Computer Interaction and Natural Language Processing",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.hcinlp-1.4",
pages = "21--27",
}
```
## Support
Contact us at [[email protected]](mailto:[email protected]) if you are interested in a more sophisticated version of the model, trained on more articles and adapted to your needs!
More information about Medoid AI:
- Website: [https://www.medoid.ai](https://www.medoid.ai)
- LinkedIn: [https://www.linkedin.com/company/medoid-ai/](https://www.linkedin.com/company/medoid-ai/)
|
nem012/gemma2b-r16MHC | nem012 | 2024-04-26T18:15:45Z | 104 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-26T10:29:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nitsuai/ct2fast-all-MiniLM-L6-v2 | nitsuai | 2024-04-26T18:14:40Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"bert",
"ctranslate2",
"int8",
"float16",
"feature-extraction",
"sentence-similarity",
"en",
"dataset:s2orc",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:ms_marco",
"dataset:gooaq",
"dataset:yahoo_answers_topics",
"dataset:code_search_net",
"dataset:search_qa",
"dataset:eli5",
"dataset:snli",
"dataset:multi_nli",
"dataset:wikihow",
"dataset:natural_questions",
"dataset:trivia_qa",
"dataset:embedding-data/sentence-compression",
"dataset:embedding-data/flickr30k-captions",
"dataset:embedding-data/altlex",
"dataset:embedding-data/simple-wiki",
"dataset:embedding-data/QQP",
"dataset:embedding-data/SPECTER",
"dataset:embedding-data/PAQ_pairs",
"dataset:embedding-data/WikiAnswers",
"arxiv:1904.06472",
"arxiv:2102.07033",
"arxiv:2104.08727",
"arxiv:1704.05179",
"arxiv:1810.09305",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-04-26T18:14:40Z | ---
pipeline_tag: sentence-similarity
tags:
- ctranslate2
- int8
- float16
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
license: apache-2.0
datasets:
- s2orc
- flax-sentence-embeddings/stackexchange_xml
- ms_marco
- gooaq
- yahoo_answers_topics
- code_search_net
- search_qa
- eli5
- snli
- multi_nli
- wikihow
- natural_questions
- trivia_qa
- embedding-data/sentence-compression
- embedding-data/flickr30k-captions
- embedding-data/altlex
- embedding-data/simple-wiki
- embedding-data/QQP
- embedding-data/SPECTER
- embedding-data/PAQ_pairs
- embedding-data/WikiAnswers
---
# Fast-Inference with CTranslate2
Speed up inference while reducing memory use by 2x–4x using int8 inference in C++ on CPU or GPU.
Quantized version of [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2).
```bash
pip install hf-hub-ctranslate2>=2.12.0 ctranslate2>=3.17.1
```
```python
# from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-all-MiniLM-L6-v2"
model_name_orig="sentence-transformers/all-MiniLM-L6-v2"
from hf_hub_ctranslate2 import EncoderCT2fromHfHub
model = EncoderCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16"
)
outputs = model.generate(
text=["I like soccer", "I like tennis", "The eiffel tower is in Paris"],
max_length=64,
) # perform downstream tasks on outputs
outputs["pooler_output"]
outputs["last_hidden_state"]
outputs["attention_mask"]
# alternative, use SentenceTransformer Mix-In
# for end-to-end Sentence embeddings generation
# (not pulling from this CT2fast-HF repo)
from hf_hub_ctranslate2 import CT2SentenceTransformer
model = CT2SentenceTransformer(
model_name_orig, compute_type="int8_float16", device="cuda"
)
embeddings = model.encode(
["I like soccer", "I like tennis", "The eiffel tower is in Paris"],
batch_size=32,
convert_to_numpy=True,
normalize_embeddings=True,
)
print(embeddings.shape, embeddings)
scores = (embeddings @ embeddings.T) * 100
# Hint: you can also host this code via REST API and
# via github.com/michaelfeil/infinity
```
Checkpoint compatible to [ctranslate2>=3.17.1](https://github.com/OpenNMT/CTranslate2)
and [hf-hub-ctranslate2>=2.12.0](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
Converted on 2023-10-13.
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to those of the original Hugging Face repo.
# Original description
# all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
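Since the embeddings above are L2-normalized, cosine similarity reduces to a dot product; a short continuation of the snippet (our addition):
```python
# Cosine similarity between the two example sentences (embeddings are already normalized)
similarity = sentence_embeddings[0] @ sentence_embeddings[1]
print(f"Cosine similarity: {similarity.item():.4f}")
```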
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L6-v2)
------
## Background
The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a
dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face, as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short-paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering, or sentence similarity tasks.
By default, input text longer than 256 word pieces is truncated.
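With `sentence-transformers`, this limit is exposed as `model.max_seq_length`, which you can inspect or lower (a small sketch, our addition):
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
print(model.max_seq_length)  # 256 word pieces by default
model.max_seq_length = 128   # truncate earlier to speed up encoding
```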
## Training procedure
### Pre-training
We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for each possible sentence pair in the batch.
We then apply the cross-entropy loss by comparing with the true pairs, as sketched below.
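In code, this objective amounts to a cross-entropy over the in-batch cosine-similarity matrix; a simplified PyTorch sketch (our addition; the `scale` value is a typical choice, not taken from the training script):
```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchors, positives, scale=20.0):
    # anchors, positives: [batch, dim] L2-normalized embeddings of paired sentences
    scores = anchors @ positives.T * scale  # cosine similarity of every anchor with every candidate
    # For row i, column i is the true pair; all other columns act as in-batch negatives
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```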
#### Hyper parameters
We trained our model on a TPU v3-8 for 100k steps using a batch size of 1024 (128 per TPU core).
We used a learning-rate warm-up of 500 steps, and the sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.
#### Training data
We use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences.
We sampled each dataset with a weighted probability; the configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** | |
dsfdsf2/distilroberta-base-finetuned-wikitext2 | dsfdsf2 | 2024-04-26T18:13:10Z | 70 | 0 | transformers | [
"transformers",
"tf",
"roberta",
"fill-mask",
"generated_from_keras_callback",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-04-26T14:18:20Z | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_keras_callback
model-index:
- name: dsfdsf2/distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dsfdsf2/distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.1556
- Validation Loss: 1.8940
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.1556 | 1.8940 | 0 |
### Framework versions
- Transformers 4.40.1
- TensorFlow 2.16.1
- Datasets 2.19.0
- Tokenizers 0.19.1
|
anismahmahi/group1_non_all_zero | anismahmahi | 2024-04-26T18:10:25Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-04-26T18:02:17Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: group1_non_all_zero
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# group1_non_all_zero
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7437
- Precision: 0.0149
- Recall: 0.1076
- F1: 0.0262
- Accuracy: 0.9260
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 15 | 1.0746 | 0.0007 | 0.0633 | 0.0013 | 0.4145 |
| No log | 2.0 | 30 | 0.8623 | 0.0023 | 0.1139 | 0.0045 | 0.6250 |
| No log | 3.0 | 45 | 0.7242 | 0.0024 | 0.0696 | 0.0046 | 0.7334 |
| No log | 4.0 | 60 | 0.6181 | 0.0037 | 0.0696 | 0.0070 | 0.8030 |
| No log | 5.0 | 75 | 0.6489 | 0.0090 | 0.1329 | 0.0169 | 0.8282 |
| No log | 6.0 | 90 | 0.6538 | 0.0091 | 0.1266 | 0.0170 | 0.8445 |
| No log | 7.0 | 105 | 0.6189 | 0.0103 | 0.1013 | 0.0188 | 0.8893 |
| No log | 8.0 | 120 | 0.6328 | 0.0101 | 0.1013 | 0.0183 | 0.8917 |
| No log | 9.0 | 135 | 0.6561 | 0.0119 | 0.1076 | 0.0215 | 0.9099 |
| No log | 10.0 | 150 | 0.6537 | 0.0152 | 0.1139 | 0.0267 | 0.9265 |
| No log | 11.0 | 165 | 0.6939 | 0.0182 | 0.1139 | 0.0314 | 0.9385 |
| No log | 12.0 | 180 | 0.7481 | 0.0113 | 0.0949 | 0.0203 | 0.9103 |
| No log | 13.0 | 195 | 0.7242 | 0.0150 | 0.1203 | 0.0267 | 0.9209 |
| No log | 14.0 | 210 | 0.7553 | 0.0140 | 0.1013 | 0.0247 | 0.9229 |
| No log | 15.0 | 225 | 0.7437 | 0.0149 | 0.1076 | 0.0262 | 0.9260 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.13.3
|
orpo-explorers/kaist-mistral-orpo-capybara-beta-0.05-1epoch | orpo-explorers | 2024-04-26T18:08:44Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-26T18:03:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
underactuated/opt-350m_ft_test1 | underactuated | 2024-04-26T18:06:46Z | 139 | 0 | transformers | [
"transformers",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-25T01:10:47Z | ---
tags:
- generated_from_trainer
model-index:
- name: opt-350m_ft_test1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-350m_ft_test1
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
qianyihuang1203/trans | qianyihuang1203 | 2024-04-26T18:06:08Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-04-26T17:54:14Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: trans
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trans
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1920
- Bleu: 0.2223
- Gen Len: 18.1849
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 3.651 | 1.0 | 1617 | 3.2746 | 0.1854 | 18.197 |
| 3.5127 | 2.0 | 3234 | 3.1920 | 0.2223 | 18.1849 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
nitsuai/Llama-3-8B-LexiFun-Uncensored-V1-GGUF | nitsuai | 2024-04-26T18:05:52Z | 82 | 1 | null | [
"gguf",
"llama3",
"comedy",
"comedian",
"fun",
"funny",
"llama38b",
"laugh",
"sarcasm",
"roleplay",
"text-generation",
"en",
"license:other",
"region:us"
] | text-generation | 2024-04-26T18:05:51Z | ---
license: other
license_name: llama3
license_link: https://llama.meta.com/llama3/license/
language:
- en
tags:
- llama3
- comedy
- comedian
- fun
- funny
- llama38b
- laugh
- sarcasm
- roleplay
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of Llama-3-8B-LexiFun-Uncensored-V1
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2717">b2717</a> for quantization.
Original model: https://huggingface.co/Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1
All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|end_of_text|><|start_header_id|>user<|end_header_id|>
{prompt}<|end_of_text|><|start_header_id|>assistant<|end_header_id|>
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-3-8B-LexiFun-Uncensored-V1-Q8_0.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [Llama-3-8B-LexiFun-Uncensored-V1-Q6_K.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [Llama-3-8B-LexiFun-Uncensored-V1-Q5_K_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [Llama-3-8B-LexiFun-Uncensored-V1-Q5_K_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [Llama-3-8B-LexiFun-Uncensored-V1-Q4_K_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Llama-3-8B-LexiFun-Uncensored-V1-Q4_K_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [Llama-3-8B-LexiFun-Uncensored-V1-IQ4_NL.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [Llama-3-8B-LexiFun-Uncensored-V1-IQ4_XS.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Llama-3-8B-LexiFun-Uncensored-V1-Q3_K_L.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [Llama-3-8B-LexiFun-Uncensored-V1-Q3_K_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [Llama-3-8B-LexiFun-Uncensored-V1-IQ3_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Llama-3-8B-LexiFun-Uncensored-V1-IQ3_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [Llama-3-8B-LexiFun-Uncensored-V1-Q3_K_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [Llama-3-8B-LexiFun-Uncensored-V1-IQ3_XS.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Llama-3-8B-LexiFun-Uncensored-V1-IQ3_XXS.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Llama-3-8B-LexiFun-Uncensored-V1-Q2_K.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [Llama-3-8B-LexiFun-Uncensored-V1-IQ2_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Llama-3-8B-LexiFun-Uncensored-V1-IQ2_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [Llama-3-8B-LexiFun-Uncensored-V1-IQ2_XS.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
| [Llama-3-8B-LexiFun-Uncensored-V1-IQ2_XXS.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. |
| [Llama-3-8B-LexiFun-Uncensored-V1-IQ1_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. |
| [Llama-3-8B-LexiFun-Uncensored-V1-IQ1_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. |
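To fetch a single quant rather than cloning the whole repository, `huggingface-cli` works well (example below uses the Q4_K_M file; any filename from the table works):
```bash
pip install -U "huggingface_hub[cli]"
huggingface-cli download bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF \
  --include "Llama-3-8B-LexiFun-Uncensored-V1-Q4_K_M.gguf" --local-dir ./
```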
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
nitsuai/OpenBioLLM-Llama3-8B-GGUF | nitsuai | 2024-04-26T18:02:17Z | 8 | 0 | null | [
"gguf",
"llama-3",
"llama",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"distillation",
"text-generation",
"en",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:quantized:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-26T18:02:16Z | ---
base_model: meta-llama/Meta-Llama-3-8B
tags:
- llama-3
- llama
- Mixtral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- distillation
model-index:
- name: OpenBioLLM-8B
results: []
license: llama3
language:
- en
widget:
- example_title: OpenBioLLM-8B
messages:
- role: system
content: >-
You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience.
- role: user
content: How long does it take for newborn jaundice to go away?
output:
text: >-
Newborn jaundice, also known as neonatal jaundice, is a common condition in newborns where the yellowing of the skin and eyes occurs due to an elevated level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when red blood cells break down. In most cases, newborn jaundice resolves on its own without any specific treatment.
The duration of newborn jaundice can vary depending on several factors such as the underlying cause, gestational age at birth, and individual variations in bilirubin metabolism. Here are some general guidelines:
1. Physiological jaundice: This is the most common type of newborn jaundice and usually appears within 24-72 hours after birth. It tends to peak between the second and fifth day of life and gradually improves over the next week or two. By the time the baby is one week old, the jaundice should have mostly resolved.
2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and may appear later than physiological jaundice, typically between the fifth and fourteenth day of life. It tends to persist for a longer duration but usually resolves within six weeks after birth.
3. Pathological jaundice: This type of jaundice is less common and occurs due to an underlying medical condition that affects bilirubin metabolism or liver function. The duration of pathological jaundice depends on the specific cause and may require treatment.
It's important for parents to monitor their newborn's jaundice closely and seek medical advice if the jaundice progresses rapidly, becomes severe, or is accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness. In these cases, further evaluation and management may be necessary.
Remember that each baby is unique, and the timing of jaundice resolution can vary. If you have concerns about your newborn's jaundice, it's always best to consult with a healthcare professional for personalized advice and guidance.
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of OpenBioLLM-Llama3-8B
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2717">b2717</a> for quantization.
Original model: https://huggingface.co/aaditya/OpenBioLLM-Llama3-8B
All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
No chat template was specified, so the default is used. This may be incorrect; check the original model card for details.
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [OpenBioLLM-Llama3-8B-Q8_0.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [OpenBioLLM-Llama3-8B-Q6_K.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [OpenBioLLM-Llama3-8B-Q5_K_M.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [OpenBioLLM-Llama3-8B-Q5_K_S.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [OpenBioLLM-Llama3-8B-Q4_K_M.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [OpenBioLLM-Llama3-8B-Q4_K_S.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [OpenBioLLM-Llama3-8B-IQ4_NL.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [OpenBioLLM-Llama3-8B-IQ4_XS.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [OpenBioLLM-Llama3-8B-Q3_K_L.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [OpenBioLLM-Llama3-8B-Q3_K_M.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [OpenBioLLM-Llama3-8B-IQ3_M.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [OpenBioLLM-Llama3-8B-IQ3_S.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [OpenBioLLM-Llama3-8B-Q3_K_S.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [OpenBioLLM-Llama3-8B-IQ3_XS.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [OpenBioLLM-Llama3-8B-IQ3_XXS.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [OpenBioLLM-Llama3-8B-Q2_K.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [OpenBioLLM-Llama3-8B-IQ2_M.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [OpenBioLLM-Llama3-8B-IQ2_S.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [OpenBioLLM-Llama3-8B-IQ2_XS.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
| [OpenBioLLM-Llama3-8B-IQ2_XXS.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Very low quality, uses SOTA techniques to be usable. |
| [OpenBioLLM-Llama3-8B-IQ1_M.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. |
| [OpenBioLLM-Llama3-8B-IQ1_S.gguf](https://huggingface.co/bartowski/OpenBioLLM-Llama3-8B-GGUF/blob/main/OpenBioLLM-Llama3-8B-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. |
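To fetch a single file programmatically instead of clicking through the browser, `huggingface_hub` works too (a sketch; swap in whichever quant fits your hardware):

```python
from huggingface_hub import hf_hub_download

# Downloads one quant file (not the whole repo) into the local HF cache.
path = hf_hub_download(
    repo_id="bartowski/OpenBioLLM-Llama3-8B-GGUF",
    filename="OpenBioLLM-Llama3-8B-Q4_K_M.gguf",
)
print(path)
```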
## Which file should I choose?
A great write-up with charts comparing the performance of the various quants is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9).
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalents, so the speed-versus-quality tradeoff is one you'll have to weigh.
The I-quants are *not* compatible with Vulkan, which also supports AMD cards, so if you have an AMD card double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
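As a rough sketch of the guidance above (sizes taken from the table; the 1.5GB headroom is my reading of the "1-2GB smaller" rule of thumb):

```python
QUANTS = {  # quant type -> file size in GB, from the table above
    "Q8_0": 8.54, "Q6_K": 6.59, "Q5_K_M": 5.73, "Q4_K_M": 4.92,
    "IQ4_XS": 4.44, "Q3_K_M": 4.01, "IQ3_M": 3.78, "Q2_K": 3.17,
}

def pick_quant(vram_gb: float, headroom_gb: float = 1.5) -> str:
    """Pick the largest quant that leaves headroom for context/KV cache."""
    budget = vram_gb - headroom_gb
    fitting = {q: size for q, size in QUANTS.items() if size <= budget}
    if not fitting:
        raise ValueError("No quant fits; consider partial CPU offload.")
    return max(fitting, key=fitting.get)

print(pick_quant(8.0))  # e.g. an 8GB GPU -> Q5_K_M
```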
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
sophiex/pythia-410m-sft_hh_rlhf | sophiex | 2024-04-26T18:01:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T23:25:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
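In the absence of author-provided instructions, a minimal text-generation sketch (the Human/Assistant turn format is an assumption based on the hh_rlhf naming; verify against the actual training setup):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="sophiex/pythia-410m-sft_hh_rlhf")

# Prompt format is assumed from the hh_rlhf dataset convention.
prompt = "Human: How do I brew good coffee?\n\nAssistant:"
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```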
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cchakons/sv_model_try | cchakons | 2024-04-26T17:53:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T17:52:54Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** cchakons
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
cesaenv/hojasVid | cesaenv | 2024-04-26T17:50:28Z | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | 2024-04-26T17:50:24Z | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
Unclad3610/ppo-LunarLander-v2 | Unclad3610 | 2024-04-26T17:49:41Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-04-26T17:49:18Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 255.07 +/- 18.00
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual SB3-on-Hub naming convention):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The checkpoint filename is assumed (usual SB3-on-Hub convention).
checkpoint = load_from_hub("Unclad3610/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
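To sanity-check the reported mean reward, you can evaluate the loaded policy locally (assumes Box2D is installed; older SB3 versions use `gym` instead of `gymnasium`):

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```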
|
thjeon/llama-2-7b-chat-thjeon-kor | thjeon | 2024-04-26T17:48:32Z | 81 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-04-26T15:20:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
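No official snippet is provided, so here is a hedged sketch based on the repo tags (text-generation, 4-bit bitsandbytes); loading in 4-bit requires `bitsandbytes` and `accelerate`:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

repo = "thjeon/llama-2-7b-chat-thjeon-kor"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# The repo name suggests Korean chat tuning; the prompt is illustrative.
inputs = tokenizer("안녕하세요, 자기소개를 해주세요.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```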
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yiyic/llama-text-ent-lora-clf-epoch-1 | yiyic | 2024-04-26T17:42:36Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:adapter:meta-llama/Meta-Llama-3-8B",
"region:us"
] | null | 2024-04-26T17:42:34Z | ---
library_name: peft
base_model: meta-llama/Meta-Llama-3-8B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
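Pending details from the authors, a hedged PEFT loading sketch (the base model comes from the card metadata; the head type is an assumption — the repo name hints at a classification adapter, but `AutoModelForCausalLM` is used here as a neutral default):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B"  # from the card metadata
adapter_id = "yiyic/llama-text-ent-lora-clf-epoch-1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)
```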
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
kchopra04/llama3-finetuned-saxs | kchopra04 | 2024-04-26T17:38:15Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-26T17:28:54Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** kchopra04
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
qianyihuang1203/causal | qianyihuang1203 | 2024-04-26T17:36:44Z | 105 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"dataset:eli5_category",
"base_model:EleutherAI/gpt-neo-125m",
"base_model:finetune:EleutherAI/gpt-neo-125m",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-26T17:12:37Z | ---
license: mit
base_model: EleutherAI/gpt-neo-125M
tags:
- generated_from_trainer
datasets:
- eli5_category
model-index:
- name: causal
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# causal
This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on the eli5_category dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6742
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an approximate `TrainingArguments` equivalent follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
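A hedged reconstruction of these settings as `TrainingArguments` (the Adam betas and epsilon listed above are the transformers defaults, so they need no explicit flags; the output directory is illustrative):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="causal",  # illustrative
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```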
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.6553 | 1.0 | 1310 | 3.6762 |
| 3.5026 | 2.0 | 2620 | 3.6723 |
| 3.4328 | 3.0 | 3930 | 3.6742 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
thorirhrafn/gpt1B_rm_merged | thorirhrafn | 2024-04-26T17:36:17Z | 162 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-04-26T17:33:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
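No snippet is given; since the repo is tagged text-classification (a GPT-2-architecture reward model, judging by the name), a hedged scoring sketch:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "thorirhrafn/gpt1B_rm_merged"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Example response to score.", return_tensors="pt")
with torch.no_grad():
    reward = model(**inputs).logits
print(reward)
```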
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
anismahmahi/group3_non_all_zero_notEqualWeights | anismahmahi | 2024-04-26T17:35:59Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-04-26T17:19:45Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: group3_non_all_zero_notEqualWeights
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# group3_non_all_zero_notEqualWeights
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3167
- Precision: 0.0476
- Recall: 0.2642
- F1: 0.0807
- Accuracy: 0.9145
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 55 | 1.3844 | 0.0068 | 0.2579 | 0.0133 | 0.6506 |
| No log | 2.0 | 110 | 1.1245 | 0.0107 | 0.2342 | 0.0205 | 0.7285 |
| No log | 3.0 | 165 | 1.2261 | 0.0103 | 0.2120 | 0.0196 | 0.7286 |
| No log | 4.0 | 220 | 1.1828 | 0.0099 | 0.1693 | 0.0188 | 0.7551 |
| No log | 5.0 | 275 | 1.2474 | 0.0141 | 0.2152 | 0.0265 | 0.8008 |
| No log | 6.0 | 330 | 1.4395 | 0.0264 | 0.2516 | 0.0478 | 0.8601 |
| No log | 7.0 | 385 | 1.5667 | 0.0253 | 0.2278 | 0.0456 | 0.8614 |
| No log | 8.0 | 440 | 1.6080 | 0.0286 | 0.2468 | 0.0512 | 0.8756 |
| No log | 9.0 | 495 | 1.7798 | 0.0289 | 0.2358 | 0.0515 | 0.8849 |
| 0.6462 | 10.0 | 550 | 1.9265 | 0.0364 | 0.2579 | 0.0638 | 0.8933 |
| 0.6462 | 11.0 | 605 | 2.0633 | 0.0347 | 0.2468 | 0.0608 | 0.8911 |
| 0.6462 | 12.0 | 660 | 2.2610 | 0.0458 | 0.2690 | 0.0783 | 0.9138 |
| 0.6462 | 13.0 | 715 | 2.1700 | 0.0435 | 0.2595 | 0.0745 | 0.9044 |
| 0.6462 | 14.0 | 770 | 2.3153 | 0.0480 | 0.2690 | 0.0814 | 0.9127 |
| 0.6462 | 15.0 | 825 | 2.3167 | 0.0476 | 0.2642 | 0.0807 | 0.9145 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.13.3
|
zandfj/LLaMA2-7B-Chat-dpo-f-042618-moren | zandfj | 2024-04-26T17:32:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T17:31:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lgk03/NDD-phoenix_test-content_tags | lgk03 | 2024-04-26T17:30:54Z | 117 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-04-26T16:12:59Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: NDD-phoenix_test-content_tags
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NDD-phoenix_test-content_tags
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3591
- Accuracy: 0.8595
- F1: 0.8554
- Precision: 0.8702
- Recall: 0.8595
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.1629 | 0.9996 | 673 | 0.3110 | 0.8967 | 0.8965 | 0.8965 | 0.8967 |
| 0.115 | 1.9993 | 1346 | 0.3591 | 0.8595 | 0.8554 | 0.8702 | 0.8595 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
tariq9mehmood9/Mistral-7B-Instruct-v0.2-PEFT-adapters | tariq9mehmood9 | 2024-04-26T17:28:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T17:28:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hoaj/roberta-base-fb-housing-posts | hoaj | 2024-04-26T17:25:57Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-04-26T17:21:12Z | ---
license: mit
base_model: FacebookAI/roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-fb-housing-posts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-fb-housing-posts
This model is a fine-tuned version of [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2457
- Accuracy: 0.9412
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 55 | 0.2667 | 0.9091 |
| No log | 2.0 | 110 | 0.2477 | 0.9305 |
| No log | 3.0 | 165 | 0.2265 | 0.9412 |
| No log | 4.0 | 220 | 0.3048 | 0.9358 |
| No log | 5.0 | 275 | 0.2457 | 0.9412 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
ypl/bart_test_p2 | ypl | 2024-04-26T17:15:26Z | 177 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-04-24T08:00:23Z | ---
tags:
- generated_from_trainer
model-index:
- name: bart_test_p2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart_test_p2
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0076
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.018 | 0.18 | 500 | 0.0096 |
| 0.0189 | 0.35 | 1000 | 0.0097 |
| 0.0184 | 0.53 | 1500 | 0.0098 |
| 0.0167 | 0.7 | 2000 | 0.0094 |
| 0.0162 | 0.88 | 2500 | 0.0092 |
| 0.0162 | 1.05 | 3000 | 0.0086 |
| 0.0124 | 1.23 | 3500 | 0.0086 |
| 0.0127 | 1.4 | 4000 | 0.0084 |
| 0.0129 | 1.58 | 4500 | 0.0083 |
| 0.0123 | 1.75 | 5000 | 0.0080 |
| 0.0123 | 1.93 | 5500 | 0.0081 |
| 0.0104 | 2.1 | 6000 | 0.0079 |
| 0.0094 | 2.28 | 6500 | 0.0079 |
| 0.0103 | 2.45 | 7000 | 0.0077 |
| 0.01 | 2.63 | 7500 | 0.0077 |
| 0.0098 | 2.8 | 8000 | 0.0077 |
| 0.0095 | 2.98 | 8500 | 0.0076 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0.dev20230621+cu117
- Datasets 2.17.0
- Tokenizers 0.15.0
|
anismahmahi/group3_non_all_zero | anismahmahi | 2024-04-26T17:03:51Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-04-26T16:39:50Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: group3_non_all_zero
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# group3_non_all_zero
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0497
- Precision: 0.0638
- Recall: 0.2421
- F1: 0.1009
- Accuracy: 0.9339
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 55 | 1.1877 | 0.0140 | 0.25 | 0.0265 | 0.7339 |
| No log | 2.0 | 110 | 0.9789 | 0.0219 | 0.2041 | 0.0395 | 0.8081 |
| No log | 3.0 | 165 | 1.0274 | 0.0385 | 0.2437 | 0.0665 | 0.8703 |
| No log | 4.0 | 220 | 1.1138 | 0.0225 | 0.1820 | 0.0401 | 0.8343 |
| No log | 5.0 | 275 | 1.1690 | 0.0335 | 0.2184 | 0.0581 | 0.8702 |
| No log | 6.0 | 330 | 1.3425 | 0.0421 | 0.2310 | 0.0712 | 0.8972 |
| No log | 7.0 | 385 | 1.5089 | 0.0445 | 0.2342 | 0.0748 | 0.9079 |
| No log | 8.0 | 440 | 1.5614 | 0.0466 | 0.2453 | 0.0783 | 0.9119 |
| No log | 9.0 | 495 | 1.7200 | 0.0534 | 0.2453 | 0.0876 | 0.9220 |
| 0.5787 | 10.0 | 550 | 1.7086 | 0.0447 | 0.2453 | 0.0756 | 0.9098 |
| 0.5787 | 11.0 | 605 | 1.8784 | 0.0553 | 0.2342 | 0.0895 | 0.9263 |
| 0.5787 | 12.0 | 660 | 1.9659 | 0.0589 | 0.2421 | 0.0947 | 0.9299 |
| 0.5787 | 13.0 | 715 | 1.9472 | 0.0600 | 0.2437 | 0.0963 | 0.9297 |
| 0.5787 | 14.0 | 770 | 2.0058 | 0.0605 | 0.2373 | 0.0964 | 0.9310 |
| 0.5787 | 15.0 | 825 | 2.0497 | 0.0638 | 0.2421 | 0.1009 | 0.9339 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.13.3
|
nrshoudi/Whisper-small-L2Arctic | nrshoudi | 2024-04-26T17:03:21Z | 4 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:adapter:openai/whisper-small",
"license:apache-2.0",
"region:us"
] | null | 2024-04-26T17:03:19Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: openai/whisper-small
model-index:
- name: Whisper-small-L2Arctic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper-small-L2Arctic
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6319
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5321 | 1.0 | 450 | 0.5247 |
| 0.4321 | 2.0 | 900 | 0.5136 |
| 0.3659 | 3.0 | 1350 | 0.4786 |
| 0.313 | 4.0 | 1800 | 0.4708 |
| 0.2468 | 5.0 | 2250 | 0.4729 |
| 0.189 | 6.0 | 2700 | 0.5010 |
| 0.1223 | 7.0 | 3150 | 0.5433 |
| 0.0729 | 8.0 | 3600 | 0.5775 |
| 0.0363 | 9.0 | 4050 | 0.6103 |
| 0.0182 | 10.0 | 4500 | 0.6319 |
### Framework versions
- PEFT 0.8.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 |
n00854180t/ErisMaidFlame-7B-Q8_0-GGUF | n00854180t | 2024-04-26T17:00:35Z | 3 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:ChaoticNeutrals/Eris_Remix_DPO_7B",
"base_model:merge:ChaoticNeutrals/Eris_Remix_DPO_7B",
"base_model:nbeerbower/MaidFlameSoup-7B",
"base_model:merge:nbeerbower/MaidFlameSoup-7B",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T17:00:16Z | ---
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
base_model:
- nbeerbower/MaidFlameSoup-7B
- ChaoticNeutrals/Eris_Remix_DPO_7B
---
# n00854180t/ErisMaidFlame-7B-Q8_0-GGUF
This model was converted to GGUF format from [`n00854180t/ErisMaidFlame-7B`](https://huggingface.co/n00854180t/ErisMaidFlame-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/n00854180t/ErisMaidFlame-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo n00854180t/ErisMaidFlame-7B-Q8_0-GGUF --model erismaidflame-7b.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo n00854180t/ErisMaidFlame-7B-Q8_0-GGUF --model erismaidflame-7b.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m erismaidflame-7b.Q8_0.gguf -n 128
```
|
beaconva/model | beaconva | 2024-04-26T17:00:21Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-04-26T16:55:22Z | ---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1946
- Accuracy: 0.9576
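For a quick smoke test, the model can be loaded with the standard `pipeline` API. This is a sketch, not an official snippet: the label names and intended inputs are not documented, and since the base model is PhoBERT, inputs should ideally be word-segmented Vietnamese text:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="beaconva/model")
# Hypothetical example input ("I really like this product" in Vietnamese).
print(classifier("Tôi rất thích sản phẩm này"))
```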
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 101 | 1.0266 | 0.8504 |
| No log | 2.0 | 202 | 0.4850 | 0.9451 |
| No log | 3.0 | 303 | 0.2802 | 0.9551 |
| No log | 4.0 | 404 | 0.2025 | 0.9576 |
| 0.6615 | 5.0 | 505 | 0.2072 | 0.9501 |
| 0.6615 | 6.0 | 606 | 0.2131 | 0.9426 |
| 0.6615 | 7.0 | 707 | 0.2189 | 0.9551 |
| 0.6615 | 8.0 | 808 | 0.1967 | 0.9576 |
| 0.6615 | 9.0 | 909 | 0.1958 | 0.9576 |
| 0.0705 | 10.0 | 1010 | 0.1946 | 0.9576 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
vonewman/galsenai-meetup-adapter | vonewman | 2024-04-26T16:58:27Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2024-04-26T16:44:44Z | ---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
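No official snippet is provided yet. Based on the PEFT metadata above (an adapter for `mistralai/Mistral-7B-v0.1`), loading might look like this hedged sketch:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the Mistral base model, then attach this repository's adapter.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "vonewman/galsenai-meetup-adapter")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
```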
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 |
Nhoodie/Meta-Llama-3-8b-Lexi-Uninstruct-function-calling-json-mode-Task-Arithmetic-v0.1 | Nhoodie | 2024-04-26T16:54:59Z | 126 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"hiieu/Meta-Llama-3-8B-Instruct-function-calling-json-mode",
"Orenguteng/Lexi-Llama-3-8B-Uncensored",
"NousResearch/Meta-Llama-3-8B",
"NousResearch/Meta-Llama-3-8B-Instruct",
"conversational",
"base_model:NousResearch/Meta-Llama-3-8B",
"base_model:merge:NousResearch/Meta-Llama-3-8B",
"base_model:NousResearch/Meta-Llama-3-8B-Instruct",
"base_model:merge:NousResearch/Meta-Llama-3-8B-Instruct",
"base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"base_model:merge:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"base_model:hiieu/Meta-Llama-3-8B-Instruct-function-calling-json-mode",
"base_model:merge:hiieu/Meta-Llama-3-8B-Instruct-function-calling-json-mode",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-25T07:15:37Z | ---
tags:
- merge
- mergekit
- lazymergekit
- hiieu/Meta-Llama-3-8B-Instruct-function-calling-json-mode
- Orenguteng/Lexi-Llama-3-8B-Uncensored
- NousResearch/Meta-Llama-3-8B
- NousResearch/Meta-Llama-3-8B-Instruct
base_model:
- hiieu/Meta-Llama-3-8B-Instruct-function-calling-json-mode
- Orenguteng/Lexi-Llama-3-8B-Uncensored
- NousResearch/Meta-Llama-3-8B
- NousResearch/Meta-Llama-3-8B-Instruct
---
# Meta-Llama-3-8b-Lexi-Uninstruct-function-calling-json-mode-Task-Arithmetic-v0.1
Meta-Llama-3-8b-Lexi-Uninstruct-function-calling-json-mode-Task-Arithmetic-v0.1 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [hiieu/Meta-Llama-3-8B-Instruct-function-calling-json-mode](https://huggingface.co/hiieu/Meta-Llama-3-8B-Instruct-function-calling-json-mode)
* [Orenguteng/Lexi-Llama-3-8B-Uncensored](https://huggingface.co/Orenguteng/Lexi-Llama-3-8B-Uncensored)
* [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B)
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: hiieu/Meta-Llama-3-8B-Instruct-function-calling-json-mode
parameters:
weight: 1
layer_range: [0, 32]
- model: Orenguteng/Lexi-Llama-3-8B-Uncensored
parameters:
weight: 1
layer_range: [0, 32]
- model: NousResearch/Meta-Llama-3-8B
parameters:
weight: 0.3
layer_range: [0, 32]
- model: NousResearch/Meta-Llama-3-8B-Instruct
parameters:
weight: 0.7
layer_range: [0, 32]
merge_method: task_arithmetic
base_model: NousResearch/Meta-Llama-3-8B-Instruct
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
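To reproduce this merge locally, saving the YAML above to a file and running mergekit's CLI should suffice. A sketch, assuming a standard `mergekit` installation and enough disk space for the four source models:
```bash
pip install mergekit
mergekit-yaml config.yaml ./merged-model
```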
## 💻 Usage
```python
# Install dependencies first; in a notebook: !pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Nhoodie/Meta-Llama-3-8b-Lexi-Uninstruct-function-calling-json-mode-Task-Arithmetic-v0.1"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
PathofthePeople/myFirstLocationModel | PathofthePeople | 2024-04-26T16:50:39Z | 164 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-04-26T16:49:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
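No official snippet is provided. Given the repo metadata (a DistilBERT text-classification model), a hedged loading sketch might look like the following; the example input is a guess based on the model name, and the label mapping is unknown:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "PathofthePeople/myFirstLocationModel"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("123 Main Street, Springfield", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```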
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
adrianmedinav/whisper-small-peft-03-test | adrianmedinav | 2024-04-26T16:45:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T16:45:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Litzy619/V0424HMA14 | Litzy619 | 2024-04-26T16:44:52Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-04-26T04:39:12Z | ---
license: mit
base_model: microsoft/phi-2
tags:
- generated_from_trainer
model-index:
- name: V0424HMA14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0424HMA14
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0630
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` reconstruction follows the list):
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
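These settings map onto `transformers.TrainingArguments` roughly as follows; a hedged reconstruction with a hypothetical `output_dir` and defaults elsewhere. Note how the effective batch size works out: 8 per device x 16 accumulation steps = 128, matching the total above:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./V0424HMA14",  # hypothetical
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=16,  # 8 * 16 = total train batch size 128
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=100,
    num_train_epochs=3,
    fp16=True,  # "Native AMP" mixed-precision training
)
```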
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8628 | 0.09 | 10 | 0.5176 |
| 0.2396 | 0.18 | 20 | 0.1179 |
| 0.1148 | 0.27 | 30 | 0.0892 |
| 0.0925 | 0.36 | 40 | 0.0789 |
| 0.0835 | 0.45 | 50 | 0.0734 |
| 0.0872 | 0.54 | 60 | 0.0735 |
| 0.0757 | 0.63 | 70 | 0.0710 |
| 0.0728 | 0.73 | 80 | 0.0907 |
| 0.0898 | 0.82 | 90 | 0.0746 |
| 0.0858 | 0.91 | 100 | 0.0731 |
| 0.0852 | 1.0 | 110 | 0.0704 |
| 0.0589 | 1.09 | 120 | 0.0979 |
| 0.0715 | 1.18 | 130 | 0.0719 |
| 0.0714 | 1.27 | 140 | 0.0681 |
| 0.0674 | 1.36 | 150 | 0.0717 |
| 0.0745 | 1.45 | 160 | 0.0693 |
| 0.0691 | 1.54 | 170 | 0.0694 |
| 0.0733 | 1.63 | 180 | 0.0658 |
| 0.0598 | 1.72 | 190 | 0.0676 |
| 0.0683 | 1.81 | 200 | 0.0714 |
| 0.058 | 1.9 | 210 | 0.0663 |
| 0.0565 | 1.99 | 220 | 0.0635 |
| 0.0393 | 2.08 | 230 | 0.0740 |
| 0.0355 | 2.18 | 240 | 0.0752 |
| 0.0386 | 2.27 | 250 | 0.0688 |
| 0.0347 | 2.36 | 260 | 0.0681 |
| 0.0365 | 2.45 | 270 | 0.0675 |
| 0.034 | 2.54 | 280 | 0.0671 |
| 0.0307 | 2.63 | 290 | 0.0637 |
| 0.0326 | 2.72 | 300 | 0.0629 |
| 0.0351 | 2.81 | 310 | 0.0633 |
| 0.0302 | 2.9 | 320 | 0.0631 |
| 0.0337 | 2.99 | 330 | 0.0630 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
yiyic/llama-text-prop-lora-clf-epoch-0 | yiyic | 2024-04-26T16:43:58Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:adapter:meta-llama/Meta-Llama-3-8B",
"region:us"
] | null | 2024-04-26T16:43:55Z | ---
library_name: peft
base_model: meta-llama/Meta-Llama-3-8B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
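No official snippet is provided. Given the PEFT metadata (a LoRA adapter for the gated `meta-llama/Meta-Llama-3-8B`), loading might look like this hedged sketch; the `clf` suffix in the repo name suggests a classification use, in which case a sequence-classification head may be required instead:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# The base model is gated; request access on its model page first.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")
model = PeftModel.from_pretrained(base, "yiyic/llama-text-prop-lora-clf-epoch-0")
```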
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
yiyic/llama-text-entprop-lora-clf-epoch-0 | yiyic | 2024-04-26T16:43:49Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:adapter:meta-llama/Meta-Llama-3-8B",
"region:us"
] | null | 2024-04-26T16:43:47Z | ---
library_name: peft
base_model: meta-llama/Meta-Llama-3-8B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |