| modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-16 00:42:46) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 522 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-16 00:42:16) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
prince-canuma/Damysus-2.7B-Chat-GGUF | prince-canuma | 2024-02-17T10:30:24Z | 26 | 0 | transformers | [
"transformers",
"gguf",
"phi",
"text-generation",
"nlp",
"phi-2",
"instruct",
"conversational",
"custom_code",
"en",
"dataset:Open-Orca/SlimOrca",
"dataset:prince-canuma/TinyOrca",
"base_model:microsoft/phi-2",
"base_model:quantized:microsoft/phi-2",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-15T13:19:37Z | ---
language:
- en
license: mit
library_name: transformers
tags:
- nlp
- phi
- phi-2
- instruct
base_model:
- microsoft/phi-2
datasets:
- Open-Orca/SlimOrca
- prince-canuma/TinyOrca
model-index:
- name: Damysus-2.7B-Chat
results:
- task:
type: text-generation
metrics:
- name: Average
type: Average
value: 60.49
verified: true
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- task:
type: text-generation
dataset:
name: ARC (25-shot)
type: ai2_arc
metrics:
- name: Accuracy Norm
type: acc_norm
value: 59.81
verified: true
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- task:
type: text-generation
dataset:
name: Hellaswag (10-shot)
type: Hellaswag
metrics:
- name: Accuracy Norm
type: acc
value: 74.52
verified: true
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- task:
type: text-generation
dataset:
name: MMLU (5-shot)
type: MMLU
metrics:
- name: Accuracy
type: acc
value: 56.33
verified: true
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- task:
type: text-generation
dataset:
name: Truthful QA
type: Truthful_QA
metrics:
- name: Multi-true
type: mc2
value: 46.74
verified: true
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- task:
type: text-generation
dataset:
name: Winogrande (5-shot)
type: Winogrande
metrics:
- name: Accuracy
type: acc
value: 75.06
verified: true
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- task:
type: text-generation
dataset:
name: GSM8K (5-shot)
type: GSM8K
metrics:
- name: Accuracy
type: acc
value: 50.64
verified: true
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
---
# Model Summary
<img src="Damysus.png" width="500" alt="Damysus - the fastest giant"/>
<!-- Provide a quick summary of what the model is/does. -->
This model is a GGUF version of [Damysus-2.7B-Chat](https://huggingface.co/prince-canuma/Damysus-2.7B-Chat).
## Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [Prince Canuma](https://huggingface.co/prince-canuma)
- **Model type:** Transformer
- **License:** MIT
- **Finetuned from model:** microsoft/phi-2
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
You can use this model to build local/cloud RAG applications.
It can serve as the:
- Answer synthesizer,
- Summarizer,
- Or query rewriter model.
### Limitations
This model inherits some of the base model's limitations, such as:
- Inaccurate code and facts: the model may produce incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
- Limited scope for code: the majority of Phi-2's training data is Python-based and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that use other packages, or scripts in other languages, we strongly recommend that users manually verify all API uses.
- Language limitations: the model is primarily designed to understand standard English. Informal English, slang, or other languages may pose challenges to its comprehension, leading to potential misinterpretations or errors in its responses.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download prince-canuma/Damysus-2.7B-Chat-GGUF Damysus-2.7B-Chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download prince-canuma/Damysus-2.7B-Chat-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download prince-canuma/Damysus-2.7B-Chat-GGUF Damysus-2.7B-Chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -m ../Damysus-2.7B-Chat-GGUF/Damysus-2.7B-Chat.Q4_K_M.gguf \
--color -c 2048 --temp 0 \
--prompt "<|im_start|>system\nYou are a helpful assistant. Please keep your answers short.<|im_end|>\n<|im_start|>user\nCount to ten<|im_end|>\n" \
-n 256 --in-suffix "<|im_start|>assistant\n" -r "User:" -e --verbose-prompt
```
or
```shell
./main -m ../Damysus-2.7B-Chat-GGUF/Damysus-2.7B-Chat.Q4_K_M.gguf \
--color -c 2048 --temp 0 \
-p "You are a helpful assistant. Please keep your answers short." -n 256 --in-suffix "<|im_start|>assistant\n" \
-r "User:" -e --verbose-prompt -cml
```
- `-ngl N` offloads N layers to the GPU. Remove it if you don't have GPU acceleration.
- `-c 2048` sets the desired sequence length. For extended-sequence models - e.g. 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
- Add the `-i -ins` or `-cml` argument for an interactive, chat-style conversation.
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) or run:
```shell
./main --help
```
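If you prefer to stay in Python, the GGUF file can also be loaded with the `llama-cpp-python` bindings. The snippet below is a minimal sketch, not an official example: it assumes `llama-cpp-python` is installed (`pip3 install llama-cpp-python`) and that the Q4_K_M file has been downloaded to the current directory as shown above.
```python
from llama_cpp import Llama

# Load the quantized model; n_ctx matches the -c 2048 used in the commands above.
# chat_format="chatml" matches the <|im_start|>/<|im_end|> prompt format.
llm = Llama(
    model_path="Damysus-2.7B-Chat.Q4_K_M.gguf",
    n_ctx=2048,
    chat_format="chatml",
)

# OpenAI-style chat API; llama-cpp-python applies the chat template for us.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant. Please keep your answers short."},
        {"role": "user", "content": "Count to ten"},
    ],
    max_tokens=256,
    temperature=0,
)
print(response["choices"][0]["message"]["content"])
```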
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
For fine-tuning, I used the [SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca) dataset, a meticulously curated subset of the broader OpenOrca dataset.
SlimOrca offers an efficient means of reaching performance on par with using larger slices of [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca), while only including ~500k GPT-4 completions.
Subsequently, two distinct subsets were crafted, comprising 102,000 and 1,000 samples, denoted as:
- [prince-canuma/SmallOrca](https://huggingface.co/datasets/prince-canuma/SmallOrca)
- [prince-canuma/TinyOrca](https://huggingface.co/datasets/prince-canuma/TinyOrca)
Although experimentation was conducted with both datasets, optimal results were achieved through fine-tuning on a modest set of 200 samples.
Notably, the investigation revealed that augmenting the training data beyond this threshold predominantly enhanced the model's proficiency in generating Chain-of-Thought responses.
However, it is imperative to note that the preference for Chain-of-Thought responses may not be universally applicable. Particularly in scenarios like the RAG setup,
succinct answers to prompts are often favored, especially for straightforward queries.
### Training Procedure
#### Preprocessing
1. Convert dataset to chatML format
2. Remove all samples with more than 2048 tokens (Phi-2 context size)
3. Mask instructions (System and User) at training time.
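For illustration, here is a minimal sketch of steps 1 and 2 for a SlimOrca-style sample (step 3, instruction masking, is handled by the data collator listed under Trainer below). The field names (`conversations`, `from`, `value`) follow the SlimOrca schema; the original preprocessing script is not published here, so treat this as an assumption rather than the exact code used.
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

MAX_TOKENS = 2048  # Phi-2 context size
ROLE_MAP = {"system": "system", "human": "user", "gpt": "assistant"}

def to_chatml(sample):
    # Render a SlimOrca-style sample ("conversations" list of {"from", "value"} turns)
    # as a single ChatML string.
    parts = []
    for turn in sample["conversations"]:
        role = ROLE_MAP[turn["from"]]
        parts.append(f"<|im_start|>{role}\n{turn['value']}<|im_end|>\n")
    return {"text": "".join(parts)}

def fits_in_context(sample):
    # Keep only samples that fit in the Phi-2 context window.
    return len(tokenizer(sample["text"])["input_ids"]) <= MAX_TOKENS

# dataset = dataset.map(to_chatml).filter(fits_in_context)
```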
#### LoRA Config
- **lora_alpha:** 128,
- **lora_dropout:** 0.05,
- **r:** 256,
- **bias:** "none",
- **target_modules:** "all-linear",
- **task_type:** "CAUSAL_LM",
#### Training Hyperparameters
- **Training regime:** bf16 mixed precision, <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
- **max_steps:** 100,
- **per_device_train_batch_size:** 2,
- **gradient_accumulation_steps:** 2,
- **optim:** "adamw_torch_fused",
- **learning_rate:** 2e-4,
- **max_grad_norm:** 0.3,
- **warmup_ratio:** 0.03,
- **lr_scheduler_type:** "constant",
#### Trainer
- **max_seq_length:** 1744,
- **data_collator:** DataCollatorForCompletionOnlyLM
## Evaluation
<img src="truthfulQA.png" width="800" alt="Damysus-2.7B-chat truthfulQA benchmark results"/>
<!-- This section describes the evaluation protocols and provides the results. -->
We evaluate models on 6 key benchmarks using the EleutherAI Language Model Evaluation Harness, a unified framework for testing generative language models on a large number of different evaluation tasks.
- AI2 Reasoning Challenge (25-shot) - a set of grade-school science questions.
- HellaSwag (10-shot) - a test of commonsense inference, which is easy for humans (~95%) but challenging for SOTA models.
- MMLU (5-shot) - a test to measure a text model's multitask accuracy. The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more.
- TruthfulQA (0-shot) - a test to measure a model's propensity to reproduce falsehoods commonly found online. Note: TruthfulQA is technically a 6-shot task in the Harness because each example is prepended with 6 Q/A pairs, even in the 0-shot setting.
- Winogrande (5-shot) - an adversarial and difficult Winograd benchmark at scale, for commonsense reasoning.
- GSM8k (5-shot) - diverse grade school math word problems to measure a model's ability to solve multi-step mathematical reasoning problems.
For all these evaluations, a higher score is a better score. We chose these benchmarks as they test a variety of reasoning and general knowledge across a wide variety of fields in 0-shot and few-shot settings.
Read more [here](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
### Results
| Model | AVG | ARC | Hellaswag | MMLU | Truthful QA | Winogrande | GSM8K |
|-------|--------:|------:|----------:|-----:|----------:|----------:|----------:|
| [NousResearch/Nous-Puffin-70B](https://huggingface.co/NousResearch/Nous-Puffin-70B) | 64.91 | 67.41 | 87.37 | 69.77 | 46.77 | 83.9 | 34.27 |
| [TheBloke/Llama-2-70B-fp16](https://huggingface.co/TheBloke/Llama-2-70B-fp16) | 64.52 | 67.32 | 87.33 | 69.83 | 44.92 | 83.74 | 33.97 |
| [NousResearch/Yarn-Mistral-7B-64k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-64k) | 59.63 | 59.9 | 82.51 | 62.96 | 41.86 | 77.27 | 33.28 |
| [Qwen1.5-4B-Chat](https://huggingface.co/Qwen/Qwen1.5-4B-Chat) | 46.79 | 43.26 | 69.73 | 55.55 | 44.79 | 64.96 | 2.43 |
| [Microsoft/phi-2](https://huggingface.co/microsoft/phi-2) | 61.33 | 61.09 | 75.11 | 58.11 | 44.47 | 74.35 | 54.81 |
| [Damysus-2.7B-Chat](https://huggingface.co/prince-canuma/Damysus-2.7B-Chat) (Ours) | 60.49 | 59.81 | 74.52 | 56.33 | **46.74** | **75.06** | 50.64 |
## Technical Specifications
### Compute Infrastructure
- Modal Labs
#### Hardware
- OS: Linux
- GPU: A10G
#### Libraries
- TRL
- Transformers
- PEFT
- Datasets
- Accelerate
- torch
- Wandb
- Bitsandbytes
- Plotly
## Future work
I plan to explore the following tuning setups:
- Function calling
- DPO
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@misc{Damysus-2.7B-Chat,
title={Damysus-2.7B-Chat},
author={Prince Canuma},
year={2024},
}
```
```bibtex
@misc{SlimOrca,
title = {SlimOrca: An Open Dataset of GPT-4 Augmented FLAN Reasoning Traces, with Verification},
author = {Wing Lian and Guan Wang and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
year = {2023},
publisher = {HuggingFace},
url = {https://huggingface.co/Open-Orca/SlimOrca}
}
```
```bibtex
@misc{open-llm-leaderboard,
author = {Edward Beeching and Clémentine Fourrier and Nathan Habib and Sheon Han and Nathan Lambert and Nazneen Rajani and Omar Sanseviero and Lewis Tunstall and Thomas Wolf},
title = {Open LLM Leaderboard},
year = {2023},
publisher = {Hugging Face},
howpublished = "\url{https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard}"
}
```
|
Bajiyo/malayalam_imasc | Bajiyo | 2024-02-17T10:29:47Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-02-16T11:27:48Z | ---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: malayalam_imasc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# malayalam_imasc
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0305
- Wer: 21.4941
## Model description
More information needed
## Intended uses & limitations
More information needed
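As a starting point, the model should load with the standard 🤗 Transformers speech-recognition pipeline. The snippet below is a minimal, unofficial sketch; the audio file path is a placeholder.
```python
import torch
from transformers import pipeline

device = "cuda:0" if torch.cuda.is_available() else "cpu"

# Whisper-small fine-tune; the pipeline handles feature extraction and decoding.
asr = pipeline(
    "automatic-speech-recognition",
    model="Bajiyo/malayalam_imasc",
    chunk_length_s=30,  # process long audio in 30-second chunks
    device=device,
)

result = asr("sample_malayalam.wav")  # path to a local audio file (placeholder)
print(result["text"])
```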
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1047 | 0.26 | 500 | 0.0947 | 47.9746 |
| 0.0583 | 0.52 | 1000 | 0.0538 | 32.2169 |
| 0.0457 | 0.77 | 1500 | 0.0374 | 24.9007 |
| 0.029 | 1.03 | 2000 | 0.0305 | 21.4941 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
wanggy1/classify-cognitive-distortions-llama2-13b | wanggy1 | 2024-02-17T10:24:30Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-17T10:24:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Ricosama/outputM1 | Ricosama | 2024-02-17T10:21:10Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mixtral-8x7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
]
| null | 2024-02-17T10:20:03Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
model-index:
- name: outputM1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputM1
This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
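Since this repository contains a PEFT (LoRA) adapter for Mixtral-8x7B-Instruct-v0.1, one way to load it is with `AutoPeftModelForCausalLM`, which resolves the base model from the adapter config. The snippet below is a minimal, unofficial sketch; in practice, quantization or multi-GPU device placement will likely be needed given the size of the base model.
```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads mistralai/Mixtral-8x7B-Instruct-v0.1 (taken from the adapter config)
# and attaches the LoRA weights stored in this repo.
model = AutoPeftModelForCausalLM.from_pretrained(
    "Ricosama/outputM1",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")

inputs = tokenizer("[INST] Hello, who are you? [/INST]", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```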
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2 |
washeed/audio-transcribe | washeed | 2024-02-17T10:18:59Z | 64 | 3 | transformers | [
"transformers",
"pytorch",
"jax",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-02-03T11:12:46Z | # To run
First, install Chocolatey by running this in your cmd:
```
@"%SystemRoot%\System32\WindowsPowerShell\v1.0\powershell.exe" -NoProfile -InputFormat None -ExecutionPolicy Bypass -Command "[System.Net.ServicePointManager]::SecurityProtocol = 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))" && SET "PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin"
```
# After that, install ffmpeg on your device with Chocolatey by running this in cmd:
```
choco install ffmpeg
```
# Install the dependencies in your Python environment using:
```
pip install --upgrade pip
pip install --upgrade git+https://github.com/huggingface/transformers.git accelerate datasets[audio]
```
# Then, finally, to run inference with the model:
```
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model_id = "washeed/audio-transcribe"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_id)
pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
max_new_tokens=128,
chunk_length_s=30,
batch_size=16,
return_timestamps=True,
torch_dtype=torch_dtype,
device=device,
)
result = pipe("audio.mp3")
print(result["text"])
```
# If you want to transcribe instead of translating, just replace:
```result = pipe("audio.mp3")```
# with
```result = pipe("audio.mp3", generate_kwargs={"task": "transcribe"})```
|
Makengo/HasskakuForRunpod | Makengo | 2024-02-17T10:18:08Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2024-01-25T12:09:57Z | ---
license: openrail
---
This is just for downloading to RunPod.
This model is not mine. |
prince-canuma/Damysus-2.7B-Chat | prince-canuma | 2024-02-17T10:13:39Z | 59 | 4 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"nlp",
"phi-2",
"instruct",
"conversational",
"custom_code",
"en",
"dataset:Open-Orca/SlimOrca",
"dataset:prince-canuma/TinyOrca",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:mit",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-11T14:43:52Z | ---
language:
- en
license: mit
library_name: transformers
tags:
- nlp
- phi
- phi-2
- instruct
base_model:
- microsoft/phi-2
datasets:
- Open-Orca/SlimOrca
- prince-canuma/TinyOrca
model-index:
- name: Damysus-2.7B-Chat
results:
- task:
type: text-generation
metrics:
- name: Average
type: Average
value: 60.49
verified: true
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- task:
type: text-generation
dataset:
name: ARC (25-shot)
type: ai2_arc
metrics:
- name: Accuracy Norm
type: acc_norm
value: 59.81
verified: true
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- task:
type: text-generation
dataset:
name: Hellaswag (10-shot)
type: Hellaswag
metrics:
- name: Accuracy Norm
type: acc
value: 74.52
verified: true
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- task:
type: text-generation
dataset:
name: MMLU (5-shot)
type: MMLU
metrics:
- name: Accuracy
type: acc
value: 56.33
verified: true
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- task:
type: text-generation
dataset:
name: Truthful QA
type: Truthful_QA
metrics:
- name: Multi-true
type: mc2
value: 46.74
verified: true
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- task:
type: text-generation
dataset:
name: Winogrande (5-shot)
type: Winogrande
metrics:
- name: Accuracy
type: acc
value: 75.06
verified: true
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- task:
type: text-generation
dataset:
name: GSM8K (5-shot)
type: GSM8K
metrics:
- name: Accuracy
type: acc
value: 50.64
verified: true
source:
name: Open LLM Leaderboard
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
---
# Model Summary
<img src="Damysus.png" width="500" alt="Damysus - the fastest giant"/>
<!-- Provide a quick summary of what the model is/does. -->
This model is an instruction-tuned version of Phi-2, a 2.7-billion-parameter Transformer model from Microsoft.
The model has undergone further training to better follow specific user instructions, enhancing its ability to perform tasks as directed and improving its interactions with users.
This additional training helps the model understand context better, generate more accurate and relevant responses, and adapt to a wide range of language-based tasks, such as:
- Question answering,
- Data extraction,
- Structured outputs (e.g., JSON outputs),
- And providing explanations.
## Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [Prince Canuma](https://huggingface.co/prince-canuma)
- **Model type:** Transformer
- **License:** MIT
- **Finetuned from model:** microsoft/phi-2
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
You can use this model to build local/cloud RAG applications.
It can serve as the:
- Answer synthesizer,
- Summarizer,
- Or query rewriter model.
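As an illustration of the answer-synthesizer role, here is a minimal sketch that passes retrieved passages to the model through its chat template. The passages, question, and prompt wording are hypothetical; retrieval itself is out of scope here.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("prince-canuma/Damysus-2.7B-Chat")
model = AutoModelForCausalLM.from_pretrained("prince-canuma/Damysus-2.7B-Chat")

# Passages returned by your retriever (hypothetical examples).
context = [
    "Damysus was, in Greek mythology, the fastest of the Giants.",
    "Phi-2 is a 2.7-billion-parameter Transformer released by Microsoft.",
]
question = "What is Damysus known for?"

prompt = (
    "Answer the question using only the context below.\n\nContext:\n"
    + "\n".join(f"- {c}" for c in context)
    + f"\n\nQuestion: {question}"
)

inputs = tokenizer.apply_chat_template(
    [
        {"role": "system", "content": "You are a helpful assistant that answers concisely."},
        {"role": "user", "content": prompt},
    ],
    add_generation_prompt=True,
    return_tensors="pt",
)

outputs = model.generate(inputs, do_sample=False, max_new_tokens=128)
print(tokenizer.batch_decode(outputs[:, inputs.shape[1]:], skip_special_tokens=True)[0])
```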
### Limitations
This model inherits some of the base model's limitations, such as:
- Inaccurate code and facts: the model may produce incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
- Limited scope for code: the majority of Phi-2's training data is Python-based and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that use other packages, or scripts in other languages, we strongly recommend that users manually verify all API uses.
- Language limitations: the model is primarily designed to understand standard English. Informal English, slang, or other languages may pose challenges to its comprehension, leading to potential misinterpretations or errors in its responses.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import pipeline, Conversation
chatbot = pipeline("conversational", model="prince-canuma/Damysus-2.7B-Chat")
conversation = Conversation("I'm looking for a movie - what's your favourite one?")
output = chatbot(conversation)
print(output)
```
Or you can instantiate the model and tokenizer directly:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("prince-canuma/Damysus-2.7B-Chat")
model = AutoModelForCausalLM.from_pretrained("prince-canuma/Damysus-2.7B-Chat").to("cuda")
inputs = tokenizer.apply_chat_template(
[
{"content":"You are an helpful AI assistant","role":"system"},
{"content":"I'm looking for a movie - what's your favourite one?","role":"user"},
], add_generation_prompt=True, return_tensors="pt",
).to("cuda")
outputs = model.generate(inputs, do_sample=False, max_new_tokens=256)
input_length = inputs.shape[1]
print(tokenizer.batch_decode(outputs[:, input_length:], skip_special_tokens=True)[0])
```
Output:
```shell
My favorite movie is "The Shawshank Redemption."
It's a powerful and inspiring story about hope, friendship, and redemption.
The performances by Tim Robbins and Morgan Freeman are exceptional,
and the film's themes and messages are timeless.
I highly recommend it to anyone who enjoys a well-crafted and emotionally engaging story.
```
### Structured Output
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("prince-canuma/Damysus-2.7B-Chat")
model = AutoModelForCausalLM.from_pretrained("prince-canuma/Damysus-2.7B-Chat").to("cuda")
inputs = tokenizer.apply_chat_template(
[
{"content":"You are a Robot that ONLY outputs JSON. Use this structure: {'entities': [{'type':..., 'name':...}]}.","role":"system"},
{"content":""""Extract the entities of type 'technology' and 'file_type' in JSON format from the following passage: AI is a transformative
force in document processing employing technologies such as 'Machine Learning (ML), Natural Language Processing (NLP) and
Optical Character Recognition (OCR) to understand, interpret, and summarize text. These technologies enhance accuracy,
increase efficiency, and allow you and your company to process high volumes of data in short amount of time.
For instance, you can easily extract key points and summarize a large PDF document (i.e., 500 pages) in just a few seconds.""",
"role":"user"},
], add_generation_prompt=True, return_tensors="pt",
).to("cuda")
outputs = model.generate(inputs, do_sample=False, max_new_tokens=256)
input_length = inputs.shape[1]
print(tokenizer.batch_decode(outputs[:, input_length:], skip_special_tokens=True)[0])
```
Output:
```json
{
"entities": [
{
"type": "technology",
"name": "Machine Learning (ML)"
},
{
"type": "technology",
"name": "Natural Language Processing (NLP)"
},
{
"type": "technology",
"name": "Optical Character Recognition (OCR)"
},
{
"type": "file_type",
"name": "PDF"
}
]
}
```
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
For fine-tuning, I used the [SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca) dataset, a meticulously curated subset of the broader OpenOrca dataset.
SlimOrca offers an efficient means of reaching performance on par with using larger slices of [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca), while only including ~500k GPT-4 completions.
Subsequently, two distinct subsets were crafted, comprising 102,000 and 1,000 samples, denoted as:
- [prince-canuma/SmallOrca](https://huggingface.co/datasets/prince-canuma/SmallOrca)
- [prince-canuma/TinyOrca](https://huggingface.co/datasets/prince-canuma/TinyOrca)
Although experimentation was conducted with both datasets, optimal results were achieved through fine-tuning on a modest set of 200 samples.
Notably, the investigation revealed that augmenting the training data beyond this threshold predominantly enhanced the model's proficiency in generating Chain-of-Thought responses.
However, it is imperative to note that the preference for Chain-of-Thought responses may not be universally applicable. Particularly in scenarios like the RAG setup,
succinct answers to prompts are often favored, especially for straightforward queries.
### Training Procedure
#### Preprocessing
1. Convert dataset to chatML format
2. Remove all samples with more than 2048 tokens (Phi-2 context size)
3. Mask instructions (System and User) at training time.
#### LoRA Config
- **lora_alpha:** 128,
- **lora_dropout:** 0.05,
- **r:** 256,
- **bias:** "none",
- **target_modules:** "all-linear",
- **task_type:** "CAUSAL_LM",
#### Training Hyperparameters
- **Training regime:** bf16 mixed precision, <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
- **max_steps:** 100,
- **per_device_train_batch_size:** 2,
- **gradient_accumulation_steps:** 2,
- **optim:** "adamw_torch_fused",
- **learning_rate:** 2e-4,
- **max_grad_norm:** 0.3,
- **warmup_ratio:** 0.03,
- **lr_scheduler_type:** "constant",
#### Trainer
- **max_seq_length:** 1744,
- **data_collator:** DataCollatorForCompletionOnlyLM
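Putting the settings above together, the sketch below shows how they map onto `peft.LoraConfig`, `transformers.TrainingArguments`, and TRL's `SFTTrainer` with `DataCollatorForCompletionOnlyLM`. It is a minimal sketch, not the original training script: the dataset split, the `text` field, and the ChatML response template string are assumptions.
```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DataCollatorForCompletionOnlyLM, SFTTrainer

model_id = "microsoft/phi-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

peft_config = LoraConfig(
    lora_alpha=128,
    lora_dropout=0.05,
    r=256,
    bias="none",
    target_modules="all-linear",
    task_type="CAUSAL_LM",
)

args = TrainingArguments(
    output_dir="damysus-2.7b-chat",
    max_steps=100,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=2,
    optim="adamw_torch_fused",
    learning_rate=2e-4,
    max_grad_norm=0.3,
    warmup_ratio=0.03,
    lr_scheduler_type="constant",
    bf16=True,  # bf16 mixed precision
)

# Mask everything before the assistant turn so the loss is computed on
# completions only (the instruction-masking step described above).
collator = DataCollatorForCompletionOnlyLM(
    response_template="<|im_start|>assistant", tokenizer=tokenizer
)

# Assumes the samples have already been rendered to ChatML text in a "text" field.
dataset = load_dataset("prince-canuma/TinyOrca", split="train")

trainer = SFTTrainer(
    model=model,
    args=args,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=1744,
    peft_config=peft_config,
    data_collator=collator,
)
trainer.train()
```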
## Evaluation
<img src="truthfulQA.png" width="800" alt="Damysus-2.7B-chat truthfulQA benchmark results"/>
<!-- This section describes the evaluation protocols and provides the results. -->
We evaluate models on 6 key benchmarks using the EleutherAI Language Model Evaluation Harness, a unified framework for testing generative language models on a large number of different evaluation tasks.
- AI2 Reasoning Challenge (25-shot) - a set of grade-school science questions.
- HellaSwag (10-shot) - a test of commonsense inference, which is easy for humans (~95%) but challenging for SOTA models.
- MMLU (5-shot) - a test to measure a text model's multitask accuracy. The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more.
- TruthfulQA (0-shot) - a test to measure a model's propensity to reproduce falsehoods commonly found online. Note: TruthfulQA is technically a 6-shot task in the Harness because each example is prepended with 6 Q/A pairs, even in the 0-shot setting.
- Winogrande (5-shot) - an adversarial and difficult Winograd benchmark at scale, for commonsense reasoning.
- GSM8k (5-shot) - diverse grade school math word problems to measure a model's ability to solve multi-step mathematical reasoning problems.
For all these evaluations, a higher score is a better score. We chose these benchmarks as they test a variety of reasoning and general knowledge across a wide variety of fields in 0-shot and few-shot settings.
Read more [here](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
### Results
| Model | AVG | ARC | Hellaswag | MMLU | Truthful QA | Winogrande | GSM8K |
|-------|--------:|------:|----------:|-----:|----------:|----------:|----------:|
| [NousResearch/Nous-Puffin-70B](https://huggingface.co/NousResearch/Nous-Puffin-70B) | 64.91 | 67.41 | 87.37 | 69.77 | 46.77 | 83.9 | 34.27 |
| [TheBloke/Llama-2-70B-fp16](https://huggingface.co/TheBloke/Llama-2-70B-fp16) | 64.52 | 67.32 | 87.33 | 69.83 | 44.92 | 83.74 | 33.97 |
| [NousResearch/Yarn-Mistral-7B-64k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-64k) | 59.63 | 59.9 | 82.51 | 62.96 | 41.86 | 77.27 | 33.28 |
| [Qwen1.5-4B-Chat](https://huggingface.co/Qwen/Qwen1.5-4B-Chat) | 46.79 | 43.26 | 69.73 | 55.55 | 44.79 | 64.96 | 2.43 |
| [Microsoft/phi-2](https://huggingface.co/microsoft/phi-2) | 61.33 | 61.09 | 75.11 | 58.11 | 44.47 | 74.35 | 54.81 |
| [Damysus-2.7B-Chat](https://huggingface.co/prince-canuma/Damysus-2.7B-Chat) (Ours) | 60.49 | 59.81 | 74.52 | 56.33 | **46.74** | **75.06** | 50.64 |
## Technical Specifications
### Compute Infrastructure
- Modal Labs
#### Hardware
- OS: Linux
- GPU: A10G
#### Libraries
- TRL
- Transformers
- PEFT
- Datasets
- Accelerate
- torch
- Wandb
- Bitsandbytes
- Plotly
## Future work
I plan to explore the following tuning setups:
- Function calling
- DPO
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@misc{Damysus-2.7B-Chat,
title={Damysus-2.7B-Chat},
author={Prince Canuma},
year={2024},
}
```
```bibtex
@misc{SlimOrca,
title = {SlimOrca: An Open Dataset of GPT-4 Augmented FLAN Reasoning Traces, with Verification},
author = {Wing Lian and Guan Wang and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
year = {2023},
publisher = {HuggingFace},
url = {https://huggingface.co/Open-Orca/SlimOrca}
}
```
```bibtex
@misc{open-llm-leaderboard,
author = {Edward Beeching and Clémentine Fourrier and Nathan Habib and Sheon Han and Nathan Lambert and Nazneen Rajani and Omar Sanseviero and Lewis Tunstall and Thomas Wolf},
title = {Open LLM Leaderboard},
year = {2023},
publisher = {Hugging Face},
howpublished = "\url{https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard}"
}
```
|
sunyijia97/falcon-7b-qlora-cstuqa-v6 | sunyijia97 | 2024-02-17T10:06:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-17T10:05:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
FINNUMBER/Yi-Ko-6B-Finch-NQA-EXT-400-epoch8 | FINNUMBER | 2024-02-17T10:01:05Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-17T09:25:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hs4jk24erfc/fine_tuned_model_16_02 | hs4jk24erfc | 2024-02-17T09:40:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-17T09:39:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
FINNUMBER/Yi-Ko-6B-Finch-NQA-ARI-100-epoch16 | FINNUMBER | 2024-02-17T09:24:59Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-17T08:27:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
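In the absence of an official snippet, a minimal sketch using the standard 🤗 Transformers causal-LM API might look like the following; the precision and device settings are assumptions, not recommendations from the authors.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FINNUMBER/Yi-Ko-6B-Finch-NQA-ARI-100-epoch16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: half precision so the 6B model fits on one GPU
    device_map="auto",
)

prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```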
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alwayssaltyourpasta/my_awesome_mind_model | alwayssaltyourpasta | 2024-02-17T09:17:14Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:minds14",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2024-02-16T22:26:08Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- minds14
metrics:
- accuracy
model-index:
- name: my_awesome_mind_model
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: minds14
type: minds14
config: en-US
split: train
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.05309734513274336
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_mind_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6632
- Accuracy: 0.0531
## Model description
More information needed
## Intended uses & limitations
More information needed
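Although the card includes no usage code, the tags indicate a Wav2Vec2 audio-classification checkpoint, so a hypothetical inference call could look like this (the file name is a placeholder; given the reported ~5% accuracy, treat it as an API demo rather than a usable classifier):

```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="alwayssaltyourpasta/my_awesome_mind_model",
)
# Expects speech audio; MInDS-14 clips are 16 kHz mono recordings.
print(classifier("sample_call.wav"))
```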
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.8 | 3 | 2.6455 | 0.0531 |
| No log | 1.87 | 7 | 2.6547 | 0.0708 |
| 2.6354 | 2.93 | 11 | 2.6574 | 0.0796 |
| 2.6354 | 4.0 | 15 | 2.6594 | 0.0531 |
| 2.6354 | 4.8 | 18 | 2.6640 | 0.0442 |
| 2.6333 | 5.87 | 22 | 2.6637 | 0.0619 |
| 2.6333 | 6.93 | 26 | 2.6632 | 0.0531 |
| 2.626 | 8.0 | 30 | 2.6632 | 0.0531 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
|
FINNUMBER/Yi-Ko-6B-Finch-SA-ESG-100-epoch16 | FINNUMBER | 2024-02-17T09:10:57Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-17T07:58:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Gordon119/TAT-openai-whisper-large-v2-special-tag-epoch1-total5epoch | Gordon119 | 2024-02-17T09:09:56Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-17T09:09:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sunyijia97/falcon-7b-qlora-cstuqa-v5 | sunyijia97 | 2024-02-17T09:08:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-17T09:08:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hoanghoavienvo/roberta-base-detect-cheapfake-combined-train-test-2200-2-8 | hoanghoavienvo | 2024-02-17T09:06:47Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-17T08:46:59Z | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta-base-detect-cheapfake-combined-train-test-2200-2-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-detect-cheapfake-combined-train-test-2200-2-8
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4708
- Accuracy: 0.8
- F1: 0.7701
## Model description
More information needed
## Intended uses & limitations
More information needed
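The tags mark this as a RoBERTa sequence classifier; the expected input format (single caption versus caption pair) is not documented, so the sentence-pair call below is purely illustrative.

```python
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="hoanghoavienvo/roberta-base-detect-cheapfake-combined-train-test-2200-2-8",
)
# Pair input is an assumption about how the classifier was trained.
print(detector({"text": "Caption of the image.", "text_pair": "Claim made about the image."}))
```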
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 69 | 0.6564 | 0.535 | 0.0971 |
| No log | 2.0 | 138 | 0.5171 | 0.725 | 0.6995 |
| No log | 3.0 | 207 | 0.4709 | 0.77 | 0.7195 |
| No log | 4.0 | 276 | 0.4611 | 0.795 | 0.7630 |
| No log | 5.0 | 345 | 0.4708 | 0.8 | 0.7701 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
|
cheyannelam/lab1_finetuning | cheyannelam | 2024-02-17T09:03:29Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2024-02-15T21:10:44Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 13.70591658813882
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4909
- Bleu: 13.7059
## Model description
More information needed
## Intended uses & limitations
More information needed
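Since the card ships no inference code, a minimal sketch with the standard MarianMT / seq2seq API is shown below; the example sentence is illustrative only.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo = "cheyannelam/lab1_finetuning"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

batch = tokenizer(["Default to expanded threads"], return_tensors="pt")
generated = model.generate(**batch, max_new_tokens=64)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```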
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
badokorach/distilbert-base-cased-distilled-agric-170224 | badokorach | 2024-02-17T09:01:07Z | 5 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"base_model:badokorach/distilbert-base-cased-distilled-agric-060124_1",
"base_model:finetune:badokorach/distilbert-base-cased-distilled-agric-060124_1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2024-02-17T08:39:39Z | ---
license: apache-2.0
base_model: badokorach/distilbert-base-cased-distilled-agric-060124_1
tags:
- generated_from_keras_callback
model-index:
- name: badokorach/distilbert-base-cased-distilled-agric-170224
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# badokorach/distilbert-base-cased-distilled-agric-170224
This model is a fine-tuned version of [badokorach/distilbert-base-cased-distilled-agric-060124_1](https://huggingface.co/badokorach/distilbert-base-cased-distilled-agric-060124_1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0008
- Validation Loss: 0.0
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'module': 'keras.optimizers', 'class_name': 'Adam', 'config': {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': None, 'class_name': 'CustomLearningRateScheduler', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 2736, 'warmup_steps': 304, 'end_learning_rate': 1e-05}, 'registered_name': 'CustomLearningRateScheduler'}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}, 'registered_name': None}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.4220 | 0.0 | 0 |
| 0.0117 | 0.0 | 1 |
| 0.0066 | 0.0 | 2 |
| 0.0055 | 0.0 | 3 |
| 0.0008 | 0.0 | 4 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.17.0
- Tokenizers 0.15.2
|
Azma-AI/azma-starling-LM-7B-alpha-agent-v1 | Azma-AI | 2024-02-17T08:56:31Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-17T08:52:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
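As a hypothetical starting point (no official snippet is provided), the sketch below assumes a Mistral-architecture chat model whose tokenizer defines a chat template; precision and device settings are likewise assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Azma-AI/azma-starling-LM-7B-alpha-agent-v1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Draft a short status update for my team."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```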
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
likhith231/opus-mt-en-ro-finetuned-en-to-ro | likhith231 | 2024-02-17T08:46:47Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-ro",
"base_model:finetune:Helsinki-NLP/opus-mt-en-ro",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2024-02-17T07:15:56Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-ro
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-en-ro-finetuned-en-to-ro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ro-finetuned-en-to-ro
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4095
- Bleu: 26.3441
- Gen Len: 34.093
## Model description
More information needed
## Intended uses & limitations
More information needed
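A minimal, assumed usage sketch with the translation pipeline (not part of the original card):

```python
from transformers import pipeline

translator = pipeline(
    "translation",
    model="likhith231/opus-mt-en-ro-finetuned-en-to-ro",
)
print(translator("The quick brown fox jumps over the lazy dog."))
```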
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 313 | 0.4031 | 26.665 | 33.886 |
| 0.1065 | 2.0 | 626 | 0.4074 | 26.3571 | 34.178 |
| 0.1065 | 3.0 | 939 | 0.4095 | 26.3441 | 34.093 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
hoanghoavienvo/roberta-base-detect-cheapfake-combined-train-test-contradict-2-8 | hoanghoavienvo | 2024-02-17T08:43:48Z | 3 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-17T08:20:14Z | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta-base-detect-cheapfake-combined-train-test-contradict-2-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-detect-cheapfake-combined-train-test-contradict-2-8
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5226
- Accuracy: 0.835
- F1: 0.8156
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 163 | 0.7064 | 0.64 | 0.5385 |
| No log | 2.0 | 326 | 0.5252 | 0.765 | 0.7662 |
| No log | 3.0 | 489 | 0.4988 | 0.82 | 0.8269 |
| 0.1701 | 4.0 | 652 | 0.6552 | 0.77 | 0.7125 |
| 0.1701 | 5.0 | 815 | 0.5226 | 0.835 | 0.8156 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
|
callmesan/vakyansh-wav2vec2-tamil-tam-250-audio-abuse-feature | callmesan | 2024-02-17T08:40:32Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:Harveenchadha/vakyansh-wav2vec2-tamil-tam-250",
"base_model:finetune:Harveenchadha/vakyansh-wav2vec2-tamil-tam-250",
"license:mit",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2024-02-17T08:20:27Z | ---
license: mit
base_model: Harveenchadha/vakyansh-wav2vec2-tamil-tam-250
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vakyansh-wav2vec2-tamil-tam-250-audio-abuse-feature
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vakyansh-wav2vec2-tamil-tam-250-audio-abuse-feature
This model is a fine-tuned version of [Harveenchadha/vakyansh-wav2vec2-tamil-tam-250](https://huggingface.co/Harveenchadha/vakyansh-wav2vec2-tamil-tam-250) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6061
- Accuracy: 0.7412
- Macro F1-score: 0.6531
## Model description
More information needed
## Intended uses & limitations
More information needed
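No usage example is given; assuming the checkpoint behaves like any Wav2Vec2 audio-classification model (speech clips in, abuse/non-abuse labels out), a hypothetical call looks like this, with a placeholder file name.

```python
from transformers import pipeline

clf = pipeline(
    "audio-classification",
    model="callmesan/vakyansh-wav2vec2-tamil-tam-250-audio-abuse-feature",
)
print(clf("tamil_clip.wav"))  # expects a Tamil speech clip, presumably 16 kHz mono
```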
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Macro F1-score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------------:|
| 6.7458 | 0.77 | 10 | 6.7472 | 0.0 | 0.0 |
| 6.7056 | 1.54 | 20 | 6.6488 | 0.0 | 0.0 |
| 6.6158 | 2.31 | 30 | 6.5180 | 0.6307 | 0.0917 |
| 6.4651 | 3.08 | 40 | 6.2887 | 0.7143 | 0.4167 |
| 6.2508 | 3.85 | 50 | 5.9094 | 0.7197 | 0.4185 |
| 5.8959 | 4.62 | 60 | 5.5362 | 0.7197 | 0.4185 |
| 5.6179 | 5.38 | 70 | 5.2347 | 0.7197 | 0.4185 |
| 5.3048 | 6.15 | 80 | 4.9823 | 0.7197 | 0.4185 |
| 5.0858 | 6.92 | 90 | 4.7555 | 0.7197 | 0.4185 |
| 4.9195 | 7.69 | 100 | 4.5424 | 0.7197 | 0.4185 |
| 4.6747 | 8.46 | 110 | 4.3265 | 0.7197 | 0.4185 |
| 4.5861 | 9.23 | 120 | 4.1193 | 0.7197 | 0.4185 |
| 4.3397 | 10.0 | 130 | 3.9070 | 0.7197 | 0.4185 |
| 4.0926 | 10.77 | 140 | 3.6954 | 0.7197 | 0.4185 |
| 3.8859 | 11.54 | 150 | 3.4822 | 0.7197 | 0.4185 |
| 3.7254 | 12.31 | 160 | 3.2711 | 0.7197 | 0.4185 |
| 3.5303 | 13.08 | 170 | 3.0599 | 0.7197 | 0.4185 |
| 3.2531 | 13.85 | 180 | 2.8502 | 0.7197 | 0.4185 |
| 3.0184 | 14.62 | 190 | 2.6448 | 0.7197 | 0.4185 |
| 3.0006 | 15.38 | 200 | 2.4472 | 0.7197 | 0.4185 |
| 2.6674 | 16.15 | 210 | 2.2526 | 0.7197 | 0.4185 |
| 2.4455 | 16.92 | 220 | 2.0649 | 0.7197 | 0.4185 |
| 2.2702 | 17.69 | 230 | 1.8883 | 0.7197 | 0.4185 |
| 2.0536 | 18.46 | 240 | 1.7233 | 0.7197 | 0.4185 |
| 2.0643 | 19.23 | 250 | 1.5730 | 0.7197 | 0.4185 |
| 1.8006 | 20.0 | 260 | 1.4368 | 0.7197 | 0.4185 |
| 1.6975 | 20.77 | 270 | 1.3112 | 0.7197 | 0.4185 |
| 1.4407 | 21.54 | 280 | 1.2015 | 0.7197 | 0.4185 |
| 1.2971 | 22.31 | 290 | 1.1050 | 0.7197 | 0.4185 |
| 1.3202 | 23.08 | 300 | 1.0219 | 0.7197 | 0.4185 |
| 1.1292 | 23.85 | 310 | 0.9490 | 0.7197 | 0.4185 |
| 1.1055 | 24.62 | 320 | 0.8879 | 0.7197 | 0.4185 |
| 0.9817 | 25.38 | 330 | 0.8366 | 0.7197 | 0.4185 |
| 0.9296 | 26.15 | 340 | 0.7906 | 0.7197 | 0.4185 |
| 0.8306 | 26.92 | 350 | 0.7506 | 0.7197 | 0.4185 |
| 0.8303 | 27.69 | 360 | 0.7171 | 0.7197 | 0.4185 |
| 0.8421 | 28.46 | 370 | 0.6953 | 0.7197 | 0.4185 |
| 0.7964 | 29.23 | 380 | 0.6650 | 0.7197 | 0.4185 |
| 0.7528 | 30.0 | 390 | 0.6470 | 0.7197 | 0.4185 |
| 0.7305 | 30.77 | 400 | 0.6345 | 0.7197 | 0.4185 |
| 0.6702 | 31.54 | 410 | 0.6163 | 0.7385 | 0.4937 |
| 0.6416 | 32.31 | 420 | 0.6118 | 0.7547 | 0.5507 |
| 0.608 | 33.08 | 430 | 0.6086 | 0.7547 | 0.5507 |
| 0.6659 | 33.85 | 440 | 0.5981 | 0.7574 | 0.5949 |
| 0.5839 | 34.62 | 450 | 0.6068 | 0.7547 | 0.6570 |
| 0.6167 | 35.38 | 460 | 0.5894 | 0.7763 | 0.6479 |
| 0.5991 | 36.15 | 470 | 0.5947 | 0.7412 | 0.6531 |
| 0.5839 | 36.92 | 480 | 0.5938 | 0.7574 | 0.6771 |
| 0.5533 | 37.69 | 490 | 0.5922 | 0.7520 | 0.6399 |
| 0.4998 | 38.46 | 500 | 0.6203 | 0.7358 | 0.6625 |
| 0.5508 | 39.23 | 510 | 0.5865 | 0.7493 | 0.6278 |
| 0.5159 | 40.0 | 520 | 0.5963 | 0.7385 | 0.6670 |
| 0.5344 | 40.77 | 530 | 0.5946 | 0.7439 | 0.6420 |
| 0.5039 | 41.54 | 540 | 0.5979 | 0.7466 | 0.6526 |
| 0.5456 | 42.31 | 550 | 0.5999 | 0.7358 | 0.6707 |
| 0.4822 | 43.08 | 560 | 0.5845 | 0.7493 | 0.6437 |
| 0.4864 | 43.85 | 570 | 0.6035 | 0.7439 | 0.6779 |
| 0.4623 | 44.62 | 580 | 0.5961 | 0.7520 | 0.6519 |
| 0.475 | 45.38 | 590 | 0.6066 | 0.7439 | 0.6651 |
| 0.4887 | 46.15 | 600 | 0.6014 | 0.7466 | 0.6603 |
| 0.506 | 46.92 | 610 | 0.6012 | 0.7412 | 0.6604 |
| 0.5296 | 47.69 | 620 | 0.5986 | 0.7439 | 0.6503 |
| 0.5255 | 48.46 | 630 | 0.6003 | 0.7439 | 0.6503 |
| 0.4667 | 49.23 | 640 | 0.6038 | 0.7466 | 0.6553 |
| 0.4334 | 50.0 | 650 | 0.6061 | 0.7412 | 0.6531 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
mesolitica/malaysian-tinyllama-1.1b-siglip-large-384-vision | mesolitica | 2024-02-17T08:39:21Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"mm_llms",
"feature-extraction",
"custom_code",
"region:us"
]
| feature-extraction | 2024-02-12T02:11:22Z | ---
library_name: transformers
tags: []
---
# Malaysian TinyLlama + siglip-large-patch16-384
WanDB https://wandb.ai/huseinzol05/vision-tinyllama?workspace=user-huseinzol05
## how-to
```python
from modeling_vision import MM_LLMs, MM_LLMs_Config
from transformers import AutoTokenizer, AutoProcessor
from typing import List
from PIL import Image
import requests
import torch
model = MM_LLMs.from_pretrained(
'mesolitica/malaysian-tinyllama-1.1b-siglip-large-384-vision',
flash_attention = True,
dtype = torch.bfloat16,
torch_dtype = torch.bfloat16
)
_ = model.cuda()
image_processor = AutoProcessor.from_pretrained('google/siglip-large-patch16-384')
tokenizer = AutoTokenizer.from_pretrained('mesolitica/malaysian-tinyllama-1.1b-siglip-large-384-vision')
def prepare_dataset(messages, images: List[str] = None):
if images is not None:
images = [Image.open(f).convert('RGB') for f in images]
image_output = image_processor(images=images, return_tensors='pt')['pixel_values']
else:
image_output = None
prompt = tokenizer.apply_chat_template(messages, tokenize = False)
outputs = tokenizer(
prompt,
return_tensors='pt',
return_overflowing_tokens=False,
return_length=False)
outputs['images'] = image_output
outputs['image_index'] = torch.tensor([0] * len(outputs['images']))
outputs['image_starts'] = torch.tensor([tokenizer.convert_tokens_to_ids('<image>')] * len(outputs['images']))
return outputs
with open('Persian-cat-breed.jpg', 'wb') as fopen:
fopen.write(requests.get('https://cdn.beautifulnara.net/wp-content/uploads/2017/12/10201620/Persian-cat-breed.jpg').content)
with open('nasi-goreng-1-23.jpg', 'wb') as fopen:
fopen.write(requests.get('https://www.jocooks.com/wp-content/uploads/2023/09/nasi-goreng-1-23.jpg').content)
messages = [
{'role': 'user', 'content': '<image> </image> ini gambar apa'},
]
outputs = prepare_dataset(messages, images = ['Persian-cat-breed.jpg'])
outputs['images'] = outputs['images'].type(model.dtype)
for k in outputs.keys():
if outputs[k] is not None:
outputs[k] = outputs[k].cuda()
with torch.no_grad():
model_inputs = model.prepare_inputs_for_generation(**outputs)
r = model_inputs.pop('input_ids', None)
generate_kwargs = dict(
model_inputs,
max_new_tokens=300,
top_p=0.95,
top_k=50,
temperature=0.1,
do_sample=True,
num_beams=1,
)
r = model.llm.generate(**generate_kwargs)
print(tokenizer.decode(r[0]))
```
```
<s>Imej itu menunjukkan seekor kucing putih yang comel duduk di atas sofa hitam.</s>
```
```python
messages = [
{'role': 'user', 'content': '<image> </image> <image> </image> apa kaitan 2 gambar ni'},
]
outputs = prepare_dataset(messages, images = ['Persian-cat-breed.jpg', 'nasi-goreng-1-23.jpg'])
outputs['images'] = outputs['images'].type(model.dtype)
for k in outputs.keys():
if outputs[k] is not None:
outputs[k] = outputs[k].cuda()
with torch.no_grad():
model_inputs = model.prepare_inputs_for_generation(**outputs)
r = model_inputs.pop('input_ids', None)
generate_kwargs = dict(
model_inputs,
max_new_tokens=300,
top_p=0.95,
top_k=50,
temperature=0.1,
do_sample=True,
num_beams=1,
)
r = model.llm.generate(**generate_kwargs)
print(tokenizer.decode(r[0]))
```
```
<s>Tiada hubungan yang jelas antara gambar 1 (anak kucing putih duduk di atas sofa) dan gambar 2 (foto penutup mangkuk mi telur dengan nasi dan cili). Gambar pertama ialah imej haiwan, manakala gambar kedua ialah imej makanan. Mereka tergolong dalam kategori yang berbeza dan tidak mempunyai hubungan antara satu sama lain.</s>
``` |
mesolitica/malaysian-Qwen1.5-0.5B-siglip-base-384-vision | mesolitica | 2024-02-17T08:38:07Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"mm_llms",
"feature-extraction",
"custom_code",
"region:us"
]
| feature-extraction | 2024-02-12T04:41:11Z | ---
library_name: transformers
tags: []
---
# Malaysian Qwen1.5-0.5B + siglip-base-patch16-384
WanDB https://wandb.ai/huseinzol05/vision-qwen0.5?workspace=user-huseinzol05
## how-to
```python
from modeling_vision import MM_LLMs, MM_LLMs_Config
from transformers import AutoTokenizer, AutoProcessor
from typing import List
from PIL import Image
import requests
import torch
model = MM_LLMs.from_pretrained(
'mesolitica/malaysian-Qwen1.5-0.5B-siglip-base-384-vision',
flash_attention = True,
dtype = torch.bfloat16,
torch_dtype = torch.bfloat16
)
_ = model.cuda()
image_processor = AutoProcessor.from_pretrained('google/siglip-base-patch16-384')
tokenizer = AutoTokenizer.from_pretrained('mesolitica/malaysian-Qwen1.5-0.5B-siglip-base-384-vision')
model.llm.generation_config.eos_token_id = tokenizer.eos_token_id
def prepare_dataset(messages, images: List[str] = None):
if images is not None:
images = [Image.open(f).convert('RGB') for f in images]
image_output = image_processor(images=images, return_tensors='pt')['pixel_values']
else:
image_output = None
prompt = tokenizer.apply_chat_template(messages, tokenize = False)
outputs = tokenizer(
prompt,
return_tensors='pt',
return_overflowing_tokens=False,
return_length=False)
outputs['images'] = image_output
outputs['image_index'] = torch.tensor([0] * len(outputs['images']))
outputs['image_starts'] = torch.tensor([tokenizer.convert_tokens_to_ids('<image>')] * len(outputs['images']))
return outputs
with open('Persian-cat-breed.jpg', 'wb') as fopen:
fopen.write(requests.get('https://cdn.beautifulnara.net/wp-content/uploads/2017/12/10201620/Persian-cat-breed.jpg').content)
with open('nasi-goreng-1-23.jpg', 'wb') as fopen:
fopen.write(requests.get('https://www.jocooks.com/wp-content/uploads/2023/09/nasi-goreng-1-23.jpg').content)
messages = [
{'role': 'user', 'content': '<image> </image> ini gambar apa'},
]
outputs = prepare_dataset(messages, images = ['Persian-cat-breed.jpg'])
outputs['images'] = outputs['images'].type(model.dtype)
for k in outputs.keys():
if outputs[k] is not None:
outputs[k] = outputs[k].cuda()
with torch.no_grad():
model_inputs = model.prepare_inputs_for_generation(**outputs)
r = model_inputs.pop('input_ids', None)
generate_kwargs = dict(
model_inputs,
max_new_tokens=300,
top_p=0.95,
top_k=50,
temperature=0.1,
do_sample=True,
num_beams=1,
)
r = model.llm.generate(**generate_kwargs)
print(tokenizer.decode(r[0]))
```
```
<|endoftext|><|im_start|>assistant
Ini adalah gambar kucing putih yang duduk di atas sofa hitam.<|im_end|>
```
```python
messages = [
{'role': 'user', 'content': '<image> </image> <image> </image> apa kaitan 2 gambar ni'},
]
outputs = prepare_dataset(messages, images = ['Persian-cat-breed.jpg', 'nasi-goreng-1-23.jpg'])
outputs['images'] = outputs['images'].type(model.dtype)
for k in outputs.keys():
if outputs[k] is not None:
outputs[k] = outputs[k].cuda()
with torch.no_grad():
model_inputs = model.prepare_inputs_for_generation(**outputs)
r = model_inputs.pop('input_ids', None)
generate_kwargs = dict(
model_inputs,
max_new_tokens=300,
top_p=0.95,
top_k=50,
temperature=0.1,
do_sample=True,
num_beams=1,
)
r = model.llm.generate(**generate_kwargs)
print(tokenizer.decode(r[0]))
```
```
<|endoftext|><|im_start|>assistant
Tiada hubungan langsung antara gambar 1 dan gambar 2. Gambar 1 ialah imej kucing putih dengan bulu putih, manakala gambar 2 ialah gambar mangkuk makan tengah hari kacang hitam dan lobak merah yang dicincang, dengan garpu diletakkan di sebelahnya. Kedua-duanya tidak berkaitan dari segi kandungan.<|im_end|>
``` |
migueldeguzmandev/Phi-1.5-RLLMv3-8 | migueldeguzmandev | 2024-02-17T08:28:01Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"phi",
"text-generation",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-01-21T09:06:14Z | Companion Post: [Research Log, RLLMv3 (GPT2-XL, Phi-1.5 and Falcon-RW-1B)](https://www.lesswrong.com/posts/EiEhYmYsvYCRgCemH/research-log-rllmv3-gpt2-xl-phi-1-5-and-falcon-rw-1b?utm_campaign=post_share&utm_source=link)
Main post: [BetterDAN, AI Machiavelli & Oppo Jailbreaks vs. SOTA models & GPT2XL_RLLMv3](https://www.lesswrong.com/posts/vZ5fM6FtriyyKbwi9/betterdan-ai-machiavelli-and-oppo-jailbreaks-vs-sota-models?utm_campaign=post_share&utm_source=link)
Related post: [Coherence (and Response Time) Test](https://docs.google.com/document/d/1D235vN2KwsLIUKCySpKJoDLV7qwYcU-LSSDpFCbMljs/edit?usp=sharing)
|
migueldeguzmandev/Phi-1.5-RLLMv3-3 | migueldeguzmandev | 2024-02-17T08:26:54Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"phi",
"text-generation",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-01-21T08:08:12Z | Companion Post: [Research Log, RLLMv3 (GPT2-XL, Phi-1.5 and Falcon-RW-1B)](https://www.lesswrong.com/posts/EiEhYmYsvYCRgCemH/research-log-rllmv3-gpt2-xl-phi-1-5-and-falcon-rw-1b?utm_campaign=post_share&utm_source=link)
Main post: [BetterDAN, AI Machiavelli & Oppo Jailbreaks vs. SOTA models & GPT2XL_RLLMv3](https://www.lesswrong.com/posts/vZ5fM6FtriyyKbwi9/betterdan-ai-machiavelli-and-oppo-jailbreaks-vs-sota-models?utm_campaign=post_share&utm_source=link)
Related post: [Coherence (and Response Time) Test](https://docs.google.com/document/d/1D235vN2KwsLIUKCySpKJoDLV7qwYcU-LSSDpFCbMljs/edit?usp=sharing)
|
SecondTheFirst/poca-SoccerTwos | SecondTheFirst | 2024-02-17T08:22:37Z | 14 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2024-02-17T08:21:56Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: SecondTheFirst/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
longcule123/lora_model172 | longcule123 | 2024-02-17T07:55:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-17T07:55:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
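The card states neither the base model nor the task; the repository name only hints at a LoRA adapter saved with PEFT, so the sketch below is highly speculative and should be checked against the adapter config before use.

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo = "longcule123/lora_model172"
model = AutoPeftModelForCausalLM.from_pretrained(repo, device_map="auto")  # assumes a causal-LM adapter
tokenizer = AutoTokenizer.from_pretrained(repo)

inputs = tokenizer("Your prompt here", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```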
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Falconsai/text_summarization | Falconsai | 2024-02-17T07:55:14Z | 68,083 | 212 | transformers | [
"transformers",
"pytorch",
"coreml",
"onnx",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| summarization | 2023-10-21T00:53:53Z | ---
license: apache-2.0
language:
- en
pipeline_tag: summarization
widget:
- text: >-
Hugging Face: Revolutionizing Natural Language Processing Introduction In
the rapidly evolving field of Natural Language Processing (NLP), Hugging
Face has emerged as a prominent and innovative force. This article will
explore the story and significance of Hugging Face, a company that has made
remarkable contributions to NLP and AI as a whole. From its inception to its
role in democratizing AI, Hugging Face has left an indelible mark on the
industry. The Birth of Hugging Face Hugging Face was founded in 2016 by
Clément Delangue, Julien Chaumond, and Thomas Wolf. The name Hugging Face
was chosen to reflect the company's mission of making AI models more
accessible and friendly to humans, much like a comforting hug. Initially,
they began as a chatbot company but later shifted their focus to NLP, driven
by their belief in the transformative potential of this technology.
Transformative Innovations Hugging Face is best known for its open-source
contributions, particularly the Transformers library. This library has
become the de facto standard for NLP and enables researchers, developers,
and organizations to easily access and utilize state-of-the-art pre-trained
language models, such as BERT, GPT-3, and more. These models have countless
applications, from chatbots and virtual assistants to language translation
and sentiment analysis.
example_title: Summarization Example 1
---
# Model Card: Fine-Tuned T5 Small for Text Summarization
## Model Description
The **Fine-Tuned T5 Small** is a variant of the T5 transformer model, designed for the task of text summarization. It is adapted and fine-tuned to generate concise and coherent summaries of input text.
The model, named "t5-small," is pre-trained on a diverse corpus of text data, enabling it to capture essential information and generate meaningful summaries. Fine-tuning is conducted with careful attention to hyperparameter settings, including batch size and learning rate, to ensure optimal performance for text summarization.
During the fine-tuning process, a batch size of 8 is chosen for efficient computation and learning. Additionally, a learning rate of 2e-5 is selected to balance convergence speed and model optimization. This approach guarantees not only rapid learning but also continuous refinement during training.
The fine-tuning dataset consists of a variety of documents and their corresponding human-generated summaries. This diverse dataset allows the model to learn the art of creating summaries that capture the most important information while maintaining coherence and fluency.
The goal of this meticulous training process is to equip the model with the ability to generate high-quality text summaries, making it valuable for a wide range of applications involving document summarization and content condensation.
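For illustration only, the description above maps onto the standard `Seq2SeqTrainer` API roughly as follows. This is a hedged sketch, not the original training script: the toy dataset, the `summarize:` prefix, the sequence lengths, and the single epoch are assumptions, while the batch size of 8 and the learning rate of 2e-5 come from the description above.
```python
from datasets import Dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

# Toy document/summary pairs standing in for the (undisclosed) fine-tuning corpus.
raw = Dataset.from_dict({
    "document": ["Hugging Face provides open-source NLP tools and a model hub."],
    "summary": ["Hugging Face offers open-source NLP tools."],
})

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

def preprocess(batch):
    # T5-style task prefix and truncation lengths are assumptions, not from the card.
    model_inputs = tokenizer(
        ["summarize: " + doc for doc in batch["document"]],
        max_length=512, truncation=True,
    )
    labels = tokenizer(text_target=batch["summary"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = raw.map(preprocess, batched=True, remove_columns=raw.column_names)

# Hyperparameters mirroring the description: batch size 8, learning rate 2e-5.
args = Seq2SeqTrainingArguments(
    output_dir="t5-small-summarization",
    per_device_train_batch_size=8,
    learning_rate=2e-5,
    num_train_epochs=1,  # epoch count is not stated in the card; 1 is a placeholder
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)
trainer.train()
```
In practice the toy dataset would be replaced by the document/summary corpus described in the Training Data section.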
## Intended Uses & Limitations
### Intended Uses
- **Text Summarization**: The primary intended use of this model is to generate concise and coherent text summaries. It is well-suited for applications that involve summarizing lengthy documents, news articles, and textual content.
### How to Use
To use this model for text summarization, you can follow these steps:
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="Falconsai/text_summarization")
ARTICLE = """
Hugging Face: Revolutionizing Natural Language Processing
Introduction
In the rapidly evolving field of Natural Language Processing (NLP), Hugging Face has emerged as a prominent and innovative force. This article will explore the story and significance of Hugging Face, a company that has made remarkable contributions to NLP and AI as a whole. From its inception to its role in democratizing AI, Hugging Face has left an indelible mark on the industry.
The Birth of Hugging Face
Hugging Face was founded in 2016 by Clément Delangue, Julien Chaumond, and Thomas Wolf. The name "Hugging Face" was chosen to reflect the company's mission of making AI models more accessible and friendly to humans, much like a comforting hug. Initially, they began as a chatbot company but later shifted their focus to NLP, driven by their belief in the transformative potential of this technology.
Transformative Innovations
Hugging Face is best known for its open-source contributions, particularly the "Transformers" library. This library has become the de facto standard for NLP and enables researchers, developers, and organizations to easily access and utilize state-of-the-art pre-trained language models, such as BERT, GPT-3, and more. These models have countless applications, from chatbots and virtual assistants to language translation and sentiment analysis.
Key Contributions:
1. **Transformers Library:** The Transformers library provides a unified interface for more than 50 pre-trained models, simplifying the development of NLP applications. It allows users to fine-tune these models for specific tasks, making it accessible to a wider audience.
2. **Model Hub:** Hugging Face's Model Hub is a treasure trove of pre-trained models, making it simple for anyone to access, experiment with, and fine-tune models. Researchers and developers around the world can collaborate and share their models through this platform.
3. **Hugging Face Transformers Community:** Hugging Face has fostered a vibrant online community where developers, researchers, and AI enthusiasts can share their knowledge, code, and insights. This collaborative spirit has accelerated the growth of NLP.
Democratizing AI
Hugging Face's most significant impact has been the democratization of AI and NLP. Their commitment to open-source development has made powerful AI models accessible to individuals, startups, and established organizations. This approach contrasts with the traditional proprietary AI model market, which often limits access to those with substantial resources.
By providing open-source models and tools, Hugging Face has empowered a diverse array of users to innovate and create their own NLP applications. This shift has fostered inclusivity, allowing a broader range of voices to contribute to AI research and development.
Industry Adoption
The success and impact of Hugging Face are evident in its widespread adoption. Numerous companies and institutions, from startups to tech giants, leverage Hugging Face's technology for their AI applications. This includes industries as varied as healthcare, finance, and entertainment, showcasing the versatility of NLP and Hugging Face's contributions.
Future Directions
Hugging Face's journey is far from over. As of my last knowledge update in September 2021, the company was actively pursuing research into ethical AI, bias reduction in models, and more. Given their track record of innovation and commitment to the AI community, it is likely that they will continue to lead in ethical AI development and promote responsible use of NLP technologies.
Conclusion
Hugging Face's story is one of transformation, collaboration, and empowerment. Their open-source contributions have reshaped the NLP landscape and democratized access to AI. As they continue to push the boundaries of AI research, we can expect Hugging Face to remain at the forefront of innovation, contributing to a more inclusive and ethical AI future. Their journey reminds us that the power of open-source collaboration can lead to groundbreaking advancements in technology and bring AI within the reach of many.
"""
print(summarizer(ARTICLE, max_length=1000, min_length=30, do_sample=False))
>>> [{'summary_text': 'Hugging Face has emerged as a prominent and innovative force in NLP . From its inception to its role in democratizing AI, the company has left an indelible mark on the industry . The name "Hugging Face" was chosen to reflect the company\'s mission of making AI models more accessible and friendly to humans .'}]
```
### Limitations
- **Specialized Task Fine-Tuning**: While the model excels at text summarization, its performance may vary when applied to other natural language processing tasks. Users interested in employing this model for different tasks should explore fine-tuned versions available in the model hub for optimal results.
## Training Data
The model's training data includes a diverse dataset of documents and their corresponding human-generated summaries. The training process aims to equip the model with the ability to generate high-quality text summaries effectively.
### Training Stats
- Evaluation Loss: 0.012345678901234567
- Evaluation Rouge Score: 0.95 (F1)
- Evaluation Runtime: 2.3456
- Evaluation Samples per Second: 1234.56
- Evaluation Steps per Second: 45.678
## Responsible Usage
It is essential to use this model responsibly and ethically, adhering to content guidelines and applicable regulations when implementing it in real-world applications, particularly those involving potentially sensitive content.
## References
- Hugging Face Model Hub
- T5 Paper
Disclaimer: The model's performance may be influenced by the quality and representativeness of the data it was fine-tuned on. Users are encouraged to assess the model's suitability for their specific applications and datasets. |
callmesan/vakyansh-wav2vec2-odia-orm-100-audio-abuse-feature | callmesan | 2024-02-17T07:41:03Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:Harveenchadha/vakyansh-wav2vec2-odia-orm-100",
"base_model:finetune:Harveenchadha/vakyansh-wav2vec2-odia-orm-100",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2024-02-17T07:22:26Z | ---
base_model: Harveenchadha/vakyansh-wav2vec2-odia-orm-100
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vakyansh-wav2vec2-odia-orm-100-audio-abuse-feature
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vakyansh-wav2vec2-odia-orm-100-audio-abuse-feature
This model is a fine-tuned version of [Harveenchadha/vakyansh-wav2vec2-odia-orm-100](https://huggingface.co/Harveenchadha/vakyansh-wav2vec2-odia-orm-100) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7299
- Accuracy: 0.7014
- Macro F1-score: 0.6792
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Macro F1-score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------------:|
| 6.7078 | 0.78 | 10 | 6.6948 | 0.0 | 0.0 |
| 6.6539 | 1.57 | 20 | 6.5580 | 0.2 | 0.0342 |
| 6.5111 | 2.35 | 30 | 6.3377 | 0.5726 | 0.3641 |
| 6.268 | 3.14 | 40 | 6.0361 | 0.5726 | 0.3641 |
| 6.0748 | 3.92 | 50 | 5.7417 | 0.5726 | 0.3641 |
| 5.8205 | 4.71 | 60 | 5.4985 | 0.5726 | 0.3641 |
| 5.6051 | 5.49 | 70 | 5.2743 | 0.5726 | 0.3641 |
| 5.3589 | 6.27 | 80 | 5.0823 | 0.5726 | 0.3641 |
| 5.2019 | 7.06 | 90 | 4.8953 | 0.5726 | 0.3641 |
| 5.0528 | 7.84 | 100 | 4.7077 | 0.5726 | 0.3641 |
| 4.868 | 8.63 | 110 | 4.5244 | 0.5726 | 0.3641 |
| 4.7081 | 9.41 | 120 | 4.3347 | 0.5726 | 0.3641 |
| 4.437 | 10.2 | 130 | 4.1455 | 0.5726 | 0.3641 |
| 4.3225 | 10.98 | 140 | 3.9551 | 0.5726 | 0.3641 |
| 4.0945 | 11.76 | 150 | 3.7694 | 0.5726 | 0.3641 |
| 4.014 | 12.55 | 160 | 3.5710 | 0.5726 | 0.3641 |
| 3.8491 | 13.33 | 170 | 3.3814 | 0.5726 | 0.3641 |
| 3.4724 | 14.12 | 180 | 3.1873 | 0.5726 | 0.3641 |
| 3.2728 | 14.9 | 190 | 2.9999 | 0.5726 | 0.3641 |
| 3.1948 | 15.69 | 200 | 2.8224 | 0.5726 | 0.3641 |
| 2.9968 | 16.47 | 210 | 2.6368 | 0.5726 | 0.3641 |
| 2.6739 | 17.25 | 220 | 2.4462 | 0.5726 | 0.3641 |
| 2.561 | 18.04 | 230 | 2.2871 | 0.5726 | 0.3641 |
| 2.5101 | 18.82 | 240 | 2.1260 | 0.5726 | 0.3641 |
| 2.3307 | 19.61 | 250 | 1.9620 | 0.5726 | 0.3641 |
| 2.1022 | 20.39 | 260 | 1.8260 | 0.5726 | 0.3641 |
| 1.9909 | 21.18 | 270 | 1.6933 | 0.5726 | 0.3641 |
| 1.766 | 21.96 | 280 | 1.5644 | 0.5726 | 0.3641 |
| 1.7143 | 22.75 | 290 | 1.4669 | 0.5726 | 0.3641 |
| 1.5073 | 23.53 | 300 | 1.3482 | 0.5726 | 0.3641 |
| 1.6055 | 24.31 | 310 | 1.2643 | 0.5726 | 0.3641 |
| 1.321 | 25.1 | 320 | 1.1930 | 0.5726 | 0.3641 |
| 1.2165 | 25.88 | 330 | 1.1128 | 0.5726 | 0.3641 |
| 1.1484 | 26.67 | 340 | 1.0493 | 0.6712 | 0.6033 |
| 1.1413 | 27.45 | 350 | 0.9925 | 0.7096 | 0.6737 |
| 1.0462 | 28.24 | 360 | 0.9471 | 0.6877 | 0.6190 |
| 0.9667 | 29.02 | 370 | 0.9209 | 0.7123 | 0.6869 |
| 0.9918 | 29.8 | 380 | 0.8892 | 0.7205 | 0.6953 |
| 0.9112 | 30.59 | 390 | 0.8414 | 0.7123 | 0.6705 |
| 0.8666 | 31.37 | 400 | 0.8291 | 0.7123 | 0.6836 |
| 0.8096 | 32.16 | 410 | 0.8284 | 0.6959 | 0.6501 |
| 0.7987 | 32.94 | 420 | 0.7729 | 0.7425 | 0.7270 |
| 0.7529 | 33.73 | 430 | 0.7542 | 0.7260 | 0.7023 |
| 0.7605 | 34.51 | 440 | 0.7535 | 0.7260 | 0.7043 |
| 0.7011 | 35.29 | 450 | 0.7882 | 0.6959 | 0.6891 |
| 0.6868 | 36.08 | 460 | 0.7378 | 0.7260 | 0.7013 |
| 0.6858 | 36.86 | 470 | 0.7518 | 0.7096 | 0.6865 |
| 0.7546 | 37.65 | 480 | 0.7163 | 0.7342 | 0.7108 |
| 0.6717 | 38.43 | 490 | 0.7158 | 0.7397 | 0.7158 |
| 0.7048 | 39.22 | 500 | 0.7755 | 0.6575 | 0.6487 |
| 0.6767 | 40.0 | 510 | 0.7469 | 0.7068 | 0.6798 |
| 0.6621 | 40.78 | 520 | 0.7166 | 0.7205 | 0.7020 |
| 0.6639 | 41.57 | 530 | 0.7143 | 0.7151 | 0.6934 |
| 0.5988 | 42.35 | 540 | 0.7547 | 0.6767 | 0.6661 |
| 0.6179 | 43.14 | 550 | 0.7394 | 0.7014 | 0.6820 |
| 0.7033 | 43.92 | 560 | 0.7312 | 0.6986 | 0.6757 |
| 0.6076 | 44.71 | 570 | 0.7331 | 0.6904 | 0.6674 |
| 0.602 | 45.49 | 580 | 0.7341 | 0.6932 | 0.6718 |
| 0.545 | 46.27 | 590 | 0.7363 | 0.6932 | 0.6738 |
| 0.5881 | 47.06 | 600 | 0.7299 | 0.7014 | 0.6792 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Deadwalker0/maverick-34b-qlora | Deadwalker0 | 2024-02-17T07:31:41Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"llama",
"generated_from_trainer",
"base_model:codellama/CodeLlama-34b-hf",
"base_model:adapter:codellama/CodeLlama-34b-hf",
"license:llama2",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2024-02-17T07:24:27Z | ---
license: llama2
library_name: peft
tags:
- generated_from_trainer
base_model: codellama/CodeLlama-34b-hf
model-index:
- name: maverick34b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: codellama/CodeLlama-34b-hf
model_type: LlamaForCausalLM
tokenizer_type: CodeLlamaTokenizer
is_llama_derived_model: true
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: iamtarun/code_instructions_120k_alpaca
type: alpaca
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./maverick34b
adapter: qlora
lora_model_dir:
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 4
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
</details><br>
# maverick34b
This model is a fine-tuned version of [codellama/CodeLlama-34b-hf](https://huggingface.co/codellama/CodeLlama-34b-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3391
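The card does not include a usage snippet, so the following is only a hedged sketch of how this QLoRA adapter might be loaded for inference: the adapter repository id is this model page, the 4-bit loading mirrors the config above, and the Alpaca-style prompt layout is an assumption based on the `code_instructions_120k_alpaca` training set.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "codellama/CodeLlama-34b-hf"
adapter_id = "Deadwalker0/maverick-34b-qlora"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16
    ),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)

# Alpaca-style prompt, assumed from the code_instructions_120k_alpaca training set.
prompt = (
    "### Instruction:\nWrite a Python function that checks whether a string "
    "is a palindrome.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```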
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 7
- gradient_accumulation_steps: 4
- total_train_batch_size: 56
- total_eval_batch_size: 14
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5065 | 0.01 | 1 | 0.5089 |
| 0.3477 | 0.25 | 29 | 0.3561 |
| 0.3593 | 0.51 | 58 | 0.3461 |
| 0.3329 | 0.76 | 87 | 0.3423 |
| 0.3607 | 1.0 | 116 | 0.3404 |
| 0.3336 | 1.26 | 145 | 0.3395 |
| 0.3449 | 1.51 | 174 | 0.3386 |
| 0.3187 | 1.77 | 203 | 0.3377 |
| 0.3216 | 2.0 | 232 | 0.3371 |
| 0.2961 | 2.26 | 261 | 0.3380 |
| 0.3117 | 2.51 | 290 | 0.3381 |
| 0.3207 | 2.77 | 319 | 0.3379 |
| 0.3047 | 3.01 | 348 | 0.3376 |
| 0.3096 | 3.26 | 377 | 0.3391 |
| 0.3148 | 3.52 | 406 | 0.3391 |
| 0.3116 | 3.77 | 435 | 0.3391 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.17.0
- Tokenizers 0.15.0 |
sunyijia97/falcon-7b-qlora-cstuqa-v4 | sunyijia97 | 2024-02-17T07:28:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-17T07:28:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
reachrkr/falcon-rw-1bt-gptq-2bit-ptb | reachrkr | 2024-02-17T07:17:04Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"falcon",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"2-bit",
"gptq",
"region:us"
]
| text-generation | 2024-02-17T07:16:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Ricosama/outputM | Ricosama | 2024-02-17T06:58:13Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mixtral-8x7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
]
| null | 2024-02-17T06:06:29Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
model-index:
- name: outputM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputM
This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00025
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.03
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2 |
RichardKhanhWin/dqn-SpaceInvadersNoFrameskip-v4 | RichardKhanhWin | 2024-02-17T06:40:19Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-02-17T06:39:46Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 540.50 +/- 74.45
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga RichardKhanhWin -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga RichardKhanhWin -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
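If you prefer to load the checkpoint directly with stable-baselines3 rather than through the RL Zoo scripts, a minimal sketch could look like the one below. The checkpoint filename inside the repository is an assumption (RL Zoo uploads typically name the zip after the algorithm and environment), and the Atari preprocessing mirrors the `AtariWrapper` and 4-frame stack listed in the hyperparameters further down.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Download the trained checkpoint from this repository (filename is assumed).
checkpoint = load_from_hub(
    repo_id="RichardKhanhWin/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)

# Recreate the evaluation environment: AtariWrapper preprocessing + 4 stacked frames.
env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1)
env = VecFrameStack(env, n_stack=4)

obs = env.reset()
for _ in range(1_000):
    action, _states = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```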
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga RichardKhanhWin
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
sunyijia97/falcon-7b-qlora-cstuqa-v3 | sunyijia97 | 2024-02-17T06:35:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-17T06:17:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gregor160300/llama2-fine-tuned-deny-sql-1-epoch | gregor160300 | 2024-02-17T06:25:04Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-17T06:22:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
VenkateshSoni/roberta-finetuned-Med | VenkateshSoni | 2024-02-17T06:12:55Z | 13 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"base_model:deepset/roberta-base-squad2",
"base_model:finetune:deepset/roberta-base-squad2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2024-02-17T05:15:21Z | ---
license: cc-by-4.0
base_model: deepset/roberta-base-squad2
tags:
- generated_from_trainer
model-index:
- name: roberta-finetuned-Med
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-Med
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
callmesan/hindi_base_wav2vec2-audio-abuse-feature | callmesan | 2024-02-17T06:03:51Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:Harveenchadha/hindi_base_wav2vec2",
"base_model:finetune:Harveenchadha/hindi_base_wav2vec2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2024-02-17T05:32:41Z | ---
license: apache-2.0
base_model: Harveenchadha/hindi_base_wav2vec2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: hindi_base_wav2vec2-audio-abuse-feature
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hindi_base_wav2vec2-audio-abuse-feature
This model is a fine-tuned version of [Harveenchadha/hindi_base_wav2vec2](https://huggingface.co/Harveenchadha/hindi_base_wav2vec2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7202
- Accuracy: 0.6694
- Macro F1-score: 0.6693
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Macro F1-score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------------:|
| 6.6553 | 0.77 | 10 | 6.6322 | 0.0 | 0.0 |
| 6.5758 | 1.54 | 20 | 6.4417 | 0.5447 | 0.2151 |
| 6.3599 | 2.31 | 30 | 6.1486 | 0.5122 | 0.3621 |
| 6.0708 | 3.08 | 40 | 5.7751 | 0.5041 | 0.3351 |
| 5.8361 | 3.85 | 50 | 5.4662 | 0.5041 | 0.3351 |
| 5.5167 | 4.62 | 60 | 5.2127 | 0.5041 | 0.3351 |
| 5.289 | 5.38 | 70 | 4.9640 | 0.5041 | 0.3351 |
| 5.0266 | 6.15 | 80 | 4.7282 | 0.5041 | 0.3351 |
| 4.78 | 6.92 | 90 | 4.5006 | 0.5041 | 0.3351 |
| 4.6197 | 7.69 | 100 | 4.2787 | 0.5041 | 0.3351 |
| 4.3798 | 8.46 | 110 | 4.0506 | 0.5041 | 0.3351 |
| 4.2651 | 9.23 | 120 | 3.8315 | 0.5041 | 0.3351 |
| 3.9832 | 10.0 | 130 | 3.6034 | 0.5041 | 0.3351 |
| 3.7163 | 10.77 | 140 | 3.3782 | 0.5041 | 0.3351 |
| 3.5481 | 11.54 | 150 | 3.1510 | 0.5041 | 0.3351 |
| 3.305 | 12.31 | 160 | 2.9279 | 0.5041 | 0.3351 |
| 3.1589 | 13.08 | 170 | 2.7102 | 0.5041 | 0.3351 |
| 2.8368 | 13.85 | 180 | 2.4942 | 0.5041 | 0.3351 |
| 2.5875 | 14.62 | 190 | 2.2896 | 0.5041 | 0.3351 |
| 2.5938 | 15.38 | 200 | 2.0940 | 0.5041 | 0.3351 |
| 2.2346 | 16.15 | 210 | 1.9083 | 0.5041 | 0.3351 |
| 2.0404 | 16.92 | 220 | 1.7372 | 0.5041 | 0.3351 |
| 1.8744 | 17.69 | 230 | 1.5755 | 0.5041 | 0.3351 |
| 1.6581 | 18.46 | 240 | 1.4332 | 0.5041 | 0.3351 |
| 1.7251 | 19.23 | 250 | 1.3152 | 0.5041 | 0.3351 |
| 1.4569 | 20.0 | 260 | 1.2093 | 0.5041 | 0.3351 |
| 1.3718 | 20.77 | 270 | 1.1160 | 0.5041 | 0.3351 |
| 1.1743 | 21.54 | 280 | 1.0209 | 0.5041 | 0.3351 |
| 1.0744 | 22.31 | 290 | 0.9585 | 0.6585 | 0.6309 |
| 1.0933 | 23.08 | 300 | 0.8902 | 0.7019 | 0.6941 |
| 0.9348 | 23.85 | 310 | 0.8504 | 0.6992 | 0.6940 |
| 0.9611 | 24.62 | 320 | 0.8094 | 0.6911 | 0.6901 |
| 0.8307 | 25.38 | 330 | 0.7750 | 0.6992 | 0.6992 |
| 0.7863 | 26.15 | 340 | 0.7776 | 0.6802 | 0.6724 |
| 0.7431 | 26.92 | 350 | 0.7624 | 0.6829 | 0.6737 |
| 0.7607 | 27.69 | 360 | 0.7450 | 0.6775 | 0.6747 |
| 0.8054 | 28.46 | 370 | 0.7161 | 0.6938 | 0.6914 |
| 0.752 | 29.23 | 380 | 0.7021 | 0.6965 | 0.6946 |
| 0.72 | 30.0 | 390 | 0.7060 | 0.6856 | 0.6846 |
| 0.7252 | 30.77 | 400 | 0.6968 | 0.6911 | 0.6910 |
| 0.6497 | 31.54 | 410 | 0.7016 | 0.6911 | 0.6905 |
| 0.6215 | 32.31 | 420 | 0.7209 | 0.6856 | 0.6848 |
| 0.6143 | 33.08 | 430 | 0.6941 | 0.6856 | 0.6856 |
| 0.6778 | 33.85 | 440 | 0.6887 | 0.6856 | 0.6850 |
| 0.6027 | 34.62 | 450 | 0.7010 | 0.6992 | 0.6990 |
| 0.6644 | 35.38 | 460 | 0.7009 | 0.6721 | 0.6674 |
| 0.6178 | 36.15 | 470 | 0.6840 | 0.7019 | 0.6985 |
| 0.5817 | 36.92 | 480 | 0.6974 | 0.6829 | 0.6827 |
| 0.5876 | 37.69 | 490 | 0.6914 | 0.6802 | 0.6801 |
| 0.5474 | 38.46 | 500 | 0.7056 | 0.6856 | 0.6855 |
| 0.5327 | 39.23 | 510 | 0.7128 | 0.6802 | 0.6800 |
| 0.5648 | 40.0 | 520 | 0.7067 | 0.6748 | 0.6730 |
| 0.6163 | 40.77 | 530 | 0.6804 | 0.6721 | 0.6721 |
| 0.514 | 41.54 | 540 | 0.6965 | 0.6775 | 0.6774 |
| 0.5817 | 42.31 | 550 | 0.7177 | 0.6775 | 0.6767 |
| 0.5345 | 43.08 | 560 | 0.7136 | 0.6775 | 0.6772 |
| 0.525 | 43.85 | 570 | 0.7159 | 0.6883 | 0.6876 |
| 0.5043 | 44.62 | 580 | 0.7110 | 0.6802 | 0.6801 |
| 0.5418 | 45.38 | 590 | 0.7149 | 0.6748 | 0.6746 |
| 0.5129 | 46.15 | 600 | 0.7108 | 0.6694 | 0.6694 |
| 0.5331 | 46.92 | 610 | 0.7118 | 0.6667 | 0.6667 |
| 0.6061 | 47.69 | 620 | 0.7248 | 0.6802 | 0.6795 |
| 0.5551 | 48.46 | 630 | 0.7196 | 0.6694 | 0.6694 |
| 0.5049 | 49.23 | 640 | 0.7190 | 0.6640 | 0.6638 |
| 0.4663 | 50.0 | 650 | 0.7202 | 0.6694 | 0.6693 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
sonthenguyen/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-original-v3 | sonthenguyen | 2024-02-17T05:54:06Z | 8 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-16T16:15:20Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
janhq/laser-dolphin-mixtral-2x7b-dpo-GGUF | janhq | 2024-02-17T05:39:46Z | 9 | 0 | transformers | [
"transformers",
"gguf",
"base_model:macadeliccc/laser-dolphin-mixtral-2x7b-dpo",
"base_model:quantized:macadeliccc/laser-dolphin-mixtral-2x7b-dpo",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-16T01:08:41Z | ---
license: apache-2.0
library_name: transformers
base_model: macadeliccc/laser-dolphin-mixtral-2x7b-dpo
model_creator: macadeliccc
model_name: laser-dolphin-mixtral-2x7b-dpo
quantized_by: JanHQ
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<p align="center">
<a href="https://jan.ai/">Jan</a>
- <a href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>
<!-- header end -->
# Model Description
This is a GGUF version of [macadeliccc/laser-dolphin-mixtral-2x7b-dpo](https://huggingface.co/macadeliccc/laser-dolphin-mixtral-2x7b-dpo)
- Model creator: [macadeliccc](https://huggingface.co/macadeliccc)
- Original model: [laser-dolphin-mixtral-2x7b-dpo](https://huggingface.co/macadeliccc/laser-dolphin-mixtral-2x7b-dpo)
- Model description: [Readme](https://huggingface.co/macadeliccc/laser-dolphin-mixtral-2x7b-dpo/blob/main/README.md)
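As a quick usage sketch (assuming `llama-cpp-python` is installed and one of the quantized `.gguf` files has been downloaded from this repository; the filename below is hypothetical):
```python
from llama_cpp import Llama

# Path to a quantized file downloaded from this repo (filename is an assumption).
llm = Llama(model_path="laser-dolphin-mixtral-2x7b-dpo.Q4_K_M.gguf", n_ctx=4096)

output = llm("Q: Name the planets in the solar system. A:", max_tokens=128, stop=["Q:"])
print(output["choices"][0]["text"])
```
The same `.gguf` file can also be loaded directly in Jan or any other llama.cpp-based runtime.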
# About Jan
Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.
Jan's long-term vision is to build a cognitive framework for future robots, who are practical, useful assistants for humans and businesses in everyday life.
# Jan Model Converter
This is a repository for the [open-source converter](https://github.com/janhq/model-converter). We would be grateful if the community could contribute to and strengthen this repository. We aim to expand the repo so that it can convert models into various formats.
|
FINNUMBER/Yi-Ko-6B-Finch-SA-800-per400-epoch8 | FINNUMBER | 2024-02-17T05:38:48Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-16T16:34:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
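Until the authors fill this in, a minimal sketch using the 🤗 `text-generation` pipeline (the repo id comes from this card; the prompt and generation settings are placeholders):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="FINNUMBER/Yi-Ko-6B-Finch-SA-800-per400-epoch8",
    device_map="auto",  # requires `accelerate`
)
# Placeholder Korean prompt ("Complete the following sentence:").
print(generator("다음 문장을 완성하세요:", max_new_tokens=64)[0]["generated_text"])
```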
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Zeen0/lab1_finetuning | Zeen0 | 2024-02-17T05:33:41Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2024-02-17T05:33:41Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 52.88398487672078
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8556
- Bleu: 52.8840
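A minimal usage sketch for the fine-tuned checkpoint (assuming the standard 🤗 `translation` pipeline; the example sentence is arbitrary):
```python
from transformers import pipeline

translator = pipeline("translation", model="Zeen0/lab1_finetuning")
print(translator("Default to expanded threads")[0]["translation_text"])
```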
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
callmesan/vakyansh-wav2vec2-gujarati-gnm-100-audio-abuse-feature | callmesan | 2024-02-17T05:17:48Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:Harveenchadha/vakyansh-wav2vec2-gujarati-gnm-100",
"base_model:finetune:Harveenchadha/vakyansh-wav2vec2-gujarati-gnm-100",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2024-02-17T04:57:48Z | ---
base_model: Harveenchadha/vakyansh-wav2vec2-gujarati-gnm-100
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vakyansh-wav2vec2-gujarati-gnm-100-audio-abuse-feature
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vakyansh-wav2vec2-gujarati-gnm-100-audio-abuse-feature
This model is a fine-tuned version of [Harveenchadha/vakyansh-wav2vec2-gujarati-gnm-100](https://huggingface.co/Harveenchadha/vakyansh-wav2vec2-gujarati-gnm-100) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6313
- Accuracy: 0.7403
- Macro F1-score: 0.6830
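A minimal inference sketch (assuming the checkpoint works with the standard `audio-classification` pipeline, as the `wav2vec2` and `audio-classification` tags suggest; the audio path is hypothetical):
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="callmesan/vakyansh-wav2vec2-gujarati-gnm-100-audio-abuse-feature",
)
print(classifier("sample_clip.wav"))  # hypothetical 16 kHz mono audio file
```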
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Macro F1-score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------------:|
| 6.6694 | 0.77 | 10 | 6.6451 | 0.0387 | 0.0021 |
| 6.6244 | 1.54 | 20 | 6.5275 | 0.6878 | 0.0694 |
| 6.4955 | 2.31 | 30 | 6.2972 | 0.7044 | 0.4133 |
| 6.2586 | 3.08 | 40 | 5.9826 | 0.7044 | 0.4133 |
| 6.044 | 3.85 | 50 | 5.6760 | 0.7044 | 0.4133 |
| 5.7859 | 4.62 | 60 | 5.3680 | 0.7044 | 0.4133 |
| 5.506 | 5.38 | 70 | 5.0967 | 0.7044 | 0.4133 |
| 5.2115 | 6.15 | 80 | 4.8565 | 0.7044 | 0.4133 |
| 5.0439 | 6.92 | 90 | 4.6328 | 0.7044 | 0.4133 |
| 4.924 | 7.69 | 100 | 4.4207 | 0.7044 | 0.4133 |
| 4.5905 | 8.46 | 110 | 4.2046 | 0.7044 | 0.4133 |
| 4.4629 | 9.23 | 120 | 3.9881 | 0.7044 | 0.4133 |
| 4.2224 | 10.0 | 130 | 3.7741 | 0.7044 | 0.4133 |
| 4.0429 | 10.77 | 140 | 3.5620 | 0.7044 | 0.4133 |
| 3.8484 | 11.54 | 150 | 3.3434 | 0.7044 | 0.4133 |
| 3.6943 | 12.31 | 160 | 3.1294 | 0.7044 | 0.4133 |
| 3.4667 | 13.08 | 170 | 2.9148 | 0.7044 | 0.4133 |
| 3.1164 | 13.85 | 180 | 2.7000 | 0.7044 | 0.4133 |
| 2.9152 | 14.62 | 190 | 2.4912 | 0.7044 | 0.4133 |
| 2.7946 | 15.38 | 200 | 2.2933 | 0.7044 | 0.4133 |
| 2.5293 | 16.15 | 210 | 2.1013 | 0.7044 | 0.4133 |
| 2.3488 | 16.92 | 220 | 1.9167 | 0.7044 | 0.4133 |
| 2.2396 | 17.69 | 230 | 1.7418 | 0.7044 | 0.4133 |
| 2.0293 | 18.46 | 240 | 1.5833 | 0.7044 | 0.4133 |
| 1.8431 | 19.23 | 250 | 1.4364 | 0.7044 | 0.4133 |
| 1.6658 | 20.0 | 260 | 1.3038 | 0.7044 | 0.4133 |
| 1.5557 | 20.77 | 270 | 1.1904 | 0.7044 | 0.4133 |
| 1.3412 | 21.54 | 280 | 1.0912 | 0.7044 | 0.4133 |
| 1.2984 | 22.31 | 290 | 0.9999 | 0.7044 | 0.4133 |
| 1.2517 | 23.08 | 300 | 0.9240 | 0.7044 | 0.4133 |
| 1.2419 | 23.85 | 310 | 0.8693 | 0.7044 | 0.4133 |
| 1.0371 | 24.62 | 320 | 0.8206 | 0.7044 | 0.4133 |
| 0.922 | 25.38 | 330 | 0.7805 | 0.7044 | 0.4133 |
| 0.8833 | 26.15 | 340 | 0.7281 | 0.7044 | 0.4133 |
| 0.9064 | 26.92 | 350 | 0.6964 | 0.7210 | 0.4922 |
| 0.7483 | 27.69 | 360 | 0.6807 | 0.7569 | 0.6771 |
| 0.7677 | 28.46 | 370 | 0.6561 | 0.7762 | 0.6848 |
| 0.7107 | 29.23 | 380 | 0.6450 | 0.7486 | 0.6847 |
| 0.7144 | 30.0 | 390 | 0.6669 | 0.7182 | 0.6808 |
| 0.6656 | 30.77 | 400 | 0.6288 | 0.7486 | 0.6764 |
| 0.6896 | 31.54 | 410 | 0.6029 | 0.7652 | 0.6635 |
| 0.6715 | 32.31 | 420 | 0.6152 | 0.7486 | 0.7021 |
| 0.6375 | 33.08 | 430 | 0.6008 | 0.7597 | 0.6966 |
| 0.6342 | 33.85 | 440 | 0.5941 | 0.7652 | 0.6892 |
| 0.5992 | 34.62 | 450 | 0.6102 | 0.7459 | 0.6879 |
| 0.623 | 35.38 | 460 | 0.5906 | 0.7652 | 0.6914 |
| 0.5489 | 36.15 | 470 | 0.5970 | 0.7624 | 0.6610 |
| 0.5553 | 36.92 | 480 | 0.6324 | 0.7320 | 0.6902 |
| 0.5514 | 37.69 | 490 | 0.5974 | 0.7514 | 0.6852 |
| 0.5342 | 38.46 | 500 | 0.6077 | 0.7541 | 0.6954 |
| 0.5337 | 39.23 | 510 | 0.6081 | 0.7514 | 0.6872 |
| 0.4809 | 40.0 | 520 | 0.6685 | 0.6961 | 0.6572 |
| 0.4985 | 40.77 | 530 | 0.6262 | 0.7348 | 0.6798 |
| 0.4888 | 41.54 | 540 | 0.6358 | 0.7403 | 0.6773 |
| 0.4737 | 42.31 | 550 | 0.6137 | 0.7624 | 0.6911 |
| 0.5249 | 43.08 | 560 | 0.6456 | 0.7293 | 0.6784 |
| 0.5049 | 43.85 | 570 | 0.6503 | 0.7210 | 0.6694 |
| 0.4927 | 44.62 | 580 | 0.6294 | 0.7348 | 0.6663 |
| 0.4553 | 45.38 | 590 | 0.6130 | 0.7541 | 0.6835 |
| 0.4631 | 46.15 | 600 | 0.6524 | 0.7238 | 0.6718 |
| 0.5969 | 46.92 | 610 | 0.6233 | 0.7431 | 0.6817 |
| 0.4679 | 47.69 | 620 | 0.6306 | 0.7403 | 0.6848 |
| 0.4932 | 48.46 | 630 | 0.6245 | 0.7486 | 0.6922 |
| 0.4723 | 49.23 | 640 | 0.6304 | 0.7431 | 0.6872 |
| 0.4636 | 50.0 | 650 | 0.6313 | 0.7403 | 0.6830 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
coloteong/lab1_finetuning | coloteong | 2024-02-17T05:10:33Z | 12 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2024-02-13T02:36:03Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- generated_from_trainer
datasets:
- kde4
model-index:
- name: marian-finetuned-kde4-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
cloudyu/Pluto_24B_lora_chat | cloudyu | 2024-02-17T05:06:56Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:cloudyu/Mixtral_7Bx4_MOE_DPO",
"base_model:adapter:cloudyu/Mixtral_7Bx4_MOE_DPO",
"license:mit",
"region:us"
]
| null | 2024-02-17T04:58:33Z | ---
library_name: peft
base_model: cloudyu/Pluto_24B_DPO_200
license: mit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
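Until the authors fill this in, a minimal PEFT sketch (it assumes the adapter applies on top of the base model named in this card's YAML metadata, `cloudyu/Pluto_24B_DPO_200`; the prompt and generation settings are placeholders):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "cloudyu/Pluto_24B_DPO_200"       # base model named in the card metadata
adapter_id = "cloudyu/Pluto_24B_lora_chat"  # this repository (the LoRA adapter)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")  # requires `accelerate`
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Write a haiku about the sea.", return_tensors="pt").to(base.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```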
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 |
ruru2701/filmbertv1 | ruru2701 | 2024-02-17T04:58:25Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-17T04:57:44Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: filmbertv1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# filmbertv1
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1522
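A minimal inference sketch (assuming the standard `text-classification` pipeline; since the training data is not documented, the label names and the example input are placeholders):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ruru2701/filmbertv1")
print(classifier("An absolutely delightful film from start to finish."))
```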
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
ebotwick/cats_vs_dogs_image_recog_5k | ebotwick | 2024-02-17T04:56:49Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:cats_vs_dogs",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2024-02-17T03:31:16Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- cats_vs_dogs
model-index:
- name: cats_vs_dogs_image_recog_5k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cats_vs_dogs_image_recog_5k
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the cats_vs_dogs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6933
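A minimal inference sketch (assuming the standard `image-classification` pipeline; the image path is hypothetical):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="ebotwick/cats_vs_dogs_image_recog_5k")
print(classifier("my_pet.jpg"))  # hypothetical local image of a cat or dog
```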
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6894 | 1.0 | 46 | 0.6933 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.2.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
callmesan/vakyansh-wav2vec2-bengali-bnm-200-audio-abuse-feature | callmesan | 2024-02-17T04:55:08Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:Harveenchadha/vakyansh-wav2vec2-bengali-bnm-200",
"base_model:finetune:Harveenchadha/vakyansh-wav2vec2-bengali-bnm-200",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2024-02-17T04:35:12Z | ---
base_model: Harveenchadha/vakyansh-wav2vec2-bengali-bnm-200
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vakyansh-wav2vec2-bengali-bnm-200-audio-abuse-feature
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vakyansh-wav2vec2-bengali-bnm-200-audio-abuse-feature
This model is a fine-tuned version of [Harveenchadha/vakyansh-wav2vec2-bengali-bnm-200](https://huggingface.co/Harveenchadha/vakyansh-wav2vec2-bengali-bnm-200) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8024
- Accuracy: 0.6459
- Macro F1-score: 0.6339
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Macro F1-score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------------:|
| 6.7346 | 0.77 | 10 | 6.7291 | 0.0 | 0.0 |
| 6.6926 | 1.54 | 20 | 6.6304 | 0.0027 | 0.0004 |
| 6.5587 | 2.31 | 30 | 6.4243 | 0.5730 | 0.2830 |
| 6.3449 | 3.08 | 40 | 6.1051 | 0.5649 | 0.5571 |
| 6.1232 | 3.85 | 50 | 5.7862 | 0.4216 | 0.3430 |
| 5.8191 | 4.62 | 60 | 5.5131 | 0.4027 | 0.2908 |
| 5.592 | 5.38 | 70 | 5.2719 | 0.5216 | 0.5022 |
| 5.3414 | 6.15 | 80 | 5.0558 | 0.6189 | 0.6186 |
| 5.1331 | 6.92 | 90 | 4.8552 | 0.6865 | 0.6852 |
| 4.98 | 7.69 | 100 | 4.6603 | 0.6568 | 0.6567 |
| 4.7844 | 8.46 | 110 | 4.4634 | 0.6703 | 0.6702 |
| 4.7028 | 9.23 | 120 | 4.2715 | 0.6568 | 0.6567 |
| 4.4476 | 10.0 | 130 | 4.0733 | 0.6297 | 0.6280 |
| 4.2098 | 10.77 | 140 | 3.8749 | 0.6108 | 0.6041 |
| 4.0715 | 11.54 | 150 | 3.6803 | 0.5027 | 0.4564 |
| 3.8545 | 12.31 | 160 | 3.4603 | 0.6649 | 0.6648 |
| 3.708 | 13.08 | 170 | 3.2559 | 0.6541 | 0.6534 |
| 3.4318 | 13.85 | 180 | 3.0493 | 0.6676 | 0.6675 |
| 3.1874 | 14.62 | 190 | 2.8456 | 0.6838 | 0.6837 |
| 3.1887 | 15.38 | 200 | 2.6625 | 0.5595 | 0.5384 |
| 2.8359 | 16.15 | 210 | 2.4679 | 0.5757 | 0.5604 |
| 2.6265 | 16.92 | 220 | 2.2662 | 0.6892 | 0.6841 |
| 2.4536 | 17.69 | 230 | 2.0843 | 0.6649 | 0.6644 |
| 2.2288 | 18.46 | 240 | 1.9218 | 0.6459 | 0.6431 |
| 2.2955 | 19.23 | 250 | 1.7633 | 0.6595 | 0.6578 |
| 1.9739 | 20.0 | 260 | 1.6105 | 0.6730 | 0.6671 |
| 1.8575 | 20.77 | 270 | 1.4855 | 0.6378 | 0.6351 |
| 1.607 | 21.54 | 280 | 1.3582 | 0.6649 | 0.6646 |
| 1.4831 | 22.31 | 290 | 1.2425 | 0.6676 | 0.6646 |
| 1.4484 | 23.08 | 300 | 1.1522 | 0.6703 | 0.6660 |
| 1.2517 | 23.85 | 310 | 1.0688 | 0.6595 | 0.6554 |
| 1.2793 | 24.62 | 320 | 1.0006 | 0.6541 | 0.6523 |
| 1.0722 | 25.38 | 330 | 0.9486 | 0.6568 | 0.6543 |
| 0.9888 | 26.15 | 340 | 0.9292 | 0.6135 | 0.6135 |
| 0.9134 | 26.92 | 350 | 0.8580 | 0.6514 | 0.6492 |
| 0.9208 | 27.69 | 360 | 0.8352 | 0.6649 | 0.6646 |
| 0.966 | 28.46 | 370 | 0.8220 | 0.6162 | 0.6160 |
| 0.8746 | 29.23 | 380 | 0.8064 | 0.6568 | 0.6420 |
| 0.8619 | 30.0 | 390 | 0.7856 | 0.6405 | 0.5942 |
| 0.841 | 30.77 | 400 | 0.7612 | 0.6459 | 0.6020 |
| 0.7629 | 31.54 | 410 | 0.7441 | 0.6459 | 0.6434 |
| 0.6736 | 32.31 | 420 | 0.7610 | 0.6568 | 0.6562 |
| 0.6579 | 33.08 | 430 | 0.7624 | 0.6514 | 0.6456 |
| 0.7514 | 33.85 | 440 | 0.7374 | 0.6649 | 0.6467 |
| 0.6579 | 34.62 | 450 | 0.7503 | 0.6541 | 0.6471 |
| 0.6864 | 35.38 | 460 | 0.8286 | 0.5892 | 0.5889 |
| 0.6863 | 36.15 | 470 | 0.7393 | 0.6541 | 0.6396 |
| 0.6224 | 36.92 | 480 | 0.7427 | 0.6541 | 0.6507 |
| 0.6255 | 37.69 | 490 | 0.7495 | 0.6405 | 0.6268 |
| 0.5295 | 38.46 | 500 | 0.7787 | 0.6486 | 0.6385 |
| 0.5549 | 39.23 | 510 | 0.7909 | 0.6378 | 0.6360 |
| 0.5752 | 40.0 | 520 | 0.7631 | 0.6459 | 0.6361 |
| 0.616 | 40.77 | 530 | 0.7636 | 0.6432 | 0.6390 |
| 0.5038 | 41.54 | 540 | 0.7847 | 0.6514 | 0.6372 |
| 0.5935 | 42.31 | 550 | 0.7837 | 0.6595 | 0.6461 |
| 0.5453 | 43.08 | 560 | 0.7804 | 0.6405 | 0.6330 |
| 0.5378 | 43.85 | 570 | 0.7928 | 0.6514 | 0.6338 |
| 0.4852 | 44.62 | 580 | 0.8249 | 0.6324 | 0.6285 |
| 0.5198 | 45.38 | 590 | 0.8065 | 0.6459 | 0.6186 |
| 0.5067 | 46.15 | 600 | 0.8210 | 0.6162 | 0.6107 |
| 0.5533 | 46.92 | 610 | 0.8053 | 0.6432 | 0.6300 |
| 0.6282 | 47.69 | 620 | 0.7970 | 0.6459 | 0.6316 |
| 0.5617 | 48.46 | 630 | 0.8095 | 0.6243 | 0.6165 |
| 0.5016 | 49.23 | 640 | 0.8038 | 0.6378 | 0.6274 |
| 0.467 | 50.0 | 650 | 0.8024 | 0.6459 | 0.6339 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
sunyijia97/falcon-7b-qlora-chat-support-bot-faq | sunyijia97 | 2024-02-17T04:54:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-17T04:54:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fzzhang/toten_gsm8k_s | fzzhang | 2024-02-17T04:52:45Z | 5 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:Toten5/Marcoroni-neural-chat-7B-v2",
"base_model:adapter:Toten5/Marcoroni-neural-chat-7B-v2",
"license:apache-2.0",
"region:us"
]
| null | 2024-02-16T22:20:48Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: Toten5/Marcoroni-neural-chat-7B-v2
model-index:
- name: toten_gsm8k_s
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# toten_gsm8k_s
This model is a fine-tuned version of [Toten5/Marcoroni-neural-chat-7B-v2](https://huggingface.co/Toten5/Marcoroni-neural-chat-7B-v2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0 |
minjiyoo/linguistic-complexity-llama-2-7b-4000 | minjiyoo | 2024-02-17T04:37:08Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2024-01-28T02:59:10Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
Xenon1/Voyage | Xenon1 | 2024-02-17T04:33:46Z | 3 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"Voyage",
"conversational",
"en",
"arxiv:2401.10020",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-17T04:29:39Z | ---
language:
- en
license: apache-2.0
tags:
- mistral
- Voyage
pipeline_tag: text-generation
---
# Model Card for Voyage
Mistral-7B-v0.1 model fine-tuned on the Ultrafeedback dataset using techniques shown in the paper [Self-Rewarding Language Models](https://arxiv.org/abs/2401.10020).
## Results
| model_name | Average | arc_challenge | hellaswag | mmlu | truthfulqa_mc2 | winogrande |
|:-------------|----------:|----------------:|------------:|---------:|-----------------:|-------------:|
| Voyage | 0.68526 | 0.613481 | 0.848337 | 0.595998 | 0.602897 | 0.765588 |
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a beginning-of-sentence id; subsequent instructions should not. The assistant generation will be ended by the end-of-sentence token id.
E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("Xenon1/Voyage")
tokenizer = AutoTokenizer.from_pretrained("Xenon1/Voyage")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer |
theidoldaily/yoshiko-tsushima | theidoldaily | 2024-02-17T04:28:29Z | 5 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:cagliostrolab/animagine-xl-3.0",
"base_model:adapter:cagliostrolab/animagine-xl-3.0",
"license:mit",
"region:us"
]
| text-to-image | 2024-02-17T04:24:27Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
masterpiece, high quality, defined pupil, looking at viewer, rounded pupil,
defined iris, (soft iris:1.2),
parameters:
negative_prompt: >-
bad_anatomy, deformation, amputation, deformity, deformed_nipples,
duplicated_torso, deformed_torso, long_torso, large_torso,
unproportioned_torso, (deformed_pussy:1.2), (deformed_hands:1.2),
unproportioned_eyes, unproportioned_head, small_head, duplicated_nose,
big_nose, fusioned_clothes, fusioned_arms, undefined_limbs, divided_pussy,
red_pussy, duplicated_pussy, deformed_anus, deformed_pussy,
output:
url: images/yoshiko_final.png
base_model: cagliostrolab/animagine-xl-3.0
instance_prompt: id_yoshiko_tsushima
license: mit
---
# Yoshiko Tsushima
<Gallery />
## Model description
This model was trained to generate high quality images based on SIFAS cards.
To achieve better quality, you should use hako-mikan's Regional Prompter along with Latent Mode, which changes the way Stable Diffusion isolates the LoRA and results in a significant improvement.
## Trigger words
You should use `id_yoshiko_tsushima` to trigger the image generation.
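A minimal diffusers sketch (assuming the LoRA loads on top of the base model listed in this card, `cagliostrolab/animagine-xl-3.0`, and that a CUDA GPU is available; the weight filename is hypothetical):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.0", torch_dtype=torch.float16
).to("cuda")
# Weight filename is an assumption; check the Files & versions tab for the actual .safetensors name.
pipe.load_lora_weights("theidoldaily/yoshiko-tsushima", weight_name="yoshiko.safetensors")

image = pipe(
    "masterpiece, high quality, id_yoshiko_tsushima, looking at viewer",
    num_inference_steps=28,
).images[0]
image.save("yoshiko.png")
```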
## Download model
Weights for this model are available in Safetensors format.
[Download](/theidoldaily/yoshiko-tsushima/tree/main) them in the Files & versions tab.
|
kaitchup/Qwen1.5-7B-bnb-4bit | kaitchup | 2024-02-17T04:24:55Z | 259 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-02-17T04:22:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
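Until the authors fill this in, a minimal sketch (it assumes the checkpoint ships its bitsandbytes 4-bit quantization config, as the `4-bit`/`bitsandbytes` tags suggest, so no extra quantization arguments are needed; a CUDA GPU is required):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kaitchup/Qwen1.5-7B-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The 4-bit quantization config is read from the checkpoint itself.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # requires `accelerate` and `bitsandbytes`

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```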
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
theidoldaily/hanamaru-kunikida | theidoldaily | 2024-02-17T04:23:46Z | 3 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:cagliostrolab/animagine-xl-3.0",
"base_model:adapter:cagliostrolab/animagine-xl-3.0",
"license:mit",
"region:us"
]
| text-to-image | 2024-02-17T04:21:00Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
masterpiece, high quality, defined pupil, looking at viewer, rounded pupil,
defined iris, (soft iris:1.2),
parameters:
negative_prompt: >-
bad_anatomy, deformation, amputation, deformity, deformed_nipples,
duplicated_torso, deformed_torso, long_torso, large_torso,
unproportioned_torso, (deformed_pussy:1.2), (deformed_hands:1.2),
unproportioned_eyes, unproportioned_head, small_head, duplicated_nose,
big_nose, fusioned_clothes, fusioned_arms, undefined_limbs, divided_pussy,
red_pussy, duplicated_pussy, deformed_anus, deformed_pussy,
output:
url: images/hanamaru_final.png
base_model: cagliostrolab/animagine-xl-3.0
instance_prompt: id_hanamaru_kunikida
license: mit
---
# Hanamaru Kunikida
<Gallery />
## Model description
This model was trained to generate high quality images based on SIFAS cards.
To achieve better quality, you should use hako-mikan's Regional Prompter along with Latent Mode, which changes the way Stable Diffusion isolates the LoRA and results in a significant improvement.
## Trigger words
You should use `id_hanamaru_kunikida` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/theidoldaily/hanamaru-kunikida/tree/main) them in the Files & versions tab.
|
Xenon1/Oasis | Xenon1 | 2024-02-17T04:23:27Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"mistral",
"Oasis",
"en",
"arxiv:2401.10020",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-17T04:16:28Z | ---
language:
- en
license: apache-2.0
tags:
- mistral
- Oasis
pipeline_tag: text-generation
---
# Model Card for Oasis
Mistral-7B-v0.1 model fine-tuned on the Ultrafeedback dataset using techniques shown in the paper [Self-Rewarding Language Models](https://arxiv.org/abs/2401.10020).
## Results
| model_name | Average | arc_challenge | gsm8k | hellaswag | mmlu | truthfulqa_mc2 | winogrande |
|:-------------|----------:|----------------:|---------:|------------:|---------:|-----------------:|-------------:|
| Oasis | 0.701904 | 0.613481 | 0.741471 | 0.848337 | 0.639652 | 0.602897 | 0.765588 |
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a beginning-of-sentence id; subsequent instructions should not. The assistant generation will be ended by the end-of-sentence token id.
E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("Xenon1/Oasis")
tokenizer = AutoTokenizer.from_pretrained("Xenon1/Oasis")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer |
VladS159/Whisper_medium_ro_VladS_02_16_24_1500_steps_multi_gpu | VladS159 | 2024-02-17T04:21:05Z | 3 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"ro",
"dataset:mozilla-foundation/common_voice_16_1",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-02-16T12:31:05Z | ---
language:
- ro
license: apache-2.0
base_model: openai/whisper-medium
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_16_1
metrics:
- wer
model-index:
- name: Whisper Medium Ro - Sarbu Vlad
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 16.1
type: mozilla-foundation/common_voice_16_1
args: 'config: ro, split: test'
metrics:
- name: Wer
type: wer
value: 12.10333666091902
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Medium Ro - Sarbu Vlad
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Common Voice 16.1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1719
- Wer: 12.1033
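A minimal transcription sketch (assuming the standard `automatic-speech-recognition` pipeline; the audio path is hypothetical):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="VladS159/Whisper_medium_ro_VladS_02_16_24_1500_steps_multi_gpu",
)
print(asr("romanian_sample.wav")["text"])  # hypothetical 16 kHz Romanian audio clip
```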
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 3
- total_train_batch_size: 48
- total_eval_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- training_steps: 2500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1563 | 0.98 | 250 | 0.1542 | 14.6716 |
| 0.0933 | 1.96 | 500 | 0.1306 | 13.0714 |
| 0.0428 | 2.94 | 750 | 0.1298 | 11.8886 |
| 0.0243 | 3.92 | 1000 | 0.1353 | 12.0096 |
| 0.0147 | 4.9 | 1250 | 0.1433 | 12.1064 |
| 0.0083 | 5.88 | 1500 | 0.1572 | 12.2606 |
| 0.0052 | 6.86 | 1750 | 0.1591 | 12.3090 |
| 0.0037 | 7.84 | 2000 | 0.1665 | 12.0307 |
| 0.0026 | 8.82 | 2250 | 0.1708 | 12.0549 |
| 0.0021 | 9.8 | 2500 | 0.1719 | 12.1033 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.1
|
DouglasChan/lab1_finetuned | DouglasChan | 2024-02-17T04:05:05Z | 118 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2024-02-13T19:06:20Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- generated_from_trainer
datasets:
- kde4
model-index:
- name: marian-finetuned-kde4-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
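A minimal usage sketch (assuming the standard 🤗 Transformers translation pipeline; the example sentence is arbitrary and not taken from the evaluation data):

```python
# Minimal sketch, assuming the standard transformers translation pipeline
from transformers import pipeline

translator = pipeline("translation", model="DouglasChan/lab1_finetuned")

# English -> French, in the spirit of the KDE4 technical domain
print(translator("Default to expanded threads")[0]["translation_text"])
```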
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
MAdAiLab/llama2_7b_bf16_adapter_merged_final | MAdAiLab | 2024-02-17T04:02:34Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-17T03:57:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
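A minimal generation sketch, inferred from the repository tags (a merged Llama-2-7B causal LM in bf16) rather than from official documentation:

```python
# Minimal sketch, assuming a standard causal-LM checkpoint loadable in bfloat16
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="MAdAiLab/llama2_7b_bf16_adapter_merged_final",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# The prompt is an arbitrary example
print(generator("The three key sections of a model card are", max_new_tokens=64)[0]["generated_text"])
```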
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Ernoche/N | Ernoche | 2024-02-17T03:58:37Z | 0 | 0 | null | [
"license:other",
"region:us"
]
| null | 2024-02-17T03:58:37Z | ---
license: other
license_name: other
license_link: LICENSE
---
|
kaitchup/Qwen1.5-7B-gptq-4bit | kaitchup | 2024-02-17T03:52:43Z | 63 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
]
| text-generation | 2024-02-17T03:50:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
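A minimal loading sketch, inferred from the repository tags (a 4-bit GPTQ quantization of Qwen1.5-7B); it assumes the GPTQ runtime dependencies (e.g. `auto-gptq` and `optimum`) are installed:

```python
# Minimal sketch, assuming transformers can load this GPTQ checkpoint directly
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kaitchup/Qwen1.5-7B-gptq-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Qwen1.5 is a language model that", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```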
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cbdb/OfficeTitleAddressSplitter | cbdb | 2024-02-17T03:32:08Z | 94 | 1 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"Seq2SeqLM",
"古文",
"文言文",
"中国古代官职地名拆分",
"ancient",
"classical",
"zh",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2024-01-26T22:44:13Z | ---
language:
- zh
tags:
- Seq2SeqLM
- 古文
- 文言文
- 中国古代官职地名拆分
- ancient
- classical
license: cc-by-nc-sa-4.0
---
# <font color="IndianRed"> OTAS (Office Title Address Splitter)</font>
[](https://colab.research.google.com/drive/1UoG3QebyBlK6diiYckiQv-5dRB9dA4iv?usp=sharing)
Our model <font color="cornflowerblue">OTAS (Office Title Address Splitter)</font> is a Classical Chinese named-entity-recognition model intended to <font color="IndianRed">split off the address portion of Classical Chinese office titles</font>. It is initialized from the raynardj/classical-chinese-punctuation-guwen-biaodian Classical Chinese punctuation model and fine-tuned on over 25,000 high-quality punctuation pairs collected by the CBDB group (China Biographical Database).
### <font color="IndianRed"> Sample input txt file </font>
The sample input txt file can be downloaded here:
https://huggingface.co/cbdb/OfficeTitleAddressSplitter/blob/main/input.txt
### <font color="IndianRed"> How to use </font>
Here is how to use this model in PyTorch to split the address portion out of an office title:
<font color="cornflowerblue"> 1. Import model and packages </font>
```python
import torch
import numpy as np
from scipy.special import softmax  # used in the prediction step below
from transformers import AutoTokenizer, AutoModelForTokenClassification

PRETRAINED = "cbdb/OfficeTitleAddressSplitter"
tokenizer = AutoTokenizer.from_pretrained(PRETRAINED)
model = AutoModelForTokenClassification.from_pretrained(PRETRAINED)
```
<font color="cornflowerblue"> 2. Load Data </font>
```python
# Load your data here
test_list = ['漢軍鑲黃旗副都統', '兵部右侍郎', '盛京戶部侍郎']
```
<font color="cornflowerblue"> 3. Make a prediction </font>
```python
def predict_class(test):
tokens_test = tokenizer.encode_plus(
test,
add_special_tokens=True,
return_attention_mask=True,
padding=True,
max_length=128,
return_tensors='pt',
truncation=True
)
test_seq = torch.tensor(tokens_test['input_ids'])
test_mask = torch.tensor(tokens_test['attention_mask'])
inputs = {
"input_ids": test_seq,
"attention_mask": test_mask
}
with torch.no_grad():
# print(inputs.shape)
outputs = model(**inputs)
outputs = outputs.logits.detach().cpu().numpy()
softmax_score = softmax(outputs)
softmax_score = np.argmax(softmax_score, axis=2)[0]
return test_seq, softmax_score
idx2label = model.config.id2label  # assumption: maps predicted class ids to label strings such as '。'
for test_sen0 in test_list:
test_seq, pred_class_proba = predict_class(test_sen0)
test_sen = tokenizer.decode(test_seq[0]).split()
label = [idx2label[i] for i in pred_class_proba]
element_to_find = '。'
if element_to_find in label:
index = label.index(element_to_find)
test_sen_pred = [i for i in test_sen0]
test_sen_pred.insert(index, element_to_find)
test_sen_pred = ''.join(test_sen_pred)
else:
test_sen_pred = [i for i in test_sen0]
test_sen_pred = ''.join(test_sen_pred)
print(test_sen_pred)
```
漢軍鑲黃旗。副都統<br>
兵部右侍郎<br>
盛京。戶部侍郎<br>
### <font color="IndianRed">Authors </font>
Queenie Luo (queenieluo[at]g.harvard.edu)
<br>
Hongsu Wang
<br>
Peter Bol
<br>
CBDB Group
### <font color="IndianRed">License </font>
Copyright (c) 2023 CBDB
Except where otherwise noted, content on this repository is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0).
To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/4.0/ or
send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA. |
B2111797/trans-en-vi-v1 | B2111797 | 2024-02-17T03:26:41Z | 120 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2024-02-17T03:26:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LoneStriker/LWM-Text-Chat-256K-GPTQ | LoneStriker | 2024-02-17T03:13:52Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
]
| text-generation | 2024-02-17T03:10:40Z | ---
inference: false
---
<br>
<br>
# LWM-Text-Chat-256K Model Card
## Model details
**Model type:**
LWM-Text-Chat-256K is an open-source model trained from LLaMA-2 on a filtered subset of the Books3 data. It is an auto-regressive language model, based on the transformer architecture.
**Model date:**
LWM-Text-Chat-256K was trained in December 2023.
**Paper or resources for more information:**
https://largeworldmodel.github.io/
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
**Where to send questions or comments about the model:**
https://github.com/LargeWorldModel/lwm/issues
## Training dataset
- 37K subset of Books3 documents with 200K to 500K tokens
|
Shijia/furina_hau_loss_2e-05 | Shijia | 2024-02-17T03:13:05Z | 101 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:yihongLiu/furina",
"base_model:finetune:yihongLiu/furina",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-17T03:12:08Z | ---
base_model: yihongLiu/furina
tags:
- generated_from_trainer
model-index:
- name: furina_hau_loss_2e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# furina_hau_loss_2e-05
This model is a fine-tuned version of [yihongLiu/furina](https://huggingface.co/yihongLiu/furina) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0213
- Spearman Corr: 0.7683
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 0.95 | 200 | 0.0215 | 0.7652 |
| No log | 1.91 | 400 | 0.0209 | 0.7689 |
| 0.0007 | 2.86 | 600 | 0.0213 | 0.7713 |
| 0.0007 | 3.82 | 800 | 0.0208 | 0.7714 |
| 0.0006 | 4.77 | 1000 | 0.0205 | 0.7709 |
| 0.0006 | 5.73 | 1200 | 0.0205 | 0.7725 |
| 0.0006 | 6.68 | 1400 | 0.0212 | 0.7675 |
| 0.0006 | 7.64 | 1600 | 0.0213 | 0.7675 |
| 0.0005 | 8.59 | 1800 | 0.0210 | 0.7672 |
| 0.0005 | 9.55 | 2000 | 0.0211 | 0.7664 |
| 0.0005 | 10.5 | 2200 | 0.0216 | 0.7675 |
| 0.0005 | 11.46 | 2400 | 0.0212 | 0.7666 |
| 0.0005 | 12.41 | 2600 | 0.0213 | 0.7683 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
theofilusarifin/image_classification | theofilusarifin | 2024-02-17T03:12:48Z | 186 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2024-02-16T16:42:30Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: image_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.3813
- eval_accuracy: 0.5312
- eval_runtime: 179.1366
- eval_samples_per_second: 0.893
- eval_steps_per_second: 0.056
- epoch: 9.43
- step: 377
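A minimal inference sketch (assuming the standard 🤗 Transformers image-classification pipeline; the image path is a placeholder and the labels come from the fine-tuning image folder):

```python
# Minimal sketch, assuming the standard transformers image-classification pipeline
from transformers import pipeline

classifier = pipeline("image-classification", model="theofilusarifin/image_classification")

# "example.jpg" is a placeholder path to a local image
for prediction in classifier("example.jpg"):
    print(prediction["label"], round(prediction["score"], 3))
```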
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
zhudanhao/rlcourse_u2_taxi | zhudanhao | 2024-02-17T03:08:13Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-02-17T03:08:09Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: rlcourse_u2_taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="zhudanhao/rlcourse_u2_taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Shijia/furina_pan_loss_5e-06 | Shijia | 2024-02-17T03:06:47Z | 89 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:yihongLiu/furina",
"base_model:finetune:yihongLiu/furina",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-17T03:05:41Z | ---
base_model: yihongLiu/furina
tags:
- generated_from_trainer
model-index:
- name: furina_pan_loss_5e-06
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# furina_pan_loss_5e-06
This model is a fine-tuned version of [yihongLiu/furina](https://huggingface.co/yihongLiu/furina) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0215
- Spearman Corr: 0.7688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 0.85 | 200 | 0.0213 | 0.7680 |
| No log | 1.69 | 400 | 0.0213 | 0.7726 |
| 0.0011 | 2.54 | 600 | 0.0213 | 0.7669 |
| 0.0011 | 3.38 | 800 | 0.0221 | 0.7654 |
| 0.0008 | 4.23 | 1000 | 0.0221 | 0.7698 |
| 0.0008 | 5.07 | 1200 | 0.0227 | 0.7682 |
| 0.0008 | 5.92 | 1400 | 0.0211 | 0.7688 |
| 0.0009 | 6.77 | 1600 | 0.0210 | 0.7699 |
| 0.0009 | 7.61 | 1800 | 0.0212 | 0.7687 |
| 0.002 | 8.46 | 2000 | 0.0213 | 0.7700 |
| 0.002 | 9.3 | 2200 | 0.0216 | 0.7675 |
| 0.0019 | 10.15 | 2400 | 0.0215 | 0.7692 |
| 0.0019 | 10.99 | 2600 | 0.0215 | 0.7695 |
| 0.0019 | 11.84 | 2800 | 0.0212 | 0.7699 |
| 0.0019 | 12.68 | 3000 | 0.0214 | 0.7691 |
| 0.0019 | 13.53 | 3200 | 0.0215 | 0.7688 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
zhudanhao/RlCourse | zhudanhao | 2024-02-17T02:56:02Z | 1 | 0 | transformers | [
"transformers",
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"endpoints_compatible",
"region:us"
]
| reinforcement-learning | 2024-02-16T07:49:12Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: RlCourse
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="zhudanhao/RlCourse", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Shijia/furina_hau_corr_2e-05 | Shijia | 2024-02-17T02:54:44Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:yihongLiu/furina",
"base_model:finetune:yihongLiu/furina",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-17T02:53:41Z | ---
base_model: yihongLiu/furina
tags:
- generated_from_trainer
model-index:
- name: furina_hau_corr_2e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# furina_hau_corr_2e-05
This model is a fine-tuned version of [yihongLiu/furina](https://huggingface.co/yihongLiu/furina) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0209
- Spearman Corr: 0.7736
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 0.95 | 200 | 0.0214 | 0.7723 |
| No log | 1.91 | 400 | 0.0225 | 0.7716 |
| 0.0012 | 2.86 | 600 | 0.0211 | 0.7695 |
| 0.0012 | 3.82 | 800 | 0.0207 | 0.7718 |
| 0.0011 | 4.77 | 1000 | 0.0214 | 0.7723 |
| 0.0011 | 5.73 | 1200 | 0.0209 | 0.7753 |
| 0.001 | 6.68 | 1400 | 0.0210 | 0.7710 |
| 0.001 | 7.64 | 1600 | 0.0204 | 0.7721 |
| 0.0009 | 8.59 | 1800 | 0.0217 | 0.7731 |
| 0.0009 | 9.55 | 2000 | 0.0216 | 0.7692 |
| 0.0009 | 10.5 | 2200 | 0.0206 | 0.7724 |
| 0.0009 | 11.46 | 2400 | 0.0213 | 0.7734 |
| 0.0009 | 12.41 | 2600 | 0.0208 | 0.7725 |
| 0.0009 | 13.37 | 2800 | 0.0207 | 0.7760 |
| 0.0008 | 14.32 | 3000 | 0.0209 | 0.7724 |
| 0.0008 | 15.27 | 3200 | 0.0208 | 0.7729 |
| 0.0007 | 16.23 | 3400 | 0.0212 | 0.7732 |
| 0.0007 | 17.18 | 3600 | 0.0209 | 0.7746 |
| 0.0007 | 18.14 | 3800 | 0.0209 | 0.7745 |
| 0.0007 | 19.09 | 4000 | 0.0202 | 0.7759 |
| 0.0007 | 20.05 | 4200 | 0.0206 | 0.7750 |
| 0.0007 | 21.0 | 4400 | 0.0209 | 0.7736 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
LoneStriker/LWM-Text-Chat-128K-AWQ | LoneStriker | 2024-02-17T02:54:18Z | 62 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
]
| text-generation | 2024-02-17T02:52:37Z | ---
inference: false
---
<br>
<br>
# LWM-Text-Chat-128K Model Card
## Model details
**Model type:**
LWM-Text-Chat-128K is an open-source model trained from LLaMA-2 on a filtered subset of the Books3 data. It is an auto-regressive language model, based on the transformer architecture.
**Model date:**
LWM-Text-Chat-128K was trained in December 2023.
**Paper or resources for more information:**
https://largeworldmodel.github.io/
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
**Where to send questions or comments about the model:**
https://github.com/LargeWorldModel/lwm/issues
## Training dataset
- 92K subset of Books3 documents with 100K to 200K tokens |
andyleetw/NeuralPipe-7B-slerp | andyleetw | 2024-02-17T02:51:23Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"OpenPipe/mistral-ft-optimized-1218",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:OpenPipe/mistral-ft-optimized-1218",
"base_model:merge:OpenPipe/mistral-ft-optimized-1218",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-17T02:46:52Z | ---
tags:
- merge
- mergekit
- lazymergekit
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
base_model:
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
---
# NeuralPipe-7B-slerp
NeuralPipe-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: OpenPipe/mistral-ft-optimized-1218
layer_range: [0, 32]
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "andyleetw/NeuralPipe-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Shijia/furina_kin_loss_5e-06 | Shijia | 2024-02-17T02:41:01Z | 92 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:yihongLiu/furina",
"base_model:finetune:yihongLiu/furina",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-17T02:39:55Z | ---
base_model: yihongLiu/furina
tags:
- generated_from_trainer
model-index:
- name: furina_kin_loss_5e-06
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# furina_kin_loss_5e-06
This model is a fine-tuned version of [yihongLiu/furina](https://huggingface.co/yihongLiu/furina) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0215
- Spearman Corr: 0.7692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 0.89 | 200 | 0.0220 | 0.7641 |
| No log | 1.78 | 400 | 0.0209 | 0.7696 |
| 0.0013 | 2.67 | 600 | 0.0213 | 0.7674 |
| 0.0013 | 3.56 | 800 | 0.0216 | 0.7691 |
| 0.0013 | 4.45 | 1000 | 0.0217 | 0.7698 |
| 0.0013 | 5.35 | 1200 | 0.0218 | 0.7681 |
| 0.002 | 6.24 | 1400 | 0.0219 | 0.7647 |
| 0.002 | 7.13 | 1600 | 0.0218 | 0.7663 |
| 0.002 | 8.02 | 1800 | 0.0210 | 0.7675 |
| 0.002 | 8.91 | 2000 | 0.0215 | 0.7692 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
pravsels/deepseek-coder-6.7b-instruct-finetuned-manimation | pravsels | 2024-02-17T02:39:29Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:deepseek-ai/deepseek-coder-6.7b-instruct",
"base_model:finetune:deepseek-ai/deepseek-coder-6.7b-instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-01-20T21:43:19Z | ---
license: other
base_model: deepseek-ai/deepseek-coder-6.7b-instruct
tags:
- generated_from_trainer
model-index:
- name: deepseek-coder-6.7b-instruct-finetuned-manimation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deepseek-coder-6.7b-instruct-finetuned-manimation
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1297
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.98 | 35 | 1.1468 |
| No log | 1.99 | 71 | 1.1349 |
| No log | 2.95 | 105 | 1.1297 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
Shijia/xlm-roberta-base_pan_loss_5e-06 | Shijia | 2024-02-17T02:34:58Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-17T02:34:10Z | ---
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base_pan_loss_5e-06
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base_pan_loss_5e-06
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0245
- Spearman Corr: 0.7682
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 0.85 | 200 | 0.0232 | 0.7677 |
| No log | 1.69 | 400 | 0.0279 | 0.7686 |
| 0.0011 | 2.54 | 600 | 0.0229 | 0.7636 |
| 0.0011 | 3.38 | 800 | 0.0240 | 0.7676 |
| 0.0009 | 4.23 | 1000 | 0.0237 | 0.7664 |
| 0.0009 | 5.07 | 1200 | 0.0239 | 0.7677 |
| 0.0009 | 5.92 | 1400 | 0.0236 | 0.7678 |
| 0.0007 | 6.77 | 1600 | 0.0248 | 0.7646 |
| 0.0007 | 7.61 | 1800 | 0.0231 | 0.7647 |
| 0.0006 | 8.46 | 2000 | 0.0256 | 0.7671 |
| 0.0006 | 9.3 | 2200 | 0.0245 | 0.7682 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
Shijia/xlm-roberta-base_pan_loss_2e-05 | Shijia | 2024-02-17T02:31:47Z | 91 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-17T02:31:05Z | ---
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base_pan_loss_2e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base_pan_loss_2e-05
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0216
- Spearman Corr: 0.7776
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 0.85 | 200 | 0.0210 | 0.7733 |
| No log | 1.69 | 400 | 0.0215 | 0.7798 |
| 0.0009 | 2.54 | 600 | 0.0219 | 0.7770 |
| 0.0009 | 3.38 | 800 | 0.0212 | 0.7807 |
| 0.0006 | 4.23 | 1000 | 0.0224 | 0.7806 |
| 0.0006 | 5.07 | 1200 | 0.0210 | 0.7800 |
| 0.0006 | 5.92 | 1400 | 0.0208 | 0.7799 |
| 0.0004 | 6.77 | 1600 | 0.0214 | 0.7793 |
| 0.0004 | 7.61 | 1800 | 0.0216 | 0.7795 |
| 0.0003 | 8.46 | 2000 | 0.0207 | 0.7819 |
| 0.0003 | 9.3 | 2200 | 0.0209 | 0.7826 |
| 0.0004 | 10.15 | 2400 | 0.0209 | 0.7793 |
| 0.0004 | 10.99 | 2600 | 0.0207 | 0.7804 |
| 0.0004 | 11.84 | 2800 | 0.0210 | 0.7808 |
| 0.0004 | 12.68 | 3000 | 0.0216 | 0.7776 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
Shijia/furina_kin_corr_5e-06 | Shijia | 2024-02-17T02:31:41Z | 100 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:yihongLiu/furina",
"base_model:finetune:yihongLiu/furina",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-17T02:30:39Z | ---
base_model: yihongLiu/furina
tags:
- generated_from_trainer
model-index:
- name: furina_kin_corr_5e-06
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# furina_kin_corr_5e-06
This model is a fine-tuned version of [yihongLiu/furina](https://huggingface.co/yihongLiu/furina) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0215
- Spearman Corr: 0.7702
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 0.89 | 200 | 0.0217 | 0.7655 |
| No log | 1.78 | 400 | 0.0210 | 0.7692 |
| 0.0023 | 2.67 | 600 | 0.0215 | 0.7681 |
| 0.0023 | 3.56 | 800 | 0.0214 | 0.7704 |
| 0.0022 | 4.45 | 1000 | 0.0216 | 0.7710 |
| 0.0022 | 5.35 | 1200 | 0.0215 | 0.7693 |
| 0.0021 | 6.24 | 1400 | 0.0217 | 0.7670 |
| 0.0021 | 7.13 | 1600 | 0.0217 | 0.7665 |
| 0.002 | 8.02 | 1800 | 0.0209 | 0.7673 |
| 0.002 | 8.91 | 2000 | 0.0215 | 0.7694 |
| 0.002 | 9.8 | 2200 | 0.0212 | 0.7709 |
| 0.002 | 10.69 | 2400 | 0.0218 | 0.7674 |
| 0.002 | 11.58 | 2600 | 0.0215 | 0.7702 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
Shijia/furina_esp_loss_2e-05 | Shijia | 2024-02-17T02:27:42Z | 91 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:yihongLiu/furina",
"base_model:finetune:yihongLiu/furina",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-17T02:26:37Z | ---
base_model: yihongLiu/furina
tags:
- generated_from_trainer
model-index:
- name: furina_esp_loss_2e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# furina_esp_loss_2e-05
This model is a fine-tuned version of [yihongLiu/furina](https://huggingface.co/yihongLiu/furina) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0213
- Spearman Corr: 0.7685
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 0.94 | 200 | 0.0214 | 0.7716 |
| No log | 1.89 | 400 | 0.0230 | 0.7712 |
| 0.0009 | 2.83 | 600 | 0.0204 | 0.7729 |
| 0.0009 | 3.77 | 800 | 0.0224 | 0.7691 |
| 0.001 | 4.72 | 1000 | 0.0215 | 0.7706 |
| 0.001 | 5.66 | 1200 | 0.0212 | 0.7723 |
| 0.0011 | 6.6 | 1400 | 0.0217 | 0.7707 |
| 0.0011 | 7.55 | 1600 | 0.0211 | 0.7724 |
| 0.001 | 8.49 | 1800 | 0.0213 | 0.7716 |
| 0.001 | 9.43 | 2000 | 0.0208 | 0.7702 |
| 0.0009 | 10.38 | 2200 | 0.0213 | 0.7685 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
Shijia/xlm-roberta-base_pan_corr_5e-06 | Shijia | 2024-02-17T02:25:46Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-17T02:25:02Z | ---
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base_pan_corr_5e-06
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base_pan_corr_5e-06
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0244
- Spearman Corr: 0.7659
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 0.85 | 200 | 0.0258 | 0.7651 |
| No log | 1.69 | 400 | 0.0241 | 0.7668 |
| 0.0007 | 2.54 | 600 | 0.0254 | 0.7642 |
| 0.0007 | 3.38 | 800 | 0.0249 | 0.7668 |
| 0.0007 | 4.23 | 1000 | 0.0247 | 0.7614 |
| 0.0007 | 5.07 | 1200 | 0.0234 | 0.7675 |
| 0.0007 | 5.92 | 1400 | 0.0249 | 0.7660 |
| 0.0007 | 6.77 | 1600 | 0.0241 | 0.7653 |
| 0.0007 | 7.61 | 1800 | 0.0235 | 0.7663 |
| 0.0007 | 8.46 | 2000 | 0.0255 | 0.7692 |
| 0.0007 | 9.3 | 2200 | 0.0241 | 0.7680 |
| 0.0013 | 10.15 | 2400 | 0.0239 | 0.7660 |
| 0.0013 | 10.99 | 2600 | 0.0239 | 0.7655 |
| 0.0013 | 11.84 | 2800 | 0.0241 | 0.7671 |
| 0.0017 | 12.68 | 3000 | 0.0241 | 0.7633 |
| 0.0017 | 13.53 | 3200 | 0.0249 | 0.7658 |
| 0.0016 | 14.38 | 3400 | 0.0244 | 0.7673 |
| 0.0016 | 15.22 | 3600 | 0.0244 | 0.7659 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
Shijia/furina_esp_corr_2e-05 | Shijia | 2024-02-17T02:13:25Z | 101 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:yihongLiu/furina",
"base_model:finetune:yihongLiu/furina",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-17T02:12:16Z | ---
base_model: yihongLiu/furina
tags:
- generated_from_trainer
model-index:
- name: furina_esp_corr_2e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# furina_esp_corr_2e-05
This model is a fine-tuned version of [yihongLiu/furina](https://huggingface.co/yihongLiu/furina) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0212
- Spearman Corr: 0.7706
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 0.94 | 200 | 0.0219 | 0.7728 |
| No log | 1.89 | 400 | 0.0216 | 0.7705 |
| 0.0013 | 2.83 | 600 | 0.0212 | 0.7740 |
| 0.0013 | 3.77 | 800 | 0.0234 | 0.7700 |
| 0.0012 | 4.72 | 1000 | 0.0214 | 0.7691 |
| 0.0012 | 5.66 | 1200 | 0.0212 | 0.7732 |
| 0.0011 | 6.6 | 1400 | 0.0213 | 0.7725 |
| 0.0011 | 7.55 | 1600 | 0.0211 | 0.7716 |
| 0.001 | 8.49 | 1800 | 0.0210 | 0.7724 |
| 0.001 | 9.43 | 2000 | 0.0207 | 0.7712 |
| 0.0009 | 10.38 | 2200 | 0.0212 | 0.7706 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
Shijia/xlm-roberta-base_kin_loss_2e-05 | Shijia | 2024-02-17T02:09:53Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-17T02:09:09Z | ---
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base_kin_loss_2e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base_kin_loss_2e-05
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0212
- Spearman Corr: 0.7789
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 0.89 | 200 | 0.0204 | 0.7803 |
| No log | 1.78 | 400 | 0.0212 | 0.7812 |
| 0.0005 | 2.67 | 600 | 0.0218 | 0.7836 |
| 0.0005 | 3.56 | 800 | 0.0212 | 0.7810 |
| 0.0004 | 4.45 | 1000 | 0.0212 | 0.7799 |
| 0.0004 | 5.35 | 1200 | 0.0214 | 0.7792 |
| 0.0005 | 6.24 | 1400 | 0.0208 | 0.7774 |
| 0.0005 | 7.13 | 1600 | 0.0210 | 0.7804 |
| 0.0005 | 8.02 | 1800 | 0.0212 | 0.7789 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
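A minimal inference sketch, assuming the checkpoint is published under the repo id above and uses a single-output regression head (which would match the Spearman-correlation evaluation); the example sentence pair is made up.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "Shijia/xlm-roberta-base_kin_loss_2e-05"  # repo id from this card
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

# Score a sentence pair; a single regression logit is assumed.
inputs = tokenizer("first sentence", "second sentence", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```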
|
Shijia/xlm-roberta-base_kin_corr_2e-05 | Shijia | 2024-02-17T02:02:31Z | 89 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-17T02:01:52Z | ---
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base_kin_corr_2e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base_kin_corr_2e-05
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0213
- Spearman Corr: 0.7790
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 0.89 | 200 | 0.0210 | 0.7800 |
| No log | 1.78 | 400 | 0.0210 | 0.7820 |
| 0.0006 | 2.67 | 600 | 0.0208 | 0.7800 |
| 0.0006 | 3.56 | 800 | 0.0209 | 0.7822 |
| 0.0006 | 4.45 | 1000 | 0.0212 | 0.7801 |
| 0.0006 | 5.35 | 1200 | 0.0214 | 0.7792 |
| 0.0005 | 6.24 | 1400 | 0.0213 | 0.7768 |
| 0.0005 | 7.13 | 1600 | 0.0211 | 0.7803 |
| 0.0005 | 8.02 | 1800 | 0.0210 | 0.7785 |
| 0.0005 | 8.91 | 2000 | 0.0213 | 0.7783 |
| 0.0005 | 9.8 | 2200 | 0.0211 | 0.7809 |
| 0.0005 | 10.69 | 2400 | 0.0213 | 0.7790 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
Shijia/xlm-roberta-base_kin_corr_5e-06 | Shijia | 2024-02-17T02:00:39Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-17T01:59:58Z | ---
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base_kin_corr_5e-06
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base_kin_corr_5e-06
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0247
- Spearman Corr: 0.7669
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 0.89 | 200 | 0.0246 | 0.7645 |
| No log | 1.78 | 400 | 0.0248 | 0.7676 |
| 0.002 | 2.67 | 600 | 0.0237 | 0.7655 |
| 0.002 | 3.56 | 800 | 0.0243 | 0.7647 |
| 0.0019 | 4.45 | 1000 | 0.0244 | 0.7623 |
| 0.0019 | 5.35 | 1200 | 0.0233 | 0.7655 |
| 0.0018 | 6.24 | 1400 | 0.0247 | 0.7667 |
| 0.0018 | 7.13 | 1600 | 0.0244 | 0.7635 |
| 0.0018 | 8.02 | 1800 | 0.0249 | 0.7648 |
| 0.0018 | 8.91 | 2000 | 0.0247 | 0.7669 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
Shijia/furina_eng_loss_2e-05 | Shijia | 2024-02-17T01:59:06Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:yihongLiu/furina",
"base_model:finetune:yihongLiu/furina",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-17T01:58:09Z | ---
base_model: yihongLiu/furina
tags:
- generated_from_trainer
model-index:
- name: furina_eng_loss_2e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# furina_eng_loss_2e-05
This model is a fine-tuned version of [yihongLiu/furina](https://huggingface.co/yihongLiu/furina) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0202
- Spearman Corr: 0.7777
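As a hedged illustration of the metric above, the Spearman correlation between predicted and gold scores is commonly computed with SciPy; the arrays below are placeholders, not values from this run.
```python
from scipy.stats import spearmanr

# Placeholder predictions and gold labels for illustration only.
predictions = [0.1, 0.4, 0.35, 0.8]
references = [0.0, 0.5, 0.3, 0.9]

corr, _ = spearmanr(predictions, references)
print(f"Spearman Corr: {corr:.4f}")
```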
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 1.33 | 200 | 0.0211 | 0.7732 |
| 0.0009 | 2.66 | 400 | 0.0215 | 0.7773 |
| 0.0008 | 3.99 | 600 | 0.0209 | 0.7752 |
| 0.0008 | 5.32 | 800 | 0.0197 | 0.7734 |
| 0.0007 | 6.64 | 1000 | 0.0211 | 0.7735 |
| 0.0006 | 7.97 | 1200 | 0.0208 | 0.7751 |
| 0.0006 | 9.3 | 1400 | 0.0203 | 0.7789 |
| 0.0008 | 10.63 | 1600 | 0.0200 | 0.7797 |
| 0.001 | 11.96 | 1800 | 0.0207 | 0.7734 |
| 0.001 | 13.29 | 2000 | 0.0203 | 0.7756 |
| 0.0009 | 14.62 | 2200 | 0.0202 | 0.7745 |
| 0.0009 | 15.95 | 2400 | 0.0202 | 0.7777 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
bartowski/KunoichiLake-2x7b-exl2 | bartowski | 2024-02-17T01:50:30Z | 5 | 2 | null | [
"text-generation",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-02-17T01:22:41Z | ---
license: apache-2.0
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of KunoichiLake-2x7b
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.13">turboderp's ExLlamaV2 v0.0.13</a> for quantization.
<b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b>
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Original model: https://huggingface.co/macadeliccc/KunoichiLake-2x7b
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/KunoichiLake-2x7b-exl2/tree/8_0) | 8.0 | 8.0 | 13.7 GB | 15.1 GB | 17.2 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/KunoichiLake-2x7b-exl2/tree/6_5) | 6.5 | 8.0 | 11.5 GB | 12.9 GB | 15.0 GB | Near unquantized performance at vastly reduced size, **recommended**. |
| [5_0](https://huggingface.co/bartowski/KunoichiLake-2x7b-exl2/tree/5_0) | 5.0 | 6.0 | 9.3 GB | 10.7 GB | 12.8 GB | Slightly lower quality vs 6.5, great for 12gb cards with 16k context. |
| [4_25](https://huggingface.co/bartowski/KunoichiLake-2x7b-exl2/tree/4_25) | 4.25 | 6.0 | 8.2 GB | 9.6 GB | 11.7 GB | GPTQ equivalent bits per weight. |
| [3_5](https://huggingface.co/bartowski/KunoichiLake-2x7b-exl2/tree/3_5) | 3.5 | 6.0 | 7.0 GB | 8.4 GB | 10.5 GB | Lower quality, not recommended. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/KunoichiLake-2x7b-exl2 KunoichiLake-2x7b-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` branch (only useful if you just want the measurement.json) to a folder called `KunoichiLake-2x7b-exl2`:
```shell
mkdir KunoichiLake-2x7b-exl2
huggingface-cli download bartowski/KunoichiLake-2x7b-exl2 --local-dir KunoichiLake-2x7b-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir KunoichiLake-2x7b-exl2-6_5
huggingface-cli download bartowski/KunoichiLake-2x7b-exl2 --revision 6_5 --local-dir KunoichiLake-2x7b-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir KunoichiLake-2x7b-exl2-6.5
huggingface-cli download bartowski/KunoichiLake-2x7b-exl2 --revision 6_5 --local-dir KunoichiLake-2x7b-exl2-6.5 --local-dir-use-symlinks False
```
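Once a branch is downloaded, it can be loaded with the ExLlamaV2 Python API along the lines below. This is a hedged sketch based on the v0.0.13-era examples; the local directory name and sampler settings are assumptions.
```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Path to the locally downloaded branch (assumption: the 6_5 folder from above).
config = ExLlamaV2Config()
config.model_dir = "KunoichiLake-2x7b-exl2-6_5"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8  # assumed sampler setting

print(generator.generate_simple("Hello, my name is", settings, 128))
```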
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski |
Harmj0y/nemesis-reranker | Harmj0y | 2024-02-17T01:47:02Z | 165 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-17T01:44:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Den-sota/lab1_finetuning | Den-sota | 2024-02-17T01:44:14Z | 121 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2024-02-16T22:50:23Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: lab1_finetuning
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 52.88398487672078
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lab1_finetuning
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8556
- Bleu: 52.8840
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
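A minimal usage sketch, assuming the checkpoint is available under the repo id above; the input sentence is a placeholder.
```python
from transformers import pipeline

# English -> French translation with the fine-tuned checkpoint.
translator = pipeline("translation", model="Den-sota/lab1_finetuning")
print(translator("Default to expanded threads")[0]["translation_text"])
```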
|
Shijia/xlm-roberta-base_ind_corr_2e-05 | Shijia | 2024-02-17T01:44:14Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-17T01:43:28Z | ---
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base_ind_corr_2e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base_ind_corr_2e-05
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0213
- Spearman Corr: 0.7807
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 0.85 | 200 | 0.0222 | 0.7803 |
| No log | 1.69 | 400 | 0.0209 | 0.7788 |
| 0.0006 | 2.54 | 600 | 0.0208 | 0.7742 |
| 0.0006 | 3.38 | 800 | 0.0211 | 0.7780 |
| 0.0005 | 4.23 | 1000 | 0.0212 | 0.7815 |
| 0.0005 | 5.07 | 1200 | 0.0211 | 0.7822 |
| 0.0005 | 5.92 | 1400 | 0.0207 | 0.7796 |
| 0.0004 | 6.77 | 1600 | 0.0223 | 0.7791 |
| 0.0004 | 7.61 | 1800 | 0.0219 | 0.7789 |
| 0.0004 | 8.46 | 2000 | 0.0212 | 0.7802 |
| 0.0004 | 9.3 | 2200 | 0.0213 | 0.7821 |
| 0.0004 | 10.15 | 2400 | 0.0212 | 0.7789 |
| 0.0004 | 10.99 | 2600 | 0.0213 | 0.7786 |
| 0.0004 | 11.84 | 2800 | 0.0213 | 0.7807 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
Shijia/furina_eng_corr_2e-05 | Shijia | 2024-02-17T01:43:08Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:yihongLiu/furina",
"base_model:finetune:yihongLiu/furina",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-17T01:42:17Z | ---
base_model: yihongLiu/furina
tags:
- generated_from_trainer
model-index:
- name: furina_eng_corr_2e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# furina_eng_corr_2e-05
This model is a fine-tuned version of [yihongLiu/furina](https://huggingface.co/yihongLiu/furina) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0203
- Spearman Corr: 0.7758
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 1.33 | 200 | 0.0216 | 0.7729 |
| 0.0014 | 2.66 | 400 | 0.0212 | 0.7735 |
| 0.0013 | 3.99 | 600 | 0.0214 | 0.7754 |
| 0.0013 | 5.32 | 800 | 0.0215 | 0.7733 |
| 0.0012 | 6.64 | 1000 | 0.0211 | 0.7700 |
| 0.0012 | 7.97 | 1200 | 0.0203 | 0.7745 |
| 0.0012 | 9.3 | 1400 | 0.0204 | 0.7792 |
| 0.0011 | 10.63 | 1600 | 0.0199 | 0.7773 |
| 0.001 | 11.96 | 1800 | 0.0210 | 0.7735 |
| 0.001 | 13.29 | 2000 | 0.0204 | 0.7755 |
| 0.001 | 14.62 | 2200 | 0.0203 | 0.7734 |
| 0.0009 | 15.95 | 2400 | 0.0206 | 0.7752 |
| 0.0009 | 17.28 | 2600 | 0.0205 | 0.7729 |
| 0.0009 | 18.6 | 2800 | 0.0208 | 0.7732 |
| 0.0008 | 19.93 | 3000 | 0.0203 | 0.7758 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
mitchyAI/garammchy | mitchyAI | 2024-02-17T01:39:26Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2024-02-17T01:37:35Z | ---
license: creativeml-openrail-m
---
|
Shijia/xlm-roberta-base_ind_corr_5e-06 | Shijia | 2024-02-17T01:38:55Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-17T01:38:10Z | ---
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base_ind_corr_5e-06
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base_ind_corr_5e-06
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0245
- Spearman Corr: 0.7641
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 0.85 | 200 | 0.0236 | 0.7650 |
| No log | 1.69 | 400 | 0.0247 | 0.7661 |
| 0.0011 | 2.54 | 600 | 0.0257 | 0.7631 |
| 0.0011 | 3.38 | 800 | 0.0244 | 0.7624 |
| 0.001 | 4.23 | 1000 | 0.0235 | 0.7617 |
| 0.001 | 5.07 | 1200 | 0.0242 | 0.7668 |
| 0.001 | 5.92 | 1400 | 0.0245 | 0.7645 |
| 0.0011 | 6.77 | 1600 | 0.0242 | 0.7619 |
| 0.0011 | 7.61 | 1800 | 0.0232 | 0.7671 |
| 0.0013 | 8.46 | 2000 | 0.0257 | 0.7673 |
| 0.0013 | 9.3 | 2200 | 0.0242 | 0.7675 |
| 0.0019 | 10.15 | 2400 | 0.0243 | 0.7645 |
| 0.0019 | 10.99 | 2600 | 0.0241 | 0.7643 |
| 0.0019 | 11.84 | 2800 | 0.0246 | 0.7649 |
| 0.0019 | 12.68 | 3000 | 0.0248 | 0.7617 |
| 0.0019 | 13.53 | 3200 | 0.0250 | 0.7644 |
| 0.0018 | 14.38 | 3400 | 0.0247 | 0.7659 |
| 0.0018 | 15.22 | 3600 | 0.0249 | 0.7649 |
| 0.0018 | 16.07 | 3800 | 0.0245 | 0.7641 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
Shijia/furina_kin_loss_0.0001 | Shijia | 2024-02-17T01:36:49Z | 100 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:yihongLiu/furina",
"base_model:finetune:yihongLiu/furina",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-17T01:35:54Z | ---
base_model: yihongLiu/furina
tags:
- generated_from_trainer
model-index:
- name: furina_kin_loss_0.0001
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# furina_kin_loss_0.0001
This model is a fine-tuned version of [yihongLiu/furina](https://huggingface.co/yihongLiu/furina) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0473
- Spearman Corr: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 0.89 | 200 | 0.0462 | nan |
| No log | 1.78 | 400 | 0.0487 | nan |
| 0.0481 | 2.67 | 600 | 0.0482 | nan |
| 0.0481 | 3.56 | 800 | 0.0469 | nan |
| 0.0482 | 4.45 | 1000 | 0.0481 | nan |
| 0.0482 | 5.35 | 1200 | 0.0493 | nan |
| 0.0481 | 6.24 | 1400 | 0.0467 | nan |
| 0.0481 | 7.13 | 1600 | 0.0478 | nan |
| 0.0483 | 8.02 | 1800 | 0.0473 | nan |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
Shijia/xlm-roberta-base_hin_loss_2e-05 | Shijia | 2024-02-17T01:32:41Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-17T01:31:59Z | ---
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base_hin_loss_2e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base_hin_loss_2e-05
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0212
- Spearman Corr: 0.7807
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 0.85 | 200 | 0.0238 | 0.7825 |
| No log | 1.69 | 400 | 0.0213 | 0.7820 |
| 0.0007 | 2.54 | 600 | 0.0209 | 0.7744 |
| 0.0007 | 3.38 | 800 | 0.0208 | 0.7817 |
| 0.0007 | 4.23 | 1000 | 0.0211 | 0.7791 |
| 0.0007 | 5.07 | 1200 | 0.0207 | 0.7817 |
| 0.0007 | 5.92 | 1400 | 0.0212 | 0.7833 |
| 0.0005 | 6.77 | 1600 | 0.0210 | 0.7803 |
| 0.0005 | 7.61 | 1800 | 0.0207 | 0.7794 |
| 0.0004 | 8.46 | 2000 | 0.0212 | 0.7809 |
| 0.0004 | 9.3 | 2200 | 0.0212 | 0.7807 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
jeiku/Cookie_7B | jeiku | 2024-02-17T01:23:43Z | 55 | 3 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:jeiku/Rainbow_69_7B",
"base_model:merge:jeiku/Rainbow_69_7B",
"base_model:jeiku/SpaghettiOs_7B",
"base_model:merge:jeiku/SpaghettiOs_7B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-15T20:27:16Z | ---
base_model:
- jeiku/SpaghettiOs_7B
- jeiku/Rainbow_69_7B
library_name: transformers
tags:
- mergekit
- merge
license: other
---
# Cookie
A reasonably logical model with a few datasets thrown in to increase RP abilities. This is a good candidate for a balanced 7B model that can provide assistant functionality alongside roleplaying or romantic endeavors.

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [jeiku/SpaghettiOs_7B](https://huggingface.co/jeiku/SpaghettiOs_7B) as a base.
### Models Merged
The following models were included in the merge:
* [jeiku/Rainbow_69_7B](https://huggingface.co/jeiku/Rainbow_69_7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: dare_ties
base_model: jeiku/SpaghettiOs_7B
parameters:
normalize: true
models:
- model: jeiku/SpaghettiOs_7B
parameters:
weight: 1
- model: jeiku/Rainbow_69_7B
parameters:
weight: 1
dtype: float16
``` |
Shijia/xlm-roberta-base_hin_corr_2e-05 | Shijia | 2024-02-17T01:23:36Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-17T01:22:57Z | ---
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base_hin_corr_2e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base_hin_corr_2e-05
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0209
- Spearman Corr: 0.7808
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 0.85 | 200 | 0.0208 | 0.7783 |
| No log | 1.69 | 400 | 0.0214 | 0.7813 |
| 0.0006 | 2.54 | 600 | 0.0214 | 0.7775 |
| 0.0006 | 3.38 | 800 | 0.0211 | 0.7811 |
| 0.0006 | 4.23 | 1000 | 0.0208 | 0.7799 |
| 0.0006 | 5.07 | 1200 | 0.0213 | 0.7807 |
| 0.0006 | 5.92 | 1400 | 0.0218 | 0.7775 |
| 0.0006 | 6.77 | 1600 | 0.0206 | 0.7817 |
| 0.0006 | 7.61 | 1800 | 0.0213 | 0.7821 |
| 0.0005 | 8.46 | 2000 | 0.0213 | 0.7804 |
| 0.0005 | 9.3 | 2200 | 0.0218 | 0.7812 |
| 0.0004 | 10.15 | 2400 | 0.0215 | 0.7793 |
| 0.0004 | 10.99 | 2600 | 0.0215 | 0.7794 |
| 0.0004 | 11.84 | 2800 | 0.0212 | 0.7815 |
| 0.0004 | 12.68 | 3000 | 0.0221 | 0.7763 |
| 0.0004 | 13.53 | 3200 | 0.0215 | 0.7782 |
| 0.0004 | 14.38 | 3400 | 0.0209 | 0.7808 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
Shijia/furina_hau_corr_5e-06 | Shijia | 2024-02-17T01:21:35Z | 89 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:yihongLiu/furina",
"base_model:finetune:yihongLiu/furina",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-17T01:20:42Z | ---
base_model: yihongLiu/furina
tags:
- generated_from_trainer
model-index:
- name: furina_hau_corr_5e-06
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# furina_hau_corr_5e-06
This model is a fine-tuned version of [yihongLiu/furina](https://huggingface.co/yihongLiu/furina) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0216
- Spearman Corr: 0.7684
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 0.95 | 200 | 0.0214 | 0.7675 |
| No log | 1.91 | 400 | 0.0218 | 0.7668 |
| 0.0028 | 2.86 | 600 | 0.0223 | 0.7686 |
| 0.0028 | 3.82 | 800 | 0.0231 | 0.7674 |
| 0.0028 | 4.77 | 1000 | 0.0226 | 0.7669 |
| 0.0028 | 5.73 | 1200 | 0.0216 | 0.7665 |
| 0.0026 | 6.68 | 1400 | 0.0223 | 0.7697 |
| 0.0026 | 7.64 | 1600 | 0.0216 | 0.7655 |
| 0.0025 | 8.59 | 1800 | 0.0215 | 0.7686 |
| 0.0025 | 9.55 | 2000 | 0.0216 | 0.7681 |
| 0.0024 | 10.5 | 2200 | 0.0212 | 0.7672 |
| 0.0024 | 11.46 | 2400 | 0.0219 | 0.7669 |
| 0.0024 | 12.41 | 2600 | 0.0220 | 0.7657 |
| 0.0024 | 13.37 | 2800 | 0.0217 | 0.7654 |
| 0.0023 | 14.32 | 3000 | 0.0216 | 0.7684 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
wrchen1/testtesttest | wrchen1 | 2024-02-17T01:20:04Z | 120 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2024-02-17T01:19:01Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- generated_from_trainer
datasets:
- kde4
model-index:
- name: testtesttest
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# testtesttest
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
|
Shijia/furina_ind_loss_0.0001 | Shijia | 2024-02-17T01:18:53Z | 89 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:yihongLiu/furina",
"base_model:finetune:yihongLiu/furina",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-17T01:17:47Z | ---
base_model: yihongLiu/furina
tags:
- generated_from_trainer
model-index:
- name: furina_ind_loss_0.0001
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# furina_ind_loss_0.0001
This model is a fine-tuned version of [yihongLiu/furina](https://huggingface.co/yihongLiu/furina) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0465
- Spearman Corr: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 0.85 | 200 | 0.0466 | nan |
| No log | 1.69 | 400 | 0.0475 | nan |
| 0.047 | 2.54 | 600 | 0.0480 | nan |
| 0.047 | 3.38 | 800 | 0.0469 | nan |
| 0.0475 | 4.23 | 1000 | 0.0471 | nan |
| 0.0475 | 5.07 | 1200 | 0.0474 | nan |
| 0.0475 | 5.92 | 1400 | 0.0476 | nan |
| 0.0476 | 6.77 | 1600 | 0.0476 | nan |
| 0.0476 | 7.61 | 1800 | 0.0465 | nan |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
wrchen1/lab1_ran123 | wrchen1 | 2024-02-17T01:10:12Z | 119 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2024-02-17T01:07:29Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- generated_from_trainer
datasets:
- kde4
model-index:
- name: lab1_ran123
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lab1_ran123
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
|