Dataset columns:

| Column | Type |
|---|---|
| pipeline_tag | stringclasses (48 values) |
| library_name | stringclasses (205 values) |
| text | stringlengths (0–18.3M) |
| metadata | stringlengths (2–1.07B) |
| id | stringlengths (5–122) |
| last_modified | null |
| tags | sequencelengths (1–1.84k) |
| sha | null |
| created_at | stringlengths (25–25) |
null | null | {} | walterg777/oxford-pets-vit-with-kd | null | [
"region:us"
] | null | 2024-04-30T06:43:56+00:00 |
|
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# question_answering
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6818
## Model description
More information needed
## Intended uses & limitations
More information needed
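Pending more details from the author, here is a minimal usage sketch. It assumes the standard 🤗 `question-answering` pipeline interface; the question and context below are illustrative placeholders, not taken from the author.
```python
from transformers import pipeline

# Hypothetical example inputs; replace with your own question and context.
qa = pipeline("question-answering", model="madanagrawal/question_answering")
result = qa(
    question="Which base model was fine-tuned?",
    context="This model is a fine-tuned version of distilbert-base-uncased.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```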
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
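The settings above map onto a standard 🤗 `TrainingArguments` configuration; the snippet below is a hedged reconstruction for reference, not the author's actual training script (the `output_dir` is illustrative, and the Adam betas/epsilon are the library defaults).
```python
from transformers import TrainingArguments

# Reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="question_answering",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```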
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.3267 |
| 2.6765 | 2.0 | 500 | 1.7452 |
| 2.6765 | 3.0 | 750 | 1.6818 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "question_answering", "results": []}]} | madanagrawal/question_answering | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T06:44:02+00:00 |
text-generation | transformers |
# The AWQ version
This is the AWQ version of [MohamedRashad/Arabic-Orpo-Llama-3-8B-Instruct](https://huggingface.co/MohamedRashad/Arabic-Orpo-Llama-3-8B-Instruct) for the enthusiasts
<center>
<img src="https://cdn-uploads.huggingface.co/production/uploads/6116d0584ef9fdfbf45dc4d9/4VqGvuqtWgLOTavTV861j.png">
</center>
## How to use, you ask?
First, update your packages:
```shell
pip3 install --upgrade autoawq transformers
```
Now, copy and run:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_name_or_path = "MohamedRashad/Arabic-Orpo-Llama-3-8B-Instruct-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    attn_implementation="flash_attention_2",  # disable if you have problems with flash attention 2
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    device_map="auto",
)

# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "مرحبا"},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

generation_params = {
    "do_sample": True,
    "temperature": 0.6,
    "top_p": 0.9,
    "top_k": 40,
    "max_new_tokens": 1024,
    "eos_token_id": terminators,
}

# Generate streamed output, visible one token at a time
generation_output = model.generate(
    input_ids,
    streamer=streamer,
    **generation_params,
)

# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
    input_ids,
    **generation_params,
)

# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)

# Inference is also possible via Transformers' pipeline
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    **generation_params,
)
# Build the prompt string for the pipeline from the chat messages
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipe_output = pipe(prompt)[0]["generated_text"]
print("pipeline output: ", pipe_output)
```
| {"language": ["ar", "en"], "license": "llama3", "library_name": "transformers", "model_name": "Arabic ORPO 8B chat", "pipeline_tag": "text-generation", "model_type": "llama3", "quantized_by": "MohamedRashad"} | MohamedRashad/Arabic-Orpo-Llama-3-8B-Instruct-AWQ | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ar",
"en",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-30T06:44:03+00:00 |
audio-classification | speechbrain |
# VoxLingua107 ECAPA-TDNN Spoken Language Identification Model
## Model description
This is a spoken language recognition model trained on the VoxLingua107 dataset using SpeechBrain.
The model uses the ECAPA-TDNN architecture that has previously been used for speaker recognition. However, it uses
more fully connected hidden layers after the embedding layer, and cross-entropy loss was used for training.
We observed that this improved the performance of extracted utterance embeddings for downstream tasks.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *classify_file* if needed.
The model can classify a speech utterance according to the language spoken.
It covers 107 different languages (
Abkhazian,
Afrikaans,
Amharic,
Arabic,
Assamese,
Azerbaijani,
Bashkir,
Belarusian,
Bulgarian,
Bengali,
Tibetan,
Breton,
Bosnian,
Catalan,
Cebuano,
Czech,
Welsh,
Danish,
German,
Greek,
English,
Esperanto,
Spanish,
Estonian,
Basque,
Persian,
Finnish,
Faroese,
French,
Galician,
Guarani,
Gujarati,
Manx,
Hausa,
Hawaiian,
Hindi,
Croatian,
Haitian,
Hungarian,
Armenian,
Interlingua,
Indonesian,
Icelandic,
Italian,
Hebrew,
Japanese,
Javanese,
Georgian,
Kazakh,
Central Khmer,
Kannada,
Korean,
Latin,
Luxembourgish,
Lingala,
Lao,
Lithuanian,
Latvian,
Malagasy,
Maori,
Macedonian,
Malayalam,
Mongolian,
Marathi,
Malay,
Maltese,
Burmese,
Nepali,
Dutch,
Norwegian Nynorsk,
Norwegian,
Occitan,
Panjabi,
Polish,
Pushto,
Portuguese,
Romanian,
Russian,
Sanskrit,
Scots,
Sindhi,
Sinhala,
Slovak,
Slovenian,
Shona,
Somali,
Albanian,
Serbian,
Sundanese,
Swedish,
Swahili,
Tamil,
Telugu,
Tajik,
Thai,
Turkmen,
Tagalog,
Turkish,
Tatar,
Ukrainian,
Urdu,
Uzbek,
Vietnamese,
Waray,
Yiddish,
Yoruba,
Mandarin Chinese).
## Intended uses & limitations
The model has two uses:
- use 'as is' for spoken language recognition
- use as an utterance-level feature (embedding) extractor, for creating a dedicated language ID model on your own data
The model is trained on automatically collected YouTube data. For more
information about the dataset, see [here](http://bark.phon.ioc.ee/voxlingua107/).
#### How to use
```bash
pip install git+https://github.com/speechbrain/speechbrain.git@develop
```
```python
import torchaudio
from speechbrain.inference.classifiers import EncoderClassifier
language_id = EncoderClassifier.from_hparams(source="speechbrain/lang-id-voxlingua107-ecapa", savedir="tmp")
# Download a Thai language sample from Omniglot and convert it to a suitable form
signal = language_id.load_audio("speechbrain/lang-id-voxlingua107-ecapa/udhr_th.wav")
prediction = language_id.classify_batch(signal)
print(prediction)
# (tensor([[-2.8646e+01, -3.0346e+01, -2.0748e+01, -2.9562e+01, -2.2187e+01,
# -3.2668e+01, -3.6677e+01, -3.3573e+01, -3.2545e+01, -2.4365e+01,
# -2.4688e+01, -3.1171e+01, -2.7743e+01, -2.9918e+01, -2.4770e+01,
# -3.2250e+01, -2.4727e+01, -2.6087e+01, -2.1870e+01, -3.2821e+01,
# -2.2128e+01, -2.2822e+01, -3.0888e+01, -3.3564e+01, -2.9906e+01,
# -2.2392e+01, -2.5573e+01, -2.6443e+01, -3.2429e+01, -3.2652e+01,
# -3.0030e+01, -2.4607e+01, -2.2967e+01, -2.4396e+01, -2.8578e+01,
# -2.5153e+01, -2.8475e+01, -2.6409e+01, -2.5230e+01, -2.7957e+01,
# -2.6298e+01, -2.3609e+01, -2.5863e+01, -2.8225e+01, -2.7225e+01,
# -3.0486e+01, -2.1185e+01, -2.7938e+01, -3.3155e+01, -1.9076e+01,
# -2.9181e+01, -2.2160e+01, -1.8352e+01, -2.5866e+01, -3.3636e+01,
# -4.2016e+00, -3.1581e+01, -3.1894e+01, -2.7834e+01, -2.5429e+01,
# -3.2235e+01, -3.2280e+01, -2.8786e+01, -2.3366e+01, -2.6047e+01,
# -2.2075e+01, -2.3770e+01, -2.2518e+01, -2.8101e+01, -2.5745e+01,
# -2.6441e+01, -2.9822e+01, -2.7109e+01, -3.0225e+01, -2.4566e+01,
# -2.9268e+01, -2.7651e+01, -3.4221e+01, -2.9026e+01, -2.6009e+01,
# -3.1968e+01, -3.1747e+01, -2.8156e+01, -2.9025e+01, -2.7756e+01,
# -2.8052e+01, -2.9341e+01, -2.8806e+01, -2.1636e+01, -2.3992e+01,
# -2.3794e+01, -3.3743e+01, -2.8332e+01, -2.7465e+01, -1.5085e-02,
# -2.9094e+01, -2.1444e+01, -2.9780e+01, -3.6046e+01, -3.7401e+01,
# -3.0888e+01, -3.3172e+01, -1.8931e+01, -2.2679e+01, -3.0225e+01,
# -2.4995e+01, -2.1028e+01]]), tensor([-0.0151]), tensor([94]), ['th'])
# The scores in the prediction[0] tensor can be interpreted as log-likelihoods that
# the given utterance belongs to the given language (i.e., the larger the better)
# The linear-scale likelihood can be retrieved using the following:
print(prediction[1].exp())
# tensor([0.9850])
# The identified language ISO code is given in prediction[3]
print(prediction[3])
# ['th: Thai']
# Alternatively, use the utterance embedding extractor:
emb = language_id.encode_batch(signal)
print(emb.shape)
# torch.Size([1, 1, 256])
```
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
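For example:
```python
from speechbrain.inference.classifiers import EncoderClassifier

language_id = EncoderClassifier.from_hparams(
    source="speechbrain/lang-id-voxlingua107-ecapa",
    savedir="tmp",
    run_opts={"device": "cuda"},  # run inference on the GPU
)
```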
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *classify_file* if needed. Make sure your input tensor is compliant with the expected sampling rate if you use *encode_batch* and *classify_batch*.
#### Limitations and bias
Since the model is trained on VoxLingua107, it has many limitations and biases, some of which are:
- Its accuracy on smaller languages is probably quite limited
- It probably works worse on female speech than on male speech (because YouTube data includes much more male speech)
- Based on subjective experiments, it doesn't work well on speech with a foreign accent
- It probably doesn't work well on children's speech or on speech from persons with speech disorders
## Training data
The model is trained on [VoxLingua107](http://bark.phon.ioc.ee/voxlingua107/).
VoxLingua107 is a speech dataset for training spoken language identification models.
The dataset consists of short speech segments automatically extracted from YouTube videos and labeled according to the language of the video title and description, with some post-processing steps to filter out false positives.
VoxLingua107 contains data for 107 languages. The total amount of speech in the training set is 6628 hours.
The average amount of data per language is 62 hours. However, the real amount per language varies a lot. There is also a separate development set containing 1609 speech segments from 33 languages, validated by at least two volunteers to really contain the given language.
## Training procedure
See the [SpeechBrain recipe](https://github.com/speechbrain/speechbrain/tree/voxlingua107/recipes/VoxLingua107/lang_id).
## Evaluation results
Error rate: 6.7% on the VoxLingua107 development dataset
#### Referencing SpeechBrain
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
### Referencing VoxLingua107
```bibtex
@inproceedings{valk2021slt,
title={{VoxLingua107}: a Dataset for Spoken Language Recognition},
author={J{\"o}rgen Valk and Tanel Alum{\"a}e},
booktitle={Proc. IEEE SLT Workshop},
year={2021},
}
```
#### About SpeechBrain
SpeechBrain is an open-source and all-in-one speech toolkit. It is designed to be simple, extremely flexible, and user-friendly. Competitive or state-of-the-art performance is obtained in various domains.
Website: https://speechbrain.github.io/
GitHub: https://github.com/speechbrain/speechbrain
| {"language": ["multilingual", "ab", "af", "am", "ar", "as", "az", "ba", "be", "bg", "bi", "bo", "br", "bs", "ca", "ceb", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fo", "fr", "gl", "gn", "gu", "gv", "ha", "haw", "hi", "hr", "ht", "hu", "hy", "ia", "id", "is", "it", "he", "ja", "jv", "ka", "kk", "km", "kn", "ko", "la", "lm", "ln", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "nn", false, "oc", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sco", "sd", "si", "sk", "sl", "sn", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "tg", "th", "tk", "tl", "tr", "tt", "uk", "ud", "uz", "vi", "war", "yi", "yo", "zh"], "license": "apache-2.0", "tags": ["audio-classification", "speechbrain", "embeddings", "Language", "Identification", "pytorch", "ECAPA-TDNN", "TDNN", "VoxLingua107"], "datasets": ["VoxLingua107"], "metrics": ["Accuracy"], "widget": [{"example_title": "English Sample", "src": "https://cdn-media.huggingface.co/speech_samples/LibriSpeech_61-70968-0000.flac"}]} | botdevringring/lang-id-voxlingua107-ecapa | null | [
"speechbrain",
"audio-classification",
"embeddings",
"Language",
"Identification",
"pytorch",
"ECAPA-TDNN",
"TDNN",
"VoxLingua107",
"multilingual",
"ab",
"af",
"am",
"ar",
"as",
"az",
"ba",
"be",
"bg",
"bi",
"bo",
"br",
"bs",
"ca",
"ceb",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fo",
"fr",
"gl",
"gn",
"gu",
"gv",
"ha",
"haw",
"hi",
"hr",
"ht",
"hu",
"hy",
"ia",
"id",
"is",
"it",
"he",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"la",
"lm",
"ln",
"lo",
"lt",
"lv",
"mg",
"mi",
"mk",
"ml",
"mn",
"mr",
"ms",
"mt",
"my",
"ne",
"nl",
"nn",
"no",
"oc",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sco",
"sd",
"si",
"sk",
"sl",
"sn",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"tg",
"th",
"tk",
"tl",
"tr",
"tt",
"uk",
"ud",
"uz",
"vi",
"war",
"yi",
"yo",
"zh",
"dataset:VoxLingua107",
"arxiv:2106.04624",
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T06:44:06+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | abc88767/model15 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T06:44:08+00:00 |
text2text-generation | transformers | {} | sataayu/molt5-augmented-default-1100-small-smiles2caption | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T06:44:25+00:00 |
|
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - embracellm/sushi06_LoRA
<Gallery />
## Model description
These are embracellm/sushi06_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of sushi` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/embracellm/sushi06_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
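Until the snippet above is filled in, here is a hedged sketch using the standard `diffusers` LoRA-loading API. The base model and trigger words come from this card; the dtype, device, and output filename are illustrative assumptions.
```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the SDXL base model and apply the LoRA weights from this repository.
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("embracellm/sushi06_LoRA")

# "a photo of sushi" is the trigger phrase listed above.
image = pipe(prompt="a photo of sushi").images[0]
image.save("sushi.png")
```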
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "text-to-image", "diffusers-training", "diffusers", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "text-to-image", "diffusers-training", "diffusers", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "a photo of sushi", "widget": []} | embracellm/sushi06_LoRA | null | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"dora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | null | 2024-04-30T06:44:29+00:00 |
text-classification | transformers |
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.34203216433525085
f1_macro: 0.9457020850649197
f1_micro: 0.946067415730337
f1_weighted: 0.9461015789750475
precision_macro: 0.9447370569809594
precision_micro: 0.946067415730337
precision_weighted: 0.9466487598452521
recall_macro: 0.9472065189712249
recall_micro: 0.946067415730337
recall_weighted: 0.946067415730337
accuracy: 0.946067415730337
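## Usage
A minimal usage sketch, assuming the standard 🤗 `text-classification` pipeline interface (the example text mirrors this card's widget example):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="NawinCom/autotrain-7ejr4-3wbhb")
print(classifier("I love AutoTrain"))  # [{'label': ..., 'score': ...}]
```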
| {"tags": ["autotrain", "text-classification"], "datasets": ["autotrain-7ejr4-3wbhb/autotrain-data"], "widget": [{"text": "I love AutoTrain"}]} | NawinCom/autotrain-7ejr4-3wbhb | null | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"autotrain",
"dataset:autotrain-7ejr4-3wbhb/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T06:44:49+00:00 |
text-generation | transformers | This model is a version of mistralai/Mistral-7B-v0.1 that has been fine-tuned on our in-house custom data.
Train spec:
We utilized an A100x4 * 1 setup to train our model,
with DeepSpeed, the HuggingFace TRL Trainer, and HuggingFace Accelerate. | {"language": ["ko"], "license": "cc-by-nc-4.0", "datasets": ["Custom_datasets"], "pipeline_tag": "text-generation", "base_model": "mistralai/Mistral-7B-v0.1"} | Alphacode-AI/AlphaMist7B-slr-v4 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"ko",
"dataset:Custom_datasets",
"base_model:mistralai/Mistral-7B-v0.1",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T06:45:30+00:00 |
null | null | {} | dimson15/sn25-4-1 | null | [
"region:us"
] | null | 2024-04-30T06:45:31+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3-seqsight_32768_512_43M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_EMP_H3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3075
- F1 Score: 0.8811
- Accuracy: 0.8811
## Model description
More information needed
## Intended uses & limitations
More information needed
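Until more details are added, here is a hedged sketch of loading this adapter on top of its base model with 🤗 PEFT. It assumes the base checkpoint can be loaded as a sequence-classification model; a custom architecture may additionally require `trust_remote_code=True`, which is used below as a precaution.
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_32768_512_43M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3-seqsight_32768_512_43M-L32_f"

# Assumption: the base model exposes a standard sequence-classification interface.
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForSequenceClassification.from_pretrained(base_id, trust_remote_code=True)
model = PeftModel.from_pretrained(base_model, adapter_id)
```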
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.4234 | 2.13 | 200 | 0.3424 | 0.8521 | 0.8524 |
| 0.2988 | 4.26 | 400 | 0.3195 | 0.8711 | 0.8711 |
| 0.2705 | 6.38 | 600 | 0.3480 | 0.8629 | 0.8631 |
| 0.2565 | 8.51 | 800 | 0.3229 | 0.8743 | 0.8744 |
| 0.243 | 10.64 | 1000 | 0.3370 | 0.8770 | 0.8771 |
| 0.2254 | 12.77 | 1200 | 0.3412 | 0.8750 | 0.8751 |
| 0.2151 | 14.89 | 1400 | 0.3951 | 0.8594 | 0.8597 |
| 0.2035 | 17.02 | 1600 | 0.3441 | 0.8791 | 0.8791 |
| 0.1934 | 19.15 | 1800 | 0.3769 | 0.8655 | 0.8657 |
| 0.1763 | 21.28 | 2000 | 0.3976 | 0.8730 | 0.8731 |
| 0.1728 | 23.4 | 2200 | 0.4589 | 0.8592 | 0.8597 |
| 0.1499 | 25.53 | 2400 | 0.4406 | 0.8703 | 0.8704 |
| 0.1466 | 27.66 | 2600 | 0.4950 | 0.8544 | 0.8550 |
| 0.1407 | 29.79 | 2800 | 0.5317 | 0.8543 | 0.8550 |
| 0.1267 | 31.91 | 3000 | 0.4777 | 0.8627 | 0.8631 |
| 0.1214 | 34.04 | 3200 | 0.5038 | 0.8547 | 0.8550 |
| 0.1121 | 36.17 | 3400 | 0.5701 | 0.8623 | 0.8631 |
| 0.1013 | 38.3 | 3600 | 0.5882 | 0.8492 | 0.8497 |
| 0.094 | 40.43 | 3800 | 0.6015 | 0.8544 | 0.8550 |
| 0.0839 | 42.55 | 4000 | 0.7460 | 0.8433 | 0.8444 |
| 0.0822 | 44.68 | 4200 | 0.6918 | 0.8383 | 0.8397 |
| 0.0786 | 46.81 | 4400 | 0.6802 | 0.8551 | 0.8557 |
| 0.0749 | 48.94 | 4600 | 0.7523 | 0.8405 | 0.8417 |
| 0.0627 | 51.06 | 4800 | 0.6662 | 0.8588 | 0.8591 |
| 0.0628 | 53.19 | 5000 | 0.7466 | 0.8573 | 0.8577 |
| 0.0572 | 55.32 | 5200 | 0.8095 | 0.8511 | 0.8517 |
| 0.0542 | 57.45 | 5400 | 0.7983 | 0.8492 | 0.8497 |
| 0.0495 | 59.57 | 5600 | 0.8882 | 0.8477 | 0.8484 |
| 0.0467 | 61.7 | 5800 | 0.7923 | 0.8527 | 0.8530 |
| 0.0453 | 63.83 | 6000 | 0.8642 | 0.8442 | 0.8450 |
| 0.0398 | 65.96 | 6200 | 0.9339 | 0.8408 | 0.8417 |
| 0.0407 | 68.09 | 6400 | 0.9011 | 0.8436 | 0.8444 |
| 0.0394 | 70.21 | 6600 | 0.8747 | 0.8498 | 0.8504 |
| 0.0363 | 72.34 | 6800 | 0.8441 | 0.8574 | 0.8577 |
| 0.0349 | 74.47 | 7000 | 0.8893 | 0.8459 | 0.8464 |
| 0.032 | 76.6 | 7200 | 0.8798 | 0.8549 | 0.8550 |
| 0.0352 | 78.72 | 7400 | 0.8617 | 0.8588 | 0.8591 |
| 0.0283 | 80.85 | 7600 | 0.8505 | 0.8590 | 0.8591 |
| 0.0307 | 82.98 | 7800 | 0.9578 | 0.8460 | 0.8464 |
| 0.0275 | 85.11 | 8000 | 0.9154 | 0.8514 | 0.8517 |
| 0.0304 | 87.23 | 8200 | 0.9107 | 0.8534 | 0.8537 |
| 0.0256 | 89.36 | 8400 | 0.9299 | 0.8540 | 0.8544 |
| 0.0254 | 91.49 | 8600 | 0.9893 | 0.8459 | 0.8464 |
| 0.022 | 93.62 | 8800 | 0.9983 | 0.8534 | 0.8537 |
| 0.0236 | 95.74 | 9000 | 0.9772 | 0.8513 | 0.8517 |
| 0.0198 | 97.87 | 9200 | 1.0070 | 0.8507 | 0.8510 |
| 0.0244 | 100.0 | 9400 | 0.9825 | 0.8527 | 0.8530 |
| 0.0202 | 102.13 | 9600 | 0.9848 | 0.8506 | 0.8510 |
| 0.0204 | 104.26 | 9800 | 1.0325 | 0.8499 | 0.8504 |
| 0.0212 | 106.38 | 10000 | 1.0237 | 0.8500 | 0.8504 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_EMP_H3-seqsight_32768_512_43M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3-seqsight_32768_512_43M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-04-30T06:46:15+00:00 |
null | null | {} | ironartisan1/llama2-7b-hf | null | [
"region:us"
] | null | 2024-04-30T06:46:45+00:00 |
|
null | null | {} | MahmutAtia/Turkcell-LLM-7b-v1.GGUF | null | [
"region:us"
] | null | 2024-04-30T06:47:02+00:00 |
|
null | null | {"license": "mit"} | newbienewbie/llama2 | null | [
"license:mit",
"region:us"
] | null | 2024-04-30T06:47:07+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4ac-seqsight_32768_512_43M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_EMP_H4ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5568
- F1 Score: 0.7302
- Accuracy: 0.7299
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6494 | 0.93 | 200 | 0.5994 | 0.6882 | 0.6880 |
| 0.5949 | 1.87 | 400 | 0.5857 | 0.7052 | 0.7056 |
| 0.5714 | 2.8 | 600 | 0.5656 | 0.7207 | 0.7205 |
| 0.5601 | 3.74 | 800 | 0.5623 | 0.7274 | 0.7273 |
| 0.5523 | 4.67 | 1000 | 0.5614 | 0.7313 | 0.7314 |
| 0.5455 | 5.61 | 1200 | 0.5629 | 0.7267 | 0.7273 |
| 0.5444 | 6.54 | 1400 | 0.5550 | 0.7301 | 0.7305 |
| 0.5339 | 7.48 | 1600 | 0.5490 | 0.7360 | 0.7358 |
| 0.5404 | 8.41 | 1800 | 0.5517 | 0.7348 | 0.7349 |
| 0.5358 | 9.35 | 2000 | 0.5593 | 0.7299 | 0.7305 |
| 0.5283 | 10.28 | 2200 | 0.5499 | 0.7368 | 0.7367 |
| 0.5335 | 11.21 | 2400 | 0.5521 | 0.7322 | 0.7326 |
| 0.5253 | 12.15 | 2600 | 0.5545 | 0.7360 | 0.7364 |
| 0.5262 | 13.08 | 2800 | 0.5572 | 0.7332 | 0.7337 |
| 0.5265 | 14.02 | 3000 | 0.5480 | 0.7372 | 0.7372 |
| 0.5241 | 14.95 | 3200 | 0.5501 | 0.7416 | 0.7416 |
| 0.5209 | 15.89 | 3400 | 0.5538 | 0.7364 | 0.7370 |
| 0.519 | 16.82 | 3600 | 0.5406 | 0.7430 | 0.7428 |
| 0.525 | 17.76 | 3800 | 0.5488 | 0.7412 | 0.7413 |
| 0.5204 | 18.69 | 4000 | 0.5406 | 0.7371 | 0.7370 |
| 0.5169 | 19.63 | 4200 | 0.5417 | 0.7428 | 0.7428 |
| 0.5191 | 20.56 | 4400 | 0.5373 | 0.7419 | 0.7416 |
| 0.517 | 21.5 | 4600 | 0.5523 | 0.7337 | 0.7346 |
| 0.5157 | 22.43 | 4800 | 0.5360 | 0.7461 | 0.7457 |
| 0.5139 | 23.36 | 5000 | 0.5473 | 0.7385 | 0.7387 |
| 0.5135 | 24.3 | 5200 | 0.5335 | 0.7454 | 0.7452 |
| 0.5145 | 25.23 | 5400 | 0.5362 | 0.7422 | 0.7419 |
| 0.515 | 26.17 | 5600 | 0.5359 | 0.7409 | 0.7408 |
| 0.5134 | 27.1 | 5800 | 0.5351 | 0.7442 | 0.7440 |
| 0.5076 | 28.04 | 6000 | 0.5365 | 0.7463 | 0.7460 |
| 0.5147 | 28.97 | 6200 | 0.5486 | 0.7368 | 0.7372 |
| 0.5115 | 29.91 | 6400 | 0.5365 | 0.7451 | 0.7449 |
| 0.5095 | 30.84 | 6600 | 0.5499 | 0.7376 | 0.7381 |
| 0.5105 | 31.78 | 6800 | 0.5339 | 0.7461 | 0.7457 |
| 0.5087 | 32.71 | 7000 | 0.5372 | 0.7416 | 0.7413 |
| 0.5059 | 33.64 | 7200 | 0.5415 | 0.7397 | 0.7399 |
| 0.509 | 34.58 | 7400 | 0.5360 | 0.7427 | 0.7425 |
| 0.509 | 35.51 | 7600 | 0.5332 | 0.7440 | 0.7437 |
| 0.5045 | 36.45 | 7800 | 0.5376 | 0.7434 | 0.7431 |
| 0.5085 | 37.38 | 8000 | 0.5448 | 0.7399 | 0.7402 |
| 0.5036 | 38.32 | 8200 | 0.5411 | 0.7411 | 0.7411 |
| 0.5051 | 39.25 | 8400 | 0.5373 | 0.7410 | 0.7408 |
| 0.5081 | 40.19 | 8600 | 0.5353 | 0.7480 | 0.7478 |
| 0.5063 | 41.12 | 8800 | 0.5387 | 0.7423 | 0.7422 |
| 0.5026 | 42.06 | 9000 | 0.5382 | 0.7457 | 0.7455 |
| 0.5068 | 42.99 | 9200 | 0.5410 | 0.7431 | 0.7431 |
| 0.5057 | 43.93 | 9400 | 0.5387 | 0.7438 | 0.7437 |
| 0.5038 | 44.86 | 9600 | 0.5369 | 0.7442 | 0.7440 |
| 0.5042 | 45.79 | 9800 | 0.5379 | 0.7424 | 0.7422 |
| 0.504 | 46.73 | 10000 | 0.5396 | 0.7429 | 0.7428 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_EMP_H4ac-seqsight_32768_512_43M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_32768_512_43M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-04-30T06:47:09+00:00 |
text-generation | transformers | {} | syvai/llama3-da-base | null | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T06:48:22+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NDD-addressbook_test-content
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1234
- Accuracy: 0.9794
- F1: 0.9795
- Precision: 0.9795
- Recall: 0.9794
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.1144 | 1.0 | 694 | 0.1669 | 0.9325 | 0.9340 | 0.9406 | 0.9325 |
| 0.0671 | 2.0 | 1388 | 0.1234 | 0.9794 | 0.9795 | 0.9795 | 0.9794 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "NDD-addressbook_test-content", "results": []}]} | lgk03/NDD-addressbook_test-content | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T06:48:26+00:00 |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | liuyuxiang/wiki_cs_retriever | null | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T06:51:00+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4ac-seqsight_32768_512_43M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_EMP_H4ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5503
- F1 Score: 0.7420
- Accuracy: 0.7419
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6213 | 0.93 | 200 | 0.5851 | 0.7076 | 0.7076 |
| 0.5618 | 1.87 | 400 | 0.5700 | 0.7194 | 0.7199 |
| 0.5444 | 2.8 | 600 | 0.5485 | 0.7343 | 0.7340 |
| 0.5373 | 3.74 | 800 | 0.5428 | 0.7358 | 0.7355 |
| 0.5324 | 4.67 | 1000 | 0.5398 | 0.7358 | 0.7355 |
| 0.5243 | 5.61 | 1200 | 0.5563 | 0.7260 | 0.7270 |
| 0.5245 | 6.54 | 1400 | 0.5494 | 0.7330 | 0.7334 |
| 0.5128 | 7.48 | 1600 | 0.5476 | 0.7361 | 0.7361 |
| 0.5189 | 8.41 | 1800 | 0.5409 | 0.7418 | 0.7416 |
| 0.5118 | 9.35 | 2000 | 0.5376 | 0.7449 | 0.7446 |
| 0.5046 | 10.28 | 2200 | 0.5365 | 0.7462 | 0.7460 |
| 0.5076 | 11.21 | 2400 | 0.5480 | 0.7362 | 0.7367 |
| 0.4986 | 12.15 | 2600 | 0.5483 | 0.7416 | 0.7419 |
| 0.4973 | 13.08 | 2800 | 0.5433 | 0.7431 | 0.7428 |
| 0.4969 | 14.02 | 3000 | 0.5424 | 0.7451 | 0.7449 |
| 0.4918 | 14.95 | 3200 | 0.5431 | 0.7466 | 0.7463 |
| 0.4895 | 15.89 | 3400 | 0.5316 | 0.7481 | 0.7478 |
| 0.4864 | 16.82 | 3600 | 0.5444 | 0.7385 | 0.7387 |
| 0.4884 | 17.76 | 3800 | 0.5854 | 0.7272 | 0.7296 |
| 0.4872 | 18.69 | 4000 | 0.5287 | 0.7457 | 0.7455 |
| 0.4797 | 19.63 | 4200 | 0.5321 | 0.7419 | 0.7416 |
| 0.4811 | 20.56 | 4400 | 0.5319 | 0.7434 | 0.7431 |
| 0.4753 | 21.5 | 4600 | 0.5392 | 0.7441 | 0.7440 |
| 0.4758 | 22.43 | 4800 | 0.5264 | 0.7462 | 0.7460 |
| 0.4712 | 23.36 | 5000 | 0.5409 | 0.7468 | 0.7466 |
| 0.4729 | 24.3 | 5200 | 0.5321 | 0.7437 | 0.7434 |
| 0.4709 | 25.23 | 5400 | 0.5293 | 0.7495 | 0.7493 |
| 0.4692 | 26.17 | 5600 | 0.5361 | 0.7434 | 0.7431 |
| 0.4656 | 27.1 | 5800 | 0.5423 | 0.7434 | 0.7431 |
| 0.4623 | 28.04 | 6000 | 0.5445 | 0.7449 | 0.7446 |
| 0.4666 | 28.97 | 6200 | 0.5433 | 0.7474 | 0.7472 |
| 0.4619 | 29.91 | 6400 | 0.5397 | 0.7448 | 0.7446 |
| 0.4625 | 30.84 | 6600 | 0.5419 | 0.7436 | 0.7434 |
| 0.4606 | 31.78 | 6800 | 0.5357 | 0.7457 | 0.7455 |
| 0.459 | 32.71 | 7000 | 0.5367 | 0.7469 | 0.7466 |
| 0.4574 | 33.64 | 7200 | 0.5461 | 0.7458 | 0.7460 |
| 0.4572 | 34.58 | 7400 | 0.5355 | 0.7443 | 0.7440 |
| 0.4557 | 35.51 | 7600 | 0.5353 | 0.7437 | 0.7434 |
| 0.4501 | 36.45 | 7800 | 0.5408 | 0.7461 | 0.7457 |
| 0.4555 | 37.38 | 8000 | 0.5449 | 0.7418 | 0.7416 |
| 0.4497 | 38.32 | 8200 | 0.5391 | 0.7440 | 0.7437 |
| 0.4503 | 39.25 | 8400 | 0.5371 | 0.7434 | 0.7431 |
| 0.4503 | 40.19 | 8600 | 0.5423 | 0.7455 | 0.7452 |
| 0.4513 | 41.12 | 8800 | 0.5433 | 0.7460 | 0.7457 |
| 0.4467 | 42.06 | 9000 | 0.5450 | 0.7448 | 0.7446 |
| 0.4503 | 42.99 | 9200 | 0.5434 | 0.7431 | 0.7428 |
| 0.4505 | 43.93 | 9400 | 0.5413 | 0.7469 | 0.7466 |
| 0.445 | 44.86 | 9600 | 0.5428 | 0.7472 | 0.7469 |
| 0.4449 | 45.79 | 9800 | 0.5431 | 0.7457 | 0.7455 |
| 0.4472 | 46.73 | 10000 | 0.5443 | 0.7445 | 0.7443 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_EMP_H4ac-seqsight_32768_512_43M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_32768_512_43M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-04-30T06:52:14+00:00 |
null | null | {} | ironartisan1/llama-65b-hf | null | [
"region:us"
] | null | 2024-04-30T06:52:57+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4ac-seqsight_32768_512_43M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_EMP_H4ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5495
- F1 Score: 0.7460
- Accuracy: 0.7457
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6028 | 0.93 | 200 | 0.5707 | 0.7234 | 0.7232 |
| 0.5476 | 1.87 | 400 | 0.5531 | 0.7327 | 0.7328 |
| 0.5337 | 2.8 | 600 | 0.5433 | 0.7406 | 0.7405 |
| 0.5254 | 3.74 | 800 | 0.5333 | 0.7454 | 0.7452 |
| 0.5186 | 4.67 | 1000 | 0.5295 | 0.7472 | 0.7469 |
| 0.5073 | 5.61 | 1200 | 0.5392 | 0.7429 | 0.7431 |
| 0.5042 | 6.54 | 1400 | 0.5318 | 0.7497 | 0.7496 |
| 0.4913 | 7.48 | 1600 | 0.5410 | 0.7490 | 0.7490 |
| 0.4938 | 8.41 | 1800 | 0.5278 | 0.7506 | 0.7504 |
| 0.4831 | 9.35 | 2000 | 0.5265 | 0.7483 | 0.7481 |
| 0.4749 | 10.28 | 2200 | 0.5293 | 0.7507 | 0.7504 |
| 0.4745 | 11.21 | 2400 | 0.5542 | 0.7411 | 0.7422 |
| 0.4646 | 12.15 | 2600 | 0.5342 | 0.7636 | 0.7633 |
| 0.4617 | 13.08 | 2800 | 0.5458 | 0.7553 | 0.7551 |
| 0.4581 | 14.02 | 3000 | 0.5805 | 0.7434 | 0.7443 |
| 0.4486 | 14.95 | 3200 | 0.5552 | 0.7556 | 0.7554 |
| 0.4428 | 15.89 | 3400 | 0.5262 | 0.7573 | 0.7572 |
| 0.4387 | 16.82 | 3600 | 0.5551 | 0.7445 | 0.7446 |
| 0.4353 | 17.76 | 3800 | 0.6040 | 0.7281 | 0.7305 |
| 0.4309 | 18.69 | 4000 | 0.5432 | 0.7528 | 0.7525 |
| 0.4236 | 19.63 | 4200 | 0.5479 | 0.7504 | 0.7501 |
| 0.4156 | 20.56 | 4400 | 0.5539 | 0.7519 | 0.7516 |
| 0.4097 | 21.5 | 4600 | 0.5632 | 0.7467 | 0.7466 |
| 0.4072 | 22.43 | 4800 | 0.5566 | 0.7478 | 0.7475 |
| 0.4042 | 23.36 | 5000 | 0.5636 | 0.7481 | 0.7481 |
| 0.3992 | 24.3 | 5200 | 0.5658 | 0.7426 | 0.7425 |
| 0.394 | 25.23 | 5400 | 0.5724 | 0.7431 | 0.7428 |
| 0.3909 | 26.17 | 5600 | 0.5892 | 0.7440 | 0.7440 |
| 0.382 | 27.1 | 5800 | 0.6073 | 0.7325 | 0.7328 |
| 0.3745 | 28.04 | 6000 | 0.5808 | 0.7495 | 0.7493 |
| 0.375 | 28.97 | 6200 | 0.5961 | 0.7445 | 0.7443 |
| 0.3683 | 29.91 | 6400 | 0.6048 | 0.7355 | 0.7355 |
| 0.3664 | 30.84 | 6600 | 0.5912 | 0.7427 | 0.7425 |
| 0.3607 | 31.78 | 6800 | 0.6004 | 0.7454 | 0.7452 |
| 0.3556 | 32.71 | 7000 | 0.6231 | 0.7393 | 0.7393 |
| 0.3523 | 33.64 | 7200 | 0.6199 | 0.7389 | 0.7393 |
| 0.3511 | 34.58 | 7400 | 0.6349 | 0.7362 | 0.7367 |
| 0.3471 | 35.51 | 7600 | 0.6107 | 0.7404 | 0.7402 |
| 0.3426 | 36.45 | 7800 | 0.6431 | 0.7434 | 0.7434 |
| 0.342 | 37.38 | 8000 | 0.6399 | 0.7401 | 0.7402 |
| 0.3393 | 38.32 | 8200 | 0.6360 | 0.7406 | 0.7405 |
| 0.3359 | 39.25 | 8400 | 0.6354 | 0.7386 | 0.7384 |
| 0.3355 | 40.19 | 8600 | 0.6395 | 0.7436 | 0.7434 |
| 0.3347 | 41.12 | 8800 | 0.6416 | 0.7419 | 0.7419 |
| 0.3278 | 42.06 | 9000 | 0.6515 | 0.7431 | 0.7431 |
| 0.3273 | 42.99 | 9200 | 0.6489 | 0.7412 | 0.7411 |
| 0.3227 | 43.93 | 9400 | 0.6407 | 0.7391 | 0.7390 |
| 0.3206 | 44.86 | 9600 | 0.6471 | 0.7415 | 0.7413 |
| 0.3215 | 45.79 | 9800 | 0.6479 | 0.7413 | 0.7411 |
| 0.3209 | 46.73 | 10000 | 0.6473 | 0.7398 | 0.7396 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_EMP_H4ac-seqsight_32768_512_43M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_32768_512_43M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-04-30T06:52:58+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K79me3-seqsight_32768_512_43M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_EMP_H3K79me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K79me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4417
- F1 Score: 0.8154
- Accuracy: 0.8155
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5369 | 1.1 | 200 | 0.4673 | 0.7966 | 0.7972 |
| 0.4674 | 2.21 | 400 | 0.4700 | 0.7852 | 0.7874 |
| 0.4562 | 3.31 | 600 | 0.4488 | 0.7980 | 0.7992 |
| 0.4438 | 4.42 | 800 | 0.4431 | 0.8018 | 0.8027 |
| 0.4454 | 5.52 | 1000 | 0.4541 | 0.7987 | 0.8006 |
| 0.4346 | 6.63 | 1200 | 0.4567 | 0.8020 | 0.8037 |
| 0.4422 | 7.73 | 1400 | 0.4403 | 0.8054 | 0.8065 |
| 0.4327 | 8.84 | 1600 | 0.4613 | 0.7981 | 0.8003 |
| 0.4346 | 9.94 | 1800 | 0.4350 | 0.8170 | 0.8169 |
| 0.4321 | 11.05 | 2000 | 0.4455 | 0.8075 | 0.8089 |
| 0.4307 | 12.15 | 2200 | 0.4366 | 0.8133 | 0.8141 |
| 0.4273 | 13.26 | 2400 | 0.4389 | 0.8131 | 0.8141 |
| 0.4258 | 14.36 | 2600 | 0.4368 | 0.8091 | 0.8100 |
| 0.4266 | 15.47 | 2800 | 0.4492 | 0.7996 | 0.8017 |
| 0.4223 | 16.57 | 3000 | 0.4333 | 0.8151 | 0.8155 |
| 0.4237 | 17.68 | 3200 | 0.4332 | 0.8104 | 0.8114 |
| 0.4183 | 18.78 | 3400 | 0.4322 | 0.8128 | 0.8135 |
| 0.419 | 19.89 | 3600 | 0.4462 | 0.8022 | 0.8041 |
| 0.4185 | 20.99 | 3800 | 0.4410 | 0.8074 | 0.8086 |
| 0.4179 | 22.1 | 4000 | 0.4346 | 0.8092 | 0.8103 |
| 0.4157 | 23.2 | 4200 | 0.4372 | 0.8098 | 0.8110 |
| 0.4163 | 24.31 | 4400 | 0.4476 | 0.8057 | 0.8076 |
| 0.4103 | 25.41 | 4600 | 0.4446 | 0.8096 | 0.8110 |
| 0.417 | 26.52 | 4800 | 0.4360 | 0.8124 | 0.8135 |
| 0.4154 | 27.62 | 5000 | 0.4362 | 0.8108 | 0.8121 |
| 0.411 | 28.73 | 5200 | 0.4374 | 0.8069 | 0.8086 |
| 0.4095 | 29.83 | 5400 | 0.4357 | 0.8117 | 0.8128 |
| 0.4095 | 30.94 | 5600 | 0.4342 | 0.8168 | 0.8176 |
| 0.4104 | 32.04 | 5800 | 0.4315 | 0.8159 | 0.8166 |
| 0.4074 | 33.15 | 6000 | 0.4332 | 0.8130 | 0.8141 |
| 0.4072 | 34.25 | 6200 | 0.4370 | 0.8153 | 0.8162 |
| 0.4072 | 35.36 | 6400 | 0.4403 | 0.8098 | 0.8114 |
| 0.4072 | 36.46 | 6600 | 0.4308 | 0.8162 | 0.8169 |
| 0.4077 | 37.57 | 6800 | 0.4367 | 0.8128 | 0.8141 |
| 0.4026 | 38.67 | 7000 | 0.4393 | 0.8133 | 0.8145 |
| 0.403 | 39.78 | 7200 | 0.4378 | 0.8139 | 0.8152 |
| 0.4065 | 40.88 | 7400 | 0.4327 | 0.8135 | 0.8145 |
| 0.4056 | 41.99 | 7600 | 0.4360 | 0.8144 | 0.8155 |
| 0.4035 | 43.09 | 7800 | 0.4411 | 0.8120 | 0.8135 |
| 0.4054 | 44.2 | 8000 | 0.4417 | 0.8091 | 0.8107 |
| 0.4018 | 45.3 | 8200 | 0.4363 | 0.8141 | 0.8152 |
| 0.4013 | 46.41 | 8400 | 0.4362 | 0.8131 | 0.8141 |
| 0.4038 | 47.51 | 8600 | 0.4398 | 0.8128 | 0.8141 |
| 0.3989 | 48.62 | 8800 | 0.4425 | 0.8095 | 0.8110 |
| 0.4007 | 49.72 | 9000 | 0.4387 | 0.8136 | 0.8148 |
| 0.4044 | 50.83 | 9200 | 0.4437 | 0.8100 | 0.8117 |
| 0.3988 | 51.93 | 9400 | 0.4412 | 0.8117 | 0.8131 |
| 0.4 | 53.04 | 9600 | 0.4397 | 0.8121 | 0.8135 |
| 0.4003 | 54.14 | 9800 | 0.4386 | 0.8136 | 0.8148 |
| 0.4009 | 55.25 | 10000 | 0.4408 | 0.8117 | 0.8131 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_EMP_H3K79me3-seqsight_32768_512_43M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_32768_512_43M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-04-30T06:53:35+00:00 |
null | null | # Model Card: North Mistral 7B - GGML
## Model Overview
The **North Mistral 7B** is part of a series of research experiments into creating Scandinavian LLMs. The current versions are pretrained only, so they will have to be fine-tuned before use. This repo provides experimental GGML versions of these models.
## Model Architecture
North Mistral 7B is based on the Mistral architecture, renowned for its effectiveness in capturing complex patterns in large datasets. It utilizes a multi-layer transformer decoder structure.
| version | checkpoint | val_loss |
|---------|------------|----------|
| v0.1 | [40k](https://huggingface.co/north/north-mistral-7b-ggml/blob/main/north-mistral-v0.1.gguf) | 1.449 |
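A minimal sketch of running the v0.1 checkpoint locally with `llama-cpp-python` (an assumed runtime, not prescribed by the authors). Since the model is pretrained-only, it completes text rather than following instructions.
```python
from llama_cpp import Llama

# Path to the downloaded v0.1 checkpoint from the table above.
llm = Llama(model_path="north-mistral-v0.1.gguf")
out = llm("The capital of Norway is", max_tokens=16)
print(out["choices"][0]["text"])
```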
## Training Data
The model was trained on a diverse dataset primarily in English, Swedish, Danish and Norwegian. A complete datacard will be published later.
## Intended Use
This model is intended for developers and researchers only. It is particularly suited for applications requiring understanding and generating human-like text, including conversational agents, content generation tools, and automated translation services.
## Limitations
- The model will exhibit biases present in the training data.
- Performance can vary significantly depending on the specificity of the task and the nature of the input data.
- High computational requirements for inference may limit deployment on low-resource devices.
## Ethical Considerations
Users are encouraged to evaluate the model carefully in controlled environments before deploying it in critical applications. Ethical use guidelines should be followed to prevent misuse of the model's capabilities, particularly in sensitive contexts.
## Licensing
North Mistral 7B is released under the MIT License, which allows for both academic and commercial use.
| {"license": "mit"} | north/north-mistral-7b-ggml | null | [
"gguf",
"license:mit",
"region:us"
] | null | 2024-04-30T06:53:42+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/curtisxu/huggingface/runs/bxujthiq)
# mergeLlama-7b-Instruct-hf-quantized-peft-decompile
This model is a fine-tuned version of [meta-llama/CodeLlama-7b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-7b-Instruct-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "llama2", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/CodeLlama-7b-Instruct-hf", "model-index": [{"name": "mergeLlama-7b-Instruct-hf-quantized-peft-decompile", "results": []}]} | curtisxu/mergeLlama-7b-Instruct-hf-quantized-peft-decompile | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/CodeLlama-7b-Instruct-hf",
"license:llama2",
"region:us"
] | null | 2024-04-30T06:54:33+00:00 |
null | null | {} | laitrongduc/Llama-2-7b-chat-hf-finetuned-wikitext2 | null | [
"tensorboard",
"safetensors",
"region:us"
] | null | 2024-04-30T06:54:38+00:00 |
|
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# O0430HMA1
This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0203
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.0135 | 0.09 | 10 | 0.3204 |
| 0.1991 | 0.18 | 20 | 0.1575 |
| 0.15 | 0.27 | 30 | 0.1643 |
| 0.1573 | 0.36 | 40 | 0.1525 |
| 0.1495 | 0.45 | 50 | 0.1508 |
| 0.1514 | 0.54 | 60 | 0.1491 |
| 0.149 | 0.63 | 70 | 0.1476 |
| 0.15 | 0.73 | 80 | 0.1597 |
| 0.146 | 0.82 | 90 | 0.1489 |
| 0.1504 | 0.91 | 100 | 0.1455 |
| 0.1358 | 1.0 | 110 | 0.0842 |
| 0.1868 | 1.09 | 120 | 0.1344 |
| 0.1262 | 1.18 | 130 | 0.1144 |
| 0.1965 | 1.27 | 140 | 0.1019 |
| 0.0895 | 1.36 | 150 | 0.0772 |
| 0.0653 | 1.45 | 160 | 0.0576 |
| 0.043 | 1.54 | 170 | 0.0449 |
| 0.0641 | 1.63 | 180 | 0.0361 |
| 0.0392 | 1.72 | 190 | 0.0259 |
| 0.0275 | 1.81 | 200 | 0.0246 |
| 0.0256 | 1.9 | 210 | 0.0254 |
| 0.023 | 1.99 | 220 | 0.0246 |
| 0.0278 | 2.08 | 230 | 0.0241 |
| 0.0246 | 2.18 | 240 | 0.0227 |
| 0.0201 | 2.27 | 250 | 0.0251 |
| 0.0229 | 2.36 | 260 | 0.0223 |
| 0.0196 | 2.45 | 270 | 0.0213 |
| 0.0167 | 2.54 | 280 | 0.0210 |
| 0.0236 | 2.63 | 290 | 0.0207 |
| 0.0199 | 2.72 | 300 | 0.0204 |
| 0.0207 | 2.81 | 310 | 0.0203 |
| 0.0207 | 2.9 | 320 | 0.0203 |
| 0.0217 | 2.99 | 330 | 0.0203 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0430HMA1", "results": []}]} | Litzy619/O0430HMA1 | null | [
"safetensors",
"generated_from_trainer",
"base_model:allenai/OLMo-1B",
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T06:55:38+00:00 |
null | null | {} | hossein0677/distilbert-base-uncased-finetuned-ner | null | [
"region:us"
] | null | 2024-04-30T06:55:53+00:00 |
|
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mamba_text_classification
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2144
- Accuracy: 0.944
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0088 | 0.1 | 625 | 0.2663 | 0.9216 |
| 3.5723 | 0.2 | 1250 | 0.3047 | 0.8962 |
| 1.4067 | 0.3 | 1875 | 0.2881 | 0.919 |
| 0.278 | 0.4 | 2500 | 0.2252 | 0.9322 |
| 0.0034 | 0.5 | 3125 | 0.2200 | 0.9382 |
| 2.526 | 0.6 | 3750 | 0.2670 | 0.9354 |
| 0.5528 | 0.7 | 4375 | 0.2209 | 0.9386 |
| 0.0006 | 0.8 | 5000 | 0.2294 | 0.9432 |
| 0.0358 | 0.9 | 5625 | 0.2167 | 0.9438 |
| 0.5311 | 1.0 | 6250 | 0.2144 | 0.944 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "mamba_text_classification", "results": []}]} | TRanHieu009/mamba_text_classification | null | [
"transformers",
"pytorch",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T06:57:09+00:00 |
null | null | {} | AmanSinha2508/first | null | [
"region:us"
] | null | 2024-04-30T06:57:09+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-31m_mz-133_WordLength_n-its-10-seed-2
This model is a fine-tuned version of [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-31m", "model-index": [{"name": "robust_llm_pythia-31m_mz-133_WordLength_n-its-10-seed-2", "results": []}]} | AlignmentResearch/robust_llm_pythia-31m_mz-133_WordLength_n-its-10-seed-2 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-31m",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T06:57:28+00:00 |
null | null | {} | ZeroWater93/Fast_whisper_large-v3 | null | [
"region:us"
] | null | 2024-04-30T06:58:48+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# final_V1-bert-text-classification-model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1498
- Accuracy: 0.9713
- F1: 0.8341
- Precision: 0.8330
- Recall: 0.8356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
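Expressed in code, the configuration above corresponds roughly to the following 🤗 `TrainingArguments` sketch (the output directory and the surrounding `Trainer` wiring are assumptions, not taken from this card):

```python
from transformers import TrainingArguments

# Hedged sketch mirroring the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="final_V1-bert-text-classification-model",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=3,
    fp16=True,  # "Native AMP" mixed precision
)
```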
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.6252 | 0.11 | 50 | 1.7120 | 0.3451 | 0.1545 | 0.2382 | 0.1762 |
| 0.7857 | 0.22 | 100 | 0.7296 | 0.8209 | 0.4973 | 0.4815 | 0.5166 |
| 0.2986 | 0.33 | 150 | 0.5358 | 0.8830 | 0.6565 | 0.6402 | 0.6744 |
| 0.2612 | 0.44 | 200 | 0.4678 | 0.9035 | 0.6704 | 0.6621 | 0.6795 |
| 0.153 | 0.55 | 250 | 0.4325 | 0.9065 | 0.6648 | 0.6446 | 0.6879 |
| 0.2274 | 0.66 | 300 | 0.3498 | 0.8969 | 0.6440 | 0.6237 | 0.6677 |
| 0.1449 | 0.76 | 350 | 0.4254 | 0.8964 | 0.6885 | 0.8012 | 0.6895 |
| 0.1695 | 0.87 | 400 | 0.3484 | 0.9248 | 0.7301 | 0.7857 | 0.7208 |
| 0.1206 | 0.98 | 450 | 0.3075 | 0.9218 | 0.7351 | 0.7586 | 0.7279 |
| 0.1142 | 1.09 | 500 | 0.2241 | 0.9467 | 0.8063 | 0.7964 | 0.8218 |
| 0.0642 | 1.2 | 550 | 0.2527 | 0.9491 | 0.8159 | 0.8106 | 0.8239 |
| 0.0935 | 1.31 | 600 | 0.1961 | 0.9601 | 0.8216 | 0.8270 | 0.8173 |
| 0.0755 | 1.42 | 650 | 0.1290 | 0.9691 | 0.8272 | 0.8348 | 0.8201 |
| 0.108 | 1.53 | 700 | 0.1712 | 0.9612 | 0.8215 | 0.8311 | 0.8130 |
| 0.0667 | 1.64 | 750 | 0.1449 | 0.9716 | 0.8354 | 0.8371 | 0.8338 |
| 0.0925 | 1.75 | 800 | 0.1193 | 0.9721 | 0.8345 | 0.8353 | 0.8337 |
| 0.0769 | 1.86 | 850 | 0.1477 | 0.9675 | 0.8299 | 0.8270 | 0.8334 |
| 0.0558 | 1.97 | 900 | 0.1988 | 0.9606 | 0.8239 | 0.8194 | 0.8299 |
| 0.0379 | 2.07 | 950 | 0.1546 | 0.9694 | 0.8319 | 0.8300 | 0.8340 |
| 0.0358 | 2.18 | 1000 | 0.1871 | 0.9655 | 0.8295 | 0.8283 | 0.8312 |
| 0.0248 | 2.29 | 1050 | 0.1631 | 0.9661 | 0.8303 | 0.8278 | 0.8333 |
| 0.0412 | 2.4 | 1100 | 0.1688 | 0.9658 | 0.8283 | 0.8235 | 0.8340 |
| 0.0096 | 2.51 | 1150 | 0.1726 | 0.9661 | 0.8316 | 0.8297 | 0.8342 |
| 0.0025 | 2.62 | 1200 | 0.1808 | 0.9653 | 0.8300 | 0.8261 | 0.8348 |
| 0.0074 | 2.73 | 1250 | 0.1697 | 0.9677 | 0.8323 | 0.8291 | 0.8360 |
| 0.028 | 2.84 | 1300 | 0.1630 | 0.9705 | 0.8359 | 0.8344 | 0.8377 |
| 0.0292 | 2.95 | 1350 | 0.1743 | 0.9696 | 0.8352 | 0.8341 | 0.8366 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "base_model": "bert-base-uncased", "model-index": [{"name": "final_V1-bert-text-classification-model", "results": []}]} | AmirlyPhd/final_V1-bert-text-classification-model | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T06:59:00+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-31m_mz-133_WordLength_n-its-10-seed-1
This model is a fine-tuned version of [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-31m", "model-index": [{"name": "robust_llm_pythia-31m_mz-133_WordLength_n-its-10-seed-1", "results": []}]} | AlignmentResearch/robust_llm_pythia-31m_mz-133_WordLength_n-its-10-seed-1 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-31m",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T06:59:13+00:00 |
null | null |
# Alex01837178373/saiga_llama3_8b-Q5_K_M-GGUF
This model was converted to GGUF format from [`IlyaGusev/saiga_llama3_8b`](https://huggingface.co/IlyaGusev/saiga_llama3_8b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/IlyaGusev/saiga_llama3_8b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo Alex01837178373/saiga_llama3_8b-Q5_K_M-GGUF --model saiga_llama3_8b.Q5_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo Alex01837178373/saiga_llama3_8b-Q5_K_M-GGUF --model saiga_llama3_8b.Q5_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m saiga_llama3_8b.Q5_K_M.gguf -n 128
```
| {"language": ["ru"], "license": "other", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["IlyaGusev/saiga_scored"], "license_name": "llama3", "license_link": "https://llama.meta.com/llama3/license/"} | Alex01837178373/saiga_llama3_8b-Q5_K_M-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"ru",
"dataset:IlyaGusev/saiga_scored",
"license:other",
"region:us"
] | null | 2024-04-30T06:59:39+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-restaurant
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
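Until the authors document intended uses, here is a minimal extractive question-answering sketch with the 🤗 `pipeline` API; the checkpoint id comes from this repo, while the question and context strings are made-up illustrations:

```python
from transformers import pipeline

# Hedged sketch: assumes the checkpoint is used as a standard extractive QA model.
qa = pipeline("question-answering", model="pltnhan311/roberta-finetuned-restaurant")

result = qa(
    question="What time does the restaurant open?",
    context="Our restaurant opens at 11 am on weekdays and serves lunch until 3 pm.",
)
print(result["answer"], result["score"])
```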
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "cc-by-4.0", "tags": ["generated_from_trainer"], "base_model": "deepset/roberta-base-squad2", "model-index": [{"name": "roberta-finetuned-restaurant", "results": []}]} | pltnhan311/roberta-finetuned-restaurant | null | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"base_model:deepset/roberta-base-squad2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T07:01:13+00:00 |
text-classification | transformers |
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.11368879675865173
f1_macro: 0.9748328397861948
f1_micro: 0.9752808988764045
f1_weighted: 0.9752071164560256
precision_macro: 0.9752973544608207
precision_micro: 0.9752808988764045
precision_weighted: 0.9756012580457148
recall_macro: 0.9748949579831934
recall_micro: 0.9752808988764045
recall_weighted: 0.9752808988764045
accuracy: 0.9752808988764045
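For completeness, a minimal inference sketch with the 🤗 `pipeline` API; the checkpoint id is this repo, and the input sentence is only an illustration (the label set is not documented in this card):

```python
from transformers import pipeline

# Hedged sketch: assumes the checkpoint is a standard sequence-classification model.
classifier = pipeline("text-classification", model="NawinCom/BBC")

print(classifier("The government announced new tax measures for small businesses."))
```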
| {"tags": ["autotrain", "text-classification"], "datasets": ["BBC/autotrain-data"], "widget": [{"text": "I love AutoTrain"}]} | NawinCom/BBC | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"autotrain",
"dataset:BBC/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T07:02:04+00:00 |
sentence-similarity | sentence-transformers |
# nntoan209/bgem3-generic-msmarco-squadv2-tvpl-newssapo
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('nntoan209/bgem3-generic-msmarco-squadv2-tvpl-newssapo')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=nntoan209/bgem3-generic-msmarco-squadv2-tvpl-newssapo)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
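The same pipeline (CLS-token pooling followed by L2 normalization, as shown above) can be reproduced with plain 🤗 Transformers; a hedged sketch, assuming the checkpoint id of this repository:

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "nntoan209/bgem3-generic-msmarco-squadv2-tvpl-newssapo"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

sentences = ["This is an example sentence", "Each sentence is converted"]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state

cls = hidden[:, 0]                                         # pooling_mode_cls_token
embeddings = torch.nn.functional.normalize(cls, dim=-1)    # Normalize() module
print(embeddings.shape)  # expected (2, 1024)
```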
## Citing & Authors
<!--- Describe where people can find more information --> | {"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | nntoan209/bgem3-generic-msmarco-squadv2-tvpl-newssapo | null | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T07:02:33+00:00 |
text-generation | transformers | {"license": "apache-2.0", "tags": ["athena"]} | nextab/Athena-v1.5 | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"athena",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T07:02:38+00:00 |
|
image-classification | transformers |
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.223019078373909
f1_macro: 0.8656901233453752
f1_micro: 0.9238648002731308
f1_weighted: 0.9239248911195606
precision_macro: 0.928029530990028
precision_micro: 0.9238648002731308
precision_weighted: 0.9287629201629745
recall_macro: 0.834713663096659
recall_micro: 0.9238648002731308
recall_weighted: 0.9238648002731308
accuracy: 0.9238648002731308
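Since the card lists only validation metrics, here is a minimal inference sketch with the 🤗 image-classification `pipeline`; the checkpoint id is this repo, and the sample image URL is taken from the widget examples configured for it:

```python
from transformers import pipeline

# Hedged sketch: assumes the fine-tuned ViT checkpoint hosted in this repository.
classifier = pipeline(
    "image-classification",
    model="Kushagra07/autotrain-vit-base-patch16-224",
)

preds = classifier(
    "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"
)
print(preds[:3])  # top predicted labels with scores
```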
| {"tags": ["autotrain", "image-classification"], "datasets": ["autotrain-vit-base-patch16-224/autotrain-data"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg", "example_title": "Tiger"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg", "example_title": "Teapot"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg", "example_title": "Palace"}]} | Kushagra07/autotrain-vit-base-patch16-224 | null | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"autotrain",
"dataset:autotrain-vit-base-patch16-224/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T07:03:12+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["trl", "sft"]} | likhithasapu/codemix-indicbart-sft-notchat | null | [
"transformers",
"safetensors",
"mbart",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T07:03:31+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | shallow6414/qc2t3b7 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T07:04:23+00:00 |
text-generation | transformers |
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` | {"license": "other", "library_name": "transformers", "tags": ["autotrain", "text-generation-inference", "text-generation", "peft"], "widget": [{"messages": [{"role": "user", "content": "What is your favorite condiment?"}]}]} | dmtkeler/autotrain-do2iw-wsghc | null | [
"transformers",
"tensorboard",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"license:other",
"endpoints_compatible",
"region:us",
"has_space"
] | null | 2024-04-30T07:05:22+00:00 |
text-generation | transformers |
## Model
- Base model: [ryota39/llm-jp-1b-sft-100k-LoRA](https://huggingface.co/ryota39/llm-jp-1b-sft-100k-LoRA)
- Training dataset: [ryota39/dpo-ja-194k](https://huggingface.co/datasets/ryota39/dpo-ja-194k)
- Training method: full-parameter tuning
## Sample
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained(
"ryota39/llm-jp-1b-sft-100k-LoRA-kto-194k"
)
pad_token_id = tokenizer.pad_token_id
model = AutoModelForCausalLM.from_pretrained(
"ryota39/llm-jp-1b-sft-100k-LoRA-kto-194k",
device_map="auto",
)
text = "###Input: 東京の観光名所を教えてください。\n###Output: "
tokenized_input = tokenizer.encode(
text,
add_special_tokens=False,
return_tensors="pt"
).to(model.device)
attention_mask = torch.ones_like(tokenized_input)
attention_mask[tokenized_input == pad_token_id] = 0
with torch.no_grad():
output = model.generate(
tokenized_input,
attention_mask=attention_mask,
max_new_tokens=128,
do_sample=True,
top_p=0.95,
temperature=0.8,
repetition_penalty=1.10
)[0]
print(tokenizer.decode(output))
```
## Example output
```
###Input: 東京の観光名所を教えてください。
###Output: 東京タワー。日本で一番高い塔だと思いますよ。
東京の街は非常にきれいなので、夜には美しい光景を見ることができます。
また、隅田川やレインボーブリッジから眺める景色もいいですし、皇居や靖国神社など東京の象徴的な場所を訪れるのもいいかもしれません。
スカイツリーから見る景色は最高だと思います。スカイツリーの展望台の中では東京シティビューという場所がおすすめです。
また、浅草寺や雷門、勝鬨橋といった浅草近辺の人気スポットにも行くことができます。他
```
## Acknowledgments
This work is an outcome of the 240-hour hackathon 【LOCAL AI HACKATHON #001】.
We would like to express our deep gratitude to the organizers.
- Metadata Lab, Inc. (メタデータラボ株式会社)
- AI Voice Creation Technology Research Group (AI声づくり技術研究会)
- Server owner: Yanagi (やなぎ)
- Local LLM Community (ローカルLLMに向き合う会)
- Server owner: saldra (サルドラ)
[Metadata Lab launches Japan's largest AI hackathon "LOCAL AI HACKATHON #001" ~ Democratizing AI ~ and begins recruiting participating teams today](https://prtimes.jp/main/html/rd/p/000000008.000056944.html)
| {"language": ["ja"], "license": "cc", "library_name": "transformers", "tags": ["dpo"], "datasets": ["ryota39/dpo-ja-194k"]} | ryota39/llm-jp-1b-sft-100k-LoRA-kto-194k | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"dpo",
"ja",
"dataset:ryota39/dpo-ja-194k",
"license:cc",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T07:05:36+00:00 |
text-generation | transformers |
# mlx-community/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0
This model was converted to MLX format from [`llm-jp/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0`]() using mlx-lm version **0.12.0**.
Refer to the [original model card](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
| {"language": ["en", "ja"], "license": "apache-2.0", "library_name": "transformers", "tags": ["mlx"], "datasets": ["databricks/databricks-dolly-15k", "llm-jp/databricks-dolly-15k-ja", "llm-jp/oasst1-21k-en", "llm-jp/oasst1-21k-ja", "llm-jp/oasst2-33k-en", "llm-jp/oasst2-33k-ja"], "programming_language": ["C", "C++", "C#", "Go", "Java", "JavaScript", "Lua", "PHP", "Python", "Ruby", "Rust", "Scala", "TypeScript"], "pipeline_tag": "text-generation", "inference": false} | mlx-community/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mlx",
"conversational",
"en",
"ja",
"dataset:databricks/databricks-dolly-15k",
"dataset:llm-jp/databricks-dolly-15k-ja",
"dataset:llm-jp/oasst1-21k-en",
"dataset:llm-jp/oasst1-21k-ja",
"dataset:llm-jp/oasst2-33k-en",
"dataset:llm-jp/oasst2-33k-ja",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T07:05:50+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
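As a stopgap until this section is filled in, a hedged sketch for plain causal generation with 🤗 Transformers (it assumes the repo hosts a standard Llama-architecture checkpoint, as the tags suggest; the prompt and generation settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rbgo/inferless-Llama-3-8B"  # this repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

inputs = tokenizer("The key idea behind transformers is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```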
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | rbgo/inferless-Llama-3-8B | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T07:06:31+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K79me3-seqsight_32768_512_43M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_EMP_H3K79me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K79me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4470
- F1 Score: 0.8227
- Accuracy: 0.8235
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.51 | 1.1 | 200 | 0.4500 | 0.8060 | 0.8062 |
| 0.4534 | 2.21 | 400 | 0.4424 | 0.8096 | 0.8100 |
| 0.4445 | 3.31 | 600 | 0.4395 | 0.8065 | 0.8076 |
| 0.4312 | 4.42 | 800 | 0.4376 | 0.8074 | 0.8083 |
| 0.4307 | 5.52 | 1000 | 0.4448 | 0.8041 | 0.8058 |
| 0.419 | 6.63 | 1200 | 0.4580 | 0.8046 | 0.8069 |
| 0.4233 | 7.73 | 1400 | 0.4587 | 0.7952 | 0.7982 |
| 0.4138 | 8.84 | 1600 | 0.4851 | 0.7910 | 0.7951 |
| 0.4128 | 9.94 | 1800 | 0.4287 | 0.8159 | 0.8162 |
| 0.4091 | 11.05 | 2000 | 0.4425 | 0.8099 | 0.8107 |
| 0.405 | 12.15 | 2200 | 0.4279 | 0.8144 | 0.8148 |
| 0.3999 | 13.26 | 2400 | 0.4335 | 0.8140 | 0.8148 |
| 0.3993 | 14.36 | 2600 | 0.4327 | 0.8169 | 0.8176 |
| 0.3979 | 15.47 | 2800 | 0.4373 | 0.8109 | 0.8121 |
| 0.3909 | 16.57 | 3000 | 0.4277 | 0.8151 | 0.8152 |
| 0.3931 | 17.68 | 3200 | 0.4269 | 0.8202 | 0.8207 |
| 0.3875 | 18.78 | 3400 | 0.4589 | 0.8071 | 0.8089 |
| 0.3879 | 19.89 | 3600 | 0.4351 | 0.8174 | 0.8183 |
| 0.3824 | 20.99 | 3800 | 0.4441 | 0.8098 | 0.8114 |
| 0.3813 | 22.1 | 4000 | 0.4397 | 0.8135 | 0.8141 |
| 0.3793 | 23.2 | 4200 | 0.4400 | 0.8113 | 0.8121 |
| 0.3778 | 24.31 | 4400 | 0.4586 | 0.8101 | 0.8121 |
| 0.3722 | 25.41 | 4600 | 0.4392 | 0.8213 | 0.8218 |
| 0.377 | 26.52 | 4800 | 0.4454 | 0.8091 | 0.8103 |
| 0.3752 | 27.62 | 5000 | 0.4443 | 0.8147 | 0.8159 |
| 0.3693 | 28.73 | 5200 | 0.4490 | 0.8073 | 0.8089 |
| 0.3657 | 29.83 | 5400 | 0.4413 | 0.8104 | 0.8110 |
| 0.367 | 30.94 | 5600 | 0.4405 | 0.8142 | 0.8148 |
| 0.3655 | 32.04 | 5800 | 0.4436 | 0.8172 | 0.8176 |
| 0.3638 | 33.15 | 6000 | 0.4486 | 0.8134 | 0.8145 |
| 0.3607 | 34.25 | 6200 | 0.4532 | 0.8090 | 0.8100 |
| 0.3597 | 35.36 | 6400 | 0.4600 | 0.8157 | 0.8169 |
| 0.3584 | 36.46 | 6600 | 0.4425 | 0.8202 | 0.8207 |
| 0.3546 | 37.57 | 6800 | 0.4490 | 0.8135 | 0.8145 |
| 0.3535 | 38.67 | 7000 | 0.4558 | 0.8150 | 0.8162 |
| 0.3541 | 39.78 | 7200 | 0.4610 | 0.8140 | 0.8152 |
| 0.3544 | 40.88 | 7400 | 0.4434 | 0.8176 | 0.8180 |
| 0.3531 | 41.99 | 7600 | 0.4526 | 0.8101 | 0.8110 |
| 0.35 | 43.09 | 7800 | 0.4497 | 0.8157 | 0.8166 |
| 0.3516 | 44.2 | 8000 | 0.4660 | 0.8097 | 0.8110 |
| 0.3491 | 45.3 | 8200 | 0.4472 | 0.8133 | 0.8138 |
| 0.3453 | 46.41 | 8400 | 0.4591 | 0.8109 | 0.8117 |
| 0.3487 | 47.51 | 8600 | 0.4647 | 0.8132 | 0.8145 |
| 0.3456 | 48.62 | 8800 | 0.4584 | 0.8138 | 0.8148 |
| 0.3451 | 49.72 | 9000 | 0.4585 | 0.8129 | 0.8138 |
| 0.3485 | 50.83 | 9200 | 0.4656 | 0.8109 | 0.8124 |
| 0.3434 | 51.93 | 9400 | 0.4623 | 0.8133 | 0.8145 |
| 0.3427 | 53.04 | 9600 | 0.4597 | 0.8146 | 0.8155 |
| 0.3421 | 54.14 | 9800 | 0.4599 | 0.8129 | 0.8138 |
| 0.3425 | 55.25 | 10000 | 0.4627 | 0.8127 | 0.8138 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_EMP_H3K79me3-seqsight_32768_512_43M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_32768_512_43M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-04-30T07:07:35+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K79me3-seqsight_32768_512_43M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_EMP_H3K79me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K79me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4576
- F1 Score: 0.8200
- Accuracy: 0.8204
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.497 | 1.1 | 200 | 0.4462 | 0.8070 | 0.8072 |
| 0.4475 | 2.21 | 400 | 0.4392 | 0.8065 | 0.8072 |
| 0.4362 | 3.31 | 600 | 0.4282 | 0.8137 | 0.8141 |
| 0.4213 | 4.42 | 800 | 0.4471 | 0.8055 | 0.8072 |
| 0.417 | 5.52 | 1000 | 0.4382 | 0.8037 | 0.8055 |
| 0.4055 | 6.63 | 1200 | 0.4586 | 0.8029 | 0.8051 |
| 0.4058 | 7.73 | 1400 | 0.4554 | 0.8001 | 0.8027 |
| 0.3961 | 8.84 | 1600 | 0.4680 | 0.7983 | 0.8013 |
| 0.3909 | 9.94 | 1800 | 0.4355 | 0.8175 | 0.8180 |
| 0.3866 | 11.05 | 2000 | 0.4408 | 0.8104 | 0.8107 |
| 0.3794 | 12.15 | 2200 | 0.4383 | 0.8163 | 0.8173 |
| 0.3705 | 13.26 | 2400 | 0.4336 | 0.8161 | 0.8166 |
| 0.368 | 14.36 | 2600 | 0.4389 | 0.8181 | 0.8183 |
| 0.3621 | 15.47 | 2800 | 0.4450 | 0.8157 | 0.8162 |
| 0.3537 | 16.57 | 3000 | 0.4434 | 0.8172 | 0.8173 |
| 0.3486 | 17.68 | 3200 | 0.4555 | 0.8199 | 0.8200 |
| 0.3417 | 18.78 | 3400 | 0.4873 | 0.8039 | 0.8055 |
| 0.3384 | 19.89 | 3600 | 0.4532 | 0.8148 | 0.8155 |
| 0.3269 | 20.99 | 3800 | 0.4819 | 0.8034 | 0.8044 |
| 0.324 | 22.1 | 4000 | 0.4837 | 0.8162 | 0.8162 |
| 0.3192 | 23.2 | 4200 | 0.4995 | 0.8024 | 0.8034 |
| 0.312 | 24.31 | 4400 | 0.4982 | 0.8039 | 0.8051 |
| 0.2982 | 25.41 | 4600 | 0.5090 | 0.8126 | 0.8131 |
| 0.308 | 26.52 | 4800 | 0.4995 | 0.8072 | 0.8079 |
| 0.2956 | 27.62 | 5000 | 0.5131 | 0.8076 | 0.8089 |
| 0.2869 | 28.73 | 5200 | 0.5214 | 0.8070 | 0.8079 |
| 0.2801 | 29.83 | 5400 | 0.5086 | 0.8086 | 0.8086 |
| 0.281 | 30.94 | 5600 | 0.5187 | 0.8152 | 0.8152 |
| 0.2749 | 32.04 | 5800 | 0.5211 | 0.8121 | 0.8124 |
| 0.2686 | 33.15 | 6000 | 0.5515 | 0.8066 | 0.8072 |
| 0.2632 | 34.25 | 6200 | 0.5491 | 0.8081 | 0.8083 |
| 0.2574 | 35.36 | 6400 | 0.5823 | 0.8088 | 0.8096 |
| 0.2528 | 36.46 | 6600 | 0.5612 | 0.8066 | 0.8076 |
| 0.2492 | 37.57 | 6800 | 0.5598 | 0.8000 | 0.8006 |
| 0.2466 | 38.67 | 7000 | 0.5874 | 0.8075 | 0.8089 |
| 0.2422 | 39.78 | 7200 | 0.5805 | 0.8117 | 0.8124 |
| 0.2393 | 40.88 | 7400 | 0.5684 | 0.8073 | 0.8076 |
| 0.2375 | 41.99 | 7600 | 0.5579 | 0.8061 | 0.8062 |
| 0.2333 | 43.09 | 7800 | 0.5884 | 0.8013 | 0.8020 |
| 0.2278 | 44.2 | 8000 | 0.6094 | 0.8091 | 0.8096 |
| 0.2282 | 45.3 | 8200 | 0.5905 | 0.8090 | 0.8093 |
| 0.2194 | 46.41 | 8400 | 0.6165 | 0.8053 | 0.8058 |
| 0.2208 | 47.51 | 8600 | 0.6277 | 0.8047 | 0.8055 |
| 0.218 | 48.62 | 8800 | 0.6125 | 0.8044 | 0.8048 |
| 0.2189 | 49.72 | 9000 | 0.6186 | 0.8050 | 0.8055 |
| 0.2201 | 50.83 | 9200 | 0.6197 | 0.8010 | 0.8020 |
| 0.2122 | 51.93 | 9400 | 0.6302 | 0.8025 | 0.8034 |
| 0.2116 | 53.04 | 9600 | 0.6281 | 0.8048 | 0.8055 |
| 0.2075 | 54.14 | 9800 | 0.6281 | 0.8043 | 0.8048 |
| 0.2074 | 55.25 | 10000 | 0.6320 | 0.8049 | 0.8055 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_EMP_H3K79me3-seqsight_32768_512_43M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_32768_512_43M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-04-30T07:07:35+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Bandu Mulla
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.3523
- eval_wer: 33.6621
- eval_runtime: 1373.0171
- eval_samples_per_second: 2.108
- eval_steps_per_second: 0.264
- epoch: 4.8900
- step: 2000
## Model description
More information needed
## Intended uses & limitations
More information needed
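Until more details are added, a minimal transcription sketch with the 🤗 `pipeline` API; the checkpoint id comes from this repo, and the audio filename is an assumption (any Hindi recording should work):

```python
from transformers import pipeline

# Hedged sketch: assumes a local Hindi audio file such as sample_hindi.wav.
asr = pipeline("automatic-speech-recognition", model="bmulla7/whisper-small-hi")

print(asr("sample_hindi.wav")["text"])
```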
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"language": ["hi"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["mozilla-foundation/common_voice_11_0"], "base_model": "openai/whisper-small", "model-index": [{"name": "Whisper Small Hi - Bandu Mulla", "results": []}]} | bmulla7/whisper-small-hi | null | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T07:07:43+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me1-seqsight_32768_512_43M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5160
- F1 Score: 0.7698
- Accuracy: 0.7708
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.623 | 1.01 | 200 | 0.5869 | 0.7061 | 0.7099 |
| 0.5857 | 2.02 | 400 | 0.5679 | 0.7269 | 0.7276 |
| 0.5694 | 3.03 | 600 | 0.5520 | 0.7437 | 0.7446 |
| 0.5568 | 4.04 | 800 | 0.5443 | 0.7495 | 0.7503 |
| 0.5498 | 5.05 | 1000 | 0.5376 | 0.7488 | 0.75 |
| 0.5431 | 6.06 | 1200 | 0.5393 | 0.7498 | 0.7519 |
| 0.5397 | 7.07 | 1400 | 0.5343 | 0.7532 | 0.7544 |
| 0.5395 | 8.08 | 1600 | 0.5345 | 0.7517 | 0.7535 |
| 0.5353 | 9.09 | 1800 | 0.5282 | 0.7563 | 0.7576 |
| 0.5341 | 10.1 | 2000 | 0.5283 | 0.7615 | 0.7623 |
| 0.53 | 11.11 | 2200 | 0.5320 | 0.7580 | 0.7592 |
| 0.5308 | 12.12 | 2400 | 0.5252 | 0.7578 | 0.7585 |
| 0.5279 | 13.13 | 2600 | 0.5264 | 0.7582 | 0.7595 |
| 0.5282 | 14.14 | 2800 | 0.5219 | 0.7562 | 0.7576 |
| 0.5227 | 15.15 | 3000 | 0.5252 | 0.7569 | 0.7588 |
| 0.5231 | 16.16 | 3200 | 0.5236 | 0.7528 | 0.7554 |
| 0.5192 | 17.17 | 3400 | 0.5269 | 0.7576 | 0.7588 |
| 0.5231 | 18.18 | 3600 | 0.5177 | 0.7619 | 0.7626 |
| 0.5181 | 19.19 | 3800 | 0.5183 | 0.7609 | 0.7623 |
| 0.5191 | 20.2 | 4000 | 0.5197 | 0.7585 | 0.7598 |
| 0.5159 | 21.21 | 4200 | 0.5258 | 0.7511 | 0.7535 |
| 0.5149 | 22.22 | 4400 | 0.5230 | 0.7579 | 0.7595 |
| 0.5139 | 23.23 | 4600 | 0.5250 | 0.7534 | 0.7560 |
| 0.5208 | 24.24 | 4800 | 0.5206 | 0.7536 | 0.7560 |
| 0.5112 | 25.25 | 5000 | 0.5184 | 0.7565 | 0.7579 |
| 0.5128 | 26.26 | 5200 | 0.5221 | 0.7622 | 0.7629 |
| 0.5118 | 27.27 | 5400 | 0.5193 | 0.7532 | 0.7551 |
| 0.5121 | 28.28 | 5600 | 0.5155 | 0.7586 | 0.7598 |
| 0.5138 | 29.29 | 5800 | 0.5242 | 0.7527 | 0.7557 |
| 0.5083 | 30.3 | 6000 | 0.5194 | 0.7574 | 0.7592 |
| 0.5096 | 31.31 | 6200 | 0.5189 | 0.7554 | 0.7569 |
| 0.5126 | 32.32 | 6400 | 0.5212 | 0.7562 | 0.7588 |
| 0.5062 | 33.33 | 6600 | 0.5223 | 0.7541 | 0.7566 |
| 0.5056 | 34.34 | 6800 | 0.5209 | 0.7548 | 0.7573 |
| 0.5046 | 35.35 | 7000 | 0.5186 | 0.7583 | 0.7598 |
| 0.5092 | 36.36 | 7200 | 0.5154 | 0.7572 | 0.7588 |
| 0.5069 | 37.37 | 7400 | 0.5157 | 0.7580 | 0.7598 |
| 0.5057 | 38.38 | 7600 | 0.5174 | 0.7580 | 0.7595 |
| 0.5058 | 39.39 | 7800 | 0.5181 | 0.7582 | 0.7598 |
| 0.5042 | 40.4 | 8000 | 0.5205 | 0.7580 | 0.7598 |
| 0.5065 | 41.41 | 8200 | 0.5182 | 0.7583 | 0.7607 |
| 0.5069 | 42.42 | 8400 | 0.5198 | 0.7539 | 0.7563 |
| 0.5053 | 43.43 | 8600 | 0.5185 | 0.7574 | 0.7592 |
| 0.5024 | 44.44 | 8800 | 0.5181 | 0.7554 | 0.7576 |
| 0.5038 | 45.45 | 9000 | 0.5170 | 0.7579 | 0.7595 |
| 0.5026 | 46.46 | 9200 | 0.5188 | 0.7562 | 0.7582 |
| 0.5069 | 47.47 | 9400 | 0.5177 | 0.7566 | 0.7588 |
| 0.4961 | 48.48 | 9600 | 0.5194 | 0.7569 | 0.7588 |
| 0.5096 | 49.49 | 9800 | 0.5177 | 0.7554 | 0.7576 |
| 0.5019 | 50.51 | 10000 | 0.5177 | 0.7579 | 0.7598 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_EMP_H3K4me1-seqsight_32768_512_43M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_32768_512_43M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-04-30T07:07:56+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me1-seqsight_32768_512_43M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5223
- F1 Score: 0.7689
- Accuracy: 0.7702
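Although the card gives no usage instructions, a minimal sketch for loading this PEFT adapter on top of its base checkpoint might look like the following. It assumes the adapter was trained for binary sequence classification and that the base repo ships a compatible tokenizer; treat it as a sketch, not the authors' recommended usage.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_32768_512_43M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_32768_512_43M-L8_f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the adapter weights

logits = model(**tokenizer("ACGTACGTAC", return_tensors="pt")).logits
```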
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6077 | 1.01 | 200 | 0.5675 | 0.7302 | 0.7323 |
| 0.559 | 2.02 | 400 | 0.5359 | 0.7508 | 0.7525 |
| 0.5397 | 3.03 | 600 | 0.5277 | 0.7535 | 0.7544 |
| 0.5337 | 4.04 | 800 | 0.5335 | 0.7603 | 0.7614 |
| 0.5279 | 5.05 | 1000 | 0.5260 | 0.7600 | 0.7610 |
| 0.5225 | 6.06 | 1200 | 0.5250 | 0.7539 | 0.7563 |
| 0.5179 | 7.07 | 1400 | 0.5267 | 0.7560 | 0.7576 |
| 0.5153 | 8.08 | 1600 | 0.5248 | 0.7563 | 0.7585 |
| 0.5128 | 9.09 | 1800 | 0.5143 | 0.7607 | 0.7620 |
| 0.5102 | 10.1 | 2000 | 0.5145 | 0.7646 | 0.7655 |
| 0.5035 | 11.11 | 2200 | 0.5206 | 0.7657 | 0.7670 |
| 0.504 | 12.12 | 2400 | 0.5097 | 0.7656 | 0.7664 |
| 0.4999 | 13.13 | 2600 | 0.5132 | 0.7653 | 0.7667 |
| 0.4996 | 14.14 | 2800 | 0.5140 | 0.7713 | 0.7724 |
| 0.4931 | 15.15 | 3000 | 0.5170 | 0.7640 | 0.7658 |
| 0.4925 | 16.16 | 3200 | 0.5173 | 0.7631 | 0.7655 |
| 0.4885 | 17.17 | 3400 | 0.5254 | 0.7652 | 0.7667 |
| 0.4905 | 18.18 | 3600 | 0.5103 | 0.7724 | 0.7730 |
| 0.4855 | 19.19 | 3800 | 0.5079 | 0.7679 | 0.7693 |
| 0.4848 | 20.2 | 4000 | 0.5109 | 0.7697 | 0.7708 |
| 0.4797 | 21.21 | 4200 | 0.5171 | 0.7635 | 0.7658 |
| 0.478 | 22.22 | 4400 | 0.5185 | 0.7684 | 0.7696 |
| 0.4761 | 23.23 | 4600 | 0.5189 | 0.7640 | 0.7658 |
| 0.4799 | 24.24 | 4800 | 0.5178 | 0.7581 | 0.7610 |
| 0.4719 | 25.25 | 5000 | 0.5158 | 0.7685 | 0.7689 |
| 0.4733 | 26.26 | 5200 | 0.5195 | 0.7728 | 0.7730 |
| 0.4694 | 27.27 | 5400 | 0.5209 | 0.7638 | 0.7658 |
| 0.4695 | 28.28 | 5600 | 0.5127 | 0.7756 | 0.7762 |
| 0.4722 | 29.29 | 5800 | 0.5263 | 0.7559 | 0.7598 |
| 0.4642 | 30.3 | 6000 | 0.5220 | 0.7686 | 0.7699 |
| 0.463 | 31.31 | 6200 | 0.5194 | 0.7736 | 0.7746 |
| 0.4639 | 32.32 | 6400 | 0.5225 | 0.7637 | 0.7658 |
| 0.4593 | 33.33 | 6600 | 0.5276 | 0.7653 | 0.7674 |
| 0.4568 | 34.34 | 6800 | 0.5190 | 0.7688 | 0.7702 |
| 0.4551 | 35.35 | 7000 | 0.5222 | 0.7737 | 0.7743 |
| 0.4588 | 36.36 | 7200 | 0.5211 | 0.7666 | 0.7677 |
| 0.4569 | 37.37 | 7400 | 0.5236 | 0.7695 | 0.7708 |
| 0.4558 | 38.38 | 7600 | 0.5227 | 0.7747 | 0.7753 |
| 0.4534 | 39.39 | 7800 | 0.5218 | 0.7733 | 0.7740 |
| 0.4514 | 40.4 | 8000 | 0.5270 | 0.7701 | 0.7711 |
| 0.4527 | 41.41 | 8200 | 0.5283 | 0.7641 | 0.7661 |
| 0.4545 | 42.42 | 8400 | 0.5257 | 0.7622 | 0.7639 |
| 0.4501 | 43.43 | 8600 | 0.5273 | 0.7703 | 0.7715 |
| 0.4474 | 44.44 | 8800 | 0.5274 | 0.7643 | 0.7658 |
| 0.4482 | 45.45 | 9000 | 0.5263 | 0.7706 | 0.7715 |
| 0.4481 | 46.46 | 9200 | 0.5272 | 0.7680 | 0.7693 |
| 0.449 | 47.47 | 9400 | 0.5281 | 0.7657 | 0.7674 |
| 0.4396 | 48.48 | 9600 | 0.5313 | 0.7654 | 0.7667 |
| 0.4528 | 49.49 | 9800 | 0.5283 | 0.7643 | 0.7658 |
| 0.4455 | 50.51 | 10000 | 0.5285 | 0.7667 | 0.7680 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_EMP_H3K4me1-seqsight_32768_512_43M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_32768_512_43M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-04-30T07:07:56+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me1-seqsight_32768_512_43M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5134
- F1 Score: 0.7661
- Accuracy: 0.7677
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.594 | 1.01 | 200 | 0.5404 | 0.7553 | 0.7557 |
| 0.5431 | 2.02 | 400 | 0.5282 | 0.7581 | 0.7588 |
| 0.5271 | 3.03 | 600 | 0.5242 | 0.7593 | 0.7601 |
| 0.5217 | 4.04 | 800 | 0.5209 | 0.7568 | 0.7585 |
| 0.5147 | 5.05 | 1000 | 0.5190 | 0.7616 | 0.7626 |
| 0.5077 | 6.06 | 1200 | 0.5237 | 0.7567 | 0.7595 |
| 0.5007 | 7.07 | 1400 | 0.5229 | 0.7630 | 0.7645 |
| 0.4957 | 8.08 | 1600 | 0.5154 | 0.7596 | 0.7614 |
| 0.4918 | 9.09 | 1800 | 0.5109 | 0.7675 | 0.7689 |
| 0.4855 | 10.1 | 2000 | 0.5114 | 0.7717 | 0.7727 |
| 0.4753 | 11.11 | 2200 | 0.5227 | 0.7649 | 0.7667 |
| 0.4733 | 12.12 | 2400 | 0.5139 | 0.7694 | 0.7699 |
| 0.4671 | 13.13 | 2600 | 0.5227 | 0.7620 | 0.7642 |
| 0.4629 | 14.14 | 2800 | 0.5271 | 0.7666 | 0.7680 |
| 0.4534 | 15.15 | 3000 | 0.5316 | 0.7624 | 0.7636 |
| 0.4501 | 16.16 | 3200 | 0.5337 | 0.7668 | 0.7680 |
| 0.4438 | 17.17 | 3400 | 0.5426 | 0.7655 | 0.7670 |
| 0.4405 | 18.18 | 3600 | 0.5362 | 0.7637 | 0.7652 |
| 0.433 | 19.19 | 3800 | 0.5340 | 0.7673 | 0.7680 |
| 0.4286 | 20.2 | 4000 | 0.5398 | 0.7631 | 0.7636 |
| 0.4188 | 21.21 | 4200 | 0.5503 | 0.7659 | 0.7670 |
| 0.4161 | 22.22 | 4400 | 0.5667 | 0.7551 | 0.7560 |
| 0.4061 | 23.23 | 4600 | 0.5742 | 0.7547 | 0.7551 |
| 0.4069 | 24.24 | 4800 | 0.5761 | 0.7560 | 0.7588 |
| 0.398 | 25.25 | 5000 | 0.5637 | 0.7639 | 0.7639 |
| 0.3948 | 26.26 | 5200 | 0.5826 | 0.7547 | 0.7551 |
| 0.3919 | 27.27 | 5400 | 0.5768 | 0.7553 | 0.7569 |
| 0.3845 | 28.28 | 5600 | 0.5962 | 0.7526 | 0.7535 |
| 0.3842 | 29.29 | 5800 | 0.5895 | 0.7473 | 0.7497 |
| 0.3732 | 30.3 | 6000 | 0.5930 | 0.7562 | 0.7566 |
| 0.3725 | 31.31 | 6200 | 0.5884 | 0.7555 | 0.7560 |
| 0.3667 | 32.32 | 6400 | 0.6023 | 0.7608 | 0.7617 |
| 0.3581 | 33.33 | 6600 | 0.6189 | 0.7499 | 0.7522 |
| 0.3611 | 34.34 | 6800 | 0.5950 | 0.7533 | 0.7538 |
| 0.3504 | 35.35 | 7000 | 0.6163 | 0.7535 | 0.7541 |
| 0.3529 | 36.36 | 7200 | 0.6210 | 0.7507 | 0.7519 |
| 0.3464 | 37.37 | 7400 | 0.6336 | 0.7454 | 0.7468 |
| 0.3454 | 38.38 | 7600 | 0.6325 | 0.7396 | 0.7396 |
| 0.3413 | 39.39 | 7800 | 0.6368 | 0.7467 | 0.7472 |
| 0.3383 | 40.4 | 8000 | 0.6332 | 0.7490 | 0.7497 |
| 0.3365 | 41.41 | 8200 | 0.6283 | 0.7481 | 0.7491 |
| 0.3396 | 42.42 | 8400 | 0.6309 | 0.7461 | 0.7472 |
| 0.3292 | 43.43 | 8600 | 0.6488 | 0.7493 | 0.75 |
| 0.3253 | 44.44 | 8800 | 0.6601 | 0.7463 | 0.7472 |
| 0.3312 | 45.45 | 9000 | 0.6363 | 0.7490 | 0.7497 |
| 0.3269 | 46.46 | 9200 | 0.6423 | 0.7490 | 0.7494 |
| 0.3224 | 47.47 | 9400 | 0.6537 | 0.7459 | 0.7472 |
| 0.3222 | 48.48 | 9600 | 0.6515 | 0.7489 | 0.75 |
| 0.3236 | 49.49 | 9800 | 0.6497 | 0.7476 | 0.7487 |
| 0.3244 | 50.51 | 10000 | 0.6487 | 0.7507 | 0.7516 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_EMP_H3K4me1-seqsight_32768_512_43M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_32768_512_43M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-04-30T07:08:14+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
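Since the card leaves this section empty, here is a minimal, hedged example for a GPT-2-style text-generation checkpoint hosted at this repo id (it assumes standard model and tokenizer files are present in the repo):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="mintujupally/gpt2-med-ft")
print(generator("Once upon a time", max_new_tokens=50)[0]["generated_text"])
```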
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | mintujupally/gpt2-med-ft | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T07:08:26+00:00 |
null | null | {} | Dilan7896/Wingom7first | null | [
"region:us"
] | null | 2024-04-30T07:09:58+00:00 |
|
null | null |
# MistrollPercival_01-7B
MistrollPercival_01-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
- model: BarraHome/Mistroll-7B-v2.2
- model: AurelPx/Percival_01-7b-slerp
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/MistrollPercival_01-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"]} | automerger/MistrollPercival_01-7B | null | [
"merge",
"mergekit",
"lazymergekit",
"automerger",
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T07:11:19+00:00 |
null | null | {"license": "bsl-1.0"} | linbroan19960327/32 | null | [
"license:bsl-1.0",
"region:us"
] | null | 2024-04-30T07:13:09+00:00 |
|
null | null | {} | rafationgson/newreality-xl | null | [
"region:us"
] | null | 2024-04-30T07:14:20+00:00 |
|
null | null | This repo contains GGUF format model files for [Svenni551's gemma-2b-it-toxic-v2.0](https://huggingface.co/Svenni551/gemma-2b-it-toxic-v2.0). | {} | Blombert/gemma-2b-it-toxic-v2.0-GGUF | null | [
"gguf",
"region:us"
] | null | 2024-04-30T07:14:38+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 36
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "other", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2", "results": []}]} | yzhuang/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2 | null | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:generator",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T07:14:52+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K36me3-seqsight_32768_512_43M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_EMP_H3K36me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K36me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4599
- F1 Score: 0.8014
- Accuracy: 0.8030
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5681 | 0.92 | 200 | 0.5181 | 0.7553 | 0.7577 |
| 0.51 | 1.83 | 400 | 0.5069 | 0.7623 | 0.7652 |
| 0.495 | 2.75 | 600 | 0.4944 | 0.7720 | 0.7744 |
| 0.4931 | 3.67 | 800 | 0.4845 | 0.7810 | 0.7824 |
| 0.4786 | 4.59 | 1000 | 0.4851 | 0.7810 | 0.7830 |
| 0.4756 | 5.5 | 1200 | 0.4779 | 0.7791 | 0.7810 |
| 0.4737 | 6.42 | 1400 | 0.4746 | 0.7886 | 0.7896 |
| 0.4711 | 7.34 | 1600 | 0.4779 | 0.7861 | 0.7878 |
| 0.4639 | 8.26 | 1800 | 0.4787 | 0.7867 | 0.7881 |
| 0.4663 | 9.17 | 2000 | 0.4679 | 0.7921 | 0.7936 |
| 0.4651 | 10.09 | 2200 | 0.4783 | 0.7834 | 0.7861 |
| 0.4582 | 11.01 | 2400 | 0.4743 | 0.7892 | 0.7913 |
| 0.4592 | 11.93 | 2600 | 0.4638 | 0.7933 | 0.7947 |
| 0.4575 | 12.84 | 2800 | 0.4664 | 0.7920 | 0.7936 |
| 0.4554 | 13.76 | 3000 | 0.4715 | 0.7937 | 0.7956 |
| 0.4533 | 14.68 | 3200 | 0.4642 | 0.7972 | 0.7982 |
| 0.4521 | 15.6 | 3400 | 0.4652 | 0.7972 | 0.7990 |
| 0.4492 | 16.51 | 3600 | 0.4692 | 0.7961 | 0.7976 |
| 0.4524 | 17.43 | 3800 | 0.4582 | 0.7946 | 0.7956 |
| 0.4463 | 18.35 | 4000 | 0.4638 | 0.7949 | 0.7964 |
| 0.4458 | 19.27 | 4200 | 0.4650 | 0.7972 | 0.7985 |
| 0.4485 | 20.18 | 4400 | 0.4671 | 0.7967 | 0.7985 |
| 0.444 | 21.1 | 4600 | 0.4619 | 0.8000 | 0.8013 |
| 0.4454 | 22.02 | 4800 | 0.4638 | 0.7968 | 0.7982 |
| 0.4439 | 22.94 | 5000 | 0.4555 | 0.7980 | 0.7993 |
| 0.4449 | 23.85 | 5200 | 0.4580 | 0.8009 | 0.8025 |
| 0.4428 | 24.77 | 5400 | 0.4646 | 0.7970 | 0.7990 |
| 0.4441 | 25.69 | 5600 | 0.4587 | 0.7990 | 0.8002 |
| 0.441 | 26.61 | 5800 | 0.4578 | 0.7986 | 0.7996 |
| 0.4418 | 27.52 | 6000 | 0.4637 | 0.7980 | 0.7996 |
| 0.438 | 28.44 | 6200 | 0.4576 | 0.8004 | 0.8019 |
| 0.4387 | 29.36 | 6400 | 0.4631 | 0.7990 | 0.8007 |
| 0.4399 | 30.28 | 6600 | 0.4588 | 0.7993 | 0.8010 |
| 0.4376 | 31.19 | 6800 | 0.4552 | 0.8006 | 0.8016 |
| 0.4364 | 32.11 | 7000 | 0.4606 | 0.8004 | 0.8022 |
| 0.4392 | 33.03 | 7200 | 0.4599 | 0.7996 | 0.8010 |
| 0.4368 | 33.94 | 7400 | 0.4598 | 0.8020 | 0.8033 |
| 0.4327 | 34.86 | 7600 | 0.4602 | 0.8016 | 0.8030 |
| 0.4368 | 35.78 | 7800 | 0.4562 | 0.8018 | 0.8030 |
| 0.4367 | 36.7 | 8000 | 0.4594 | 0.8019 | 0.8033 |
| 0.4342 | 37.61 | 8200 | 0.4629 | 0.8005 | 0.8025 |
| 0.437 | 38.53 | 8400 | 0.4576 | 0.8014 | 0.8028 |
| 0.4329 | 39.45 | 8600 | 0.4604 | 0.8016 | 0.8030 |
| 0.4329 | 40.37 | 8800 | 0.4633 | 0.8009 | 0.8028 |
| 0.4382 | 41.28 | 9000 | 0.4587 | 0.8001 | 0.8019 |
| 0.4326 | 42.2 | 9200 | 0.4583 | 0.8021 | 0.8033 |
| 0.4309 | 43.12 | 9400 | 0.4599 | 0.8019 | 0.8033 |
| 0.4341 | 44.04 | 9600 | 0.4587 | 0.8003 | 0.8019 |
| 0.4327 | 44.95 | 9800 | 0.4597 | 0.8008 | 0.8025 |
| 0.4311 | 45.87 | 10000 | 0.4588 | 0.8006 | 0.8022 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_EMP_H3K36me3-seqsight_32768_512_43M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_32768_512_43M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-04-30T07:14:54+00:00 |
null | null | {} | nntoan209/bgem3-unified-finetune-sft-v2 | null | [
"tensorboard",
"safetensors",
"region:us"
] | null | 2024-04-30T07:16:02+00:00 |
|
reinforcement-learning | stable-baselines3 |
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption based on the usual huggingface_sb3 naming convention):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename is assumed from the standard huggingface_sb3 convention for this repo.
checkpoint = load_from_hub("lightyip/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
| {"library_name": "stable-baselines3", "tags": ["PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "A2C", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "PandaReachDense-v3", "type": "PandaReachDense-v3"}, "metrics": [{"type": "mean_reward", "value": "-0.27 +/- 0.12", "name": "mean_reward", "verified": false}]}]}]} | lightyip/a2c-PandaReachDense-v3 | null | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-04-30T07:16:52+00:00 |
null | null | {"license": "openrail"} | Coolwowsocoolwow/Silver_06 | null | [
"license:openrail",
"region:us"
] | null | 2024-04-30T07:17:12+00:00 |
|
null | null | {"license": "apache-2.0"} | jayasuryajsk/Phi-3-mini-4k-romanized_telugu | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T07:17:21+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K36me3-seqsight_32768_512_43M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_EMP_H3K36me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K36me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4417
- F1 Score: 0.8075
- Accuracy: 0.8088
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5466 | 0.92 | 200 | 0.5059 | 0.7710 | 0.7732 |
| 0.4877 | 1.83 | 400 | 0.4834 | 0.7806 | 0.7824 |
| 0.4734 | 2.75 | 600 | 0.4703 | 0.7908 | 0.7919 |
| 0.4737 | 3.67 | 800 | 0.4677 | 0.7951 | 0.7962 |
| 0.4575 | 4.59 | 1000 | 0.4679 | 0.7951 | 0.7962 |
| 0.4534 | 5.5 | 1200 | 0.4599 | 0.7950 | 0.7964 |
| 0.4518 | 6.42 | 1400 | 0.4595 | 0.7991 | 0.8002 |
| 0.4471 | 7.34 | 1600 | 0.4609 | 0.8002 | 0.8025 |
| 0.4412 | 8.26 | 1800 | 0.4637 | 0.8006 | 0.8019 |
| 0.4426 | 9.17 | 2000 | 0.4486 | 0.8055 | 0.8065 |
| 0.4401 | 10.09 | 2200 | 0.4731 | 0.7966 | 0.7996 |
| 0.4341 | 11.01 | 2400 | 0.4619 | 0.8014 | 0.8036 |
| 0.4321 | 11.93 | 2600 | 0.4467 | 0.8028 | 0.8033 |
| 0.4325 | 12.84 | 2800 | 0.4493 | 0.8060 | 0.8076 |
| 0.4268 | 13.76 | 3000 | 0.4583 | 0.8028 | 0.8050 |
| 0.4252 | 14.68 | 3200 | 0.4560 | 0.8058 | 0.8071 |
| 0.4219 | 15.6 | 3400 | 0.4454 | 0.8070 | 0.8079 |
| 0.422 | 16.51 | 3600 | 0.4627 | 0.8036 | 0.8053 |
| 0.4222 | 17.43 | 3800 | 0.4527 | 0.8059 | 0.8073 |
| 0.4149 | 18.35 | 4000 | 0.4500 | 0.8059 | 0.8065 |
| 0.4165 | 19.27 | 4200 | 0.4587 | 0.8047 | 0.8062 |
| 0.4147 | 20.18 | 4400 | 0.4640 | 0.8041 | 0.8056 |
| 0.4112 | 21.1 | 4600 | 0.4534 | 0.8052 | 0.8062 |
| 0.4133 | 22.02 | 4800 | 0.4541 | 0.8067 | 0.8076 |
| 0.4101 | 22.94 | 5000 | 0.4487 | 0.8045 | 0.8056 |
| 0.4104 | 23.85 | 5200 | 0.4520 | 0.8019 | 0.8033 |
| 0.4065 | 24.77 | 5400 | 0.4689 | 0.8047 | 0.8068 |
| 0.4067 | 25.69 | 5600 | 0.4542 | 0.8061 | 0.8073 |
| 0.4034 | 26.61 | 5800 | 0.4540 | 0.8042 | 0.8050 |
| 0.4036 | 27.52 | 6000 | 0.4662 | 0.8032 | 0.8045 |
| 0.4 | 28.44 | 6200 | 0.4526 | 0.8026 | 0.8039 |
| 0.3994 | 29.36 | 6400 | 0.4538 | 0.8057 | 0.8071 |
| 0.3993 | 30.28 | 6600 | 0.4515 | 0.8051 | 0.8068 |
| 0.398 | 31.19 | 6800 | 0.4507 | 0.8034 | 0.8042 |
| 0.3962 | 32.11 | 7000 | 0.4530 | 0.8057 | 0.8068 |
| 0.3983 | 33.03 | 7200 | 0.4589 | 0.8046 | 0.8056 |
| 0.3949 | 33.94 | 7400 | 0.4566 | 0.8054 | 0.8065 |
| 0.3907 | 34.86 | 7600 | 0.4557 | 0.8043 | 0.8056 |
| 0.3929 | 35.78 | 7800 | 0.4536 | 0.8048 | 0.8053 |
| 0.3915 | 36.7 | 8000 | 0.4579 | 0.8052 | 0.8065 |
| 0.3872 | 37.61 | 8200 | 0.4630 | 0.8027 | 0.8045 |
| 0.3945 | 38.53 | 8400 | 0.4594 | 0.8027 | 0.8042 |
| 0.3873 | 39.45 | 8600 | 0.4575 | 0.8051 | 0.8062 |
| 0.3866 | 40.37 | 8800 | 0.4656 | 0.8045 | 0.8062 |
| 0.3917 | 41.28 | 9000 | 0.4581 | 0.8021 | 0.8036 |
| 0.3876 | 42.2 | 9200 | 0.4572 | 0.8052 | 0.8062 |
| 0.3852 | 43.12 | 9400 | 0.4595 | 0.8029 | 0.8039 |
| 0.386 | 44.04 | 9600 | 0.4592 | 0.8035 | 0.8048 |
| 0.3842 | 44.95 | 9800 | 0.4599 | 0.8033 | 0.8048 |
| 0.3836 | 45.87 | 10000 | 0.4597 | 0.8040 | 0.8053 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_EMP_H3K36me3-seqsight_32768_512_43M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_32768_512_43M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-04-30T07:17:33+00:00 |
null | null | {"license": "openrail"} | shhsehfjxkHsndh/gsstjstjf | null | [
"license:openrail",
"region:us"
] | null | 2024-04-30T07:17:54+00:00 |
|
automatic-speech-recognition | transformers | {} | sid330/whisper-base-ml | null | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T07:18:11+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-1b_mz-131_IMDB
This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-1b", "model-index": [{"name": "robust_llm_pythia-1b_mz-131_IMDB", "results": []}]} | AlignmentResearch/robust_llm_pythia-1b_mz-131_IMDB | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-1b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T07:18:17+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# final_V1-distilbert-text-classification-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1494
- Accuracy: 0.9672
- F1: 0.8312
- Precision: 0.8275
- Recall: 0.8357
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.6662 | 0.11 | 50 | 1.6945 | 0.2888 | 0.0820 | 0.1958 | 0.1341 |
| 0.7494 | 0.22 | 100 | 0.6947 | 0.8034 | 0.4962 | 0.4949 | 0.5054 |
| 0.2779 | 0.33 | 150 | 0.4631 | 0.8980 | 0.6685 | 0.6550 | 0.6829 |
| 0.2204 | 0.44 | 200 | 0.3938 | 0.8999 | 0.6686 | 0.6659 | 0.6758 |
| 0.137 | 0.55 | 250 | 0.4153 | 0.9065 | 0.6707 | 0.6537 | 0.6898 |
| 0.1931 | 0.66 | 300 | 0.3093 | 0.9166 | 0.7089 | 0.7728 | 0.7046 |
| 0.1356 | 0.76 | 350 | 0.3384 | 0.9152 | 0.6904 | 0.8123 | 0.6978 |
| 0.1065 | 0.87 | 400 | 0.4172 | 0.9144 | 0.7233 | 0.7804 | 0.7174 |
| 0.105 | 0.98 | 450 | 0.4521 | 0.8852 | 0.7078 | 0.7342 | 0.7051 |
| 0.1275 | 1.09 | 500 | 0.2837 | 0.9262 | 0.7365 | 0.7927 | 0.7275 |
| 0.0754 | 1.2 | 550 | 0.3979 | 0.9180 | 0.7164 | 0.8039 | 0.7133 |
| 0.0861 | 1.31 | 600 | 0.1506 | 0.9604 | 0.8259 | 0.8247 | 0.8280 |
| 0.0514 | 1.42 | 650 | 0.1397 | 0.9664 | 0.8277 | 0.8264 | 0.8293 |
| 0.0536 | 1.53 | 700 | 0.1566 | 0.9642 | 0.8279 | 0.8255 | 0.8308 |
| 0.0351 | 1.64 | 750 | 0.1804 | 0.9620 | 0.8276 | 0.8251 | 0.8312 |
| 0.0862 | 1.75 | 800 | 0.1445 | 0.9655 | 0.8314 | 0.8307 | 0.8322 |
| 0.0461 | 1.86 | 850 | 0.1492 | 0.9669 | 0.8306 | 0.8291 | 0.8324 |
| 0.0663 | 1.97 | 900 | 0.2054 | 0.9604 | 0.8292 | 0.8299 | 0.8295 |
| 0.0482 | 2.07 | 950 | 0.1498 | 0.9655 | 0.8294 | 0.8272 | 0.8324 |
| 0.0299 | 2.18 | 1000 | 0.1657 | 0.9650 | 0.8292 | 0.8269 | 0.8321 |
| 0.0348 | 2.29 | 1050 | 0.1473 | 0.9686 | 0.8310 | 0.8291 | 0.8332 |
| 0.0283 | 2.4 | 1100 | 0.1470 | 0.9694 | 0.8333 | 0.8297 | 0.8376 |
| 0.0115 | 2.51 | 1150 | 0.1496 | 0.9691 | 0.8336 | 0.8317 | 0.8358 |
| 0.004 | 2.62 | 1200 | 0.1671 | 0.9650 | 0.8301 | 0.8280 | 0.8329 |
| 0.0054 | 2.73 | 1250 | 0.1560 | 0.9694 | 0.8333 | 0.8325 | 0.8343 |
| 0.0217 | 2.84 | 1300 | 0.1553 | 0.9696 | 0.8334 | 0.8326 | 0.8345 |
| 0.0054 | 2.95 | 1350 | 0.1603 | 0.9691 | 0.8332 | 0.8324 | 0.8343 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "final_V1-distilbert-text-classification-model", "results": []}]} | AmirlyPhd/final_V1-distilbert-text-classification-model | null | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T07:18:29+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-14m_mz-133_EnronSpam_n-its-10-seed-2
This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-14m", "model-index": [{"name": "robust_llm_pythia-14m_mz-133_EnronSpam_n-its-10-seed-2", "results": []}]} | AlignmentResearch/robust_llm_pythia-14m_mz-133_EnronSpam_n-its-10-seed-2 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-14m",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T07:18:50+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K36me3-seqsight_32768_512_43M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_EMP_H3K36me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K36me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4593
- F1 Score: 0.8080
- Accuracy: 0.8085
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.532 | 0.92 | 200 | 0.5003 | 0.7788 | 0.7810 |
| 0.4759 | 1.83 | 400 | 0.4744 | 0.7933 | 0.7950 |
| 0.4604 | 2.75 | 600 | 0.4594 | 0.7985 | 0.7993 |
| 0.4624 | 3.67 | 800 | 0.4559 | 0.7965 | 0.7973 |
| 0.4441 | 4.59 | 1000 | 0.4552 | 0.8001 | 0.8013 |
| 0.4371 | 5.5 | 1200 | 0.4578 | 0.7966 | 0.7982 |
| 0.4361 | 6.42 | 1400 | 0.4540 | 0.8029 | 0.8039 |
| 0.4302 | 7.34 | 1600 | 0.4615 | 0.7940 | 0.7970 |
| 0.4196 | 8.26 | 1800 | 0.4536 | 0.8041 | 0.8045 |
| 0.4203 | 9.17 | 2000 | 0.4452 | 0.8077 | 0.8085 |
| 0.4175 | 10.09 | 2200 | 0.4663 | 0.8012 | 0.8036 |
| 0.4074 | 11.01 | 2400 | 0.4547 | 0.8040 | 0.8056 |
| 0.4032 | 11.93 | 2600 | 0.4458 | 0.8061 | 0.8062 |
| 0.3995 | 12.84 | 2800 | 0.4507 | 0.8030 | 0.8042 |
| 0.394 | 13.76 | 3000 | 0.4626 | 0.8017 | 0.8045 |
| 0.387 | 14.68 | 3200 | 0.4740 | 0.8106 | 0.8116 |
| 0.3798 | 15.6 | 3400 | 0.4645 | 0.8033 | 0.8042 |
| 0.3793 | 16.51 | 3600 | 0.4739 | 0.8026 | 0.8045 |
| 0.3736 | 17.43 | 3800 | 0.4854 | 0.8028 | 0.8048 |
| 0.3682 | 18.35 | 4000 | 0.4689 | 0.8095 | 0.8096 |
| 0.365 | 19.27 | 4200 | 0.4743 | 0.8069 | 0.8082 |
| 0.36 | 20.18 | 4400 | 0.4915 | 0.8065 | 0.8073 |
| 0.3521 | 21.1 | 4600 | 0.4773 | 0.8108 | 0.8111 |
| 0.3512 | 22.02 | 4800 | 0.4589 | 0.8127 | 0.8131 |
| 0.3461 | 22.94 | 5000 | 0.4784 | 0.8096 | 0.8102 |
| 0.3426 | 23.85 | 5200 | 0.4836 | 0.8072 | 0.8082 |
| 0.3364 | 24.77 | 5400 | 0.5025 | 0.8019 | 0.8039 |
| 0.3323 | 25.69 | 5600 | 0.5016 | 0.8058 | 0.8071 |
| 0.3263 | 26.61 | 5800 | 0.4957 | 0.8126 | 0.8134 |
| 0.3241 | 27.52 | 6000 | 0.5310 | 0.8025 | 0.8042 |
| 0.3193 | 28.44 | 6200 | 0.4931 | 0.8063 | 0.8071 |
| 0.3149 | 29.36 | 6400 | 0.4947 | 0.8036 | 0.8045 |
| 0.3111 | 30.28 | 6600 | 0.5114 | 0.7948 | 0.7962 |
| 0.3087 | 31.19 | 6800 | 0.5160 | 0.8035 | 0.8039 |
| 0.3048 | 32.11 | 7000 | 0.5246 | 0.8039 | 0.8050 |
| 0.3036 | 33.03 | 7200 | 0.5121 | 0.8067 | 0.8076 |
| 0.3029 | 33.94 | 7400 | 0.5133 | 0.8060 | 0.8068 |
| 0.2968 | 34.86 | 7600 | 0.5271 | 0.8084 | 0.8088 |
| 0.2937 | 35.78 | 7800 | 0.5254 | 0.8064 | 0.8065 |
| 0.2894 | 36.7 | 8000 | 0.5430 | 0.8001 | 0.8010 |
| 0.2877 | 37.61 | 8200 | 0.5349 | 0.8015 | 0.8025 |
| 0.2916 | 38.53 | 8400 | 0.5424 | 0.7984 | 0.7999 |
| 0.2815 | 39.45 | 8600 | 0.5469 | 0.8003 | 0.8013 |
| 0.284 | 40.37 | 8800 | 0.5575 | 0.8012 | 0.8025 |
| 0.2831 | 41.28 | 9000 | 0.5531 | 0.7982 | 0.7996 |
| 0.2795 | 42.2 | 9200 | 0.5466 | 0.8005 | 0.8010 |
| 0.2756 | 43.12 | 9400 | 0.5513 | 0.8014 | 0.8019 |
| 0.275 | 44.04 | 9600 | 0.5573 | 0.7997 | 0.8007 |
| 0.2741 | 44.95 | 9800 | 0.5527 | 0.8008 | 0.8016 |
| 0.2727 | 45.87 | 10000 | 0.5574 | 0.8029 | 0.8039 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_EMP_H3K36me3-seqsight_32768_512_43M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_32768_512_43M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-04-30T07:19:02+00:00 |
text-generation | transformers |
# llama-3-8b-chat-patent-small
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the English translation of a small dataset of 16,000 Korean patents.
## Model description
This model is provided for testing purposes only.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"language": ["en"], "license": "other", "tags": ["llama-factory", "full", "generated_from_trainer"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "llama-3-8b-chat-patent-small", "results": []}]} | kimhyeongjun/llama-3-8b-chat-patent-small | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"en",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T07:19:07+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-14m_mz-133_EnronSpam_n-its-10-seed-3
This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 3
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-14m", "model-index": [{"name": "robust_llm_pythia-14m_mz-133_EnronSpam_n-its-10-seed-3", "results": []}]} | AlignmentResearch/robust_llm_pythia-14m_mz-133_EnronSpam_n-its-10-seed-3 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-14m",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T07:19:48+00:00 |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# O0430HMA3
This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0125
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
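The `cosine_with_restarts` schedule with 80 warmup steps corresponds roughly to the scheduler construction sketched below; the Trainer builds this internally, and the total step count and number of cycles here are read off, or assumed from, the results table that follows.
```python
import torch
from transformers import get_cosine_with_hard_restarts_schedule_with_warmup

params = torch.nn.Linear(4, 2).parameters()  # stand-in for the real model's parameters
optimizer = torch.optim.Adam(params, lr=3e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_cosine_with_hard_restarts_schedule_with_warmup(
    optimizer,
    num_warmup_steps=80,
    num_training_steps=330,  # ~3 epochs at the step granularity shown below
    num_cycles=1,            # assumption: the number of restarts is not stated on the card
)
```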
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5415 | 0.09 | 10 | 0.3036 |
| 0.1844 | 0.18 | 20 | 0.1520 |
| 0.1496 | 0.27 | 30 | 0.1664 |
| 0.1583 | 0.36 | 40 | 0.1556 |
| 0.1517 | 0.45 | 50 | 0.1555 |
| 0.1516 | 0.54 | 60 | 0.1522 |
| 0.153 | 0.63 | 70 | 0.1478 |
| 0.1493 | 0.73 | 80 | 0.1598 |
| 0.1462 | 0.82 | 90 | 0.1427 |
| 0.1591 | 0.91 | 100 | 0.4668 |
| 0.2553 | 1.0 | 110 | 0.0960 |
| 0.4182 | 1.09 | 120 | 1.5724 |
| 0.276 | 1.18 | 130 | 0.0788 |
| 0.0868 | 1.27 | 140 | 0.0749 |
| 0.0837 | 1.36 | 150 | 0.0648 |
| 0.0593 | 1.45 | 160 | 0.0556 |
| 0.0534 | 1.54 | 170 | 0.0485 |
| 0.0781 | 1.63 | 180 | 0.0526 |
| 0.0545 | 1.72 | 190 | 0.0445 |
| 0.0352 | 1.81 | 200 | 0.0309 |
| 0.0496 | 1.9 | 210 | 0.0589 |
| 0.0461 | 1.99 | 220 | 0.0449 |
| 0.0372 | 2.08 | 230 | 0.0267 |
| 0.0236 | 2.18 | 240 | 0.0236 |
| 0.0213 | 2.27 | 250 | 0.0232 |
| 0.0212 | 2.36 | 260 | 0.0193 |
| 0.0207 | 2.45 | 270 | 0.0170 |
| 0.0141 | 2.54 | 280 | 0.0153 |
| 0.0205 | 2.63 | 290 | 0.0151 |
| 0.0154 | 2.72 | 300 | 0.0133 |
| 0.0135 | 2.81 | 310 | 0.0129 |
| 0.0152 | 2.9 | 320 | 0.0125 |
| 0.0137 | 2.99 | 330 | 0.0125 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "allenai/OLMo-1B", "model-index": [{"name": "O0430HMA3", "results": []}]} | Litzy619/O0430HMA3 | null | [
"safetensors",
"generated_from_trainer",
"base_model:allenai/OLMo-1B",
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T07:20:17+00:00 |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DonutProcessor_Detail
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
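No usage details are given; for a Donut-style vision encoder-decoder, inference typically follows the pattern below. The repo id, task prompt token, and expected output format are assumptions, so treat this purely as a sketch.
```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base")       # or the fine-tuned repo
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base")

image = Image.open("document.png").convert("RGB")  # example input document image
pixel_values = processor(image, return_tensors="pt").pixel_values

task_prompt = "<s_cord-v2>"  # placeholder: the actual task token for this fine-tune is not documented
decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```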
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1
- Datasets 2.13.2
- Tokenizers 0.13.3
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "model-index": [{"name": "DonutProcessor_Detail", "results": []}]} | 2003achu/code | null | [
"transformers",
"pytorch",
"vision-encoder-decoder",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T07:20:51+00:00 |
null | null | {} | saikrishna2711/my_awesome_model | null | [
"region:us"
] | null | 2024-04-30T07:21:48+00:00 |
|
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
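The card does not provide a snippet, but a minimal, hedged example for a BERT sequence-classification checkpoint hosted at this repo id might look like:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="steve1989/finbert-finetuned-SA-finance-headlines")
print(classifier("Company X posts record quarterly profit and raises full-year guidance"))
```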
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | steve1989/finbert-finetuned-SA-finance-headlines | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T07:22:15+00:00 |
text-classification | transformers | {} | langwnwk/my-trainer | null | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T07:23:15+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_0-seqsight_32768_512_43M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_32768_512_43M](https://huggingface.co/mahdibaghbanzadeh/seqsight_32768_512_43M) on the [mahdibaghbanzadeh/GUE_mouse_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5432
- F1 Score: 0.7236
- Accuracy: 0.7247
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6558 | 3.92 | 200 | 0.6066 | 0.6576 | 0.6580 |
| 0.6001 | 7.84 | 400 | 0.5916 | 0.6752 | 0.6753 |
| 0.5837 | 11.76 | 600 | 0.5789 | 0.6886 | 0.6889 |
| 0.5679 | 15.69 | 800 | 0.5726 | 0.7093 | 0.7123 |
| 0.5555 | 19.61 | 1000 | 0.5643 | 0.7058 | 0.7062 |
| 0.5486 | 23.53 | 1200 | 0.5901 | 0.6847 | 0.6975 |
| 0.5346 | 27.45 | 1400 | 0.5607 | 0.7247 | 0.7259 |
| 0.5276 | 31.37 | 1600 | 0.5585 | 0.7172 | 0.7198 |
| 0.5254 | 35.29 | 1800 | 0.5542 | 0.7184 | 0.7210 |
| 0.5129 | 39.22 | 2000 | 0.5539 | 0.7228 | 0.7235 |
| 0.5081 | 43.14 | 2200 | 0.5505 | 0.7254 | 0.7259 |
| 0.5075 | 47.06 | 2400 | 0.5478 | 0.7254 | 0.7272 |
| 0.496 | 50.98 | 2600 | 0.5521 | 0.7284 | 0.7284 |
| 0.494 | 54.9 | 2800 | 0.5555 | 0.7210 | 0.7247 |
| 0.4866 | 58.82 | 3000 | 0.5454 | 0.7240 | 0.7247 |
| 0.4872 | 62.75 | 3200 | 0.5484 | 0.7235 | 0.7247 |
| 0.4796 | 66.67 | 3400 | 0.5458 | 0.7365 | 0.7370 |
| 0.4776 | 70.59 | 3600 | 0.5406 | 0.7340 | 0.7346 |
| 0.4744 | 74.51 | 3800 | 0.5452 | 0.7269 | 0.7284 |
| 0.4708 | 78.43 | 4000 | 0.5408 | 0.7282 | 0.7296 |
| 0.4676 | 82.35 | 4200 | 0.5395 | 0.7319 | 0.7333 |
| 0.4629 | 86.27 | 4400 | 0.5382 | 0.7328 | 0.7333 |
| 0.4596 | 90.2 | 4600 | 0.5429 | 0.7200 | 0.7222 |
| 0.4567 | 94.12 | 4800 | 0.5392 | 0.7325 | 0.7333 |
| 0.4578 | 98.04 | 5000 | 0.5452 | 0.7263 | 0.7284 |
| 0.456 | 101.96 | 5200 | 0.5398 | 0.7314 | 0.7321 |
| 0.4542 | 105.88 | 5400 | 0.5382 | 0.7292 | 0.7309 |
| 0.4502 | 109.8 | 5600 | 0.5393 | 0.7315 | 0.7321 |
| 0.4452 | 113.73 | 5800 | 0.5389 | 0.7276 | 0.7284 |
| 0.4426 | 117.65 | 6000 | 0.5427 | 0.7314 | 0.7321 |
| 0.4401 | 121.57 | 6200 | 0.5441 | 0.7338 | 0.7346 |
| 0.4453 | 125.49 | 6400 | 0.5386 | 0.7240 | 0.7247 |
| 0.4361 | 129.41 | 6600 | 0.5382 | 0.7329 | 0.7333 |
| 0.4369 | 133.33 | 6800 | 0.5439 | 0.7280 | 0.7296 |
| 0.4382 | 137.25 | 7000 | 0.5364 | 0.7300 | 0.7309 |
| 0.4348 | 141.18 | 7200 | 0.5384 | 0.7335 | 0.7346 |
| 0.4326 | 145.1 | 7400 | 0.5403 | 0.7348 | 0.7358 |
| 0.4334 | 149.02 | 7600 | 0.5422 | 0.7347 | 0.7358 |
| 0.4341 | 152.94 | 7800 | 0.5403 | 0.7359 | 0.7370 |
| 0.432 | 156.86 | 8000 | 0.5380 | 0.7337 | 0.7346 |
| 0.4333 | 160.78 | 8200 | 0.5384 | 0.7344 | 0.7358 |
| 0.4342 | 164.71 | 8400 | 0.5378 | 0.7337 | 0.7346 |
| 0.4359 | 168.63 | 8600 | 0.5368 | 0.7343 | 0.7358 |
| 0.43 | 172.55 | 8800 | 0.5377 | 0.7330 | 0.7346 |
| 0.4276 | 176.47 | 9000 | 0.5394 | 0.7342 | 0.7358 |
| 0.4242 | 180.39 | 9200 | 0.5415 | 0.7355 | 0.7370 |
| 0.4269 | 184.31 | 9400 | 0.5407 | 0.7336 | 0.7346 |
| 0.4275 | 188.24 | 9600 | 0.5402 | 0.7347 | 0.7358 |
| 0.4263 | 192.16 | 9800 | 0.5399 | 0.7359 | 0.7370 |
| 0.4231 | 196.08 | 10000 | 0.5395 | 0.7359 | 0.7370 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_32768_512_43M", "model-index": [{"name": "GUE_mouse_0-seqsight_32768_512_43M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_0-seqsight_32768_512_43M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_32768_512_43M",
"region:us"
] | null | 2024-04-30T07:24:09+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
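Pending an official snippet, a minimal sketch is given below; it assumes the checkpoint follows the standard `transformers` text-generation API for Gemma-style models (an assumption based on the repository tags, not on documentation in this card), and the prompt is invented for illustration.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged usage sketch; chat formatting and generation settings are assumptions.
model_id = "kyounghyun/gemma-medical_qa-Finetune"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("What are common symptoms of influenza?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```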
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | kyounghyun/gemma-medical_qa-Finetune | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T07:25:17+00:00 |
null | null | {"license": "openrail"} | rumina001/rumina | null | [
"license:openrail",
"region:us"
] | null | 2024-04-30T07:25:26+00:00 |
|
null | null | {} | ZoneTwelve/DevHub_TAIDE | null | [
"gguf",
"region:us"
] | null | 2024-04-30T07:26:00+00:00 |
|
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trained_danish
This model is a fine-tuned version of [distilbert/distilbert-base-multilingual-cased](https://huggingface.co/distilbert/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0667
- Precision: 0.7791
- Recall: 0.7329
- F1: 0.7553
- Accuracy: 0.9807
## Model description
More information needed
## Intended uses & limitations
More information needed
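As an illustration only (the card itself leaves this section empty), a Danish NER checkpoint of this kind can usually be called through the token-classification pipeline; the example sentence is made up and the label set depends on the actual checkpoint config.
```python
from transformers import pipeline

# Hedged usage sketch; aggregation strategy is an assumption, not documented in this card.
ner = pipeline(
    "token-classification",
    model="annamariagnat/trained_danish",
    aggregation_strategy="simple",
)
print(ner("Mette Frederiksen besøgte København i april."))
```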
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 137 | 0.0788 | 0.6736 | 0.6658 | 0.6697 | 0.9749 |
| No log | 2.0 | 274 | 0.0652 | 0.7653 | 0.7406 | 0.7528 | 0.9802 |
| No log | 3.0 | 411 | 0.0667 | 0.7791 | 0.7329 | 0.7553 | 0.9807 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.2+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "distilbert/distilbert-base-multilingual-cased", "model-index": [{"name": "trained_danish", "results": []}]} | annamariagnat/trained_danish | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T07:26:21+00:00 |
null | null | {"license": "openrail"} | junesaki/RVC-Models | null | [
"license:openrail",
"region:us"
] | null | 2024-04-30T07:27:11+00:00 |
|
null | null | {} | ahopkins/foobar | null | [
"region:us"
] | null | 2024-04-30T07:28:01+00:00 |
|
null | null |
# ridwanlekan/Baichuan2-13B-Base-Q4_K_M-GGUF
This model was converted to GGUF format from [`baichuan-inc/Baichuan2-13B-Base`](https://huggingface.co/baichuan-inc/Baichuan2-13B-Base) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/baichuan-inc/Baichuan2-13B-Base) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo ridwanlekan/Baichuan2-13B-Base-Q4_K_M-GGUF --model baichuan2-13b-base.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo ridwanlekan/Baichuan2-13B-Base-Q4_K_M-GGUF --model baichuan2-13b-base.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m baichuan2-13b-base.Q4_K_M.gguf -n 128
```
| {"language": ["en", "zh"], "license": "other", "tags": ["llama-cpp", "gguf-my-repo"], "tasks": ["text-generation"]} | ridwanlekan/Baichuan2-13B-Base-Q4_K_M-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"zh",
"license:other",
"region:us"
] | null | 2024-04-30T07:30:00+00:00 |
text-generation | transformers |
# mlx-community/llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0
This model was converted to MLX format from [`llm-jp/llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0`]() using mlx-lm version **0.12.0**.
Refer to the [original model card](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
| {"language": ["en", "ja"], "license": "apache-2.0", "library_name": "transformers", "tags": ["mlx"], "datasets": ["databricks/databricks-dolly-15k", "llm-jp/databricks-dolly-15k-ja", "llm-jp/oasst1-21k-en", "llm-jp/oasst1-21k-ja", "llm-jp/oasst2-33k-en", "llm-jp/oasst2-33k-ja"], "programming_language": ["C", "C++", "C#", "Go", "Java", "JavaScript", "Lua", "PHP", "Python", "Ruby", "Rust", "Scala", "TypeScript"], "pipeline_tag": "text-generation", "inference": false} | mlx-community/llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mlx",
"conversational",
"en",
"ja",
"dataset:databricks/databricks-dolly-15k",
"dataset:llm-jp/databricks-dolly-15k-ja",
"dataset:llm-jp/oasst1-21k-en",
"dataset:llm-jp/oasst1-21k-ja",
"dataset:llm-jp/oasst2-33k-en",
"dataset:llm-jp/oasst2-33k-ja",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T07:30:40+00:00 |
null | null | {} | HikariLight/Mistral-ACI-Bench-SFT-Hashtag | null | [
"tensorboard",
"safetensors",
"region:us"
] | null | 2024-04-30T07:30:45+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: meta-llama/Meta-Llama-3-8B
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: true
load_in_4bit: false
strict: false
datasets:
- path: kloodia/alpaca_french
type: oasst
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./lora-out-french-alpaca
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
adapter: lora
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 4
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
s2_attention:
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
pad_token: <|end_of_text|>
```
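For reference, a config of this shape is normally launched with axolotl's CLI, e.g. `accelerate launch -m axolotl.cli.train lora-french-alpaca.yml`; the filename here is a placeholder, as the card does not record the exact command used.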
</details><br>
# lora-out-french-alpaca
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1297
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3359 | 0.0 | 1 | 1.3247 |
| 1.1121 | 0.25 | 100 | 1.1294 |
| 1.1716 | 0.5 | 200 | 1.1096 |
| 1.1122 | 0.75 | 300 | 1.0955 |
| 1.0474 | 1.0 | 400 | 1.0836 |
| 1.0447 | 1.24 | 500 | 1.0873 |
| 1.0131 | 1.49 | 600 | 1.0809 |
| 0.9847 | 1.74 | 700 | 1.0762 |
| 0.9584 | 1.99 | 800 | 1.0697 |
| 0.8514 | 2.23 | 900 | 1.0966 |
| 0.9217 | 2.48 | 1000 | 1.0995 |
| 0.8732 | 2.73 | 1100 | 1.0964 |
| 0.9226 | 2.98 | 1200 | 1.0951 |
| 0.76 | 3.22 | 1300 | 1.1307 |
| 0.8056 | 3.47 | 1400 | 1.1314 |
| 0.7895 | 3.72 | 1500 | 1.1297 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0.dev0
- Pytorch 2.1.2+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0 | {"license": "other", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "meta-llama/Meta-Llama-3-8B", "model-index": [{"name": "lora-out-french-alpaca", "results": []}]} | kloodia/alpaca | null | [
"peft",
"tensorboard",
"safetensors",
"llama",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:other",
"8-bit",
"region:us"
] | null | 2024-04-30T07:30:54+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2-7b-dpo-full-sft-wo-kqa_golden
This model is a fine-tuned version of [Minbyul/llama2-7b-wo-kqa_golden-sft](https://huggingface.co/Minbyul/llama2-7b-wo-kqa_golden-sft) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2778
- Rewards/chosen: -0.1016
- Rewards/rejected: -2.1516
- Rewards/accuracies: 0.9500
- Rewards/margins: 2.0501
- Logps/rejected: -771.6371
- Logps/chosen: -312.4064
- Logits/rejected: -0.5673
- Logits/chosen: -0.7867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.2497 | 0.74 | 100 | 0.3024 | -0.0879 | -1.9222 | 0.9500 | 1.8343 | -748.6945 | -311.0383 | -0.5637 | -0.7827 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "Minbyul/llama2-7b-wo-kqa_golden-sft", "model-index": [{"name": "llama2-7b-dpo-full-sft-wo-kqa_golden", "results": []}]} | Minbyul/llama2-7b-dpo-full-sft-wo-kqa_golden | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:Minbyul/llama2-7b-wo-kqa_golden-sft",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T07:33:06+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-360M
This model is a fine-tuned version of an unspecified base model on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8573
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 300
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 8.5748 | 1.0 | 3 | 8.5145 |
| 8.2938 | 2.0 | 6 | 8.2723 |
| 7.8473 | 3.0 | 9 | 7.8807 |
| 7.2394 | 4.0 | 12 | 7.3951 |
| 6.6519 | 5.0 | 15 | 6.9171 |
| 6.2694 | 6.0 | 18 | 6.5824 |
| 5.9992 | 7.0 | 21 | 6.3622 |
| 5.9116 | 8.0 | 24 | 6.1503 |
| 5.6323 | 9.0 | 27 | 5.8219 |
| 5.1124 | 10.0 | 30 | 5.4438 |
| 4.6146 | 11.0 | 33 | 5.1114 |
| 4.4062 | 12.0 | 36 | 4.8742 |
| 3.967 | 13.0 | 39 | 4.6720 |
| 3.9281 | 14.0 | 42 | 4.4782 |
| 3.5204 | 15.0 | 45 | 4.2976 |
| 3.3159 | 16.0 | 48 | 4.1650 |
| 3.1737 | 17.0 | 51 | 4.0546 |
| 2.9307 | 18.0 | 54 | 3.9636 |
| 2.8228 | 19.0 | 57 | 3.9233 |
| 2.8805 | 20.0 | 60 | 3.8573 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "Llama-360M", "results": []}]} | ninagroot/Llama-360M | null | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T07:34:53+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Raghuveer991/my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.3605
- Validation Loss: 2.0390
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
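Although the card leaves this section empty, a typical way to exercise a TensorFlow DistilBERT QA checkpoint like this one is through the question-answering pipeline; the question and context below are invented for illustration.
```python
from transformers import pipeline

# Hedged sketch; framework="tf" matches the TensorFlow weights this card reports.
qa = pipeline("question-answering", model="Raghuveer991/my_awesome_qa_model", framework="tf")
result = qa(question="Who wrote the report?", context="The report was written by the data team in 2023.")
print(result["answer"], result["score"])
```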
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.3605 | 2.0390 | 0 |
### Framework versions
- Transformers 4.40.0
- TensorFlow 2.15.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "Raghuveer991/my_awesome_qa_model", "results": []}]} | Raghuveer991/my_awesome_qa_model | null | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T07:34:55+00:00 |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cdc_influenza_bart-base-cnn
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5155
- Rouge1: 0.3829
- Rouge2: 0.3086
- Rougel: 0.3623
- Rougelsum: 0.3576
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 2 | 0.8120 | 0.308 | 0.2272 | 0.2723 | 0.2758 | 20.0 |
| No log | 2.0 | 4 | 0.6427 | 0.3473 | 0.2635 | 0.3179 | 0.3189 | 20.0 |
| No log | 3.0 | 6 | 0.5496 | 0.3925 | 0.3203 | 0.3671 | 0.3642 | 20.0 |
| No log | 4.0 | 8 | 0.5155 | 0.3829 | 0.3086 | 0.3623 | 0.3576 | 20.0 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "facebook/bart-base", "model-index": [{"name": "cdc_influenza_bart-base-cnn", "results": []}]} | PergaZuZ/cdc_influenza_bart-base-cnn | null | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T07:36:22+00:00 |
text-generation | transformers |
# Mistral-child-1-3
Mistral-child-1-3 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# no parameters necessary for base model
- model: HuggingFaceH4/zephyr-7b-beta
parameters:
density: 0.5
weight: 0.5
- model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
density: 0.5
weight: 0.5
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
normalize: true
dtype: float16
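# Assumed usage, not part of the original card: a config like this is typically
# applied with the mergekit CLI, e.g. `mergekit-yaml merge-config.yml ./Mistral-child-1-3`.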
``` | {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "HuggingFaceH4/zephyr-7b-beta", "mistralai/Mistral-7B-Instruct-v0.2"]} | PotatoB/Mistral-child-1-3 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"HuggingFaceH4/zephyr-7b-beta",
"mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T07:37:09+00:00 |
null | null | {} | chris200931/my_finetuned_gpt2 | null | [
"region:us"
] | null | 2024-04-30T07:37:13+00:00 |
|
null | transformers |
# Uploaded model
- **Developed by:** russgeo
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
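A loading sketch is shown below for convenience; it assumes the checkpoint can be pulled back in with Unsloth's `FastLanguageModel` in 4-bit, which is the usual pattern for models trained this way but is not documented in this card.
```python
from unsloth import FastLanguageModel

# Hedged sketch: max_seq_length and 4-bit loading are assumptions, not card-documented values.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="russgeo/megaprompt",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to the faster inference mode
```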
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-bnb-4bit"} | russgeo/megaprompt | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T07:37:42+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | singhvishnu020/gemma-7b-v2-role-play_1 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T07:37:49+00:00 |
null | null | {"license": "other", "license_name": "no", "license_link": "LICENSE"} | Moazzz900/Question_AnswerModel | null | [
"license:other",
"region:us"
] | null | 2024-04-30T07:38:38+00:00 |
|
text-generation | transformers |
# Base model
- microsoft/Phi-3-mini-4k-instruct
# Dataset
- ayoubkirouane/Small-Instruct-Alpaca_Format | {"language": ["en"], "library_name": "transformers", "tags": ["unsloth", "trl", "sft"], "datasets": ["ayoubkirouane/Small-Instruct-Alpaca_Format"], "pipeline_tag": "text-generation"} | ayoubkirouane/Phi3-3.8-4k_alpaca_instruct | null | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"dataset:ayoubkirouane/Small-Instruct-Alpaca_Format",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-30T07:39:11+00:00 |
null | null | {} | HenryCai1129/adapter-llama-adapterhappy_search_1000_new-50-0.003 | null | [
"region:us"
] | null | 2024-04-30T07:39:15+00:00 |
|
null | null | {} | baotuan/Baotuan | null | [
"region:us"
] | null | 2024-04-30T07:39:47+00:00 |