| modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-02 18:27:42) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 549 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-02 18:24:50) | card (string, 11 chars to 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
| thdangtr/blip_recipe1m_instructions_v1_test | thdangtr | 2024-04-14T14:32:55Z | 64 | 0 | transformers | ["transformers", "safetensors", "blip", "visual-question-answering", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | visual-question-answering | 2024-04-14T14:29:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
| RichardErkhov/mistralai_-_Mixtral-8x7B-v0.1-4bits | RichardErkhov | 2024-04-14T14:32:33Z | 4 | 0 | transformers | ["transformers", "safetensors", "mixtral", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us"] | text-generation | 2024-04-14T14:05:24Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Mixtral-8x7B-v0.1 - bnb 4bits
- Model creator: https://huggingface.co/mistralai/
- Original model: https://huggingface.co/mistralai/Mixtral-8x7B-v0.1/
Original model description:
---
license: apache-2.0
language:
- fr
- it
- de
- es
- en
tags:
- moe
---
# Model Card for Mixtral-8x7B
The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.
For full details of this model please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/).
## Warning
This repo contains weights that are compatible with [vLLM](https://github.com/vllm-project/vllm) serving of the model as well as the Hugging Face [transformers](https://github.com/huggingface/transformers) library. It is based on the original Mixtral [torrent release](magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce), but the file format and parameter names are different. Please note that the model cannot (yet) be instantiated with HF.
## Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
By default, transformers loads the model in full precision. You can therefore further reduce the memory requirements for running the model with the optimizations offered in the HF ecosystem:
### In half-precision
Note that `float16` precision only works on GPU devices.
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(0)
text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
### Lower precision (8-bit & 4-bit) using `bitsandbytes`
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)
text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
### Load the model with Flash Attention 2
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True)
text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
## Notice
Mixtral-8x7B is a pretrained base model and therefore does not have any moderation mechanisms.
# The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
|
| farhananis005/QLoRA_mistral7b__roneneldan-TinyStories7k | farhananis005 | 2024-04-14T14:29:03Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2024-04-14T14:28:57Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** farhananis005
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
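For a quick smoke test, the checkpoint can be loaded back with Unsloth. The sketch below is a minimal example assuming a CUDA GPU, a 2048-token context, and an illustrative prompt; none of these settings come from this card:
```python
from unsloth import FastLanguageModel

# Minimal sketch: load the uploaded 4-bit checkpoint (assumes a CUDA GPU).
model, tokenizer = FastLanguageModel.from_pretrained(
    "farhananis005/QLoRA_mistral7b__roneneldan-TinyStories7k",
    max_seq_length=2048,  # assumed context length
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

inputs = tokenizer("Once upon a time", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```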
|
| do1do1do1do1/wav2vec2-base-timit-demo-colab | do1do1do1do1 | 2024-04-14T14:27:45Z | 0 | 0 | transformers | ["transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-04-14T14:27:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
| Roza55/Roza | Roza55 | 2024-04-14T14:26:43Z | 0 | 0 | null | ["license:bigscience-openrail-m", "region:us"] | null | 2024-04-14T14:26:43Z |
---
license: bigscience-openrail-m
---
|
| hungphongtrn/en_vi_envit5-base_docs_news_train | hungphongtrn | 2024-04-14T14:26:23Z | 13 | 0 | transformers | ["transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:VietAI/envit5-base", "base_model:finetune:VietAI/envit5-base", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2024-04-14T13:36:45Z |
---
license: mit
base_model: VietAI/envit5-base
tags:
- generated_from_trainer
model-index:
- name: en_vi_envit5-base_docs_news_train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# en_vi_envit5-base_docs_news_train
This model is a fine-tuned version of [VietAI/envit5-base](https://huggingface.co/VietAI/envit5-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `Seq2SeqTrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
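A minimal sketch of how these values map onto `Seq2SeqTrainingArguments`; the `output_dir` is an assumption, not taken from this card:
```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="en_vi_envit5-base_docs_news_train",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,  # 16 x 16 = total train batch size of 256
    lr_scheduler_type="linear",
    num_train_epochs=20,
    seed=42,
)
```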
### Framework versions
- Transformers 4.37.2
- Pytorch 1.12.1+cu116
- Datasets 2.18.0
- Tokenizers 0.15.1
|
| casque/slingshot_v1.6_Gtonero | casque | 2024-04-14T14:26:09Z | 0 | 0 | null | ["license:creativeml-openrail-m", "region:us"] | null | 2024-04-14T14:25:31Z |
---
license: creativeml-openrail-m
---
|
| trung0209/rumi_new | trung0209 | 2024-04-14T14:25:37Z | 1 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "peft", "conversational", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2024-03-30T11:34:33Z |
---
tags:
- text-generation-inference
- text-generation
- peft
library_name: transformers
license: other
pipeline_tag: text-generation
---
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
| RichardErkhov/mistralai_-_Mistral-7B-v0.1-8bits | RichardErkhov | 2024-04-14T14:23:27Z | 76 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "arxiv:2310.06825", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us"] | text-generation | 2024-04-14T13:46:57Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Mistral-7B-v0.1 - bnb 8bits
- Model creator: https://huggingface.co/mistralai/
- Original model: https://huggingface.co/mistralai/Mistral-7B-v0.1/
Original model description:
---
license: apache-2.0
pipeline_tag: text-generation
language:
- en
tags:
- pretrained
inference:
parameters:
temperature: 0.7
---
# Model Card for Mistral-7B-v0.1
The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters.
Mistral-7B-v0.1 outperforms Llama 2 13B on all benchmarks we tested.
For full details of this model please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/announcing-mistral-7b/).
## Model Architecture
Mistral-7B-v0.1 is a transformer model, with the following architecture choices (a config-inspection sketch follows the list):
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
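These choices can be verified from the model config. A minimal inspection sketch, assuming the standard `MistralConfig` field names in `transformers`:
```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("mistralai/Mistral-7B-v0.1")
# Grouped-query attention: fewer key/value heads than attention heads.
print(cfg.num_attention_heads, cfg.num_key_value_heads)
# Sliding-window attention span (in tokens).
print(cfg.sliding_window)
```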
## Troubleshooting
- If you see the following error:
```
KeyError: 'mistral'
```
- Or:
```
NotImplementedError: Cannot copy out of meta tensor; no data!
```
Ensure you are utilizing a stable version of Transformers, 4.34.0 or newer.
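A minimal runtime check for this requirement (the 4.34.0 floor comes from the note above; `packaging` ships as a `transformers` dependency):
```python
import transformers
from packaging import version

# The `mistral` model type requires transformers >= 4.34.0.
assert version.parse(transformers.__version__) >= version.parse("4.34.0"), (
    f"transformers {transformers.__version__} is too old; "
    "upgrade with `pip install -U transformers`"
)
```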
## Notice
Mistral 7B is a pretrained base model and therefore does not have any moderation mechanisms.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
|
| RichardErkhov/mistralai_-_Mistral-7B-Instruct-v0.1-4bits | RichardErkhov | 2024-04-14T14:08:23Z | 4 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:2310.06825", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us"] | text-generation | 2024-04-14T13:33:11Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Mistral-7B-Instruct-v0.1 - bnb 4bits
- Model creator: https://huggingface.co/mistralai/
- Original model: https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1/
Original model description:
---
license: apache-2.0
pipeline_tag: text-generation
tags:
- finetuned
inference: true
widget:
- messages:
- role: user
content: What is your favorite condiment?
---
# Model Card for Mistral-7B-Instruct-v0.1
The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) generative text model using a variety of publicly available conversation datasets.
For full details of this model please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/announcing-mistral-7b/).
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence (BOS) token id; subsequent instructions should not. The assistant generation will be terminated by the end-of-sentence (EOS) token id.
E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
## Troubleshooting
- If you see the following error:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "/transformers/models/auto/configuration_auto.py", line 723, in __getitem__
    raise KeyError(key)
KeyError: 'mistral'
```
Installing transformers from source should solve the issue: `pip install git+https://github.com/huggingface/transformers`. This should not be required after transformers-v4.33.4.
## Limitations
The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
|
| FrentrNette/frentrtosSummarizer | FrentrNette | 2024-04-14T13:57:15Z | 105 | 2 | transformers | ["transformers", "keras", "safetensors", "bart", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2024-04-13T23:34:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
| hookzeng/ppo-Huggy | hookzeng | 2024-04-14T13:54:20Z | 2 | 0 | ml-agents | ["ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us"] | reinforcement-learning | 2024-04-14T13:53:26Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: hookzeng/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
| fishtoby/q-FrozenLake-v1-4x4-noSlippery | fishtoby | 2024-04-14T13:53:32Z | 0 | 0 | null | ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2024-04-14T13:53:29Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym

# load_from_hub is the helper from the Hugging Face Deep RL Course;
# it downloads and unpickles the saved Q-learning model dict.
model = load_from_hub(repo_id="fishtoby/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
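To reproduce the mean-reward check, here is a minimal greedy-rollout sketch; it assumes `model` and `env` from above and that the pickled dict stores the Q-table under the course's `"qtable"` key (an assumption):
```python
import numpy as np

rewards = []
for _ in range(100):
    state, _ = env.reset()
    done, total = False, 0.0
    while not done:
        action = int(np.argmax(model["qtable"][state]))  # greedy action
        state, reward, terminated, truncated, _ = env.step(action)
        total += reward
        done = terminated or truncated
    rewards.append(total)
print(f"mean_reward={np.mean(rewards):.2f} +/- {np.std(rewards):.2f}")
```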
|
| XsoraS/outputs3 | XsoraS | 2024-04-14T13:52:01Z | 136 | 0 | transformers | ["transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-04-14T13:06:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
| edg3/bart-cnn-samsum-finetuned | edg3 | 2024-04-14T13:49:40Z | 104 | 0 | transformers | ["transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:facebook/bart-large-cnn", "base_model:finetune:facebook/bart-large-cnn", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2024-04-14T08:17:07Z |
---
license: mit
base_model: facebook/bart-large-cnn
tags:
- generated_from_trainer
model-index:
- name: bart-cnn-samsum-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-samsum-finetuned
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1330
## Model description
Experiments with simple training on an existing model; for my personal blog.
## Intended uses & limitations
To read conversations and give them summaries, to some degree.
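A minimal usage sketch with the `transformers` pipeline; the sample dialogue is illustrative, not from the training data:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="edg3/bart-cnn-samsum-finetuned")

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)
print(summarizer(dialogue, max_length=60, min_length=5)[0]["summary_text"])
```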
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0981 | 1.0 | 37 | 0.1360 |
| 0.1009 | 2.0 | 74 | 0.1330 |
| 0.0957 | 3.0 | 111 | 0.1330 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
| Jones189/q-Taxi-v3 | Jones189 | 2024-04-14T13:44:29Z | 0 | 0 | null | ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2024-04-14T13:44:08Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.69
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym

# load_from_hub is the helper from the Hugging Face Deep RL Course;
# it downloads and unpickles the saved Q-learning model dict.
model = load_from_hub(repo_id="Jones189/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
| Random2307/FR | Random2307 | 2024-04-14T13:37:17Z | 0 | 0 | null | ["license:creativeml-openrail-m", "region:us"] | null | 2024-04-14T13:37:17Z |
---
license: creativeml-openrail-m
---
|
| Minbyul/selfbiorag-7b-wo-medication_qa-sft | Minbyul | 2024-04-14T13:35:32Z | 6 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "alignment-handbook", "trl", "sft", "generated_from_trainer", "dataset:HuggingFaceH4/deita-10k-v0-sft", "base_model:dmis-lab/selfbiorag_7b", "base_model:finetune:dmis-lab/selfbiorag_7b", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-04-14T13:22:57Z |
---
base_model: dmis-lab/selfbiorag_7b
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
datasets:
- HuggingFaceH4/deita-10k-v0-sft
model-index:
- name: selfbiorag-7b-wo-medication_qa-sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# selfbiorag-7b-wo-medication_qa-sft
This model is a fine-tuned version of [dmis-lab/selfbiorag_7b](https://huggingface.co/dmis-lab/selfbiorag_7b) on the HuggingFaceH4/deita-10k-v0-sft dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5396
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5074 | 0.92 | 6 | 1.5828 |
| 1.2223 | 2.0 | 13 | 1.5458 |
| 1.1253 | 2.77 | 18 | 1.5396 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
|
| shubham11/gemma_newprompt14-4 | shubham11 | 2024-04-14T13:30:11Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "gemma", "trl", "en", "base_model:unsloth/gemma-7b-it-bnb-4bit", "base_model:finetune:unsloth/gemma-7b-it-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2024-04-14T13:29:17Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-7b-it-bnb-4bit
---
# Uploaded model
- **Developed by:** shubham11
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-7b-it-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
| hungphongtrn/en_vi_envit5-base_doc_train | hungphongtrn | 2024-04-14T13:29:04Z | 3 | 0 | transformers | ["transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:VietAI/envit5-base", "base_model:finetune:VietAI/envit5-base", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2024-04-14T12:50:39Z |
---
license: mit
base_model: VietAI/envit5-base
tags:
- generated_from_trainer
model-index:
- name: en_vi_envit5-base_doc_train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# en_vi_envit5-base_doc_train
This model is a fine-tuned version of [VietAI/envit5-base](https://huggingface.co/VietAI/envit5-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Framework versions
- Transformers 4.37.2
- Pytorch 1.12.1+cu116
- Datasets 2.18.0
- Tokenizers 0.15.1
|
| zahra-soukhtedel/wav2vec2-large-xls-r-300m-persion-v2 | zahra-soukhtedel | 2024-04-14T13:27:57Z | 0 | 0 | transformers | ["transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-04-14T10:01:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
| Minbyul/meditron-7b-wo-medication_qa-sft | Minbyul | 2024-04-14T13:21:35Z | 5 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "alignment-handbook", "trl", "sft", "generated_from_trainer", "dataset:HuggingFaceH4/deita-10k-v0-sft", "base_model:epfl-llm/meditron-7b", "base_model:finetune:epfl-llm/meditron-7b", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-04-14T13:08:36Z |
---
license: llama2
base_model: epfl-llm/meditron-7b
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
datasets:
- HuggingFaceH4/deita-10k-v0-sft
model-index:
- name: meditron-7b-wo-medication_qa-sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# meditron-7b-wo-medication_qa-sft
This model is a fine-tuned version of [epfl-llm/meditron-7b](https://huggingface.co/epfl-llm/meditron-7b) on the HuggingFaceH4/deita-10k-v0-sft dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3274
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1713 | 0.92 | 6 | 1.3683 |
| 1.0185 | 2.0 | 13 | 1.3435 |
| 0.9011 | 2.77 | 18 | 1.3274 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
|
| TKU410410103/hubert-large-japanese-asr | TKU410410103 | 2024-04-14T13:21:01Z | 525 | 0 | transformers | ["transformers", "safetensors", "hubert", "automatic-speech-recognition", "generated_from_trainer", "ja", "dataset:reazon-research/reazonspeech", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2024-04-09T03:01:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
- cer
model-index:
- name: hubert-large-japanese-asr
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Reazonspeech
type: custom
args: ja
metrics:
- name: Test WER
type: wer
value: 40.5197
- name: Test CER
type: cer
value: 23.220979
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_11_0
type: common_voice
args: ja
metrics:
- name: Test WER
type: wer
value: 22.705487
- name: Test CER
type: cer
value: 9.39939
datasets:
- reazon-research/reazonspeech
- mozilla-foundation/common_voice_11_0
language:
- ja
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-large-asr
This model is a fine-tuned version of [rinna/japanese-hubert-large](https://huggingface.co/rinna/japanese-hubert-large) for ASR. Initially fine-tuned on the [reazonspeech(small) dataset](https://huggingface.co/datasets/reazon-research/reazonspeech), it was subsequently further fine-tuned on the [common_voice_11_0 dataset](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/viewer/ja) for ASR tasks.
This model can only predict Hiragana.
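A minimal transcription sketch; `sample.wav` is a placeholder path, and the model expects 16 kHz mono input:
```python
import torch
import torchaudio
from transformers import HubertForCTC, Wav2Vec2Processor

model = HubertForCTC.from_pretrained("TKU410410103/hubert-large-japanese-asr")
processor = Wav2Vec2Processor.from_pretrained("TKU410410103/hubert-large-japanese-asr")

speech, sr = torchaudio.load("sample.wav")  # placeholder input file
speech = torchaudio.functional.resample(speech, sr, 16000)[0]  # mono, 16 kHz

inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])  # hiragana transcription
```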
## Acknowledgments
This model's fine-tuning approach was inspired by and references the training methodology used in [vumichien/wav2vec2-large-xlsr-japanese-hiragana](https://huggingface.co/vumichien/wav2vec2-large-xlsr-japanese-hiragana).
## Training procedure
The model was fine-tuned in two main stages: first on the ReazonSpeech dataset, then on the common_voice_11_0 dataset. Details of the training steps and results are as follows:
### Training on ReazonSpeech
The initial fine-tuning on the ReazonSpeech (small) dataset produced the following metrics:
| Step | Training Loss | Validation Loss | WER |
|-------|---------------|-----------------|--------|
| 1000 | 12.29880 | 3.610288 | 1.00000|
| 2000 | 3.601800 | 3.505306 | 1.00000|
| 3000 | 2.80300 | 1.948012 | 0.722361|
| 4000 | 1.961500 | 1.545842 | 0.558738|
| 5000 | 1.712000 | 1.420027 | 0.509049|
| 6000 | 1.565500 | 1.235171 | 0.466279|
| 7000 | 1.504900 | 1.160565 | 0.461829|
| 8000 | 1.409800 | 1.088012 | 0.427435|
| 9000 | 1.358800 | 1.097211 | 0.409861|
| 10000 | 1.318600 | 1.062294 | 0.403694|
| 11000 | 1.258500 | 1.026783 | 0.385464|
| 12000 | 1.245100 | 1.024860 | 0.379845|
| 13000 | 1.217700 | 0.985201 | 0.375634|
| 14000 | 1.187900 | 0.977686 | 0.367163|
| 15000 | 1.168100 | 0.978529 | 0.363656|
| 16000 | 1.135800 | 0.965668 | 0.363942|
| 17000 | 1.140600 | 0.953237 | 0.360912|
### Training on common_voice_11_0
After fine-tuning on Reazonspeech, further fine-tuning was performed on the common_voice_11_0 dataset, leading to the following results:
| Step | Training Loss | Validation Loss | WER |
|------|---------------|-----------------|--------|
| 1000 | 1.08950 | 0.49275 | 0.302035|
| 2000 | 0.86100 | 0.45113 | 0.266950|
| 3000 | 0.76240 | 0.442281 | 0.244981|
| 4000 | 0.70170 | 0.411666 | 0.234287|
| 5000 | 0.66400 | 0.411769 | 0.227942|
| 6000 | 0.63810 | 0.413067 | 0.225690|
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-4
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- num_train_epochs: 10
- lr_scheduler_type: linear
### How to evaluate the model
```python
from transformers import HubertForCTC, Wav2Vec2Processor
from datasets import load_dataset
import torch
import torchaudio
import librosa
import numpy as np
import re
import MeCab
import pykakasi
from evaluate import load
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# load the model and processor, and move the model to the device used below
model = HubertForCTC.from_pretrained('TKU410410103/hubert-large-japanese-asr').to(device)
model.eval()
processor = Wav2Vec2Processor.from_pretrained("TKU410410103/hubert-large-japanese-asr")
# load dataset
test_dataset = load_dataset('mozilla-foundation/common_voice_11_0', 'ja', split='test')
remove_columns = [col for col in test_dataset.column_names if col not in ['audio', 'sentence']]
test_dataset = test_dataset.remove_columns(remove_columns)
# resample
def process_waveforms(batch):
speech_arrays = []
sampling_rates = []
for audio_path in batch['audio']:
speech_array, _ = torchaudio.load(audio_path['path'])
speech_array_resampled = librosa.resample(np.asarray(speech_array[0].numpy()), orig_sr=48000, target_sr=16000)
speech_arrays.append(speech_array_resampled)
sampling_rates.append(16000)
batch["array"] = speech_arrays
batch["sampling_rate"] = sampling_rates
return batch
# hiragana
CHARS_TO_IGNORE = [",", "?", "¿", ".", "!", "¡", ";", ";", ":", '""', "%", '"', "�", "ʿ", "·", "჻", "~", "՞",
"؟", "،", "।", "॥", "«", "»", "„", "“", "”", "「", "」", "‘", "’", "《", "》", "(", ")", "[", "]",
"{", "}", "=", "`", "_", "+", "<", ">", "…", "–", "°", "´", "ʾ", "‹", "›", "©", "®", "—", "→", "。",
"、", "﹂", "﹁", "‧", "~", "﹏", ",", "{", "}", "(", ")", "[", "]", "【", "】", "‥", "〽",
"『", "』", "〝", "〟", "⟨", "⟩", "〜", ":", "!", "?", "♪", "؛", "/", "\\", "º", "−", "^", "'", "ʻ", "ˆ"]
chars_to_ignore_regex = f"[{re.escape(''.join(CHARS_TO_IGNORE))}]"
wakati = MeCab.Tagger("-Owakati")
kakasi = pykakasi.kakasi()
kakasi.setMode("J","H")
kakasi.setMode("K","H")
kakasi.setMode("r","Hepburn")
conv = kakasi.getConverter()
def prepare_char(batch):
batch["sentence"] = conv.do(wakati.parse(batch["sentence"]).strip())
batch["sentence"] = re.sub(chars_to_ignore_regex,'', batch["sentence"]).strip()
return batch
resampled_eval_dataset = test_dataset.map(process_waveforms, batched=True, batch_size=50, num_proc=4)
eval_dataset = resampled_eval_dataset.map(prepare_char, num_proc=4)
# begin the evaluation process
wer = load("wer")
cer = load("cer")
def evaluate(batch):
inputs = processor(batch["array"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to(device), attention_mask=inputs.attention_mask.to(device)).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
columns_to_remove = [column for column in eval_dataset.column_names if column != "sentence"]
batch_size = 16
result = eval_dataset.map(evaluate, remove_columns=columns_to_remove, batched=True, batch_size=batch_size)
wer_result = wer.compute(predictions=result["pred_strings"], references=result["sentence"])
cer_result = cer.compute(predictions=result["pred_strings"], references=result["sentence"])
print("WER: {:2f}%".format(100 * wer_result))
print("CER: {:2f}%".format(100 * cer_result))
```
### Test results
The final model was evaluated as follows:
On ReazonSpeech (tiny):
- WER: 40.519700%
- CER: 23.220979%
On common_voice_11_0:
- WER: 22.705487%
- CER: 9.399390%
### Framework versions
- Transformers 4.39.1
- Pytorch 2.2.1+cu118
- Datasets 2.17.1
|
TKU410410103/hubert-base-japanese-asr
|
TKU410410103
| 2024-04-14T13:20:43Z | 573 | 0 |
transformers
|
[
"transformers",
"safetensors",
"hubert",
"automatic-speech-recognition",
"generated_from_trainer",
"ja",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-04-09T06:01:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
- cer
model-index:
- name: hubert-base-japanese-asr
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_11_0
type: common_voice
args: ja
metrics:
- name: Test WER
type: wer
value: 27.511982
- name: Test CER
type: cer
value: 11.699897
datasets:
- mozilla-foundation/common_voice_11_0
language:
- ja
---
# hubert-base-asr
This model is a fine-tuned version of [rinna/japanese-hubert-base](https://huggingface.co/rinna/japanese-hubert-base) on the [common_voice_11_0 dataset](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/viewer/ja) for ASR tasks.
This model can only predict Hiragana.
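For quick experimentation (as opposed to the full evaluation script below), the high-level `pipeline` API should also work; `sample.wav` is a placeholder for a 16 kHz mono audio file:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="TKU410410103/hubert-base-japanese-asr")
print(asr("sample.wav")["text"])  # hiragana transcription
```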
## Acknowledgments
This model's fine-tuning approach was inspired by and references the training methodology used in [vumichien/wav2vec2-large-xlsr-japanese-hiragana](https://huggingface.co/vumichien/wav2vec2-large-xlsr-japanese-hiragana).
## Training Procedure
Fine-tuning on the common_voice_11_0 dataset led to the following results:
| Step | Training Loss | Validation Loss | WER |
|-------|---------------|-----------------|--------|
| 1000 | 2.505600 | 1.009531 | 0.614952|
| 2000 | 1.186900 | 0.752440 | 0.422948|
| 3000 | 0.947700 | 0.658266 | 0.358543|
| 4000 | 0.817700 | 0.656034 | 0.356308|
| 5000 | 0.741300 | 0.623420 | 0.314537|
| 6000 | 0.694700 | 0.624534 | 0.294018|
| 7000 | 0.653400 | 0.603341 | 0.286735|
| 8000 | 0.616200 | 0.606606 | 0.285132|
| 9000 | 0.594800 | 0.596215 | 0.277422|
| 10000 | 0.590500 | 0.603380 | 0.274949|
### Training hyperparameters
The training hyperparameters remained consistent throughout the fine-tuning process:
- learning_rate: 1e-4
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- num_train_epochs: 30
- lr_scheduler_type: linear
### How to evaluate the model
```python
from transformers import HubertForCTC, Wav2Vec2Processor
from datasets import load_dataset
import torch
import torchaudio
import librosa
import numpy as np
import re
import MeCab
import pykakasi
from evaluate import load
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# load the model and processor, and move the model to the device used below
model = HubertForCTC.from_pretrained('TKU410410103/hubert-base-japanese-asr').to(device)
model.eval()
processor = Wav2Vec2Processor.from_pretrained("TKU410410103/hubert-base-japanese-asr")
# load dataset
test_dataset = load_dataset('mozilla-foundation/common_voice_11_0', 'ja', split='test')
remove_columns = [col for col in test_dataset.column_names if col not in ['audio', 'sentence']]
test_dataset = test_dataset.remove_columns(remove_columns)
# resample
def process_waveforms(batch):
speech_arrays = []
sampling_rates = []
for audio_path in batch['audio']:
speech_array, _ = torchaudio.load(audio_path['path'])
speech_array_resampled = librosa.resample(np.asarray(speech_array[0].numpy()), orig_sr=48000, target_sr=16000)
speech_arrays.append(speech_array_resampled)
sampling_rates.append(16000)
batch["array"] = speech_arrays
batch["sampling_rate"] = sampling_rates
return batch
# hiragana
CHARS_TO_IGNORE = [",", "?", "¿", ".", "!", "¡", ";", ";", ":", '""', "%", '"', "�", "ʿ", "·", "჻", "~", "՞",
"؟", "،", "।", "॥", "«", "»", "„", "“", "”", "「", "」", "‘", "’", "《", "》", "(", ")", "[", "]",
"{", "}", "=", "`", "_", "+", "<", ">", "…", "–", "°", "´", "ʾ", "‹", "›", "©", "®", "—", "→", "。",
"、", "﹂", "﹁", "‧", "~", "﹏", ",", "{", "}", "(", ")", "[", "]", "【", "】", "‥", "〽",
"『", "』", "〝", "〟", "⟨", "⟩", "〜", ":", "!", "?", "♪", "؛", "/", "\\", "º", "−", "^", "'", "ʻ", "ˆ"]
chars_to_ignore_regex = f"[{re.escape(''.join(CHARS_TO_IGNORE))}]"
wakati = MeCab.Tagger("-Owakati")
kakasi = pykakasi.kakasi()
kakasi.setMode("J","H")
kakasi.setMode("K","H")
kakasi.setMode("r","Hepburn")
conv = kakasi.getConverter()
def prepare_char(batch):
batch["sentence"] = conv.do(wakati.parse(batch["sentence"]).strip())
batch["sentence"] = re.sub(chars_to_ignore_regex,'', batch["sentence"]).strip()
return batch
resampled_eval_dataset = test_dataset.map(process_waveforms, batched=True, batch_size=50, num_proc=4)
eval_dataset = resampled_eval_dataset.map(prepare_char, num_proc=4)
# begin the evaluation process
wer = load("wer")
cer = load("cer")
def evaluate(batch):
inputs = processor(batch["array"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to(device), attention_mask=inputs.attention_mask.to(device)).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
columns_to_remove = [column for column in eval_dataset.column_names if column != "sentence"]
batch_size = 16
result = eval_dataset.map(evaluate, remove_columns=columns_to_remove, batched=True, batch_size=batch_size)
wer_result = wer.compute(predictions=result["pred_strings"], references=result["sentence"])
cer_result = cer.compute(predictions=result["pred_strings"], references=result["sentence"])
print("WER: {:2f}%".format(100 * wer_result))
print("CER: {:2f}%".format(100 * cer_result))
```
### Test results
The final model was evaluated as follows:
On common_voice_11_0:
- WER: 27.511982%
- CER: 11.699897%
### Framework versions
- Transformers 4.39.1
- Pytorch 2.2.1+cu118
- Datasets 2.17.1
|
tomaszki/mistral-32-b
|
tomaszki
| 2024-04-14T13:20:00Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-14T13:16:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
NouRed/BioMed-Gemma-2b
|
NouRed
| 2024-04-14T13:15:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-14T13:15:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LDDon/distilgpt2-finetuned-cybersecurity_readme
|
LDDon
| 2024-04-14T13:14:11Z | 204 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-14T12:37:15Z |
---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-cybersecurity_readme
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-cybersecurity_readme
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9861
## Model description
More information needed
## Intended uses & limitations
More information needed
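In the absence of further documentation, a minimal usage sketch is given below; the README-style prompt is illustrative:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="LDDon/distilgpt2-finetuned-cybersecurity_readme")
# Prompt with a README-style heading and let the model continue it.
print(generator("## Installation", max_new_tokens=50)[0]["generated_text"])
```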
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 125 | 3.0330 |
| No log | 2.0 | 250 | 2.9910 |
| No log | 3.0 | 375 | 2.9861 |
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
tomaszki/mistral-32-a
|
tomaszki
| 2024-04-14T13:09:30Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-14T13:06:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Zzzalo/my_awesome_qa_model
|
Zzzalo
| 2024-04-14T13:08:43Z | 103 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-04-13T18:05:13Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7366
## Model description
More information needed
## Intended uses & limitations
More information needed
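In the absence of further documentation, a minimal usage sketch is given below; the question and context are illustrative:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Zzzalo/my_awesome_qa_model")
result = qa(
    question="What library was used?",
    context="The model was fine-tuned with the Hugging Face Transformers library.",
)
print(result["answer"], result["score"])
```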
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.5739 |
| 2.8841 | 2.0 | 500 | 1.8642 |
| 2.8841 | 3.0 | 750 | 1.7366 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Minbyul/llama2-7b-wo-medication_qa-sft
|
Minbyul
| 2024-04-14T13:07:25Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:HuggingFaceH4/deita-10k-v0-sft",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-14T12:54:42Z |
---
base_model: meta-llama/Llama-2-7b-hf
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
datasets:
- HuggingFaceH4/deita-10k-v0-sft
model-index:
- name: llama2-7b-wo-medication_qa-sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2-7b-wo-medication_qa-sft
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the HuggingFaceH4/deita-10k-v0-sft dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1052 | 0.92 | 6 | 1.2976 |
| 0.9691 | 2.0 | 13 | 1.2458 |
| 0.871 | 2.77 | 18 | 1.2333 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
|
tomaszki/mistral-32
|
tomaszki
| 2024-04-14T13:06:20Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-14T13:04:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jayavibhav/gemma-it-Kannada-v01
|
jayavibhav
| 2024-04-14T13:03:34Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-03T10:15:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cgihlstorf/llama27b-finetuned_32_1_0.0003_alternate_no_output_random_train_nonrandom_val
|
cgihlstorf
| 2024-04-14T13:03:03Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-04-14T13:01:46Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0
|
GXLooong/llama-2-7b-dpo-full
|
GXLooong
| 2024-04-14T13:02:47Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-29T03:45:28Z |
For the README, see the PPO repository:
https://huggingface.co/GXLooong/llama-2-7b-ppo-full
|
reallad/lobollama
|
reallad
| 2024-04-14T12:58:17Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-09T22:48:52Z |
---
license: llama2
---
A modified version of llama-2-7b with only 4 key-value (KV) attention heads. It outputs gibberish, but some functionality appears to be recoverable through fine-tuning; a hypothetical configuration sketch follows.
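As an illustrative, hypothetical sketch of such a change: `num_key_value_heads` in `LlamaConfig` controls grouped-query attention. Note that instantiating a model this way does not remap the original checkpoint's weights, which is consistent with the degraded outputs described above.
```python
from transformers import LlamaConfig, LlamaForCausalLM

# Llama-2-7B normally uses 32 attention heads with 32 KV heads (full MHA);
# num_key_value_heads=4 gives grouped-query attention with 8 query heads
# sharing each KV head. Weights created this way are randomly initialized.
config = LlamaConfig.from_pretrained("meta-llama/Llama-2-7b-hf")
config.num_key_value_heads = 4
model = LlamaForCausalLM(config)
```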
|
Minbyul/mistral-7b-wo-medication_qa-sft
|
Minbyul
| 2024-04-14T12:53:24Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:HuggingFaceH4/deita-10k-v0-sft",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-14T12:40:44Z |
---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
datasets:
- HuggingFaceH4/deita-10k-v0-sft
model-index:
- name: mistral-7b-wo-medication_qa-sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-7b-wo-medication_qa-sft
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the HuggingFaceH4/deita-10k-v0-sft dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5099
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3958 | 1.0 | 6 | 1.6723 |
| 1.0573 | 2.0 | 12 | 1.5254 |
| 0.8462 | 3.0 | 18 | 1.5099 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
|
mergekit-community/mergekit-slerp-llfrpky
|
mergekit-community
| 2024-04-14T12:51:55Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:merge:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:WizardLMTeam/WizardMath-7B-V1.1",
"base_model:merge:WizardLMTeam/WizardMath-7B-V1.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-14T12:48:49Z |
---
base_model:
- NousResearch/Hermes-2-Pro-Mistral-7B
- WizardLM/WizardMath-7B-V1.1
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: NousResearch/Hermes-2-Pro-Mistral-7B
- model: WizardLM/WizardMath-7B-V1.1
merge_method: slerp
base_model: NousResearch/Hermes-2-Pro-Mistral-7B
dtype: bfloat16
parameters:
t: [0, 0.5, 1, 0.5, 0] # V shaped curve: Hermes for input & output, WizardMath in the middle layers
```
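For intuition, spherical linear interpolation blends two weight tensors along the arc between them rather than along a straight line. The following is a minimal illustrative sketch of SLERP on a pair of tensors, not mergekit's actual implementation (which applies per-layer `t` schedules such as the V-shaped curve above):
```python
import torch

def slerp(t: float, w0: torch.Tensor, w1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two weight tensors at fraction t in [0, 1]."""
    v0, v1 = w0.flatten().float(), w1.flatten().float()
    # Angle between the two flattened weight vectors.
    cos_theta = torch.dot(v0, v1) / (v0.norm() * v1.norm() + eps)
    theta = torch.arccos(cos_theta.clamp(-1.0, 1.0))
    if theta < eps:  # nearly parallel vectors: fall back to linear interpolation
        return (1 - t) * w0 + t * w1
    s0 = torch.sin((1 - t) * theta) / torch.sin(theta)
    s1 = torch.sin(t * theta) / torch.sin(theta)
    return (s0 * v0 + s1 * v1).reshape(w0.shape).to(w0.dtype)
```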
|
metterian/llama-pro-ko-8b
|
metterian
| 2024-04-14T12:48:00Z | 59 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-2",
"llama-2-ko",
"llama-pro-ko",
"en",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-02-15T12:09:54Z |
---
language:
- en
- ko
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- llama-2-ko
- llama-pro-ko
license: apache-2.0
---
# LLaMA-Pro-Ko-8B Model Card
### Model Description
LLaMA-Pro is an advanced iteration of the original LLaMA model, augmented with additional Transformer blocks. Unlike LLaMA-Pro, which specialized in programming and mathematics, LLaMA-Pro-Ko is tailored to the Korean language domain and underwent post-training for enhanced performance.
## Development and Training
The NLP & AI Lab at Korea University developed LLaMA-Pro-Ko, a model boasting 8 billion parameters. This model extends LLaMA2-7B by incorporating Korean tokens via vocabulary extension and was further refined by training on a Korean corpus of 10 billion tokens, exclusively without the inclusion of English data.
### Language Specialization and Transfer
While previous models such as Llama-ko and Llama-2-ko lost English capability as they learned Korean, LLaMA-Pro-Ko's language-transfer approach aims to strengthen Korean performance with minimal impact on English proficiency.
### Bilingual Performance Evaluation
LLaMA-Pro-Ko's performance is evaluated on two fronts: its proficiency in English and its mastery of Korean, showcasing its capabilities as a bilingual model.

### Korean Evaluation
#### Open Ko LLM Benchmark
| | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | AVG |
| ------------------------------------------------------------ | --------- | ------------ | --------- | ------------- | --------------- | --------- |
| [Llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) | 31.91 | 41.68 | 34.11 | 48.49 | 30.34 | 37.31 |
| [beomi/open-llama-2-ko-7b](https://huggingface.co/beomi/open-llama-2-ko-7b) | 40.02 | 50.27 | 27.60 | 38.67 | 42.15 | 39.74 |
| llama-pro-ko-8b | **40.19** | **51.26** | **36.80** | **40.24** | **43.8** | **42.46** |
### English Evaluation
#### Open LLM Benchmark
| | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | AVG | diff |
| :----------------------------------------------------------- | :-------: | :----------: | :-------: | :----------: | :----------: | :----------: | :-------: |
| [meta-llama/Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b) | 53.07 | **78.59** | 46.87 | **38.76** | **74.03** | **58.26** | 0 |
| [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b) | 48.46 | 75.28 | 39.56 | 34.49 | 72.14 | 53.99 | -4.28 |
| [beomi/open-llama-2-ko-7b](https://huggingface.co/beomi/open-llama-2-ko-7b) | 46.84 | 69.48 | 29.86 | 35.35 | 66.30 | 49.57 | -8.70 |
| llama-pro-ko-8b | **53.24** | <u>77.93</u> | **47.06** | <u>38.32</u> | <u>72.22</u> | <u>57.75</u> | **-0.51** |
|
sandeepmaddu/14apr-bert-cased
|
sandeepmaddu
| 2024-04-14T12:43:11Z | 105 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-04-14T12:26:53Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: 14apr-bert-cased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 14apr-bert-cased
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1141
- Precision: 0.9797
- Recall: 0.9796
- F1: 0.9797
- Accuracy: 0.9774
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
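As a rough sketch, the hyperparameters above map onto a 🤗 `TrainingArguments` configuration like the following; the dataset and `Trainer` wiring are not documented in this card, so treat this as an assumption-laden reconstruction.
```python
from transformers import TrainingArguments

# hedged reconstruction of the listed hyperparameters; output_dir is a placeholder
args = TrainingArguments(
    output_dir="14apr-bert-cased",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the optimizer default
)
```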
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1405 | 1.0 | 2500 | 0.1016 | 0.9731 | 0.9761 | 0.9746 | 0.9721 |
| 0.0994 | 2.0 | 5000 | 0.0939 | 0.9776 | 0.9774 | 0.9775 | 0.9750 |
| 0.0731 | 3.0 | 7500 | 0.0968 | 0.9783 | 0.9790 | 0.9787 | 0.9767 |
| 0.045 | 4.0 | 10000 | 0.1075 | 0.9790 | 0.9798 | 0.9794 | 0.9773 |
| 0.035 | 5.0 | 12500 | 0.1141 | 0.9797 | 0.9796 | 0.9797 | 0.9774 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Minbyul/biomistral-7b-wo-medication_qa-sft
|
Minbyul
| 2024-04-14T12:39:35Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:HuggingFaceH4/deita-10k-v0-sft",
"base_model:BioMistral/BioMistral-7B",
"base_model:finetune:BioMistral/BioMistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-14T12:26:28Z |
---
license: apache-2.0
base_model: BioMistral/BioMistral-7B
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
datasets:
- HuggingFaceH4/deita-10k-v0-sft
model-index:
- name: biomistral-7b-wo-medication_qa-sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biomistral-7b-wo-medication_qa-sft
This model is a fine-tuned version of [BioMistral/BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B) on the HuggingFaceH4/deita-10k-v0-sft dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6409
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
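For reference, the total train batch size above follows from 4 (per-device) × 4 (devices) × 4 (gradient-accumulation steps) = 64, and the total eval batch size from 4 × 4 = 16.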
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3711 | 1.0 | 6 | 1.7329 |
| 1.0734 | 2.0 | 12 | 1.6324 |
| 0.8291 | 3.0 | 18 | 1.6409 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
|
Yan777/trained_weigths_2
|
Yan777
| 2024-04-14T12:36:35Z | 6 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-04-14T12:35:56Z |
---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Llama-2-7b-chat-hf
model-index:
- name: trained_weigths_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trained_weigths_2
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3984
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.8026 | 1.0 | 5194 | 0.4062 |
| 0.817 | 2.0 | 10388 | 0.3952 |
| 0.6804 | 3.0 | 15582 | 0.3953 |
| 0.725 | 4.0 | 20776 | 0.3984 |
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
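Since this repository holds PEFT (LoRA) adapter weights rather than a full model, a minimal hedged loading sketch follows; it assumes access to the gated Llama-2 base checkpoint.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# attach the fine-tuned adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "Yan777/trained_weigths_2")
```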
|
HansenYan/caster-dev
|
HansenYan
| 2024-04-14T12:35:50Z | 0 | 1 | null |
[
"steel engineering",
"level 2",
"caster",
"zh",
"en",
"de",
"ru",
"license:mit",
"region:us"
] | null | 2024-04-14T10:04:28Z |
---
license: mit
language:
- zh
- en
- de
- ru
tags:
- steel engineering
- level 2
- caster
---
|
wookyungseo/qlora-koalpaca-polyglot-12.8b-500step
|
wookyungseo
| 2024-04-14T12:30:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-14T12:30:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gotchachurchkhela/SN6-23
|
gotchachurchkhela
| 2024-04-14T12:24:33Z | 89 | 0 |
transformers
|
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-14T12:21:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
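In the absence of an official snippet, here is a minimal hedged chat-style sketch; the `conversational` tag suggests a chat template ships with the tokenizer, but that is an assumption.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "gotchachurchkhela/SN6-23"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

messages = [{"role": "user", "content": "Hello! What can you do?"}]
# assumes a chat template is bundled with the tokenizer, per the 'conversational' tag
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_ids.to(model.device), max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```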
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sai-vatturi/whisper-tiny-hi
|
sai-vatturi
| 2024-04-14T12:24:30Z | 113 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-04-14T08:39:40Z |
---
language:
- hi
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Tiny Hindi - Sainadh Vatturi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: hi
split: None
args: 'config: hi, split: test'
metrics:
- name: Wer
type: wer
value: 60.9667315669178
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny Hindi - Sainadh Vatturi
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6314
- Wer: 60.9667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.308 | 2.44 | 1000 | 0.5337 | 63.8238 |
| 0.1876 | 4.89 | 2000 | 0.5105 | 59.3287 |
| 0.0936 | 7.33 | 3000 | 0.5599 | 59.4853 |
| 0.0657 | 9.78 | 4000 | 0.6047 | 60.3699 |
| 0.0466 | 12.22 | 5000 | 0.6314 | 60.9667 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
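A minimal hedged transcription sketch with the 🤗 `pipeline` API; the audio path is a placeholder.
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="sai-vatturi/whisper-tiny-hi")
# path to a local Hindi audio file (placeholder)
print(asr("sample_hindi.wav")["text"])
```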
|
alexgrigoras/mistral_7b_finetuned_custom_data
|
alexgrigoras
| 2024-04-14T12:20:20Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2024-04-13T16:59:32Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
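As this repository contains a PEFT adapter for Mistral-7B, a hedged usage sketch is to attach it to the base model and, optionally, merge the weights for adapter-free inference.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", device_map="auto")
model = PeftModel.from_pretrained(base, "alexgrigoras/mistral_7b_finetuned_custom_data")

# optionally fold the LoRA weights into the base model for standalone inference
model = model.merge_and_unload()
```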
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
Michielo/mt5-small_nl-en_translation
|
Michielo
| 2024-04-14T12:19:22Z | 170 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"translation",
"en",
"nl",
"dataset:opus_books",
"dataset:iwslt2017",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-07T16:29:32Z |
---
license: apache-2.0
datasets:
- opus_books
- iwslt2017
language:
- en
- nl
pipeline_tag: text2text-generation
tags:
- translation
metrics:
- bleu
- chrf
- chrf++
widget:
- text: ">>en<< Was het leuk?"
---
# Model Card for mt5-small nl-en translation
The mt5-small nl-en translation model is a finetuned version of [google/mt5-small](https://huggingface.co/google/mt5-small).
It was finetuned on 237k rows of the [iwslt2017](https://huggingface.co/datasets/iwslt2017/viewer/iwslt2017-en-nl) dataset and roughly 38k rows of the [opus_books](https://huggingface.co/datasets/opus_books/viewer/en-nl) dataset. The model was trained in multiple phases with different epochs & batch sizes.
## How to use
**Install dependencies**
```bash
pip install transformers
pip install sentencepiece
pip install protobuf
```
You can use the following code for model inference. The model was finetuned to expect a target-language identifier (e.g. `>>en<<`) at the start of the prompt, which must be present for best results.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, GenerationConfig

# load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("Michielo/mt5-small_nl-en_translation")
model = AutoModelForSeq2SeqLM.from_pretrained("Michielo/mt5-small_nl-en_translation")

# build a generation config (max_new_tokens=128 is an example value; tune as needed)
generation_config = GenerationConfig(max_new_tokens=128)

# tokenize input, including the >>en<< target-language identifier
inputs = tokenizer(">>en<< Your Dutch text here", return_tensors="pt")

# generate the translation
outputs = model.generate(**inputs, generation_config=generation_config)

# decode and print
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```
## Benchmarks
| Benchmark | Score |
|--------------|:-----:|
| BLEU | 51.92% |
| chr-F | 67.90% |
| chr-F++ | 67.62% |
## License
This project is licensed under the Apache License 2.0 - see the [LICENSE](https://www.apache.org/licenses/LICENSE-2.0) file for details.
|
IbrahimTarek/Boiler_gemma7b
|
IbrahimTarek
| 2024-04-14T12:14:20Z | 75 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-04-14T09:56:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
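In the absence of an official snippet, a minimal hedged sketch follows. The tags indicate a 4-bit bitsandbytes Gemma checkpoint, so the quantization config is assumed to ship with the weights; the prompt is illustrative.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "IbrahimTarek/Boiler_gemma7b"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
# the repo is tagged 4-bit/bitsandbytes, so its quantization config should load with it
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("Explain how a boiler safety valve works.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```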
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mergekit-community/mergekit-slerp-ynceepa
|
mergekit-community
| 2024-04-14T12:14:01Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"mergekit",
"merge",
"base_model:cloudyu/google-gemma-7b-chinese-sft-v1",
"base_model:merge:cloudyu/google-gemma-7b-chinese-sft-v1",
"base_model:unsloth/codegemma-7b",
"base_model:merge:unsloth/codegemma-7b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-14T12:10:19Z |
---
base_model:
- unsloth/codegemma-7b
- cloudyu/google-gemma-7b-chinese-sft-v1
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [unsloth/codegemma-7b](https://huggingface.co/unsloth/codegemma-7b)
* [cloudyu/google-gemma-7b-chinese-sft-v1](https://huggingface.co/cloudyu/google-gemma-7b-chinese-sft-v1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: cloudyu/google-gemma-7b-chinese-sft-v1
- model: unsloth/codegemma-7b
merge_method: slerp
base_model: unsloth/codegemma-7b
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0] # V-shaped curve: base codegemma for input & output layers, the Chinese SFT model in the middle layers
```
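Here `t` is the per-layer interpolation weight: 0 returns the base model (codegemma) and 1 returns the other model, so the V-shaped schedule keeps codegemma at the input and output layers and blends toward the Chinese SFT model in the middle. To reproduce the merge, this file can be passed to mergekit's `mergekit-yaml` CLI.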
|
LDDon/distilgpt2-finetuned-wikitext2
|
LDDon
| 2024-04-14T12:03:42Z | 208 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-03T02:17:15Z |
---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3188
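For context, this validation loss corresponds to a perplexity of exp(3.3188) ≈ 27.6.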
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 125 | 3.3708 |
| No log | 2.0 | 250 | 3.3240 |
| No log | 3.0 | 375 | 3.3188 |
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
SarthakDargan/meko_LoRA
|
SarthakDargan
| 2024-04-14T12:00:28Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-04-14T05:40:49Z |
---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of MEKO
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - SarthakDargan/meko_LoRA
<Gallery />
## Model description
These are SarthakDargan/meko_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of MEKO` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](SarthakDargan/meko_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
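Until the snippet above is filled in, here is a minimal hedged sketch for SDXL LoRA inference with diffusers; the prompt, dtype, and device are illustrative.
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# attach the DreamBooth LoRA weights and use the trigger phrase from this card
pipe.load_lora_weights("SarthakDargan/meko_LoRA")
image = pipe("a photo of MEKO on a beach").images[0]
image.save("meko.png")
```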
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
denise227/amazon_kindle_sentiment_analysis_definitivo
|
denise227
| 2024-04-14T11:58:50Z | 105 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-14T11:08:43Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: amazon_kindle_sentiment_analysis_definitivo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# amazon_kindle_sentiment_analysis_definitivo
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9897
- Accuracy: 0.585
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6088 | 0.01 | 10 | 1.5857 | 0.265 |
| 1.6469 | 0.02 | 20 | 1.5750 | 0.2617 |
| 1.5407 | 0.03 | 30 | 1.5206 | 0.295 |
| 1.5096 | 0.03 | 40 | 1.5134 | 0.3792 |
| 1.5668 | 0.04 | 50 | 1.4435 | 0.33 |
| 1.386 | 0.05 | 60 | 1.3578 | 0.32 |
| 1.3041 | 0.06 | 70 | 1.2950 | 0.4167 |
| 1.2491 | 0.07 | 80 | 1.2376 | 0.4242 |
| 1.4186 | 0.07 | 90 | 1.3518 | 0.4175 |
| 1.3238 | 0.08 | 100 | 1.1709 | 0.4675 |
| 1.1596 | 0.09 | 110 | 1.1853 | 0.4417 |
| 1.1351 | 0.1 | 120 | 1.3158 | 0.4083 |
| 1.1573 | 0.11 | 130 | 1.1438 | 0.475 |
| 1.1858 | 0.12 | 140 | 1.2280 | 0.45 |
| 1.268 | 0.12 | 150 | 1.3686 | 0.3767 |
| 1.3871 | 0.13 | 160 | 1.2159 | 0.4525 |
| 1.1129 | 0.14 | 170 | 1.1402 | 0.4783 |
| 1.1144 | 0.15 | 180 | 1.2366 | 0.4558 |
| 1.1953 | 0.16 | 190 | 1.1209 | 0.4717 |
| 1.2515 | 0.17 | 200 | 1.1857 | 0.4408 |
| 1.0826 | 0.17 | 210 | 1.1044 | 0.48 |
| 1.0192 | 0.18 | 220 | 1.0932 | 0.4925 |
| 1.2467 | 0.19 | 230 | 1.0608 | 0.5058 |
| 0.9914 | 0.2 | 240 | 1.1134 | 0.4942 |
| 1.1065 | 0.21 | 250 | 1.1115 | 0.4833 |
| 1.1161 | 0.22 | 260 | 1.2943 | 0.485 |
| 1.4564 | 0.23 | 270 | 1.3899 | 0.3892 |
| 1.4043 | 0.23 | 280 | 1.1529 | 0.4742 |
| 1.0993 | 0.24 | 290 | 1.3811 | 0.4167 |
| 1.1307 | 0.25 | 300 | 1.0985 | 0.4892 |
| 1.1536 | 0.26 | 310 | 1.0903 | 0.5133 |
| 1.0491 | 0.27 | 320 | 1.1709 | 0.4875 |
| 1.1946 | 0.28 | 330 | 1.1875 | 0.4725 |
| 1.1956 | 0.28 | 340 | 1.0579 | 0.5292 |
| 0.8626 | 0.29 | 350 | 1.2314 | 0.48 |
| 1.2908 | 0.3 | 360 | 1.0875 | 0.5225 |
| 1.1227 | 0.31 | 370 | 1.1000 | 0.4975 |
| 1.0407 | 0.32 | 380 | 1.1035 | 0.5267 |
| 1.2242 | 0.33 | 390 | 1.1243 | 0.4833 |
| 1.2052 | 0.33 | 400 | 1.0719 | 0.5067 |
| 1.1526 | 0.34 | 410 | 1.0351 | 0.5442 |
| 0.9881 | 0.35 | 420 | 1.0394 | 0.5333 |
| 1.0651 | 0.36 | 430 | 1.0422 | 0.5317 |
| 1.0571 | 0.37 | 440 | 1.0310 | 0.5408 |
| 1.22 | 0.38 | 450 | 1.0176 | 0.5358 |
| 0.9914 | 0.38 | 460 | 1.2306 | 0.4733 |
| 1.0956 | 0.39 | 470 | 1.0239 | 0.5358 |
| 0.9464 | 0.4 | 480 | 1.0895 | 0.51 |
| 1.0855 | 0.41 | 490 | 1.0398 | 0.5292 |
| 1.2345 | 0.42 | 500 | 1.1024 | 0.5133 |
| 1.1624 | 0.42 | 510 | 1.1720 | 0.4733 |
| 1.1251 | 0.43 | 520 | 1.1044 | 0.4858 |
| 1.0896 | 0.44 | 530 | 1.0415 | 0.5225 |
| 0.9643 | 0.45 | 540 | 1.0211 | 0.5383 |
| 1.1421 | 0.46 | 550 | 1.1593 | 0.5017 |
| 1.0463 | 0.47 | 560 | 1.0246 | 0.52 |
| 1.0508 | 0.47 | 570 | 1.0377 | 0.515 |
| 1.0507 | 0.48 | 580 | 1.0565 | 0.5408 |
| 0.8932 | 0.49 | 590 | 1.0147 | 0.5483 |
| 0.8834 | 0.5 | 600 | 1.0191 | 0.5458 |
| 1.0548 | 0.51 | 610 | 1.0668 | 0.5392 |
| 1.1106 | 0.52 | 620 | 1.0086 | 0.53 |
| 1.0587 | 0.53 | 630 | 1.0144 | 0.5483 |
| 0.9468 | 0.53 | 640 | 1.1663 | 0.5042 |
| 1.0948 | 0.54 | 650 | 1.0263 | 0.5458 |
| 1.2202 | 0.55 | 660 | 0.9932 | 0.5358 |
| 0.898 | 0.56 | 670 | 1.0217 | 0.52 |
| 1.2074 | 0.57 | 680 | 1.0416 | 0.5333 |
| 1.1777 | 0.57 | 690 | 0.9986 | 0.5483 |
| 1.0448 | 0.58 | 700 | 0.9836 | 0.5558 |
| 0.9387 | 0.59 | 710 | 1.0127 | 0.5392 |
| 1.0905 | 0.6 | 720 | 1.0633 | 0.5183 |
| 0.9262 | 0.61 | 730 | 1.0046 | 0.5375 |
| 1.0691 | 0.62 | 740 | 1.0005 | 0.5458 |
| 0.8828 | 0.62 | 750 | 1.0031 | 0.55 |
| 1.1497 | 0.63 | 760 | 1.0785 | 0.4925 |
| 0.9907 | 0.64 | 770 | 1.0094 | 0.54 |
| 0.9741 | 0.65 | 780 | 0.9794 | 0.555 |
| 0.8731 | 0.66 | 790 | 1.0327 | 0.5217 |
| 1.1001 | 0.67 | 800 | 1.0335 | 0.5325 |
| 1.0796 | 0.68 | 810 | 1.0004 | 0.5492 |
| 1.1743 | 0.68 | 820 | 1.0022 | 0.5425 |
| 1.0616 | 0.69 | 830 | 1.0307 | 0.5375 |
| 0.9953 | 0.7 | 840 | 0.9799 | 0.555 |
| 1.0607 | 0.71 | 850 | 1.1107 | 0.5108 |
| 1.2028 | 0.72 | 860 | 0.9770 | 0.55 |
| 0.9749 | 0.72 | 870 | 0.9927 | 0.5483 |
| 0.9752 | 0.73 | 880 | 1.0249 | 0.5342 |
| 0.9905 | 0.74 | 890 | 0.9946 | 0.5408 |
| 0.9116 | 0.75 | 900 | 1.0538 | 0.5433 |
| 1.1579 | 0.76 | 910 | 0.9914 | 0.555 |
| 1.0955 | 0.77 | 920 | 1.0265 | 0.5383 |
| 1.1222 | 0.78 | 930 | 1.0443 | 0.5175 |
| 0.9873 | 0.78 | 940 | 0.9877 | 0.5408 |
| 0.8737 | 0.79 | 950 | 1.0376 | 0.5442 |
| 1.0869 | 0.8 | 960 | 0.9777 | 0.555 |
| 1.0751 | 0.81 | 970 | 0.9655 | 0.5675 |
| 1.092 | 0.82 | 980 | 0.9720 | 0.5533 |
| 1.0741 | 0.82 | 990 | 0.9939 | 0.5325 |
| 1.0502 | 0.83 | 1000 | 0.9864 | 0.5517 |
| 1.0623 | 0.84 | 1010 | 0.9637 | 0.5567 |
| 1.0641 | 0.85 | 1020 | 0.9590 | 0.565 |
| 0.9818 | 0.86 | 1030 | 1.0268 | 0.5317 |
| 1.01 | 0.87 | 1040 | 0.9562 | 0.5517 |
| 0.9202 | 0.88 | 1050 | 0.9766 | 0.5458 |
| 0.9179 | 0.88 | 1060 | 0.9771 | 0.55 |
| 1.0009 | 0.89 | 1070 | 1.0164 | 0.535 |
| 0.9891 | 0.9 | 1080 | 0.9699 | 0.5542 |
| 0.9137 | 0.91 | 1090 | 1.0187 | 0.5325 |
| 0.9941 | 0.92 | 1100 | 0.9797 | 0.5592 |
| 0.9203 | 0.93 | 1110 | 1.0172 | 0.5292 |
| 0.8416 | 0.93 | 1120 | 1.0945 | 0.505 |
| 1.0899 | 0.94 | 1130 | 0.9963 | 0.55 |
| 1.0149 | 0.95 | 1140 | 0.9716 | 0.5592 |
| 0.9339 | 0.96 | 1150 | 0.9762 | 0.5492 |
| 1.0562 | 0.97 | 1160 | 1.0362 | 0.5258 |
| 1.0929 | 0.97 | 1170 | 0.9954 | 0.5433 |
| 1.0686 | 0.98 | 1180 | 1.0128 | 0.5342 |
| 1.1207 | 0.99 | 1190 | 0.9771 | 0.5525 |
| 0.9934 | 1.0 | 1200 | 0.9731 | 0.5575 |
| 0.8436 | 1.01 | 1210 | 0.9501 | 0.5558 |
| 0.7829 | 1.02 | 1220 | 0.9517 | 0.5708 |
| 0.7667 | 1.02 | 1230 | 0.9789 | 0.565 |
| 0.8093 | 1.03 | 1240 | 1.0047 | 0.5683 |
| 0.9297 | 1.04 | 1250 | 0.9831 | 0.5642 |
| 0.7154 | 1.05 | 1260 | 1.0401 | 0.5425 |
| 0.78 | 1.06 | 1270 | 0.9859 | 0.5683 |
| 0.8144 | 1.07 | 1280 | 0.9833 | 0.565 |
| 0.9511 | 1.07 | 1290 | 0.9870 | 0.5675 |
| 0.781 | 1.08 | 1300 | 0.9851 | 0.5633 |
| 0.8336 | 1.09 | 1310 | 0.9990 | 0.5625 |
| 0.9651 | 1.1 | 1320 | 1.0068 | 0.5542 |
| 0.7268 | 1.11 | 1330 | 0.9673 | 0.5742 |
| 0.7733 | 1.12 | 1340 | 0.9806 | 0.5692 |
| 0.7022 | 1.12 | 1350 | 1.0552 | 0.5508 |
| 0.8362 | 1.13 | 1360 | 0.9981 | 0.5683 |
| 0.9729 | 1.14 | 1370 | 1.0001 | 0.5683 |
| 0.7756 | 1.15 | 1380 | 0.9706 | 0.5625 |
| 0.7695 | 1.16 | 1390 | 1.0897 | 0.5392 |
| 0.7771 | 1.17 | 1400 | 1.0611 | 0.5483 |
| 0.6836 | 1.18 | 1410 | 1.0292 | 0.5575 |
| 0.8588 | 1.18 | 1420 | 0.9883 | 0.5767 |
| 0.7796 | 1.19 | 1430 | 1.0347 | 0.5658 |
| 0.8175 | 1.2 | 1440 | 1.0069 | 0.5717 |
| 0.6805 | 1.21 | 1450 | 1.0415 | 0.5525 |
| 0.7783 | 1.22 | 1460 | 1.0041 | 0.5708 |
| 1.046 | 1.23 | 1470 | 1.0039 | 0.5592 |
| 0.8762 | 1.23 | 1480 | 0.9609 | 0.5667 |
| 0.8282 | 1.24 | 1490 | 0.9625 | 0.5567 |
| 0.7038 | 1.25 | 1500 | 0.9559 | 0.5675 |
| 0.6776 | 1.26 | 1510 | 0.9826 | 0.5625 |
| 0.6715 | 1.27 | 1520 | 1.0019 | 0.5625 |
| 0.6957 | 1.27 | 1530 | 1.0005 | 0.5667 |
| 0.8419 | 1.28 | 1540 | 0.9876 | 0.575 |
| 0.7598 | 1.29 | 1550 | 1.0067 | 0.57 |
| 0.8714 | 1.3 | 1560 | 1.0743 | 0.55 |
| 0.864 | 1.31 | 1570 | 1.0003 | 0.5767 |
| 0.7178 | 1.32 | 1580 | 1.0116 | 0.5642 |
| 0.7912 | 1.32 | 1590 | 1.0323 | 0.5642 |
| 0.7834 | 1.33 | 1600 | 1.0123 | 0.5675 |
| 0.6978 | 1.34 | 1610 | 1.0530 | 0.55 |
| 0.7452 | 1.35 | 1620 | 1.0123 | 0.5658 |
| 0.8377 | 1.36 | 1630 | 1.0238 | 0.5608 |
| 0.7119 | 1.37 | 1640 | 1.0407 | 0.5642 |
| 0.7891 | 1.38 | 1650 | 1.0125 | 0.5692 |
| 0.7185 | 1.38 | 1660 | 1.0460 | 0.5483 |
| 0.7011 | 1.39 | 1670 | 1.0203 | 0.5658 |
| 0.8356 | 1.4 | 1680 | 1.0003 | 0.5667 |
| 0.6473 | 1.41 | 1690 | 0.9958 | 0.5742 |
| 0.6722 | 1.42 | 1700 | 0.9979 | 0.5817 |
| 0.7462 | 1.43 | 1710 | 0.9990 | 0.5817 |
| 0.6933 | 1.43 | 1720 | 1.0167 | 0.5758 |
| 0.6566 | 1.44 | 1730 | 1.0205 | 0.5825 |
| 0.7495 | 1.45 | 1740 | 1.0854 | 0.5483 |
| 0.9585 | 1.46 | 1750 | 1.0658 | 0.5567 |
| 0.8849 | 1.47 | 1760 | 1.0129 | 0.5708 |
| 0.9289 | 1.48 | 1770 | 0.9918 | 0.5942 |
| 0.751 | 1.48 | 1780 | 0.9849 | 0.5875 |
| 0.9082 | 1.49 | 1790 | 0.9887 | 0.5692 |
| 0.8307 | 1.5 | 1800 | 0.9978 | 0.5758 |
| 0.7014 | 1.51 | 1810 | 1.0261 | 0.5567 |
| 0.6632 | 1.52 | 1820 | 1.0294 | 0.5567 |
| 0.6885 | 1.52 | 1830 | 1.0054 | 0.5683 |
| 0.8374 | 1.53 | 1840 | 0.9983 | 0.5717 |
| 0.73 | 1.54 | 1850 | 0.9974 | 0.5792 |
| 0.7691 | 1.55 | 1860 | 0.9933 | 0.5775 |
| 0.795 | 1.56 | 1870 | 0.9918 | 0.5742 |
| 0.8298 | 1.57 | 1880 | 0.9970 | 0.5733 |
| 0.7621 | 1.57 | 1890 | 0.9981 | 0.5708 |
| 0.6753 | 1.58 | 1900 | 1.0033 | 0.5733 |
| 0.5386 | 1.59 | 1910 | 1.0098 | 0.5758 |
| 1.1066 | 1.6 | 1920 | 0.9923 | 0.5842 |
| 0.9523 | 1.61 | 1930 | 0.9987 | 0.5692 |
| 0.7225 | 1.62 | 1940 | 0.9958 | 0.5675 |
| 0.7592 | 1.62 | 1950 | 0.9800 | 0.58 |
| 0.7368 | 1.63 | 1960 | 1.0065 | 0.5658 |
| 0.7683 | 1.64 | 1970 | 0.9865 | 0.5708 |
| 0.5852 | 1.65 | 1980 | 0.9991 | 0.5675 |
| 0.7919 | 1.66 | 1990 | 1.0034 | 0.5708 |
| 0.7784 | 1.67 | 2000 | 0.9961 | 0.5717 |
| 0.8155 | 1.68 | 2010 | 0.9812 | 0.575 |
| 0.6281 | 1.68 | 2020 | 0.9803 | 0.5825 |
| 0.6084 | 1.69 | 2030 | 0.9802 | 0.5733 |
| 0.6207 | 1.7 | 2040 | 0.9843 | 0.5767 |
| 0.8847 | 1.71 | 2050 | 0.9871 | 0.5817 |
| 0.7049 | 1.72 | 2060 | 0.9897 | 0.5783 |
| 0.7144 | 1.73 | 2070 | 0.9914 | 0.5808 |
| 0.5971 | 1.73 | 2080 | 0.9915 | 0.5883 |
| 0.7566 | 1.74 | 2090 | 0.9888 | 0.5833 |
| 0.8263 | 1.75 | 2100 | 1.0017 | 0.5775 |
| 0.6402 | 1.76 | 2110 | 0.9872 | 0.5833 |
| 0.9838 | 1.77 | 2120 | 0.9852 | 0.5833 |
| 0.5518 | 1.77 | 2130 | 0.9803 | 0.585 |
| 0.737 | 1.78 | 2140 | 0.9892 | 0.5883 |
| 0.8021 | 1.79 | 2150 | 0.9917 | 0.585 |
| 0.6804 | 1.8 | 2160 | 0.9928 | 0.5775 |
| 0.6661 | 1.81 | 2170 | 0.9921 | 0.5808 |
| 0.6192 | 1.82 | 2180 | 0.9941 | 0.5833 |
| 0.7101 | 1.82 | 2190 | 0.9980 | 0.5858 |
| 0.7373 | 1.83 | 2200 | 1.0018 | 0.5825 |
| 0.845 | 1.84 | 2210 | 1.0030 | 0.5808 |
| 0.6556 | 1.85 | 2220 | 1.0077 | 0.5758 |
| 0.7979 | 1.86 | 2230 | 1.0115 | 0.5708 |
| 0.5802 | 1.87 | 2240 | 1.0065 | 0.5767 |
| 0.6794 | 1.88 | 2250 | 0.9945 | 0.5842 |
| 0.8538 | 1.88 | 2260 | 0.9901 | 0.5817 |
| 0.884 | 1.89 | 2270 | 0.9877 | 0.58 |
| 0.8306 | 1.9 | 2280 | 0.9850 | 0.5825 |
| 0.7196 | 1.91 | 2290 | 0.9846 | 0.5775 |
| 0.6548 | 1.92 | 2300 | 0.9850 | 0.5825 |
| 0.7692 | 1.93 | 2310 | 0.9863 | 0.5833 |
| 0.6386 | 1.93 | 2320 | 0.9880 | 0.5842 |
| 0.9404 | 1.94 | 2330 | 0.9919 | 0.5842 |
| 0.6133 | 1.95 | 2340 | 0.9920 | 0.5825 |
| 0.7229 | 1.96 | 2350 | 0.9898 | 0.5825 |
| 0.6681 | 1.97 | 2360 | 0.9887 | 0.585 |
| 0.7672 | 1.98 | 2370 | 0.9884 | 0.585 |
| 0.6217 | 1.98 | 2380 | 0.9893 | 0.5858 |
| 0.7101 | 1.99 | 2390 | 0.9897 | 0.585 |
| 0.6067 | 2.0 | 2400 | 0.9897 | 0.585 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
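A minimal hedged inference sketch; note that the mapping from label ids to sentiment classes (e.g. star ratings) is not documented in this card.
```python
from transformers import pipeline

clf = pipeline("text-classification", model="denise227/amazon_kindle_sentiment_analysis_definitivo")
print(clf("I couldn't put this book down, absolutely loved it!"))
```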
|
serene89104/gpt-neo-125m-finetuned-cybersecurity
|
serene89104
| 2024-04-14T11:48:26Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/gpt-neo-125m",
"base_model:finetune:EleutherAI/gpt-neo-125m",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-14T08:28:28Z |
---
license: mit
base_model: EleutherAI/gpt-neo-125m
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: gpt-neo-125m-finetuned-cybersecurity
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-125m-finetuned-cybersecurity
This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9384
- Accuracy: 0.1440
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.9416 | 1.0 | 16661 | 2.0521 | 0.1437 |
| 1.7556 | 2.0 | 33322 | 1.9568 | 0.1451 |
| 1.5854 | 3.0 | 49983 | 1.9384 | 0.1440 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2
|
adammoss/patch-pretrain-mask
|
adammoss
| 2024-04-14T11:42:43Z | 34 | 0 |
transformers
|
[
"transformers",
"safetensors",
"patchgpt",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-14T06:12:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
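Since `patchgpt` is not an architecture in the core transformers library, loading will likely require trusting remote code; a hedged sketch follows (verify the repo actually ships custom modeling code before enabling this).
```python
from transformers import AutoModel

# custom 'patchgpt' architecture: assumes the repo provides its own modeling code
model = AutoModel.from_pretrained("adammoss/patch-pretrain-mask", trust_remote_code=True)
```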
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
IbrahimTarek/your-model
|
IbrahimTarek
| 2024-04-14T11:39:40Z | 7 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"gemma",
"generated_from_trainer",
"base_model:google/gemma-7b-it",
"base_model:adapter:google/gemma-7b-it",
"license:gemma",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-04-08T09:53:41Z |
---
license: gemma
library_name: peft
tags:
- generated_from_trainer
base_model: google/gemma-7b-it
model-index:
- name: your-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# your-model
This model is a fine-tuned version of [google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
KarthikAlagarsamy/distilbertfinetuneHS5E8BHLRVHS
|
KarthikAlagarsamy
| 2024-04-14T11:34:46Z | 112 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-04-14T10:55:03Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbertfinetuneHS5E8BHLRVHS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbertfinetuneHS5E8BHLRVHS
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6647
## Model description
More information needed
## Intended uses & limitations
More information needed
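Since the intended-use section is still empty, here is a hedged example of querying the checkpoint with the 🤗 `question-answering` pipeline; the question and context strings are illustrative only.

```python
from transformers import pipeline

# Extractive QA: the model selects an answer span from the supplied context.
qa = pipeline("question-answering", model="KarthikAlagarsamy/distilbertfinetuneHS5E8BHLRVHS")

result = qa(
    question="What task was the model fine-tuned for?",      # illustrative input
    context="This DistilBERT checkpoint was fine-tuned for extractive question answering.",
)
print(result["answer"], result["score"])
```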
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7886 | 1.0 | 1000 | 1.5221 |
| 1.1733 | 2.0 | 2000 | 1.3578 |
| 0.8003 | 3.0 | 3000 | 1.3842 |
| 0.5553 | 4.0 | 4000 | 1.5867 |
| 0.4178 | 5.0 | 5000 | 1.6647 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Minbyul/mistral-7b-wo-live_qa-sft
|
Minbyul
| 2024-04-14T11:34:17Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:HuggingFaceH4/deita-10k-v0-sft",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-12T07:32:44Z |
---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
datasets:
- HuggingFaceH4/deita-10k-v0-sft
model-index:
- name: mistral-7b-wo-live_qa-sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-7b-wo-live_qa-sft
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the HuggingFaceH4/deita-10k-v0-sft dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6476
## Model description
More information needed
## Intended uses & limitations
More information needed
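In place of the missing usage notes, a hedged text-generation sketch follows. It assumes the tokenizer ships the chat template used during SFT (typical for alignment-handbook runs); if it does not, fall back to a plain prompt.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Minbyul/mistral-7b-wo-live_qa-sft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Assumption: a chat template is bundled with the tokenizer.
messages = [{"role": "user", "content": "Briefly explain what supervised fine-tuning does."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```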
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6461 | 0.92 | 6 | 1.7001 |
| 1.1299 | 2.0 | 13 | 1.6488 |
| 0.9123 | 2.77 | 18 | 1.6476 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
|
hflog/LeroyDyer-Mixtral_AI_CyberTron_Ultra
|
hflog
| 2024-04-14T11:32:46Z | 7 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"code",
"medical ",
"farmer",
"doctor",
"Mega-Series",
"Cyber-Series",
"Role-Play",
"Self-Rag",
"ThinkingBot",
"conversational",
"en",
"dataset:gretelai/synthetic_text_to_sql",
"dataset:HuggingFaceTB/cosmopedia",
"dataset:teknium/OpenHermes-2.5",
"dataset:Open-Orca/SlimOrca",
"dataset:Open-Orca/OpenOrca",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:databricks/databricks-dolly-15k",
"dataset:yahma/alpaca-cleaned",
"dataset:uonlp/CulturaX",
"dataset:mwitiderrick/SwahiliPlatypus",
"dataset:swahili",
"dataset:Rogendo/English-Swahili-Sentence-Pairs",
"dataset:ise-uiuc/Magicoder-Evol-Instruct-110K",
"dataset:meta-math/MetaMathQA",
"base_model:LeroyDyer/SpydazWeb_AI_CyberTron_Ultra_7b",
"base_model:finetune:LeroyDyer/SpydazWeb_AI_CyberTron_Ultra_7b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-14T11:32:46Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- code
- 'medical '
- farmer
- doctor
- Mega-Series
- Cyber-Series
- Role-Play
- Self-Rag
- ThinkingBot
base_model: LeroyDyer/Mixtral_AI_CyberTron_Ultra
metrics:
- accuracy
- bertscore
- bleu
- brier_score
- cer
- character
- charcut_mt
- chrf
- code_eval
library_name: transformers
datasets:
- gretelai/synthetic_text_to_sql
- HuggingFaceTB/cosmopedia
- teknium/OpenHermes-2.5
- Open-Orca/SlimOrca
- Open-Orca/OpenOrca
- cognitivecomputations/dolphin-coder
- databricks/databricks-dolly-15k
- yahma/alpaca-cleaned
- uonlp/CulturaX
- mwitiderrick/SwahiliPlatypus
- swahili
- Rogendo/English-Swahili-Sentence-Pairs
- ise-uiuc/Magicoder-Evol-Instruct-110K
- meta-math/MetaMathQA
---
# Uploaded model
- **Developed by:** LeroyDyer
- **License:** apache-2.0
- **Finetuned from model :** LeroyDyer/Mixtral_AI_CyberTron_Ultra
### OK, it's a great model!
Heavily trained on math, as well as many textbooks and lessons, highly fitted task datasets, and coding datasets.
This model has absorbed all of its previous generations as well as the high-performing and specialist (Mistral) models. It has also absorbed many foreign-language models while still remaining an English model.
It gives very impressive responses, short and long: it was trained on some binary datasets to return a direct answer, on others to respond step by step, and on others to hold interactive exchanges with clients for various tasks such as product design and system design discussion.
Financial information and other financial tasks have also been highly tuned. In fact, when returning to previously aligned datasets, the model stayed in line and was still able to reach a good fit.
Hence the process of merging with a model for a specific topic or role and then training for that role and topic on themed data. Previous iterations were heavily tuned for medical, law, or role play, since the concern was that integrating everything into a single entity might corrupt the models, so the decision was taken to separate concerns.
This enabled strategic merging and tuning!
Concepts covered include chain of thought, function calling, and Self-RAG. Thoughts and emotive responses have been enhanced where possible with the data given; even sexy books have been tuned into the model,
but I also think American genre books (sci-fi, fantasy, romance novels) are required for the great role play that some users expect. :)
I have recently seen a strategy in which prompts can be embedded into the adapter to trigger specific roles.
I have tried to replace generic prompting such as "you are a helpful AI" with a character theme instead, for example "you are a cyber hacker by day and a businessman by night", i.e. to give the model various internal personas!
After some training I noticed it was also talking to itself (rehearsing), but the thought tokens were missing, so the output looked strange until I noticed the bug:
after removing the thought tokens, they were displayed in the output, because the tokenizer had been masking them!
Still a great model. Given a task-based dataset it converges super quickly, hence my enjoyment of the model, as training it is super quick!
Now when I load up datasets there are generally only a few bad steps before the loss begins to drop, holding steady around 0.6 or so on the unseen new dataset, so not many epochs are needed to adjust the weights to the new information.
I'm not sure whether LoRAs really work when you save them, but I do save some and use them to jump-start models that did not receive that fine-tuning; they can be merged and aligned (and they are probably good!).
### Motto for the model!
**Models are the same as LoRAs: take them lightly, like tablets of knowledge!**
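For anyone who wants to try it, a minimal, hedged loading sketch is below. It assumes the merged weights are hosted under the base-model id given in the metadata (`LeroyDyer/Mixtral_AI_CyberTron_Ultra`); the prompt is illustrative.

```python
from transformers import pipeline

# Assumption: the repository id below points to the merged, ready-to-use checkpoint.
generator = pipeline(
    "text-generation",
    model="LeroyDyer/Mixtral_AI_CyberTron_Ultra",
    device_map="auto",
)
print(generator("Explain chain-of-thought prompting in one paragraph.", max_new_tokens=128)[0]["generated_text"])
```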
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Jones189/ppo-LunarLander-v2
|
Jones189
| 2024-04-14T11:28:51Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-04-14T11:26:29Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 220.11 +/- 60.28
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; use the filename actually stored in this repository):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it with SB3
checkpoint = load_from_hub("Jones189/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
dnrso/koBART_Sum_Review_finetuning
|
dnrso
| 2024-04-14T11:28:49Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"ko",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-09-22T23:20:59Z |
---
language:
- ko
tags:
- bart
license: mit
---
# koBART Review Summarization
## Fine-tuning base
https://huggingface.co/gogamza/kobart-summarization
## Dataset and code
https://github.com/dnrso/review_summary_using_KoBART
## Demo Space
https://huggingface.co/spaces/dnrso/koBART_Sum_Review_finetuning
|
sbawa/elysa-beta-gguf
|
sbawa
| 2024-04-14T11:27:57Z | 3 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:quantized:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-14T11:27:14Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
---
# Uploaded model
- **Developed by:** sbawa
- **License:** apache-2.0
- **Finetuned from model :** TinyLlama/TinyLlama-1.1B-Chat-v1.0
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
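Because the repository ships GGUF weights, a hedged llama-cpp-python sketch is shown below; the GGUF filename is a placeholder, so substitute the file actually present in the repo.

```python
from llama_cpp import Llama

# Assumption: "elysa-beta.Q4_K_M.gguf" stands in for whichever quantized file the repo contains.
llm = Llama(model_path="elysa-beta.Q4_K_M.gguf", n_ctx=2048)

out = llm("Hello! Please introduce yourself in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```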
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
OsakanaTeishoku/mixtral_4x300m_dummy
|
OsakanaTeishoku
| 2024-04-14T11:13:52Z | 123 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-14T11:12:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
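The card leaves this section blank. In the absence of an official snippet, a hedged sketch for a custom-code text-generation checkpoint would look roughly like this; `trust_remote_code=True` mirrors the `custom_code` tag and the prompt is illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OsakanaTeishoku/mixtral_4x300m_dummy"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Hello, ", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```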
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mogesa/my-tokenizer
|
mogesa
| 2024-04-14T10:58:24Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-14T10:58:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ddrg/web_table_embeddings_plain64
|
ddrg
| 2024-04-14T10:52:53Z | 0 | 0 | null |
[
"schema",
"word-embeddings",
"embeddings",
"unsupervised-learning",
"tables",
"web-table",
"schema-data",
"en",
"license:mit",
"region:us"
] | null | 2024-04-02T21:33:50Z |
---
license: mit
language:
- en
tags:
- schema
- word-embeddings
- embeddings
- unsupervised-learning
- tables
- web-table
- schema-data
---
# Pre-trained Web Table Embeddings
The models here represent schema terms and instance data terms in a semantic vector space, making them especially useful for representing schema and class information as well as for ML tasks on tabular text data.
The code for executing and evaluating the models is located in the [table-embeddings Github repository](https://github.com/guenthermi/table-embeddings)
## Quick Start
You can install the table_embeddings package to encode text from tables by running the following commands:
```bash
pip install cython
pip install git+https://github.com/guenthermi/table-embeddings.git
```
After that you can encode text with the following Python snippet:
```python
from table_embeddings import TableEmbeddingModel
model = TableEmbeddingModel.load_model('ddrg/web_table_embeddings_plain64')
embedding = model.get_header_vector('headline')
```
## Model Types
| Model Type | Description | Download-Links |
| ---------- | ----------- | -------------- |
| W-tax | Model of relations between table header and table body | ([64dim](https://huggingface.co/ddrg/web_table_embeddings_tax64), [150dim](https://huggingface.co/ddrg/web_table_embeddings_tax150))
| W-row | Model of row-wise relations in tables | ([64dim](https://huggingface.co/ddrg/web_table_embeddings_row64), [150dim](https://huggingface.co/ddrg/web_table_embeddings_row150))
| W-combo | Model of row-wise relations and relations between table header and table body | ([64dim](https://huggingface.co/ddrg/web_table_embeddings_combo64), [150dim](https://huggingface.co/ddrg/web_table_embeddings_combo150))
| W-plain | Model of row-wise relations in tables without pre-processing | ([64dim](https://huggingface.co/ddrg/web_table_embeddings_plain64), [150dim](https://huggingface.co/ddrg/web_table_embeddings_plain150))
## More Information
For examples on how to use the models, you can take a look at the [Github repository](https://github.com/guenthermi/table-embeddings)
More information can be found in the paper [Pre-Trained Web Table Embeddings for Table Discovery](https://dl.acm.org/doi/10.1145/3464509.3464892)
```
@inproceedings{gunther2021pre,
title={Pre-Trained Web Table Embeddings for Table Discovery},
author={G{\"u}nther, Michael and Thiele, Maik and Gonsior, Julius and Lehner, Wolfgang},
booktitle={Fourth Workshop in Exploiting AI Techniques for Data Management},
pages={24--31},
year={2021}
}
```
|
ddrg/web_table_embeddings_combo150
|
ddrg
| 2024-04-14T10:52:22Z | 0 | 1 | null |
[
"schema",
"word-embeddings",
"embeddings",
"unsupervised-learning",
"tables",
"web-table",
"schema-data",
"en",
"license:mit",
"region:us"
] | null | 2024-04-05T20:15:58Z |
---
license: mit
language:
- en
tags:
- schema
- word-embeddings
- embeddings
- unsupervised-learning
- tables
- web-table
- schema-data
---
# Pre-trained Web Table Embeddings
The models here represent schema terms and instance data terms in a semantic vector space, making them especially useful for representing schema and class information as well as for ML tasks on tabular text data.
The code for executing and evaluating the models is located in the [table-embeddings Github repository](https://github.com/guenthermi/table-embeddings)
## Quick Start
You can install the table_embeddings package to encode text from tables by running the following commands:
```bash
pip install cython
pip install git+https://github.com/guenthermi/table-embeddings.git
```
After that you can encode text with the following Python snippet:
```python
from table_embeddings import TableEmbeddingModel
model = TableEmbeddingModel.load_model('ddrg/web_table_embeddings_combo150')
embedding = model.get_header_vector('headline')
```
## Model Types
| Model Type | Description | Download-Links |
| ---------- | ----------- | -------------- |
| W-tax | Model of relations between table header and table body | ([64dim](https://huggingface.co/ddrg/web_table_embeddings_tax64), [150dim](https://huggingface.co/ddrg/web_table_embeddings_tax150))
| W-row | Model of row-wise relations in tables | ([64dim](https://huggingface.co/ddrg/web_table_embeddings_row64), [150dim](https://huggingface.co/ddrg/web_table_embeddings_row150))
| W-combo | Model of row-wise relations and relations between table header and table body | ([64dim](https://huggingface.co/ddrg/web_table_embeddings_combo64), [150dim](https://huggingface.co/ddrg/web_table_embeddings_combo150))
| W-plain | Model of row-wise relations in tables without pre-processing | ([64dim](https://huggingface.co/ddrg/web_table_embeddings_plain64), [150dim](https://huggingface.co/ddrg/web_table_embeddings_plain150))
## More Information
For examples on how to use the models, you can take a look at the [Github repository](https://github.com/guenthermi/table-embeddings)
More information can be found in the paper [Pre-Trained Web Table Embeddings for Table Discovery](https://dl.acm.org/doi/10.1145/3464509.3464892)
```
@inproceedings{gunther2021pre,
title={Pre-Trained Web Table Embeddings for Table Discovery},
author={G{\"u}nther, Michael and Thiele, Maik and Gonsior, Julius and Lehner, Wolfgang},
booktitle={Fourth Workshop in Exploiting AI Techniques for Data Management},
pages={24--31},
year={2021}
}
```
|
denise227/amazon_kindle_sentiment_analysis_final2
|
denise227
| 2024-04-14T10:51:40Z | 105 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-14T10:01:24Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: amazon_kindle_sentiment_analysis_final2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# amazon_kindle_sentiment_analysis_final2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0443
- Accuracy: 0.5642
## Model description
More information needed
## Intended uses & limitations
More information needed
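A hedged inference sketch for this sentiment classifier follows; the review text is illustrative, and the label names depend on the (unspecified) training data.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="denise227/amazon_kindle_sentiment_analysis_final2")

# Returns the predicted label and its score for a sample review.
print(classifier("I could not put this book down, highly recommended!"))
```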
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7083 | 0.01 | 10 | 1.6144 | 0.1717 |
| 1.5906 | 0.02 | 20 | 1.6512 | 0.2275 |
| 1.7297 | 0.03 | 30 | 1.6169 | 0.2275 |
| 1.5584 | 0.03 | 40 | 1.5727 | 0.2483 |
| 1.4692 | 0.04 | 50 | 1.4838 | 0.2275 |
| 1.4008 | 0.05 | 60 | 1.3976 | 0.3408 |
| 1.4002 | 0.06 | 70 | 1.4235 | 0.3583 |
| 1.4591 | 0.07 | 80 | 1.4917 | 0.2683 |
| 1.4274 | 0.07 | 90 | 1.3387 | 0.3625 |
| 1.2846 | 0.08 | 100 | 1.3766 | 0.3592 |
| 1.3819 | 0.09 | 110 | 1.2902 | 0.4083 |
| 1.3474 | 0.1 | 120 | 1.3878 | 0.3583 |
| 1.4284 | 0.11 | 130 | 1.3943 | 0.3633 |
| 1.354 | 0.12 | 140 | 1.2411 | 0.4192 |
| 1.2689 | 0.12 | 150 | 1.2426 | 0.4367 |
| 1.2411 | 0.13 | 160 | 1.2059 | 0.4467 |
| 1.2793 | 0.14 | 170 | 1.1991 | 0.4133 |
| 1.2645 | 0.15 | 180 | 1.1891 | 0.455 |
| 1.066 | 0.16 | 190 | 1.1861 | 0.4517 |
| 1.4118 | 0.17 | 200 | 1.1363 | 0.4825 |
| 1.053 | 0.17 | 210 | 1.1809 | 0.4825 |
| 1.132 | 0.18 | 220 | 1.2207 | 0.4625 |
| 1.3701 | 0.19 | 230 | 1.2085 | 0.4533 |
| 1.2855 | 0.2 | 240 | 1.1530 | 0.4833 |
| 1.2135 | 0.21 | 250 | 1.1995 | 0.425 |
| 1.3131 | 0.22 | 260 | 1.3802 | 0.41 |
| 1.3903 | 0.23 | 270 | 1.2323 | 0.425 |
| 1.2236 | 0.23 | 280 | 1.1701 | 0.455 |
| 1.1219 | 0.24 | 290 | 1.1358 | 0.4808 |
| 1.1726 | 0.25 | 300 | 1.1636 | 0.4967 |
| 1.0688 | 0.26 | 310 | 1.1949 | 0.4875 |
| 1.2532 | 0.27 | 320 | 1.1612 | 0.47 |
| 1.1284 | 0.28 | 330 | 1.1504 | 0.4775 |
| 1.1337 | 0.28 | 340 | 1.2039 | 0.4425 |
| 1.0154 | 0.29 | 350 | 1.1692 | 0.4483 |
| 1.1537 | 0.3 | 360 | 1.1651 | 0.4667 |
| 0.9974 | 0.31 | 370 | 1.1726 | 0.4658 |
| 1.0735 | 0.32 | 380 | 1.2130 | 0.42 |
| 1.1672 | 0.33 | 390 | 1.1878 | 0.4867 |
| 1.1754 | 0.33 | 400 | 1.1254 | 0.4975 |
| 1.2113 | 0.34 | 410 | 1.1292 | 0.4975 |
| 1.1614 | 0.35 | 420 | 1.1272 | 0.4892 |
| 1.0832 | 0.36 | 430 | 1.1734 | 0.48 |
| 0.9343 | 0.37 | 440 | 1.1752 | 0.4758 |
| 1.1487 | 0.38 | 450 | 1.2200 | 0.4575 |
| 1.0019 | 0.38 | 460 | 1.2132 | 0.5058 |
| 1.1595 | 0.39 | 470 | 1.1283 | 0.4892 |
| 1.1167 | 0.4 | 480 | 1.0732 | 0.5292 |
| 1.0909 | 0.41 | 490 | 1.0985 | 0.515 |
| 1.075 | 0.42 | 500 | 1.1422 | 0.4758 |
| 1.0783 | 0.42 | 510 | 1.0963 | 0.4958 |
| 1.0152 | 0.43 | 520 | 1.1149 | 0.5067 |
| 1.0848 | 0.44 | 530 | 1.0881 | 0.4992 |
| 1.1063 | 0.45 | 540 | 1.1775 | 0.48 |
| 1.1489 | 0.46 | 550 | 1.1050 | 0.5117 |
| 1.1119 | 0.47 | 560 | 1.1096 | 0.5117 |
| 1.0861 | 0.47 | 570 | 1.1163 | 0.5225 |
| 0.9947 | 0.48 | 580 | 1.1678 | 0.4867 |
| 1.2151 | 0.49 | 590 | 1.1195 | 0.5125 |
| 1.0058 | 0.5 | 600 | 1.1072 | 0.5033 |
| 0.9734 | 0.51 | 610 | 1.1075 | 0.5033 |
| 1.1503 | 0.52 | 620 | 1.0904 | 0.5142 |
| 1.0962 | 0.53 | 630 | 1.1025 | 0.5108 |
| 1.0602 | 0.53 | 640 | 1.1027 | 0.5042 |
| 1.0047 | 0.54 | 650 | 1.1270 | 0.4742 |
| 0.9597 | 0.55 | 660 | 1.0693 | 0.5142 |
| 1.1418 | 0.56 | 670 | 1.0756 | 0.5158 |
| 1.2486 | 0.57 | 680 | 1.1020 | 0.5225 |
| 1.1175 | 0.57 | 690 | 1.1087 | 0.4858 |
| 1.1113 | 0.58 | 700 | 1.1100 | 0.4908 |
| 1.0758 | 0.59 | 710 | 1.0799 | 0.495 |
| 1.0898 | 0.6 | 720 | 1.0641 | 0.4933 |
| 0.9546 | 0.61 | 730 | 1.0490 | 0.5225 |
| 0.9024 | 0.62 | 740 | 1.0850 | 0.5117 |
| 1.078 | 0.62 | 750 | 1.2353 | 0.4583 |
| 1.1165 | 0.63 | 760 | 1.2252 | 0.4767 |
| 1.0986 | 0.64 | 770 | 1.0457 | 0.545 |
| 0.9825 | 0.65 | 780 | 1.1015 | 0.5108 |
| 0.9494 | 0.66 | 790 | 1.0954 | 0.5067 |
| 1.053 | 0.67 | 800 | 1.0581 | 0.5292 |
| 0.8009 | 0.68 | 810 | 1.0961 | 0.5 |
| 0.8794 | 0.68 | 820 | 1.0865 | 0.5075 |
| 1.0287 | 0.69 | 830 | 1.0652 | 0.5183 |
| 1.027 | 0.7 | 840 | 1.0529 | 0.5442 |
| 1.0287 | 0.71 | 850 | 1.0323 | 0.5433 |
| 1.1179 | 0.72 | 860 | 1.0451 | 0.5342 |
| 1.0573 | 0.72 | 870 | 1.0456 | 0.5217 |
| 1.0779 | 0.73 | 880 | 1.0737 | 0.5242 |
| 0.9964 | 0.74 | 890 | 1.0532 | 0.5233 |
| 1.242 | 0.75 | 900 | 1.1209 | 0.4983 |
| 0.9247 | 0.76 | 910 | 1.0632 | 0.5192 |
| 0.9705 | 0.77 | 920 | 1.0608 | 0.5142 |
| 0.8295 | 0.78 | 930 | 1.0833 | 0.5075 |
| 1.1295 | 0.78 | 940 | 1.0854 | 0.5183 |
| 1.0577 | 0.79 | 950 | 1.0595 | 0.5092 |
| 0.945 | 0.8 | 960 | 1.0474 | 0.5167 |
| 0.9852 | 0.81 | 970 | 1.0423 | 0.5217 |
| 1.0776 | 0.82 | 980 | 1.0463 | 0.53 |
| 1.1153 | 0.82 | 990 | 1.0843 | 0.5225 |
| 1.1605 | 0.83 | 1000 | 1.0336 | 0.53 |
| 0.8384 | 0.84 | 1010 | 1.0878 | 0.5308 |
| 1.2439 | 0.85 | 1020 | 1.0159 | 0.5458 |
| 0.9853 | 0.86 | 1030 | 1.0560 | 0.5075 |
| 1.0497 | 0.87 | 1040 | 1.0687 | 0.5267 |
| 1.0442 | 0.88 | 1050 | 1.0486 | 0.5458 |
| 0.9709 | 0.88 | 1060 | 1.0251 | 0.5375 |
| 0.9732 | 0.89 | 1070 | 1.0286 | 0.54 |
| 0.9221 | 0.9 | 1080 | 1.0323 | 0.5483 |
| 0.9142 | 0.91 | 1090 | 1.0670 | 0.5383 |
| 1.0644 | 0.92 | 1100 | 1.0359 | 0.5408 |
| 1.1072 | 0.93 | 1110 | 1.0680 | 0.5217 |
| 1.037 | 0.93 | 1120 | 1.0297 | 0.5367 |
| 1.1299 | 0.94 | 1130 | 1.1113 | 0.4967 |
| 1.0973 | 0.95 | 1140 | 1.0066 | 0.5325 |
| 0.997 | 0.96 | 1150 | 1.0150 | 0.54 |
| 1.1171 | 0.97 | 1160 | 1.0362 | 0.5283 |
| 0.896 | 0.97 | 1170 | 1.0706 | 0.5225 |
| 0.9641 | 0.98 | 1180 | 1.0546 | 0.5308 |
| 0.9264 | 0.99 | 1190 | 1.0419 | 0.5575 |
| 0.8795 | 1.0 | 1200 | 1.0625 | 0.5283 |
| 1.0062 | 1.01 | 1210 | 1.0304 | 0.5358 |
| 0.7481 | 1.02 | 1220 | 1.0825 | 0.5367 |
| 0.7035 | 1.02 | 1230 | 1.1020 | 0.53 |
| 0.7329 | 1.03 | 1240 | 1.0634 | 0.5358 |
| 0.996 | 1.04 | 1250 | 1.0568 | 0.5367 |
| 0.9858 | 1.05 | 1260 | 1.0754 | 0.54 |
| 0.805 | 1.06 | 1270 | 1.0492 | 0.5458 |
| 0.7799 | 1.07 | 1280 | 1.0725 | 0.5375 |
| 0.8801 | 1.07 | 1290 | 1.0554 | 0.5575 |
| 0.8422 | 1.08 | 1300 | 1.0318 | 0.5567 |
| 0.829 | 1.09 | 1310 | 1.0570 | 0.5575 |
| 0.7253 | 1.1 | 1320 | 1.0564 | 0.5408 |
| 0.8773 | 1.11 | 1330 | 1.0719 | 0.545 |
| 0.6686 | 1.12 | 1340 | 1.0798 | 0.5475 |
| 0.8547 | 1.12 | 1350 | 1.0649 | 0.5475 |
| 0.6687 | 1.13 | 1360 | 1.0944 | 0.5392 |
| 0.8448 | 1.14 | 1370 | 1.1050 | 0.5383 |
| 0.8619 | 1.15 | 1380 | 1.0785 | 0.5508 |
| 0.7689 | 1.16 | 1390 | 1.0481 | 0.55 |
| 0.7737 | 1.17 | 1400 | 1.1036 | 0.5192 |
| 0.9337 | 1.18 | 1410 | 1.0986 | 0.5333 |
| 0.7568 | 1.18 | 1420 | 1.0693 | 0.55 |
| 0.7257 | 1.19 | 1430 | 1.0553 | 0.5467 |
| 0.8328 | 1.2 | 1440 | 1.0566 | 0.5525 |
| 0.7617 | 1.21 | 1450 | 1.0600 | 0.5367 |
| 0.6889 | 1.22 | 1460 | 1.1296 | 0.525 |
| 0.8422 | 1.23 | 1470 | 1.0609 | 0.5542 |
| 0.643 | 1.23 | 1480 | 1.0624 | 0.5458 |
| 0.7943 | 1.24 | 1490 | 1.0775 | 0.5442 |
| 0.5499 | 1.25 | 1500 | 1.1079 | 0.5483 |
| 0.8923 | 1.26 | 1510 | 1.1229 | 0.5492 |
| 0.6692 | 1.27 | 1520 | 1.1289 | 0.5317 |
| 0.8338 | 1.27 | 1530 | 1.1320 | 0.5242 |
| 0.791 | 1.28 | 1540 | 1.0880 | 0.5525 |
| 0.7467 | 1.29 | 1550 | 1.1239 | 0.5558 |
| 0.8007 | 1.3 | 1560 | 1.1040 | 0.5575 |
| 0.8549 | 1.31 | 1570 | 1.0732 | 0.56 |
| 0.6978 | 1.32 | 1580 | 1.0845 | 0.5533 |
| 0.6798 | 1.32 | 1590 | 1.1070 | 0.5508 |
| 0.6138 | 1.33 | 1600 | 1.1186 | 0.5567 |
| 0.7253 | 1.34 | 1610 | 1.1152 | 0.5367 |
| 0.7374 | 1.35 | 1620 | 1.1149 | 0.545 |
| 0.7872 | 1.36 | 1630 | 1.1173 | 0.5492 |
| 0.8663 | 1.37 | 1640 | 1.1013 | 0.5558 |
| 0.8264 | 1.38 | 1650 | 1.0915 | 0.5517 |
| 0.719 | 1.38 | 1660 | 1.0822 | 0.5508 |
| 0.8035 | 1.39 | 1670 | 1.0804 | 0.55 |
| 0.818 | 1.4 | 1680 | 1.0892 | 0.55 |
| 0.7964 | 1.41 | 1690 | 1.0756 | 0.55 |
| 0.7614 | 1.42 | 1700 | 1.0879 | 0.5533 |
| 0.876 | 1.43 | 1710 | 1.1014 | 0.5492 |
| 0.9673 | 1.43 | 1720 | 1.0742 | 0.5558 |
| 0.7492 | 1.44 | 1730 | 1.0719 | 0.5392 |
| 0.8312 | 1.45 | 1740 | 1.0864 | 0.555 |
| 0.6262 | 1.46 | 1750 | 1.0972 | 0.5525 |
| 0.8121 | 1.47 | 1760 | 1.0873 | 0.5525 |
| 0.8858 | 1.48 | 1770 | 1.1205 | 0.5375 |
| 0.7894 | 1.48 | 1780 | 1.1073 | 0.5458 |
| 0.6622 | 1.49 | 1790 | 1.1175 | 0.5558 |
| 0.6912 | 1.5 | 1800 | 1.1313 | 0.5525 |
| 0.7298 | 1.51 | 1810 | 1.1328 | 0.5508 |
| 0.6818 | 1.52 | 1820 | 1.1508 | 0.5475 |
| 0.7875 | 1.52 | 1830 | 1.1259 | 0.5542 |
| 0.6855 | 1.53 | 1840 | 1.1062 | 0.5558 |
| 0.814 | 1.54 | 1850 | 1.1238 | 0.5592 |
| 0.652 | 1.55 | 1860 | 1.1088 | 0.5483 |
| 0.8903 | 1.56 | 1870 | 1.0729 | 0.5533 |
| 0.8013 | 1.57 | 1880 | 1.0824 | 0.55 |
| 0.8752 | 1.57 | 1890 | 1.0761 | 0.5508 |
| 0.7781 | 1.58 | 1900 | 1.0688 | 0.5558 |
| 0.7411 | 1.59 | 1910 | 1.0884 | 0.5492 |
| 0.8728 | 1.6 | 1920 | 1.0688 | 0.5583 |
| 0.6122 | 1.61 | 1930 | 1.0644 | 0.5633 |
| 0.7275 | 1.62 | 1940 | 1.0678 | 0.5567 |
| 0.6848 | 1.62 | 1950 | 1.0591 | 0.5567 |
| 0.8582 | 1.63 | 1960 | 1.0555 | 0.5575 |
| 0.8876 | 1.64 | 1970 | 1.0636 | 0.5567 |
| 0.703 | 1.65 | 1980 | 1.0460 | 0.5575 |
| 0.8294 | 1.66 | 1990 | 1.0403 | 0.5575 |
| 0.761 | 1.67 | 2000 | 1.0493 | 0.5483 |
| 0.8271 | 1.68 | 2010 | 1.0502 | 0.5475 |
| 0.7152 | 1.68 | 2020 | 1.0481 | 0.5558 |
| 0.8359 | 1.69 | 2030 | 1.0419 | 0.5517 |
| 0.776 | 1.7 | 2040 | 1.0413 | 0.5492 |
| 0.7477 | 1.71 | 2050 | 1.0444 | 0.5475 |
| 0.7971 | 1.72 | 2060 | 1.0497 | 0.5483 |
| 0.7846 | 1.73 | 2070 | 1.0618 | 0.5433 |
| 0.9562 | 1.73 | 2080 | 1.0433 | 0.5417 |
| 0.7496 | 1.74 | 2090 | 1.0337 | 0.5558 |
| 0.8417 | 1.75 | 2100 | 1.0380 | 0.5592 |
| 0.7283 | 1.76 | 2110 | 1.0334 | 0.5583 |
| 0.7424 | 1.77 | 2120 | 1.0320 | 0.5592 |
| 0.7982 | 1.77 | 2130 | 1.0394 | 0.555 |
| 0.89 | 1.78 | 2140 | 1.0296 | 0.5525 |
| 0.7348 | 1.79 | 2150 | 1.0265 | 0.5475 |
| 0.9452 | 1.8 | 2160 | 1.0232 | 0.5542 |
| 0.6655 | 1.81 | 2170 | 1.0281 | 0.555 |
| 0.804 | 1.82 | 2180 | 1.0321 | 0.565 |
| 0.7228 | 1.82 | 2190 | 1.0313 | 0.56 |
| 0.7241 | 1.83 | 2200 | 1.0296 | 0.5592 |
| 0.6842 | 1.84 | 2210 | 1.0325 | 0.5542 |
| 0.691 | 1.85 | 2220 | 1.0336 | 0.5558 |
| 0.6258 | 1.86 | 2230 | 1.0334 | 0.5608 |
| 0.7299 | 1.87 | 2240 | 1.0342 | 0.5575 |
| 0.8158 | 1.88 | 2250 | 1.0344 | 0.5567 |
| 0.5722 | 1.88 | 2260 | 1.0387 | 0.5575 |
| 0.7289 | 1.89 | 2270 | 1.0467 | 0.5533 |
| 0.7729 | 1.9 | 2280 | 1.0447 | 0.56 |
| 0.6128 | 1.91 | 2290 | 1.0447 | 0.5575 |
| 0.6053 | 1.92 | 2300 | 1.0435 | 0.555 |
| 0.5973 | 1.93 | 2310 | 1.0426 | 0.56 |
| 0.7355 | 1.93 | 2320 | 1.0414 | 0.5625 |
| 0.6967 | 1.94 | 2330 | 1.0422 | 0.5617 |
| 0.5348 | 1.95 | 2340 | 1.0426 | 0.5642 |
| 0.7911 | 1.96 | 2350 | 1.0432 | 0.5617 |
| 0.6604 | 1.97 | 2360 | 1.0440 | 0.5608 |
| 0.655 | 1.98 | 2370 | 1.0440 | 0.5625 |
| 0.8269 | 1.98 | 2380 | 1.0441 | 0.5667 |
| 0.613 | 1.99 | 2390 | 1.0442 | 0.5633 |
| 0.6792 | 2.0 | 2400 | 1.0443 | 0.5642 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ddrg/web_table_embeddings_row64
|
ddrg
| 2024-04-14T10:50:15Z | 0 | 0 | null |
[
"schema",
"word-embeddings",
"embeddings",
"unsupervised-learning",
"tables",
"web-table",
"schema-data",
"en",
"license:mit",
"region:us"
] | null | 2024-04-05T18:13:16Z |
---
license: mit
language:
- en
tags:
- schema
- word-embeddings
- embeddings
- unsupervised-learning
- tables
- web-table
- schema-data
---
# Pre-trained Web Table Embeddings
The models here represent schema terms and instance data terms in a semantic vector space, making them especially useful for representing schema and class information as well as for ML tasks on tabular text data.
The code for executing and evaluating the models is located in the [table-embeddings Github repository](https://github.com/guenthermi/table-embeddings)
## Quick Start
You can install the table_embeddings package to encode text from tables by running the following commands:
```bash
pip install cython
pip install git+https://github.com/guenthermi/table-embeddings.git
```
After that you can encode text with the following Python snippet:
```python
from table_embeddings import TableEmbeddingModel
model = TableEmbeddingModel.load_model('ddrg/web_table_embeddings_row64')
embedding = model.get_header_vector('headline')
```
## Model Types
| Model Type | Description | Download-Links |
| ---------- | ----------- | -------------- |
| W-tax | Model of relations between table header and table body | ([64dim](https://huggingface.co/ddrg/web_table_embeddings_tax64), [150dim](https://huggingface.co/ddrg/web_table_embeddings_tax150))
| W-row | Model of row-wise relations in tables | ([64dim](https://huggingface.co/ddrg/web_table_embeddings_row64), [150dim](https://huggingface.co/ddrg/web_table_embeddings_row150))
| W-combo | Model of row-wise relations and relations between table header and table body | ([64dim](https://huggingface.co/ddrg/web_table_embeddings_combo64), [150dim](https://huggingface.co/ddrg/web_table_embeddings_combo150))
| W-plain | Model of row-wise relations in tables without pre-processing | ([64dim](https://huggingface.co/ddrg/web_table_embeddings_plain64), [150dim](https://huggingface.co/ddrg/web_table_embeddings_plain150))
## More Information
For examples on how to use the models, you can take a look at the [Github repository](https://github.com/guenthermi/table-embeddings)
More information can be found in the paper [Pre-Trained Web Table Embeddings for Table Discovery](https://dl.acm.org/doi/10.1145/3464509.3464892)
```
@inproceedings{gunther2021pre,
title={Pre-Trained Web Table Embeddings for Table Discovery},
author={G{\"u}nther, Michael and Thiele, Maik and Gonsior, Julius and Lehner, Wolfgang},
booktitle={Fourth Workshop in Exploiting AI Techniques for Data Management},
pages={24--31},
year={2021}
}
```
|
YagoubChatBot/results_packing
|
YagoubChatBot
| 2024-04-14T10:38:36Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-04-14T05:51:22Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: results_packing
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results_packing
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3137
## Model description
More information needed
## Intended uses & limitations
More information needed
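No usage notes are given, so here is a hedged sketch of attaching this LoRA adapter to its Mistral-7B-Instruct-v0.2 base and optionally merging it; the `[INST]` prompt follows the base model's instruction format.

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base model declared in the adapter config, then applies this adapter.
model = AutoPeftModelForCausalLM.from_pretrained("YagoubChatBot/results_packing", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

merged = model.merge_and_unload()  # optional: bake the adapter into the base weights
inputs = tokenizer("[INST] What can you help me with? [/INST]", return_tensors="pt").to(merged.device)
print(tokenizer.decode(merged.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```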
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1757 | 0.1 | 50 | 1.5336 |
| 1.5129 | 0.21 | 100 | 1.3137 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ShenaoZ/0.0001_idpo_same_3itersn_iter_3
|
ShenaoZ
| 2024-04-14T10:25:46Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZ/0.0001_idpo_same_3itersn_iter_2",
"base_model:finetune:ShenaoZ/0.0001_idpo_same_3itersn_iter_2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-14T09:13:00Z |
---
license: mit
base_model: ShenaoZ/0.0001_idpo_same_3itersn_iter_2
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
datasets:
- updated
- original
model-index:
- name: 0.0001_idpo_same_3itersn_iter_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.0001_idpo_same_3itersn_iter_3
This model is a fine-tuned version of [ShenaoZ/0.0001_idpo_same_3itersn_iter_2](https://huggingface.co/ShenaoZ/0.0001_idpo_same_3itersn_iter_2) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
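As the usage section is empty, a short hedged generation sketch follows; it assumes the tokenizer provides a chat template, which is typical for alignment-handbook DPO checkpoints.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ShenaoZ/0.0001_idpo_same_3itersn_iter_3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

chat = [{"role": "user", "content": "Give me three tips for writing clear documentation."}]
inputs = tokenizer.apply_chat_template(chat, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```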
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
esg-x/esg-phi2-sft
|
esg-x
| 2024-04-14T10:18:31Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi-msft",
"text-generation",
"llama-factory",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-12T15:18:03Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
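No snippet is provided here either; a hedged starting point for this custom-code Phi-2-style checkpoint is sketched below (`trust_remote_code=True` mirrors the `custom_code` tag; the ESG prompt is an illustrative guess at the intended domain).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "esg-x/esg-phi2-sft"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, device_map="auto")

prompt = "What does ESG stand for in corporate reporting?"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```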
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tarpalsus/q-Taxi-v3
|
tarpalsus
| 2024-04-14T10:14:54Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-04-14T10:14:52Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # or gymnasium, depending on your setup

# load_from_hub is a helper (not a standard package import) that downloads and
# unpickles the saved Q-table dictionary from the Hub.
model = load_from_hub(repo_id="tarpalsus/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Holarissun/dpo_helfulhelpful_gamma0.0_beta0.1_subset20000_modelmistral7b_maxsteps5000_bz8_lr1e-05
|
Holarissun
| 2024-04-14T10:13:35Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-04-14T10:13:28Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: dpo_helfulhelpful_gamma0.0_beta0.1_subset20000_modelmistral7b_maxsteps5000_bz8_lr1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dpo_helfulhelpful_gamma0.0_beta0.1_subset20000_modelmistral7b_maxsteps5000_bz8_lr1e-05
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 15
- training_steps: 5000
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
olpop/roberta-large-polyhope-multiclass-english
|
olpop
| 2024-04-14T10:10:16Z | 105 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-14T08:45:06Z |
---
license: mit
base_model: FacebookAI/roberta-large
tags:
- generated_from_trainer
model-index:
- name: roberta-large-polyhope-multiclass-english
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-polyhope-multiclass-english
This model is a fine-tuned version of [FacebookAI/roberta-large](https://huggingface.co/FacebookAI/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8522
## Model description
More information needed
## Intended uses & limitations
More information needed
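A hedged classification sketch follows; the label ids presumably map to the PolyHope hope-speech classes, which the card does not enumerate.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "olpop/roberta-large-polyhope-multiclass-english"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I really hope things get better next year.", return_tensors="pt")
probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```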
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1562 | 1.0 | 774 | 1.0325 |
| 1.0038 | 2.0 | 1548 | 0.9082 |
| 0.9901 | 3.0 | 2322 | 0.9801 |
| 0.7897 | 4.0 | 3096 | 0.8522 |
| 0.4418 | 5.0 | 3870 | 0.8531 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Tokenizers 0.15.2
|
Lykon/DreamShaper
|
Lykon
| 2024-04-14T10:07:25Z | 150,475 | 960 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"art",
"artistic",
"anime",
"en",
"doi:10.57967/hf/0453",
"license:other",
"autotrain_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-01-12T09:14:06Z |
---
language:
- en
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- art
- artistic
- diffusers
- anime
inference: false
---
# Dream Shaper
## Official Repository
Read more about this model here: https://civitai.com/models/4384/dreamshaper
Please also support the model by giving it 5 stars and a heart, which will notify you of new updates.
Please consider supporting me on Patreon or buying me a coffee:
- https://www.patreon.com/Lykon275
- https://snipfeed.co/lykon
You can run this model on:
- https://huggingface.co/spaces/Lykon/DreamShaper-webui
- Mage.space, sinkin.ai and more
|
meghanaraok/HiLAT_50
|
meghanaraok
| 2024-04-14T09:54:55Z | 56 | 0 |
transformers
|
[
"transformers",
"safetensors",
"endpoints_compatible",
"region:us"
] | null | 2024-03-19T09:34:40Z |
This model is based on the paper "A hierarchical label-wise attention transformer model for explainable ICD coding" (ScienceDirect) by Leibo Liu et al.
We trained the model on the MIMIC-III top-50 ICD dataset for approximately 10 epochs.
|
LLM4APR/StarCoder-15B_for_NMT
|
LLM4APR
| 2024-04-14T09:50:37Z | 0 | 0 | null |
[
"code",
"automated program repair",
"text-generation",
"license:bigscience-openrail-m",
"region:us"
] |
text-generation
| 2024-03-21T07:09:38Z |
---
license: bigscience-openrail-m
pipeline_tag: text-generation
tags:
- code
- automated program repair
---
# StarCoder-15B_for_NMT
We fine-tuned [StarCoder-15B](https://huggingface.co/bigcode/starcoder) on [Transfer_dataset](https://drive.google.com/drive/folders/1Z-2xcLSmh643BfX_j0yQW2GmdPoru6j3?usp=drive_link) under the NMT workflow [[Jiang et al.](https://github.com/lin-tan/clm), [Huang et al.](https://github.com/LLMC-APR/STUDY)] for APR research.
## Model Use
To use this model, please make sure to install transformers, peft, bitsandbytes, and accelerate.
```bash
pip install transformers
pip install peft
pip install bitsandbytes
pip install accelerate
```
Then, please run the following script to merge the adapter into the StarCoder base model.
```bash
bash merge.sh
```
Finally, you can load the model to generate patches for buggy code.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training
import torch
# load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('bigcode/starcoderbase', use_auth_token=True)
model = AutoModelForCausalLM.from_pretrained(
"StarCoder-15B_for_NMT/Epoch_1/-merged",
use_auth_token=True,
use_cache=True,
load_in_8bit=True,
device_map="auto"
)
model = prepare_model_for_int8_training(model)
lora_config = LoraConfig(
r=16,
lora_alpha=32,
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM",
target_modules = ["c_proj", "c_attn", "q_attn"]
)
model = get_peft_model(model, lora_config)
# an example bug-fix pair
buggy_code = """
public MultiplePiePlot(CategoryDataset dataset){
super();
// bug_start
this.dataset=dataset;
// bug_end
PiePlot piePlot=new PiePlot(null);
this.pieChart=new JFreeChart(piePlot);
this.pieChart.removeLegend();
this.dataExtractOrder=TableOrder.BY_COLUMN;
this.pieChart.setBackgroundPaint(null);
TextTitle seriesTitle=new TextTitle("Series Title",new Font("SansSerif",Font.BOLD,12));
seriesTitle.setPosition(RectangleEdge.BOTTOM);
this.pieChart.setTitle(seriesTitle);
this.aggregatedItemsKey="Other";
this.aggregatedItemsPaint=Color.lightGray;
this.sectionPaints=new HashMap();
}
"""
fixed_code = """
// fix_start
setDataset(dataset);
// fix_end
"""
# model inference
input_text = '<commit_before>\n' + buggy_code + '\n<commit_after>\n'
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0)
eos_id = tokenizer.convert_tokens_to_ids(tokenizer.eos_token)
generated_ids = model.generate(
input_ids=input_ids,
max_new_tokens=256,
num_beams=10,
num_return_sequences=10,
early_stopping=True,
pad_token_id=eos_id,
eos_token_id=eos_id
)
for generated_id in generated_ids:
generated_text = tokenizer.decode(generated_id, skip_special_tokens=False)
patch = generated_text.split('\n<commit_after>\n')[1]
patch = patch.replace('<|endoftext|>','')
print(patch)
```
## Model Details
The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).
|
JoaoPinto/ppo-Huggy
|
JoaoPinto
| 2024-04-14T09:48:11Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2024-04-14T09:45:28Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: JoaoPinto/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
DuongTrongChi/opt-350m-chat
|
DuongTrongChi
| 2024-04-14T09:43:13Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-14T09:40:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
stablediffusionapi/vxpanimaponyv_xl
|
stablediffusionapi
| 2024-04-14T09:38:27Z | 29 | 1 |
diffusers
|
[
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-04-14T09:35:43Z |
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# vxpanimaponyv_xl API Inference

## Get API Key
Get your API key from [ModelsLab API](http://modelslab.com); no payment needed.
Replace the key in the code below and set **model_id** to "vxpanimaponyv_xl".
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/vxpanimaponyv_xl)
Model link: [View model](https://modelslab.com/models/vxpanimaponyv_xl)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "vxpanimaponyv_xl",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
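The endpoint returns a JSON body. Continuing from the `response` object above, a minimal handling sketch is shown below; the `status` and `output` field names are assumptions about the typical ModelsLab response schema, so check the linked docs for the exact fields.
```python
# Minimal handling sketch (field names are assumptions; see the ModelsLab docs linked above).
result = response.json()
if result.get("status") == "success":
    # "output" is assumed to hold a list of generated image URLs
    for image_url in result.get("output", []):
        print(image_url)
else:
    print("Generation failed or still processing:", result)
```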
> Use this coupon code to get 25% off **DMGG0RBN**
|
DavidAU/PiVoT-10.7B-Mistral-v0.2-Q6_K-GGUF
|
DavidAU
| 2024-04-14T09:37:47Z | 5 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"ko",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-04-14T09:37:25Z |
---
language:
- en
- ko
license: cc-by-sa-4.0
tags:
- llama-cpp
- gguf-my-repo
pipeline_tag: text-generation
---
# DavidAU/PiVoT-10.7B-Mistral-v0.2-Q6_K-GGUF
This model was converted to GGUF format from [`maywell/PiVoT-10.7B-Mistral-v0.2`](https://huggingface.co/maywell/PiVoT-10.7B-Mistral-v0.2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/maywell/PiVoT-10.7B-Mistral-v0.2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/PiVoT-10.7B-Mistral-v0.2-Q6_K-GGUF --model pivot-10.7b-mistral-v0.2.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/PiVoT-10.7B-Mistral-v0.2-Q6_K-GGUF --model pivot-10.7b-mistral-v0.2.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m pivot-10.7b-mistral-v0.2.Q6_K.gguf -n 128
```
|
DavidAU/PiVoT-10.7B-Mistral-v0.2-RP-Q6_K-GGUF
|
DavidAU
| 2024-04-14T09:36:03Z | 7 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-04-14T09:35:37Z |
---
language:
- en
license: cc-by-sa-4.0
tags:
- llama-cpp
- gguf-my-repo
pipeline_tag: text-generation
---
# DavidAU/PiVoT-10.7B-Mistral-v0.2-RP-Q6_K-GGUF
This model was converted to GGUF format from [`maywell/PiVoT-10.7B-Mistral-v0.2-RP`](https://huggingface.co/maywell/PiVoT-10.7B-Mistral-v0.2-RP) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/maywell/PiVoT-10.7B-Mistral-v0.2-RP) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/PiVoT-10.7B-Mistral-v0.2-RP-Q6_K-GGUF --model pivot-10.7b-mistral-v0.2-rp.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/PiVoT-10.7B-Mistral-v0.2-RP-Q6_K-GGUF --model pivot-10.7b-mistral-v0.2-rp.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m pivot-10.7b-mistral-v0.2-rp.Q6_K.gguf -n 128
```
|
nzdb70/dqn-SpaceInvadersNoFrameskip-v4
|
nzdb70
| 2024-04-14T09:34:44Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-04-14T09:34:05Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 755.50 +/- 301.96
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga nzdb70 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga nzdb70 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga nzdb70
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
pacozaa/tinyllama-alpaca-lora
|
pacozaa
| 2024-04-14T09:33:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"ollama",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"base_model:finetune:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-25T04:19:37Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- ollama
base_model: unsloth/tinyllama-bnb-4bit
---
# Uploaded model
- **Developed by:** pacozaa
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-bnb-4bit
- Run with Ollama - `ollama run pacozaa/tinyllama-alpaca-lora`
- Ollama Model Page - https://ollama.com/pacozaa/tinyllama-alpaca-lora
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Nandini82/ft-adapters
|
Nandini82
| 2024-04-14T09:26:20Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2024-04-14T09:23:07Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a sketch of the equivalent `BitsAndBytesConfig` follows the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
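For reference, here is a minimal sketch of reconstructing this configuration with `transformers` and attaching the adapters from this repo; the base model identifier is a placeholder, since the card does not state which model the adapters were trained on.
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Quantization config copied from the values listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# "base-model-id" is a placeholder: the card does not name the base model
base_model = AutoModelForCausalLM.from_pretrained(
    "base-model-id",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "Nandini82/ft-adapters")
```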
### Framework versions
- PEFT 0.4.0
|
tomaszki/stablelm-32-a
|
tomaszki
| 2024-04-14T09:23:02Z | 89 | 0 |
transformers
|
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-14T09:21:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Enagamirzayev/whisper-small-llm-lingo-adapters_n
|
Enagamirzayev
| 2024-04-14T09:21:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-14T09:21:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tomaszki/stablelm-32
|
tomaszki
| 2024-04-14T09:19:55Z | 89 | 0 |
transformers
|
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-14T09:18:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
KarthikAlagarsamy/distilbertfinetuneHS3E8BHLR
|
KarthikAlagarsamy
| 2024-04-14T09:13:07Z | 105 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-04-14T09:01:16Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbertfinetuneHS3E8BHLR
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbertfinetuneHS3E8BHLR
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5382
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.9416 | 1.0 | 500 | 1.7406 |
| 1.4428 | 2.0 | 1000 | 1.5059 |
| 1.0388 | 3.0 | 1500 | 1.5382 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
DriveMyScream/mistral-finetuned-news_summarization
|
DriveMyScream
| 2024-04-14T09:08:25Z | 2 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-04-14T08:17:18Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
model-index:
- name: mistral-finetuned-news_summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-finetuned-news_summarization
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Shekswess/gemma-1.1-7b-it-bnb-4bit-medical
|
Shekswess
| 2024-04-14T09:07:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"medical",
"en",
"dataset:Shekswess/medical_gemma_instruct_dataset_short",
"base_model:unsloth/gemma-1.1-7b-it-bnb-4bit",
"base_model:finetune:unsloth/gemma-1.1-7b-it-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-12T12:13:08Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- medical
datasets:
- Shekswess/medical_gemma_instruct_dataset_short
base_model: unsloth/gemma-1.1-7b-it-bnb-4bit
---
- **Developed by:** Shekswess
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-1.1-7b-it-bnb-4bit
To make use of the fine-tuning, prompt this medical version of the model with the Gemma instruction template below (a minimal loading sketch follows the template):
```
<start_of_turn>user Answer the question truthfully, you are a medical professional. This is the question: {question}<end_of_turn>
```
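As a rough illustration, below is a minimal sketch of applying that template with the `transformers` API. It assumes this repo contains weights loadable directly via `AutoModelForCausalLM` (if it only holds LoRA adapters, load the unsloth base model and attach them with `peft` instead); the trailing `<start_of_turn>model` turn is an assumption based on the standard Gemma chat format.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Shekswess/gemma-1.1-7b-it-bnb-4bit-medical"  # assumes directly loadable weights
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

question = "What are the common symptoms of iron deficiency anemia?"
prompt = (
    "<start_of_turn>user Answer the question truthfully, you are a medical professional. "
    f"This is the question: {question}<end_of_turn>\n"
    "<start_of_turn>model\n"  # assumed continuation turn per the Gemma chat format
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```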
Metrics:
- train_runtime: 2470.9842
- train_samples_per_second: 0.809
- train_steps_per_second: 0.101
- total_flos: 3.168381674611507e+16
- train_loss: 1.843041015625
- steps: 250
- epoch: 1.0

|
mikarn/distilbert-base-uncased-finetuned-emotion
|
mikarn
| 2024-04-14T09:04:22Z | 117 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-13T12:49:58Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.942
- name: F1
type: f1
value: 0.9421167357895796
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1686
- Accuracy: 0.942
- F1: 0.9421
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.0609 | 1.0 | 250 | 0.1693 | 0.939 | 0.9391 |
| 0.0544 | 2.0 | 500 | 0.1686 | 0.942 | 0.9421 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2
|
phymbert/dbrx-16x12b-instruct-q8_0-gguf
|
phymbert
| 2024-04-14T09:01:48Z | 2 | 0 | null |
[
"gguf",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-12T21:08:58Z |
---
license: other
license_name: databricks-open-model-license
license_link: https://www.databricks.com/legal/open-model-license
---
This is the Q8_0 quantized model for llama.cpp:
https://github.com/ggerganov/llama.cpp/pull/6515
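A minimal sketch of running the checkpoint with llama.cpp built from that branch is below; the GGUF filename is a placeholder, so point `-m` at the actual file(s) in this repo (large Q8_0 checkpoints are often split into parts).
```bash
# Build llama.cpp (DBRX support comes from the PR linked above) and run the model.
# The .gguf path below is a placeholder; use the file name(s) published in this repo.
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make
./main -m /path/to/dbrx-16x12b-instruct-q8_0.gguf -n 128 -p "Write a short note about Databricks DBRX."
```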
|
mergekit-community/mergekit-slerp-sclthpf
|
mergekit-community
| 2024-04-14T09:01:05Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:merge:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:WizardLMTeam/WizardMath-7B-V1.1",
"base_model:merge:WizardLMTeam/WizardMath-7B-V1.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-14T08:58:02Z |
---
base_model:
- NousResearch/Hermes-2-Pro-Mistral-7B
- WizardLM/WizardMath-7B-V1.1
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: NousResearch/Hermes-2-Pro-Mistral-7B
- model: WizardLM/WizardMath-7B-V1.1
merge_method: slerp
base_model: NousResearch/Hermes-2-Pro-Mistral-7B
dtype: bfloat16
parameters:
t: [0, 0.5, 1, 0.5, 0] # V shaped curve: Hermes for input & output, WizardMath in the middle layers
```
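To reproduce a merge like this locally, the YAML above can be saved to a file and passed to mergekit's CLI; a minimal sketch, with placeholder paths:
```bash
pip install mergekit
# config.yaml holds the YAML shown above; ./merged-model is a placeholder output directory
mergekit-yaml config.yaml ./merged-model --cuda  # drop --cuda to merge on CPU
```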
|
UnfilteredAI/NSFW-GEN-ANIME
|
UnfilteredAI
| 2024-04-14T09:01:00Z | 2,646 | 79 |
diffusers
|
[
"diffusers",
"pytorch",
"safetensors",
"NSFW",
"UnfilteredAI",
"Anime",
"Text-to-Image",
"text-to-image",
"en",
"base_model:OEvortex/PixelGen",
"base_model:finetune:OEvortex/PixelGen",
"doi:10.57967/hf/2129",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-04-14T08:25:02Z |
---
base_model:
- OEvortex/PixelGen
- UnfilteredAI/NSFW-gen
license: other
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- NSFW
- UnfilteredAI
- Anime
- Text-to-Image
---
**Model Name:** NSFW-GEN-ANIME
**Type:** Anime Text-to-Image Generator
**Description:** NSFW-GEN-ANIME is a text-to-anime image generator developed by UnfilteredAI. This model is designed to generate various kinds of images, including explicit and NSFW (Not Safe For Work) content, from textual inputs.
**Features:**
- **Anime Output:** The model produces uncensored and potentially explicit anime-style images based on textual inputs.
- **Tensor Type:** Operates with FP16 tensor type for optimized performance and efficiency.
- **Large Model Size:** With 3.47 billion parameters, the model offers a vast capacity for learning and generating diverse anime imagery.
- **Community Engagement:** As part of UnfilteredAI's open-source initiatives, the model encourages collaboration and contributions from the AI community.
**Usage Guidelines:**
- **Responsible Use:** Users are advised to exercise discretion and responsibility when generating content with this model.
- **Age Restriction:** Due to the explicit nature of the generated content, usage is restricted to individuals over the legal age in their jurisdiction.
- **Ethical Considerations:** Avoid using the model to create harmful or offensive anime imagery.
**Get Involved:**
- **Contribute:** Help enhance the capabilities and ethical considerations of the model by contributing to its development on UnfilteredAI's open-source platform.
- **Explore:** Dive into the anime imagery produced by the model to explore its creative potential and applications.
- **Connect:** Engage with the UnfilteredAI community to share insights, feedback, and ideas related to NSFW anime content generation and AI ethics.
|
StDestiny/DialogLED-base-16384-dialogsum-finetuned-10epochs
|
StDestiny
| 2024-04-14T08:57:55Z | 90 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"led",
"text2text-generation",
"generated_from_trainer",
"dataset:knkarthick/dialogsum",
"base_model:MingZhong/DialogLED-base-16384",
"base_model:finetune:MingZhong/DialogLED-base-16384",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-04-14T04:49:05Z |
---
base_model: MingZhong/DialogLED-base-16384
tags:
- generated_from_trainer
model-index:
- name: DialogLED-base-16384-dialogsum-finetuned-10epochs
results: []
datasets:
- knkarthick/dialogsum
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DialogLED-base-16384-dialogsum-finetuned-10epochs
This model is a fine-tuned version of [MingZhong/DialogLED-base-16384](https://huggingface.co/MingZhong/DialogLED-base-16384) on the dialogsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3006
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1611 | 2.57 | 500 | 1.2166 |
| 0.769 | 5.14 | 1000 | 1.2457 |
| 0.6162 | 7.7 | 1500 | 1.3006 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
JoaoPinto/Taxi-v3
|
JoaoPinto
| 2024-04-14T08:55:26Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-04-14T08:55:25Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # the Deep RL course notebooks use gymnasium as the gym API

# load_from_hub is the helper defined in the Hugging Face Deep RL course notebooks
model = load_from_hub(repo_id="JoaoPinto/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Lilith88/mergekit-ties-qrxobrq
|
Lilith88
| 2024-04-14T08:54:24Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2306.01708",
"base_model:NousResearch/Llama-2-7b-hf",
"base_model:merge:NousResearch/Llama-2-7b-hf",
"base_model:arcee-ai/Patent-Instruct-7b",
"base_model:merge:arcee-ai/Patent-Instruct-7b",
"base_model:microsoft/Orca-2-7b",
"base_model:merge:microsoft/Orca-2-7b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-14T08:51:17Z |
---
base_model:
- arcee-ai/Patent-Instruct-7b
- NousResearch/Llama-2-7b-hf
- microsoft/Orca-2-7b
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) as a base.
### Models Merged
The following models were included in the merge:
* [arcee-ai/Patent-Instruct-7b](https://huggingface.co/arcee-ai/Patent-Instruct-7b)
* [microsoft/Orca-2-7b](https://huggingface.co/microsoft/Orca-2-7b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: arcee-ai/Patent-Instruct-7b
parameters:
density: 0.5
weight: 0.5
- model: microsoft/Orca-2-7b
parameters:
density: 0.5
weight: 0.5
merge_method: ties
base_model: NousResearch/Llama-2-7b-hf
parameters:
normalize: false
int8_mask: true
dtype: float16
```
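The published result is a standard Llama-architecture checkpoint, so it should load with the usual `transformers` text-generation API; a minimal sketch, assuming the safetensors weights in this repo load directly:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Lilith88/mergekit-ties-qrxobrq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

inputs = tokenizer("The TIES merge method combines models by", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```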
|
GreenBitAI/Mistral-7B-Instruct-v0.2-layer-mix-bpw-3.0-mlx
|
GreenBitAI
| 2024-04-14T08:51:49Z | 4 | 0 |
mlx
|
[
"mlx",
"safetensors",
"mistral",
"license:apache-2.0",
"region:us"
] | null | 2024-04-06T21:03:28Z |
---
license: apache-2.0
tags:
- mlx
---
# GreenBitAI/Mistral-7B-Instruct-v0.2-layer-mix-bpw-3.0-mlx
This quantized low-bit model was converted to MLX format from [`GreenBitAI/Mistral-7B-Instruct-v0.2-layer-mix-bpw-3.0`](https://huggingface.co/GreenBitAI/Mistral-7B-Instruct-v0.2-layer-mix-bpw-3.0).
Refer to the [original model card](https://huggingface.co/GreenBitAI/Mistral-7B-Instruct-v0.2-layer-mix-bpw-3.0) for more details on the model.
## Use with mlx
```bash
pip install gbx-lm
```
```python
from gbx_lm import load, generate
model, tokenizer = load("GreenBitAI/Mistral-7B-Instruct-v0.2-layer-mix-bpw-3.0-mlx")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
ntvcie/Gemma2bVinhntV5_16bit
|
ntvcie
| 2024-04-14T08:51:47Z | 133 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gemma",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/gemma-2b-bnb-4bit",
"base_model:finetune:unsloth/gemma-2b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-14T08:49:40Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-2b-bnb-4bit
---
# Uploaded model
- **Developed by:** ntvcie
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
KarthikAlagarsamy/distilbertfinetuneHS3E8B
|
KarthikAlagarsamy
| 2024-04-14T08:50:16Z | 105 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-04-14T08:38:57Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbertfinetuneHS3E8B
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbertfinetuneHS3E8B
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1544
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.6901 | 1.0 | 500 | 2.7515 |
| 2.2977 | 2.0 | 1000 | 2.2558 |
| 1.8627 | 3.0 | 1500 | 2.1544 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|